Best Practices While Implementing BPDs Using IBM BPM
Hi guys, in this post I will share some industry best practices for good process design when working with IBM BPM.
Below are some best practices for developing a high-performance business process using Process Designer.
- Clear variables in exposed human services
Data from a taskless human service is not garbage-collected until the service reaches an endpoint. If a human service is developed that is not intended to reach an endpoint, such as a single page or a redirect, then memory is not garbage-collected until the Enterprise JavaBeans (EJB) timeout occurs (two hours by default). To reduce memory use for these human services, set variables in the coach to null in a custom HTML block, as in the sketch below.
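As a rough illustration, the cleanup amounts to nulling the heavyweight variables bound to the service. The snippet below is server-side JavaScript of the kind you would embed in the coach's custom HTML block; the variable names are invented for the example, and the exact embedding syntax depends on your coach type and product version.

```javascript
// Illustrative sketch only: release large objects held by the human service
// so they are not retained until the EJB timeout finally garbage-collects them.
// "largeSearchResults" and "customerDocuments" are hypothetical variable names.
tw.local.largeSearchResults = null;  // e.g. a big list returned by an integration
tw.local.customerDocuments = null;   // e.g. attachments no longer needed on this page
```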
- Do not use multi-instance loops for system lane or batch activities
Where possible, avoid using sub-BPDs as the activity of a multi-instance loop (MIL). This is not an issue if the first activity is a user task rather than a system lane task. However, do not use MILs for batch or system lane activities. This pattern can generate an excessive number of tokens for the BPD to process. Also, activities in MILs in the system lane run on a single thread, which is clearly not optimal on servers with multiple processor cores.
The image below shows a poorly designed BPD, followed by a better design:
[Image: Poor BPD design]
[Image: Good BPD design]
- Use conditional joins only when necessary
Simple joins use an “and” condition; all lines that head into the join must have an active token for the tokens to continue forward.
By contrast, for conditional joins, all possible tokens must reach the join before they proceed. Thus, if you have a conditional join with three incoming lines, but only two of them currently have tokens (or might have tokens by looking upstream), then those two tokens must arrive at the join to proceed. To determine this condition, the BPD engine must evaluate all possible upstream paths to determine whether the tokens can arrive at the join. This evaluation can be expensive for large, complex BPDs. Use this capability judiciously.
- Follow guidelines for error handling
Avoid global error handling in a service, which can consume an excessive amount of server processor time and can even result in infinite loops in coaches.
When catching errors on an activity in a BPD, do not route the error back to the same activity. Doing so causes the server to thrash between the BPD engine and the service engine, consuming a large amount of Process Server processor time as well as database processing.
- Use sequential system lane activities efficiently
Each system lane activity is treated as a new Event Manager task, which adds a task transition in the Process Server. These task transitions are expensive. If your BPD contains multiple system lane service tasks in a row, use one system lane task that wraps the others to minimize the extra resources needed for these transitions. The same approach applies to multiple consecutive tasks in a participant lane, although that pattern is much less common because an action is generally necessary between tasks in a participant lane.
The image below shows a poor usage pattern (multiple consecutive system lane activities):
[Image: Poor BPD usage pattern]
The image below shows a more optimal usage pattern (one system lane activity that incorporates the multiple steps, unlike the BPD above); a script-level sketch of the same idea follows the image.
[Image: Good BPD usage pattern]
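If the consolidated steps are themselves services, one way to keep a single system lane activity is a small wrapper service whose script invokes them in order. This sketch assumes the tw.system.executeServiceByName API and tw.object.Map are available in your product version, and the service and variable names are invented for the example; treat it as an outline rather than a drop-in implementation.

```javascript
// Hypothetical wrapper script for one system lane activity that replaces three
// consecutive system lane activities. Verify executeServiceByName and
// tw.object.Map against the JavaScript API documentation for your release.
var input = new tw.object.Map();
input.put("order", tw.local.order);

var validated = tw.system.executeServiceByName("Validate Order", input);     // was activity 1
var priced    = tw.system.executeServiceByName("Calculate Pricing", input);  // was activity 2
var confirmed = tw.system.executeServiceByName("Build Confirmation", input); // was activity 3

tw.local.confirmation = confirmed.get("confirmation");
```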
- Prevent WSDL validation from causing slow web service integration
The Process Designer web service integration connector goes through several steps at run time to start a web service. First, the system generates the SOAP request from the metadata and the business objects (BOs). Then, the system validates the request against the Web Services Description Language (WSDL), makes the actual SOAP call over HTTP, and parses the results back into the BOs. Each of these steps potentially has some latency. Therefore, an important step is to make sure that the actual web service response time is fast and that the request can be quickly validated against the WSDL. Speed is especially important for web services that might be started frequently.
The two major causes of delays in validation are as follows:
- Slow responses in retrieving the WSDL
- Deeply nested WSDL include structures
If the source of the WSDL is a remote location, the latency of retrieving that WSDL over HTTP adds to the overall latency. Thus, a slow connection, a slow proxy, or a slow server can all potentially increase the latency of the complete web service call. If that WSDL also nests additional WSDL or XML Schema Definition (XSD) files through imports or includes, then after the main WSDL is retrieved, the subfiles must also be retrieved. The validation continues to recurse through all WSDLs or XSDs that are nested within the subfiles. Therefore, when there are multiple levels of nesting, many HTTP calls must be made to retrieve the complete WSDL document, and the overall latency can become high.
To alleviate this type of latency, you can store a local copy of the WSDL, either in the local file system or somewhere with a fast HTTP response. For even better performance, this local copy can be “flattened”: the nesting is removed by manually replacing all of the import and include statements with the actual content, as in the schematic example below.
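To make the flattening step concrete, here is a schematic before/after excerpt of a WSDL types section. The namespaces, element names, and schemaLocation are invented for illustration; a real flattened WSDL would carry the full schema content of whatever it originally imported.

```xml
<!-- Before: the WSDL pulls in a remote schema, forcing an extra HTTP fetch
     (and possibly more, if that schema imports others in turn). -->
<wsdl:types>
  <xsd:schema targetNamespace="http://example.com/orders">
    <xsd:import namespace="http://example.com/orders/types"
                schemaLocation="http://schemas.example.com/orders/types.xsd"/>
  </xsd:schema>
</wsdl:types>

<!-- After (flattened): the imported definitions are copied inline, so
     validation never has to leave the local document. -->
<wsdl:types>
  <xsd:schema targetNamespace="http://example.com/orders/types">
    <xsd:element name="OrderRequest">
      <xsd:complexType>
        <xsd:sequence>
          <xsd:element name="orderId" type="xsd:string"/>
          <xsd:element name="quantity" type="xsd:int"/>
        </xsd:sequence>
      </xsd:complexType>
    </xsd:element>
  </xsd:schema>
</wsdl:types>
```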
Will keep updating this whenever I come across more best practices. Happy reading!