Each operation can be configured with options such as when the operation will time out, what to log, and the timeframe for debug logging. Depending on the components used in an operation, it may also have options for whether a subsequent operation runs and whether to use chunking.
Operation options are accessed from the operation settings, which can be opened from the project pane in both the Workflows and Components tabs, as well as from the design canvas:
Once the operation settings screen is open, select the Options tab:
Each option available within the Options tab of the operation settings is described below.
Operation Time Out: The operation timeout is the maximum amount of time the operation is allowed to run before being canceled. In the first field, enter a number; in the second field, use the dropdown to select the units: Seconds, Minutes, or Hours. By default, this is set to 2 hours.
What to Log: Use the dropdown to select logging of Everything or Errors Only. By default, everything is logged. This includes success, canceled, pending, running, and error statuses. When only errors are logged, note that successful child operations are not displayed in operation logs. Parent (root level) operations are always displayed in the logs because they require logging to function properly. The operation logs are available within the Cloud Studio operation log screen (see Operation Logs) and Activities page of the Management Console.
Enable Debug Mode Until: Debug logging allows you to log debug messages to files on a Private Agent. This option is used mainly for debugging problems during testing and should not be turned on in production.
To turn on debug mode, select the checkbox and specify a date on which debug mode will automatically be turned off. This date can be at most 2 weeks from the current date. Debugging will be turned off at the beginning of that date (that is, 12:00 a.m.) using the time zone of the Private Agent. The debug log files are available from the Management Console on the Agents > Agents and Activities pages.
Run success Operation even if there are no matching source files: This option is present only if the operation contains a file-based activity that is used as a source within the operation, and applies only when the operation has "on success" operation actions configured. By default, any "on success" operations will run only if they have a matching source file to process.
To override this behavior, select the checkbox. Any "on success" operations will then run even when there are no matching source files to process.
Enable Chunking: This option is present only if the operation contains a transformation or a database, NetSuite, Salesforce, or SOAP activity, and is used for processing data to the target system in chunks. This allows for faster processing of large datasets and is also used to address record limits imposed by various web-service-based systems when making a request.
Note if you are using a Salesforce endpoint:
If a Salesforce activity is added to an operation that does not have chunking enabled, chunking becomes enabled with default settings specifically for Salesforce as described below.
If a Salesforce activity is added to an operation that already has chunking enabled, the chunking settings will not be changed. Likewise, if a Salesforce activity is removed from an operation, the chunking settings will not be changed.
Chunk Size: Enter the number of source records (nodes) to process for each thread. When chunking is enabled for operations that do not contain any Salesforce activities, the default chunk size is 1. When a Salesforce activity is added to an operation that does not have chunking enabled, chunking is automatically enabled with a default chunk size of 200. If using a Salesforce bulk activity, you should change this default to a much larger number, such as 10,000.
Number of Records per File: Enter the requested number of records to be placed in the target file. The default is 0, meaning there is no limit on the number of records per file.
Max Number of Threads: Enter the number of concurrent threads to process. When chunking is enabled for operations that do not contain any Salesforce activities, the default number of threads is 1. When a Salesforce activity is added to an operation that does not have chunking enabled, chunking is automatically enabled with a default of 2 threads.
Additional information and best practices for chunking are provided in the next section, Chunking.
Chunking is used to split the source data into multiple chunks based on the configured chunk size. The chunk size is the number of source records (nodes) for each chunk. The transformation is then performed on each chunk separately, with each source chunk producing one target chunk. The resulting target chunks combine to produce the final target.
Chunking can be used only if records are independent and from a non-LDAP source. We recommend using as large a chunk size as possible, making sure that the data for one chunk fits into available memory. For additional methods to limit the amount of memory a transformation uses, see Transformation Processing.
Many web service APIs (SOAP/REST) have size limitations. For example, a Salesforce upsert accepts only 200 records for each call. With sufficient memory, you could configure an operation to use a chunk size of 200. The source would be split into chunks of 200 records each, and each transformation would call the web service once with a 200-record chunk. This would be repeated until all records have been processed. The resulting target files would then be combined. (Note that you could also use Salesforce bulk activities to avoid the use of chunking.)
If you have a large source and a multi-CPU computer, chunking can be used to split the source for parallel processing. Since each chunk is processed in isolation, several chunks can be processed in parallel. This applies only if the source records are independent of each other at the chunk node level. Web services can be called in parallel using chunking, improving performance.
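Conceptually, the chunking flow described above can be sketched in Python. This is a loose model, not Jitterbit's internals: the `process_chunk` transformation, chunk size, and thread count here are illustrative placeholders standing in for the configured transformation and the Chunk Size and Max Number of Threads settings.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_chunks(records, chunk_size):
    """Split the source records into chunks of at most chunk_size records."""
    return [records[i:i + chunk_size] for i in range(0, len(records), chunk_size)]

def process_chunk(chunk):
    """Stand-in for transforming one chunk (e.g. one web service call per chunk)."""
    return [record.upper() for record in chunk]  # placeholder transformation

def run_chunked(records, chunk_size=200, max_threads=2):
    chunks = split_into_chunks(records, chunk_size)
    # Each chunk is independent, so chunks can be processed in parallel threads.
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        target_chunks = list(pool.map(process_chunk, chunks))
    # Combine the target chunks, in source order, to produce the final target.
    return [rec for chunk in target_chunks for rec in chunk]

print(run_chunked(["a", "b", "c", "d", "e"], chunk_size=2, max_threads=2))
# → ['A', 'B', 'C', 'D', 'E']
```

With a chunk size of 200, a 1,000-record source would be split into five chunks, each transformed (and, for a web service target, sent) separately, with up to `max_threads` chunks in flight at once.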
When using chunking on an operation where the target is a database, note that the target data will first be written to numerous temporary files (one for each chunk). These files are then combined into one target file, which is sent to the database for insert/update. If you set the Jitterbit variable jitterbit.target.db.commit_chunks to 1 or true when chunking is enabled, each chunk is instead committed to the database as it becomes available. This can improve performance significantly, as the database inserts/updates are performed in parallel.
As chunking can invoke multi-threading, its use can affect the behavior of variables that are not shared between the threads.
Global and project variables are segregated between the chunk-processing threads, and although the data is combined, changes to these variables are not. Only changes made in the initial thread are preserved at the end of the transformation.
For example, if an operation with chunking and multiple threads has a transformation that changes a global variable, the global variable's value after the operation ends will be that from the first thread. Any changes to the variable in other threads are independent and are discarded when the operation completes.
These global variables are passed to the other threads by value rather than by reference, ensuring that any changes to the variables are not reflected in other threads or operations. This is similar to the RunOperation() function when in asynchronous mode.
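A loose Python model of this pass-by-value behavior (the names, the dictionary of globals, and the threading setup are illustrative assumptions, not Jitterbit's actual engine): the first chunk runs on the initial thread against the real variables, while every other thread receives a deep copy, so only the initial thread's changes survive.

```python
import copy
import threading

# Stand-in for the operation's global variables.
operation_globals = {"counter": 0}

def transform_chunk(thread_globals, results, index):
    """Each thread mutates the dictionary it was given."""
    thread_globals["counter"] += 1
    results[index] = thread_globals["counter"]

def run_with_isolation(num_chunks=3):
    results = [None] * num_chunks
    # Chunk 0 runs on the initial thread against the real globals,
    # so its changes are preserved after the operation ends.
    transform_chunk(operation_globals, results, 0)
    threads = []
    for i in range(1, num_chunks):
        # Pass by value: each worker gets a deep copy, so its changes
        # are never reflected back into operation_globals.
        t = threading.Thread(target=transform_chunk,
                             args=(copy.deepcopy(operation_globals), results, i))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    return results, operation_globals["counter"]

per_thread, final_value = run_with_isolation()
print(per_thread)   # each worker saw only its own increment: [1, 2, 2]
print(final_value)  # only the initial thread's change persists: 1
```

The worker threads each incremented their private copy, but those increments were discarded; only the increment made in the initial thread remains in `operation_globals` at the end.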
Last updated: Jul 05, 2019