This document serves as a general guide to using Jitterbit Harmony. It provides best practice guidance for common integration scenarios and recommendations using the available tools. This document is not comprehensive and does not cover all scenarios.
This document is for users of Jitterbit Harmony Cloud Studio, the web-based version of Jitterbit's project design application. For best practices using Design Studio, Jitterbit's desktop-based project design application, see Best Practices with Design Studio.
Prior to using this guide, you should already be familiar with Jitterbit Harmony and Cloud Studio. To become familiar:
- Complete the Jitterbit Admin Quick Start Tutorial.
- Enroll in and complete relevant Jitterbit University courses, such as Introduction to Jitterbit Harmony Cloud Studio and Introduction to Scripting in Cloud Studio.
- Review introductory material on Success Central, including Getting Started and Cloud Studio Terminology.
At that point, you should know the basic concepts and terms used in Jitterbit Harmony, and understand what we mean by projects, operations, endpoints, scripting, migration, and deployment.
See the Additional Resources section at the end of this page for links to videos and other documents that expand on these best practices with Jitterbit Harmony.
Using Support, Customer Success Managers, and Documentation
Access to Jitterbit Support is included as part of a Jitterbit Harmony customer license. When questions or technical issues arise, you can get expert assistance from Jitterbit Support by using the Jitterbit Support Portal or by email. The Getting Support page describes special instructions for production-down situations in order to escalate time-sensitive issues.
You can also contact your Customer Success Manager (CSM) with questions related to licensing or other topics.
To help you locate relevant material, each main topic on Success Central has a search bar that can be used to find pages only within that topic. For example, on the topic page Cloud Studio, you can search for a term such as Condition to return only pages within the Cloud Studio section of the documentation and exclude similar Design Studio material on the same topic.
Jitterbit Product Updates
Jitterbit Harmony updates are released frequently (see Release Schedule). Even a minor release contains new features and enhancements along with bug fixes.
Cloud applications accessed through the Jitterbit Harmony Portal are updated automatically and always run the latest released version. These applications include Cloud Studio, API Manager, App Builder, Marketplace, Management Console, and Citizen Integrator.
Cloud API Gateway and Cloud Agent Group updates are applied automatically. For the Cloud Agent Groups, there are two sets: Production and Sandbox. The latter set is used to test compatibility with pre-releases of agent software and is not a development environment.
Locally installed applications are upgraded by downloading and running an installer:
- Private Agent upgrades are manually applied using the installer. Specific upgrade instructions for each supported operating system are provided within the appropriate Private Agent installation instructions.
- Private API Gateway upgrades are manually applied using the installer. Detailed instructions are provided within the Private API Gateway installation instructions.
- For Design Studio, the upgrade process is to perform a new installation. Multiple distinct versions of Design Studio can co-exist on a single machine and share the same set of projects.
It is advisable to stay current with releases, particularly releases that include feature upgrades.
Project Design and Reusability
A typical scenario for reusing a project involves the development of a starter project with the extensive use of global variables and, especially, project variables. Configurable items — such as endpoint credentials, optional field mappings, parameterized queries, email addresses, and filenames — can be exposed as project variables. The starter project can also contain common functions such as error handling or the use of environment-wide caches. The starter project is exported and then imported into new projects to form a consistent basis for development.
Endpoints, created by configuring a connection and associated activities using connectors, are frequently used in operations. However, a unique endpoint doesn't necessarily need to be built for each operation. Since activity configurations accept variables for paths and filenames, generic endpoints can be built one time and then dynamically configured using global and project variables.
For example, assume an HTTP connection and an associated activity are created, and the activity configuration specifies a path defined by a global variable, such as $gv_http_path. A controller script can be used to populate $gv_http_path as required.
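As a sketch of this pattern (the operation name and the order ID variable are hypothetical, and the operation tag shown is a placeholder for the reference Cloud Studio inserts for you), a controller script might look like this:

```
<trans>
// Set the path that the HTTP activity reads at runtime
$gv_http_path = "/v1/orders/" + $gv_order_id;

// Run the operation that uses the dynamically configured endpoint,
// stopping the chain if it fails
If(!RunOperation("<TAG>operation:Process HTTP</TAG>"),
    RaiseError(GetLastError())
);
</trans>
```

Because the activity reads the global variable at execution time, one generic HTTP endpoint can serve many different requests.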
Another example is a Database Query activity with a condition: its WHERE clause can be assigned to a global variable that a script populates at runtime. Most endpoints have the capability to be configured using variables.
Stand-alone scripts that perform a specific function, such as returning a database lookup or calculating a result from a series of arguments, can be candidates for reuse, particularly if used in multiple operations.
For example, if a script uses the DBLookup function against a database table, and this function is used throughout the project, then a stand-alone script (separate from an operation) can be built. Using the ArgumentList function or simple global variables, the script can accept arguments and return a result. Since every operation chain is a different scope, the same script can safely be called from multiple simultaneous operations.
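A minimal sketch of such a reusable lookup script follows. The table, column, and argument names are hypothetical, and the activity tag is a placeholder for the reference your own project would insert:

```
<trans>
// Map the caller's arguments to local variables, in order
ArgumentList(customerId);

// Return the matching name from a lookup table;
// DBLookup returns the first field of the first result row
DBLookup("<TAG>activity:database/My Database/db_query/Query</TAG>",
    "SELECT name FROM customers WHERE id = '" + customerId + "'");
</trans>
```

The script can then be called from transformations or other scripts with RunScript, passing the arguments in order.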
Workflows are used as a means of project organization. A workflow typically holds related operations that process data from start to finish: Create Orders, Sync Customer Master, Process Confirmations, etc. Processes that are common across different workflows, such as querying an endpoint or handling an operation error condition, can be held in their own workflow and referenced by operations in other workflows.
You can also create custom groups where project components can be collected for ease of reference.
The numbers assigned to the operations that appear in the project designer are assigned automatically and are based on the display position of the operation in the project designer. Those numbers are not shown in the operation logs. If an operation numbering scheme is needed, then it can be implemented by incorporating numbering into the operation name.
Managing Asynchronous Operations
When using the RunOperation function in its asynchronous mode, operations execute without returning control to the calling function. Use of asynchronous operations can lead to race conditions.
For example, if Operation A updates a database table and is chained to Operation B, which reads that same table (both are synchronous), no race conditions are encountered. But if Operation A is called asynchronously followed immediately by Operation B, then B may execute before A is finished.
In addition, the number of simultaneous asynchronous calls must be managed, as the number of simultaneous operations running on an agent is capped (see the [OperationEngine] section of the Private Agent configuration file).
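The race condition described above can be illustrated as follows. The operation names are hypothetical, and the exact way to select asynchronous execution is described in the RunOperation function documentation:

```
<trans>
// Safe: by default RunOperation is synchronous, so Operation B
// starts only after Operation A has finished writing the table
RunOperation("<TAG>operation:Operation A</TAG>");
RunOperation("<TAG>operation:Operation B</TAG>");

// Risky: if Operation A is instead run in asynchronous mode,
// control returns immediately, and Operation B may query the
// table before Operation A has finished updating it
</trans>
```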
We recommend using a system ID with administration permissions for endpoint credentials, rather than a user-level ID. User IDs typically expire or have to be disabled when a user leaves a company.
By using project variables (whose values can be hidden) for credentials management, a Harmony organization administrator does not have to enter production credentials. By setting up the appropriate user permissions, a user can apply the production credentials through the Management Console Projects page.
Persisting Integration Data
There are multiple methods for storing data in the Harmony cloud, including using project variables, cloud caching functions, or temporary storage.
Project variables are preinitialized static variables that can be thought of as project constants. They can be edited from Cloud Studio (see Project Variables) or the Management Console (see Projects).
One example use of project variables is for endpoint credentials. By using project variables, different endpoint environments (which usually have different credentials) can be applied to different Harmony environments and edited through the Management Console. This enables a business process where a user with Management Console rights can change credentials without requiring access to Cloud Studio and the project designer.
A second example is to use project variables to hold flags used by integration logic that can customize the behavior of the integration. If a single project is developed but used for different endpoints, then a boolean project variable (such as Send_PO_Number) could be checked by the transformation's logic for the PO Number field. If the project variable's value is false, then the UnMap function could be used to "turn off" that field and not include it in a relevant transformation.
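In the transformation, the mapping script for the PO Number target field might check the flag like this. This is a sketch: Send_PO_Number is the project variable from the example above, and PO_Number stands in for the actual source field reference in your transformation:

```
<trans>
// When the flag is false, leave the PO Number target field unmapped
If(!$Send_PO_Number, UnMap());

// Otherwise return the mapped source value as usual
PO_Number
</trans>
```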
Cloud Caching Functions
The cloud caching functions ReadCache and WriteCache are used to assign data spaces that are available across projects and across environments. A cached value is visible to all operations running in the same scope until it expires, regardless of how that operation was started or which agent it runs on. By caching data in Jitterbit Harmony, rather than relying on local or agent-specific data stores such as Temporary Storage, data can be shared between separate operations and across projects.
These are additional uses of cloud caching:
- Data can be shared between asynchronous operations within a project.
- Errors that are generated across different operations could be stored to a common cache. By accumulating operation results in this manner, more comprehensive alerts can be built.
- Login tokens can be shared across operations.
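As an example of the last point, a login token could be cached once and reused across operations. This is a sketch: the cache name, expiration, and token-fetching script are illustrative, and the exact ReadCache and WriteCache signatures (including expiration and scope parameters) are covered in the cache function documentation:

```
<trans>
// Try the shared cache first
$token = ReadCache("api_login_token");

If(IsNull($token),
    // Not cached (or expired): log in again and cache the new token
    $token = RunScript("<TAG>script:Get New Token</TAG>");
    WriteCache("api_login_token", $token, 1800)  // expire after 30 minutes
);
</trans>
```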
Managing Temporary Storage
When using a Temporary Storage endpoint, temporary files are written to and read from the default operating system's temp directory on the agent that is performing the work:
- In the case of a single Private Agent, the temporary storage directory is that Private Agent server's default temp directory.
- If you are using more than one Private Agent, clustered in a Private Agent Group, the temporary storage directory is the default temp directory of the specific Private Agent server doing the work.
- As Cloud Agents are clustered in a Cloud Agent Group, the temporary storage directory is the default temp directory of the specific Cloud Agent server doing the work.
In a clustered Agent Group (Private or Cloud Agents), as long as the operations using the temporary storage are linked (chained) together, all the temporary storage reads and writes will happen on the same agent server. However, if Chain A writes to its temporary storage file myfile and Chain B reads from its temporary storage file myfile, and the two chains are not themselves chained to one another, the read in Chain B may not occur on the same agent that Chain A wrote to.
When using temporary storage, keep these guidelines in mind:
- When using Private Agents, to make your project upgrade-proof, use temporary storage in such a way that moving from a single Private Agent to a multiple-agent Agent Group does not require refactoring.
- When using a clustered Agent Group (Private or Cloud Agents), to ensure that the agent server where temporary storage is written is the same server where temporary storage will be read from, make sure that any references to the temporary storage Read and Write activities are in the same operation chain.
- Temporary storage on Private Agents is deleted after 24 hours by default by the Jitterbit File Cleanup Service. The cleanup service frequency can be configured through the Private Agent configuration file under the [FileCleanup] section. However, on Cloud Agents, temporary files may be deleted immediately.
- Cloud Agents have a temporary storage file size limit of 50 GB per file. Temporary files larger than 50 GB are possible only when using Private Agents.
When writing to temporary storage, the default is to overwrite files. This can be changed with the Append to File checkbox in a Temporary Storage Write activity. Appending usually requires that the file be deleted or archived after the source is read. A simple way to do this is to use the post-processing options Delete File or Rename File in a Temporary Storage Read activity.
Filename keywords are available that can be used when creating a filename.
A file in temporary storage can be read by building a script with the ReadFile function. For example: ReadFile("<TAG>activity:tempstorage/Temporary Storage/tempstorage_read/Read</TAG>"). Bear in mind that this works reliably only if there is a single Private Agent.
In some cases, it may be advantageous to use a Variable endpoint instead of a Temporary Storage endpoint. See Variable versus Temporary Storage in Global Variable versus Temporary Storage for a comparison of these two different types and for recommendations on when each is appropriate.
When to Use Scripting
Instead of using operation actions, a controller script can link operations together with the RunOperation function. To capture a failed operation, the If function can be used in conjunction with RunOperation. For example: If(!RunOperation(<operation tag>), <condition>), where the condition can use GetLastError to capture the error, can elect to stop the entire process using RaiseError, and/or can run another process to accumulate error text.
A controller script can be beneficial in situations such as these:
- To run an operation that is dependent on external factors such as project variables or data.
- To call sub-operations from within a loop, where data is passed to the operation from a list.
- To trace operation chain activities. For example: WriteToOperationLog("count of records to process: " + cnt), WriteToOperationLog("Starting update operation at: " + Now()), WriteToOperationLog("Database query: " + sql), etc.
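Putting these pieces together, a controller script might look like this. The operation names and the cnt variable are hypothetical, and the operation tags are placeholders for the references Cloud Studio inserts for you:

```
<trans>
WriteToOperationLog("count of records to process: " + cnt);

// Run the first operation and stop the chain on failure,
// capturing the error text in the operation log
If(!RunOperation("<TAG>operation:Update Orders</TAG>"),
    WriteToOperationLog("Update Orders failed: " + GetLastError());
    RaiseError(GetLastError())
);

WriteToOperationLog("Starting confirmation operation at: " + Now());
RunOperation("<TAG>operation:Send Confirmations</TAG>");
</trans>
```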
Other areas where scripting is frequently used are within the mapped fields in transformations and in other stand-alone scripts. If the same script is being used within more than one transformation, consider setting up that script as a stand-alone script and calling it from within each transformation using the RunScript function.
Naming Convention for Variables
Harmony has four types of variables:
- Local variables: Defined in a script and available only within that script.
- Global variables: Defined in a script and available in the same or downstream operations and scripts within an operation chain.
- Project variables: Defined in the Cloud Studio UI and available across a project. Updateable through the Management Console.
- Jitterbit variables: Predefined in Jitterbit Harmony or defined in a Private Agent's configuration file. Available across a project.
As the scope of a local variable is limited to a single script, a naming convention for them can be very simple, such as all-lowercase letters or an initial lowercase word, such as myVariable. Periods are not allowed in local variable names.
Global variables, as their scope is larger (a global variable is available to be referenced in the same or downstream operations and scripts within an operation chain), should use a consistent naming convention to avoid confusion. For example, using multiple components for a variable name, separated by periods, you could follow a pattern such as this:
- A short abbreviation identifying the variable type
- A logical category for the variable
- A logical subcategory for the variable
Combining these components, separated by periods, produces hierarchical variable names.
Since variables are sorted alphabetically in various places throughout the UI, organizing them hierarchically will assist with managing and using variables.
Whatever convention you choose to use, we recommend codifying and documenting it so that all team members can consistently use it in all projects.
Harmony enables software development lifecycle methodologies through the use of environments. You can set up both production and non-production environments.
For example, assume that a Development and a Production environment are set up in the Management Console and both are associated with the same Agent Group. Assume that a project is first developed in the Development environment.
Cloud Studio has a migration feature that will copy that project to the Production environment, after which the endpoint credentials are changed to the Production endpoint credentials using project variables. Other source and target endpoints are also changed. After the initial migration, any further migrations of the same project from Development to Production exclude migrating project variable values unless they are new project variables.
Harmony enables rapid integration development and unit testing by making visible the actual integration data during design time. The obvious advantage is to enable an iterative development process by showing the data before and after field transformations, rather than building the entire operation, running it, and inspecting the output. Data is made visible by using the preview feature in a transformation.
After the sample source data is imported or generated, the transformation will show the output of any mappings and embedded scripts.
A key concept for a healthy integration architecture is to recognize that there will be questions raised by the business concerning the accuracy of the integration work, particularly when discrepancies appear in the endpoint data. The integration may or may not be at fault. It is incumbent on the integration project to provide a high degree of transparency in order to help resolve questions about data accuracy.
For example, if data in a target endpoint appears to be incorrect, then typically integration support is called upon to provide details concerning any integration actions, such as times, sources, transformation logic, success or failure messages, etc. The troubleshooting process will benefit by making that information available as a standard part of the integration architecture. In Harmony, this is supported through logging and alerting features.
Operation logs capture key data by default, such as operation run times and success, failure, or cancellation messages. If there are failures and the endpoint returns failure information, then the log will capture that information.
With respect to failures, Harmony uses the response to make the status determination. For example, if an HTTP status code of 400 or greater is received in a response, Harmony considers this a failure. If the response has a status of 200 but contains data errors, Harmony treats this as a success.
When developing an integration project, use the WriteToOperationLog function in mappings and scripts to capture key data and steps in the process. This typically is as simple as: WriteToOperationLog("The id is: " + sourcefieldid).
If capturing the entire output of a transformation is desired, this can be done by building an operation that reads the source, performs the transformation, and writes output to a Variable or Temporary Storage endpoint instead of the target endpoint. A post-operation script can read the output and log it. Then the "real" operation can be performed.
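With that approach, the post-operation script can be very short. This is a sketch, assuming the operation's target is a global variable named transformation_output:

```
<trans>
// Log the full transformation output captured in the Variable target
WriteToOperationLog("Transformation output: " + $transformation_output);
</trans>
```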
Logs can be viewed in either the Cloud Studio operation log screen or the Management Console Activities page. The Management Console Activities page can be accessed by support personnel without needing to navigate to the project.
Data in the logs is searchable. To filter just the logs you need, you can use the search syntax of message=%<your text>% in both the Cloud Studio and Management Console operation logs.
Frequently, APIs have an informative success or non-success response message. If debug logging is enabled for the API, the request body will be captured in the API logs (which are distinct from the operation logs).
Operation logs, including detailed log messages from both Cloud Agents and Private Agents, are retained for 30 days by Jitterbit Harmony.
Frequently, integration results not only need to be logged, but they also need to be escalated. Email notifications can easily be attached to operations and success/failure paths or called from scripts. You can alternatively use the Email connector to configure a Send Email activity as the target of an operation.
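From a script, an alert can be raised with the email notification functions. This is a sketch: the operation and notification names are hypothetical, and SendEmailMessage assumes an email notification has already been configured in the project:

```
<trans>
// On failure, log the error and send the configured email notification
If(!RunOperation("<TAG>operation:Sync Customer Master</TAG>"),
    WriteToOperationLog("Sync failed: " + GetLastError());
    SendEmailMessage("<TAG>email:Integration Failure Alert</TAG>")
);
</trans>
```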
For additional information, refer to Setting Up Alerting, Logging, and Error Handling.
These sections and pages in the documentation provide additional best practices.
Jitterbit Tech Talks are video presentations that cover areas of interest to users of all levels:
| Tech Talk | Duration | Release Date |
| --- | --- | --- |
| Complex Project Orchestration Best Practices | 50:46 | 2018.10.16 |
| Error Handling Best Practices | 27:22 | 2018.03.13 |
| Jitterbit Best Practices | 1:04:38 | 2020.03.16 |
| Logging Best Practices | 1:03:02 | 2019.02.12 |
| Open API Portal Manager | 57:21 | 2019.11.05 |
| Pass-Through Sources and Global Variables Best Practices | 42:44 | 2018.12.05 |
| Private Agents Best Practices | 42:43 | 2018.07.05 |
| Project Organization Best Practices | 1:08:39 | 2018.06.08 |
Jitterbit documentation has best practices included with our pages on using Jitterbit products:
- Jitterbit Security and Architecture White Paper
A description of the logical security and architecture, physical security, and organizational security provided by the Jitterbit Harmony platform.
- Security Best Practices for Administrators, Project Builders, and Integration Specialists
Security recommendations for those who integrate Jitterbit Harmony with other products such as Salesforce, NetSuite, and other endpoints.
Integration Project Methodology
- Integration Project Methodology
Addresses the key items a Project Manager for a Jitterbit Harmony project should know. It shows how to organize your team, gather and validate requirements clearly and concisely, and leverage the strengths of Jitterbit Harmony to deliver a successful project.
- Best Practices for SAP
Issues and considerations that can arise when integrating to and from SAP instances, particularly when creating a bidirectional integration.
- Capturing Data Changes with Table or File Changes
Best practices to follow in capturing data changes.
- Configuring Outbound Messages with Harmony API
The recommended approaches for configuring outbound messages.
- Database-Specific Information
Best practices for connecting to various databases.
- Operation Schedules
Best practices to follow when creating and running a schedule.
- Setting Up Alerting, Logging, and Error Handling
Best practices on how to alert users about integration issues.
- Setting Up a Team Collaboration Project
Best practices for supporting multiple developers working on the same project.
- Operation Debug Logging
Information on generating additional operation log data.
- Log File Locations
Locations of log files on Private Agents.
- Agent Groups High Availability and Load Balancing
Recommendations that should be taken into consideration prior to installing Private Agents to allow for high availability (active/active) and load balancing.
- System Requirements for Private Agents
Best practices when creating, installing, and configuring a Private Agent.