The variable connector is accessed from the Connectivity tab of the design component palette:
Use this connector to first configure a variable connection, which establishes access to either a project variable or an in-memory global variable, and then to configure one or more variable activities associated with that connection to use as a source or target within an operation:
- Read: Reads data from a variable connection and is used as a source in an operation.
- Write: Writes data to a variable connection and is used as a target in an operation.
Together, a specific variable connection and its activities are referred to as a variable endpoint. Once a connection is configured, activities associated with the endpoint are available from the Endpoints filter:
For more information on using variables in scripts, transformations, and connection/activity configuration screens, see Variables.
Variable versus Temporary Storage
Two of the most common out-of-the-box temporary storage types in Jitterbit Harmony are variable endpoints and temporary storage endpoints. There are several considerations to take into account when choosing one over the other.
Variable endpoints (read and write activities, not to be confused with scripting global variables) are easy to code and reduce complexity, as described later on this page. However, they have certain limitations.
For scenarios where an integration works with tiny data sets (typical of web service requests and responses, or small files of a few hundred records), we suggest using a variable endpoint.
When the data set is in the megabyte range, a variable endpoint becomes slower than the equivalent temporary storage endpoint; this degradation typically begins once the data exceeds 4 MB in size.
When the data set is in the larger multi-megabyte range, there is a risk of data truncation. We recommend a conservative limit of 50 MB to prevent any risk of truncation.
Using variable endpoints in asynchronous operations requires special consideration: a data set used in a variable endpoint within an asynchronous operation is limited to 7 KB, and exceeding that limit can result in truncation. See the RunOperation() function for a description of calling an operation asynchronously.
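As a hedged sketch (the operation name in the <TAG> reference and the variable name small.payload are placeholders, and we assume RunOperation()'s optional second boolean parameter controls synchronous execution), an asynchronous call from a Jitterbit script might look like this, keeping the variable data well under the 7 KB limit:

```
// Keep this payload under 7 KB, since the called operation
// reads it through a variable endpoint asynchronously
$small.payload = '{"status": "ok"}';

// Pass false as the second argument to run the operation asynchronously
RunOperation("<TAG>operation:Process Payload</TAG>", false);
```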
Temporary Storage Endpoint
Larger data sets, such as those used in ETL scenarios with record counts in the thousands, should be handled using temporary storage endpoints.
Unlike variable endpoints, temporary storage endpoints exhibit no performance degradation or truncation, even with very large data sets. However, using them may require additional scripting, and you lose the reuse and simplicity that variable endpoints provide, as described later on this page.
Note that Cloud Agents version 10.10 or higher have a temporary storage endpoint file size limit of 50 GB per file. Creating temporary files larger than 50 GB requires a Private Agent.
Using Variable Endpoints Can Increase Reuse and Reduce Complexity
Using a variable endpoint for tiny data sets can increase reuse and reduce complexity. For example, when building operations chained with operation actions, each operation can have activities that function as sources (read activities) and targets (write activities). Instead of building individual source or target combinations for each operation, it is easy to use a common variable target and source (outlined in the example below in red):
To increase reusability and standardization, you can build a reusable script that logs the content of the variable (the script log.memory in the above example, outlined in green). This approach can also be accomplished using temporary storage, but additional scripting is needed to initialize the path and filename.
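Such a logging script can be very short. As a sketch (the variable name io.memory here is an assumption standing in for whatever global variable the variable endpoint is configured to use):

```
// log.memory (sketch): write the current contents of the
// variable endpoint's backing global variable to the operation log
WriteToOperationLog($io.memory);
```

Because the script references only the shared variable, the same script can be attached after any operation in the chain.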
When using a variable endpoint, its scope is the chain (the thread) of operations. Thus, variable endpoint values are unique to a particular thread and are destroyed when the thread finishes. This is not the case with temporary storage, which as a result requires more handling to ensure uniqueness. The best practice is to initialize a GUID at the start of an operation chain and then pass that GUID to each of the temporary storage filenames in the chain, as described in Persisting Data for Later Processing Using Temporary Storage. (Although that document is for Design Studio, the same concepts can be applied to Cloud Studio.)
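A minimal sketch of that best practice in Jitterbit Script (the variable name chain.guid is an assumption): generate the GUID once in a script at the start of the chain, then reference it in each temporary storage filename using square-bracket global variable syntax:

```
// Script at the start of the operation chain: create one GUID
// that every temporary storage filename in the chain will share
$chain.guid = Guid();
```

In each Temporary Storage Write and Read activity in the chain, a filename such as `[chain.guid].data.json` would then resolve to a file unique to that run.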
When performing operation unit testing, it is helpful to load test data. Using a variable source or target makes this simple: you add a pre-operation script to write the test data to a target:
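For example, assuming the variable endpoint is configured to use a global variable named io.request (a placeholder name), the pre-operation script can be a single assignment:

```
// Pre-operation script (sketch): load test data into the global
// variable that backs the variable endpoint source
$io.request = '{"id": 123, "name": "Test Record"}';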
In contrast, writing data to a temporary storage file looks like this:
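A sketch of the equivalent temporary storage write, where the activity path inside the <TAG> reference is a placeholder for your own Temporary Storage Write activity:

```
// Write the test data to a temporary storage target, then flush
// so the file is committed before a downstream operation reads it
WriteFile("<TAG>activity:tempstorage/Temporary Storage/tempstorage_write/Write Test Data</TAG>",
    '{"id": 123, "name": "Test Record"}');
FlushFile("<TAG>activity:tempstorage/Temporary Storage/tempstorage_write/Write Test Data</TAG>");
```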
Likewise, reading data is simpler with a variable endpoint:
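With a variable endpoint, reading the data back in a script is just a reference to the backing global variable (again assuming the placeholder name io.request):

```
// Read the data directly from the backing global variable
result = $io.request;
WriteToOperationLog(result);
```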
In contrast, this is how you read data from temporary storage:
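A sketch of the temporary storage equivalent, with the activity path again a placeholder for your own Temporary Storage Read activity:

```
// Read the file contents through the Temporary Storage Read activity
result = ReadFile("<TAG>activity:tempstorage/Temporary Storage/tempstorage_read/Read Test Data</TAG>");
WriteToOperationLog(result);
```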
In summary, using variable endpoints for reading, writing, and logging operation input and output is straightforward, but care must be taken to ensure the data is appropriately sized.
Last updated: Dec 18, 2019