Microsoft Azure Table Storage Connection Details
Introduction
Connector Version
This documentation is based on version 21.0.8662 of the connector.
Important
We recommend using the Microsoft Azure Table Storage v2 connector to connect to Microsoft Azure Table Storage as it allows for additional configuration properties.
Get Started
Azure Table Storage Version Support
Establish a Connection
Connect to Azure Table Storage APIs
The driver connects to the Azure Table Storage account specified by Account. By default, connections to Azure Table Storage are secured via SSL, though this can be controlled through UseSSL. The authentication method is determined by the AuthScheme property.
Authenticate to Azure Table Storage
The following mechanisms may be used to authenticate.
Authenticate with Access Key
Set Account to the storage account name and set AccessKey to the key of the storage account to connect. Follow the steps below to obtain these values:
If using Storage as the Backend (default):
- Log into the Azure portal and select Storage Accounts in the services menu on the left.
- If you currently do not have any storage accounts, create one by clicking the Add button.
- Click the link for the storage account you want to use and select Access Keys under Settings. The Access Keys window contains the storage account name and key (you can use either key1 or key2 to connect) that you will need to use in the connector. These properties map to the Account and AccessKey connector connection properties, respectively.
If using CosmosDB as the Backend:
- Log into the Azure portal and select Cosmos DB in the services menu on the left.
- Click the link for the Cosmos DB account you want to use and select Connection String under Settings. The Connection String window contains the Cosmos DB account name and primary key that you will need to use in the connector. These properties map to the Account and AccessKey connector connection properties, respectively.
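As a sketch only (the account name and key below are placeholders, not real values; in the Jitterbit connection screen these are entered as individual fields), an access key connection sets properties along these lines:
AuthScheme=AccessToken;Account=mystorageaccount;AccessKey=myStorageAccountKey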
Authenticate with Shared Access Signature
Set Account to the storage account name and set SharedAccessSignature to a valid signature for the resource to connect to. The SharedAccessSignature may be generated with a tool such as Azure Storage Explorer.
Typically, when SharedAccessSignature is used, the specific table to work with must also be specified via the Tables connection property. If no table is specified, a table listing will be attempted, but it may fail due to a lack of permissions.
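As a sketch only (the signature and table name below are placeholders), a Shared Access Signature connection might set:
AuthScheme=SharedAccessSignature;Account=mystorageaccount;SharedAccessSignature=mySasToken;Tables=NorthwindProducts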
Important Notes
Configuration Files and Their Paths
- All references to adding configuration files and their paths refer to files and locations on the Jitterbit agent where the connector is installed. These paths are to be adjusted as appropriate depending on the agent and the operating system. If multiple agents are used in an agent group, identical files will be required on each agent.
Advanced Features
This section details a selection of advanced features of the Azure Table Storage connector.
User Defined Views
The connector allows you to define virtual tables, called user defined views, whose contents are decided by a pre-configured query. These views are useful when you cannot directly control the queries being issued to the driver. See User Defined Views for an overview of creating and configuring custom views.
SSL Configuration
Use SSL Configuration to adjust how the connector handles TLS/SSL certificate negotiations. You can choose from various certificate formats; see the SSLServerCert property under "Connection String Options" for more information.
Proxy
To configure the connector using private agent proxy settings, select the Use Proxy Settings checkbox on the connection configuration screen.
User Defined Views
The Jitterbit Connector for Azure Table Storage allows you to define a virtual table whose contents are decided by a pre-configured query. These are called User Defined Views, which are useful in situations where you cannot directly control the query being issued to the driver, e.g. when using the driver from Jitterbit. The User Defined Views can be used to define predicates that are always applied. If you specify additional predicates in the query to the view, they are combined with the query already defined as part of the view.
There are two ways to create user defined views:
- Create a JSON-formatted configuration file defining the views you want.
- Execute DDL statements.
Define Views Using a Configuration File
User Defined Views are defined in a JSON-formatted configuration file called UserDefinedViews.json. The connector automatically detects the views specified in this file.
You can also have multiple view definitions and control them using the UserDefinedViews connection property. When you use this property, only the specified views are seen by the connector.
This User Defined View configuration file is formatted as follows:
- Each root element defines the name of a view.
- Each root element contains a child element, called query, which contains the custom SQL query for the view.
For example:
{
"MyView": {
"query": "SELECT * FROM NorthwindProducts WHERE MyColumn = 'value'"
},
"MyView2": {
"query": "SELECT * FROM MyTable WHERE Id IN (1,2,3)"
}
}
Use the UserDefinedViews connection property to specify the location of your JSON configuration file. For example:
"UserDefinedViews", "C:\Users\yourusername\Desktop\tmp\UserDefinedViews.json"
Schema for User Defined Views
User Defined Views are exposed in the UserViews schema by default. This is done to avoid the view's name clashing with an actual entity in the data model. You can change the name of the schema used for UserViews by setting the UserViewsSchemaName property.
Work with User Defined Views
For example, a User Defined View named UserViews.RCustomers is defined with a query that only lists customers in Raleigh:
SELECT * FROM Customers WHERE City = 'Raleigh';
An example of a query to the driver:
SELECT * FROM UserViews.RCustomers WHERE Status = 'Active';
Resulting in the effective query to the source:
SELECT * FROM Customers WHERE City = 'Raleigh' AND Status = 'Active';
That is a very simple example of a query to a User Defined View; the effective query is a combination of the query issued to the view and the view definition. It is possible to compose these queries in much more complex patterns. All SQL operations are allowed in both queries and are combined when appropriate.
SSL Configuration
Customize the SSL Configuration
By default, the connector attempts to negotiate SSL/TLS by checking the server's certificate against the system's trusted certificate store.
To specify another certificate, see the SSLServerCert property for the available formats to do so.
Data Model
The connector allows you to access data in Azure Table storage through a standard database-like interface. Azure tables are highly scalable NoSQL cloud databases that are very different from a regular database. In this section we describe how we model schemaless Azure tables as regular tables. See Tables, Table Columns, and Stored Procedures.
Tables
The list of tables is dynamically retrieved from your Azure account, so that any additions or deletions are immediately reflected when you connect with the connector. You can use the CreateTable stored procedure to create a new table.
Azure tables require both a Partition Key and a Row Key. Both will need to be set for all queries besides SELECT queries.
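For example (NorthwindProducts and the non-key columns here are illustrative only), an INSERT must supply both keys:
INSERT INTO NorthwindProducts (PartitionKey, RowKey, Name, Price) VALUES ('Beverages', '1001', 'Chai', 18.0)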
Table Columns
Since Azure Table Storage tables are schemaless, the connector offers the following two mechanisms to uncover the schema.
Dynamic Schemas
The columns of a table are dynamically determined by scanning data in the first few rows. You can adjust the number of rows that are used by modifying the RowScanDepth property. When TypeDetectionScheme is set to "RowScan", the row scan also determines the data type. The following table shows how the different data types supported by Azure Table Storage are modeled in the connector.
Azure Table Storage Type | Modeled Type | Sample Value |
---|---|---|
Edm.Binary | String | 10 |
Edm.Boolean | Boolean | True |
Edm.DateTime | Datetime | 2014-09-23 08:23:45 PM |
Edm.Double | Double | 3.14159265 |
Edm.Int32 | Integer | 123 |
Edm.Int64 | Long | 25000000000 |
Edm.String | String | USA |
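As a sketch (the values below are arbitrary), the scan depth and type detection can be adjusted through connection properties:
RowScanDepth=200;TypeDetectionScheme=RowScan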
Static Schemas
Instead of using dynamically discovered schemas, you can define your own schemas. This gives you more control over the projected columns and also enables you to manually define data types. Refer to the sample schema included with the connector to define your own schema: rename the sample schema file to match the name of the Azure Table Storage table and edit the column listing to use it for your own table.
Schemaless Operations
The schema of the table is necessary to retrieve data, whether you define it manually or obtain it dynamically. However, data may be inserted, updated, or deleted from columns that do not exist in the schema. Columns that do not already exist in the table schema will have their data types set to String.
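For example (DiscontinuedReason is a hypothetical column not present in the schema), the new column is written with the String type:
INSERT INTO NorthwindProducts (PartitionKey, RowKey, DiscontinuedReason) VALUES ('Beverages', '1002', 'Low demand')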
Stored Procedures
Stored procedures are available to complement the data available from the Data Model. It may be necessary to update data available from a view using a stored procedure because the data does not provide for direct, table-like, two-way updates. In these situations, the retrieval of the data is done using the appropriate view or table, while the update is done by calling a stored procedure. Stored procedures take a list of parameters and return a dataset that contains the collection of tuples that constitute the response.
Jitterbit Connector for Azure Table Storage Stored Procedures
Name | Description |
---|---|
CreateEntityWithTypes | Creates a new record with the given data types. If a column is not found on an existing entity, you will need to use this procedure to set the type of the property to something other than string. |
CreateSchema | Creates a schema file for the specified table or view. |
CreateTable | Creates a new table in the Azure system. |
DropTable | Drops an existing table in the Azure system. |
CreateEntityWithTypes
Creates a new record with the given data types. If a column is not found on an existing entity, you will need to use this procedure to set the type of the property to something other than string.
Note
This procedure makes use of indexed parameters. These input parameters are denoted with a # character at the end of their names.
Indexed parameters facilitate providing multiple instances of a single parameter as inputs for the procedure.
Suppose there is an input parameter named Param#. Input multiple instances of an indexed parameter like this:
EXEC ProcedureName Param#1 = "value1", Param#2 = "value2", Param#3 = "value3"
Input
Name | Type | Required | Description |
---|---|---|---|
TableName | String | True | The name of the table. |
PartitionKey | String | False | The partition key for the new entity. |
RowKey | String | False | The row key for the new entity. |
PropertyName# | String | False | The name for the property. |
PropertyType# | String | False | The data type for the property. The allowed values are string, integer, datetime, boolean, double, long, binary. |
PropertyValue# | String | False | The value for the property. |
Result Set Columns
Name | Type | Description |
---|---|---|
Success | String | Returns true if the entity was created, false otherwise. |
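As an illustrative call (the table, keys, and property values below are placeholders), a typed property can be created like this:
EXEC CreateEntityWithTypes TableName = 'NorthwindProducts', PartitionKey = 'Beverages', RowKey = '1003', PropertyName#1 = 'Price', PropertyType#1 = 'double', PropertyValue#1 = '19.99'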
CreateSchema
Creates a schema file for the specified table or view.
Input
Name | Type | Required | Description |
---|---|---|---|
TableName | String | True | The name of the table or view. |
FileName | String | True | The full file path and name of the schema to generate. Ex : 'C:\Users\User\Desktop\SmartSheet\sheet.rsd' |
Result Set Columns
Name | Type | Description |
---|---|---|
Result | String | Returns Success or Failure. |
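For example (the file path below is a placeholder), a schema file for the NorthwindProducts table could be generated with:
EXEC CreateSchema TableName = 'NorthwindProducts', FileName = 'C:\Users\User\Desktop\NorthwindProducts.rsd'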
CreateTable
Creates a new table in the Azure system.
Input
Name | Type | Required | Description |
---|---|---|---|
TableName | String | True | The name of the table. |
Result Set Columns
Name | Type | Description |
---|---|---|
Success | String | Returns true if the table was created, false otherwise. |
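For example (the table name below is a placeholder):
EXEC CreateTable TableName = 'MyNewTable'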
DropTable
Drops an existing table in the Azure system.
Input
Name | Type | Required | Description |
---|---|---|---|
TableName | String | True | The name of the table. |
Result Set Columns
Name | Type | Description |
---|---|---|
Success | String | Returns true if the table was deleted, false otherwise. |
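For example (the table name below is a placeholder):
EXEC DropTable TableName = 'MyNewTable'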
System Tables
You can query the system tables described in this section to access schema information, information on data source functionality, and batch operation statistics.
Schema Tables
The following tables return database metadata for Azure Table Storage:
- sys_catalogs: Lists the available databases.
- sys_schemas: Lists the available schemas.
- sys_tables: Lists the available tables and views.
- sys_tablecolumns: Describes the columns of the available tables and views.
- sys_procedures: Describes the available stored procedures.
- sys_procedureparameters: Describes stored procedure parameters.
- sys_keycolumns: Describes the primary and foreign keys.
- sys_indexes: Describes the available indexes.
Data Source Tables
The following tables return information about how to connect to and query the data source:
- sys_connection_props: Returns information on the available connection properties.
- sys_sqlinfo: Describes the SELECT queries that the connector can offload to the data source.
Query Information Tables
The following table returns query statistics for data modification queries:
- sys_identity: Returns information about batch operations or single updates.
sys_catalogs
Lists the available databases.
The following query retrieves all databases determined by the connection string:
SELECT * FROM sys_catalogs
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The database name. |
sys_schemas
Lists the available schemas.
The following query retrieves all available schemas:
SELECT * FROM sys_schemas
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The database name. |
SchemaName | String | The schema name. |
sys_tables
Lists the available tables.
The following query retrieves the available tables and views:
SELECT * FROM sys_tables
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The database containing the table or view. |
SchemaName | String | The schema containing the table or view. |
TableName | String | The name of the table or view. |
TableType | String | The table type (table or view). |
Description | String | A description of the table or view. |
IsUpdateable | Boolean | Whether the table can be updated. |
sys_tablecolumns
Describes the columns of the available tables and views.
The following query returns the columns and data types for the NorthwindProducts table:
SELECT ColumnName, DataTypeName FROM sys_tablecolumns WHERE TableName='NorthwindProducts'
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The name of the database containing the table or view. |
SchemaName | String | The schema containing the table or view. |
TableName | String | The name of the table or view containing the column. |
ColumnName | String | The column name. |
DataTypeName | String | The data type name. |
DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
Length | Int32 | The storage size of the column. |
DisplaySize | Int32 | The designated column's normal maximum width in characters. |
NumericPrecision | Int32 | The maximum number of digits in numeric data. The column length in characters for character and date-time data. |
NumericScale | Int32 | The column scale or number of digits to the right of the decimal point. |
IsNullable | Boolean | Whether the column can contain null. |
Description | String | A brief description of the column. |
Ordinal | Int32 | The sequence number of the column. |
IsAutoIncrement | String | Whether the column value is assigned in fixed increments. |
IsGeneratedColumn | String | Whether the column is generated. |
IsHidden | Boolean | Whether the column is hidden. |
IsArray | Boolean | Whether the column is an array. |
sys_procedures
Lists the available stored procedures.
The following query retrieves the available stored procedures:
SELECT * FROM sys_procedures
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The database containing the stored procedure. |
SchemaName | String | The schema containing the stored procedure. |
ProcedureName | String | The name of the stored procedure. |
Description | String | A description of the stored procedure. |
ProcedureType | String | The type of the procedure, such as PROCEDURE or FUNCTION. |
sys_procedureparameters
Describes stored procedure parameters.
The following query returns information about all of the input parameters for the CreateTable stored procedure:
SELECT * FROM sys_procedureparameters WHERE ProcedureName='CreateTable' AND Direction=1 OR Direction=2
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The name of the database containing the stored procedure. |
SchemaName | String | The name of the schema containing the stored procedure. |
ProcedureName | String | The name of the stored procedure containing the parameter. |
ColumnName | String | The name of the stored procedure parameter. |
Direction | Int32 | An integer corresponding to the type of the parameter: input (1), input/output (2), or output (4). Input/output type parameters can be both input and output parameters. |
DataTypeName | String | The name of the data type. |
DataType | Int32 | An integer indicating the data type. This value is determined at run time based on the environment. |
Length | Int32 | The number of characters allowed for character data. The number of digits allowed for numeric data. |
NumericPrecision | Int32 | The maximum precision for numeric data. The column length in characters for character and date-time data. |
NumericScale | Int32 | The number of digits to the right of the decimal point in numeric data. |
IsNullable | Boolean | Whether the parameter can contain null. |
IsRequired | Boolean | Whether the parameter is required for execution of the procedure. |
IsArray | Boolean | Whether the parameter is an array. |
Description | String | The description of the parameter. |
Ordinal | Int32 | The index of the parameter. |
sys_keycolumns
Describes the primary and foreign keys.
The following query retrieves the primary key for the NorthwindProducts table:
SELECT * FROM sys_keycolumns WHERE IsKey='True' AND TableName='NorthwindProducts'
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The name of the database containing the key. |
SchemaName | String | The name of the schema containing the key. |
TableName | String | The name of the table containing the key. |
ColumnName | String | The name of the key column. |
IsKey | Boolean | Whether the column is a primary key in the table referenced in the TableName field. |
IsForeignKey | Boolean | Whether the column is a foreign key referenced in the TableName field. |
PrimaryKeyName | String | The name of the primary key. |
ForeignKeyName | String | The name of the foreign key. |
ReferencedCatalogName | String | The database containing the primary key. |
ReferencedSchemaName | String | The schema containing the primary key. |
ReferencedTableName | String | The table containing the primary key. |
ReferencedColumnName | String | The column name of the primary key. |
sys_foreignkeys
Describes the foreign keys.
The following query retrieves all foreign keys which refer to other tables:
SELECT * FROM sys_foreignkeys WHERE ForeignKeyType = 'FOREIGNKEY_TYPE_IMPORT'
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The name of the database containing the key. |
SchemaName | String | The name of the schema containing the key. |
TableName | String | The name of the table containing the key. |
ColumnName | String | The name of the key column. |
PrimaryKeyName | String | The name of the primary key. |
ForeignKeyName | String | The name of the foreign key. |
ReferencedCatalogName | String | The database containing the primary key. |
ReferencedSchemaName | String | The schema containing the primary key. |
ReferencedTableName | String | The table containing the primary key. |
ReferencedColumnName | String | The column name of the primary key. |
ForeignKeyType | String | Designates whether the foreign key is an import (points to other tables) or export (referenced from other tables) key. |
sys_indexes
Describes the available indexes. By filtering on indexes, you can write more selective queries with faster query response times.
The following query retrieves all indexes that are not primary keys:
SELECT * FROM sys_indexes WHERE IsPrimary='false'
Columns
Name | Type | Description |
---|---|---|
CatalogName | String | The name of the database containing the index. |
SchemaName | String | The name of the schema containing the index. |
TableName | String | The name of the table containing the index. |
IndexName | String | The index name. |
ColumnName | String | The name of the column associated with the index. |
IsUnique | Boolean | True if the index is unique. False otherwise. |
IsPrimary | Boolean | True if the index is a primary key. False otherwise. |
Type | Int16 | An integer value corresponding to the index type: statistic (0), clustered (1), hashed (2), or other (3). |
SortOrder | String | The sort order: A for ascending or D for descending. |
OrdinalPosition | Int16 | The sequence number of the column in the index. |
sys_connection_props
Returns information on the available connection properties and those set in the connection string.
When querying this table, the config connection string should be used:
jdbc:cdata:azuretables:config:
This connection string enables you to query this table without a valid connection.
The following query retrieves all connection properties that have been set in the connection string or set through a default value:
SELECT * FROM sys_connection_props WHERE Value <> ''
Columns
Name | Type | Description |
---|---|---|
Name | String | The name of the connection property. |
ShortDescription | String | A brief description. |
Type | String | The data type of the connection property. |
Default | String | The default value if one is not explicitly set. |
Values | String | A comma-separated list of possible values. A validation error is thrown if another value is specified. |
Value | String | The value you set or a preconfigured default. |
Required | Boolean | Whether the property is required to connect. |
Category | String | The category of the connection property. |
IsSessionProperty | String | Whether the property is a session property, used to save information about the current connection. |
Sensitivity | String | The sensitivity level of the property. This informs whether the property is obfuscated in logging and authentication forms. |
PropertyName | String | A camel-cased truncated form of the connection property name. |
Ordinal | Int32 | The index of the parameter. |
CatOrdinal | Int32 | The index of the parameter category. |
Hierarchy | String | Shows the dependent properties that need to be set alongside this one. |
Visible | Boolean | Informs whether the property is visible in the connection UI. |
ETC | String | Various miscellaneous information about the property. |
sys_sqlinfo
Describes the SELECT query processing that the connector can offload to the data source.
Collaborative Query Processing
When working with data sources that do not support SQL-92, you can query the sys_sqlinfo view to determine the query capabilities of the underlying APIs, expressed in SQL syntax. The connector offloads as much of the SELECT statement processing as possible to the server and then processes the rest of the query in memory.
Discovering the Data Source's SELECT Capabilities
Below is an example data set of SQL capabilities. The following result set indicates the SELECT functionality that the connector can offload to the data source or process client side. Your data source may support additional SQL syntax. Some aspects of SELECT functionality are returned in a comma-separated list if supported; otherwise, the column contains NO.
Name | Description | Possible Values |
---|---|---|
AGGREGATE_FUNCTIONS | Supported aggregation functions. | AVG , COUNT , MAX , MIN , SUM , DISTINCT |
COUNT | Whether COUNT function is supported. | YES , NO |
IDENTIFIER_QUOTE_OPEN_CHAR | The opening character used to escape an identifier. | [ |
IDENTIFIER_QUOTE_CLOSE_CHAR | The closing character used to escape an identifier. | ] |
SUPPORTED_OPERATORS | A list of supported SQL operators. | = , > , < , >= , <= , <> , != , LIKE , NOT LIKE , IN , NOT IN , IS NULL , IS NOT NULL , AND , OR |
GROUP_BY | Whether GROUP BY is supported, and, if so, the degree of support. | NO , NO_RELATION , EQUALS_SELECT , SQL_GB_COLLATE |
STRING_FUNCTIONS | Supported string functions. | LENGTH , CHAR , LOCATE , REPLACE , SUBSTRING , RTRIM , LTRIM , RIGHT , LEFT , UCASE , SPACE , SOUNDEX , LCASE , CONCAT , ASCII , REPEAT , OCTET , BIT , POSITION , INSERT , TRIM , UPPER , REGEXP , LOWER , DIFFERENCE , CHARACTER , SUBSTR , STR , REVERSE , PLAN , UUIDTOSTR , TRANSLATE , TRAILING , TO , STUFF , STRTOUUID , STRING , SPLIT , SORTKEY , SIMILAR , REPLICATE , PATINDEX , LPAD , LEN , LEADING , KEY , INSTR , INSERTSTR , HTML , GRAPHICAL , CONVERT , COLLATION , CHARINDEX , BYTE |
NUMERIC_FUNCTIONS | Supported numeric functions. | ABS , ACOS , ASIN , ATAN , ATAN2 , CEILING , COS , COT , EXP , FLOOR , LOG , MOD , SIGN , SIN , SQRT , TAN , PI , RAND , DEGREES , LOG10 , POWER , RADIANS , ROUND , TRUNCATE |
TIMEDATE_FUNCTIONS | Supported date/time functions. | NOW , CURDATE , DAYOFMONTH , DAYOFWEEK , DAYOFYEAR , MONTH , QUARTER , WEEK , YEAR , CURTIME , HOUR , MINUTE , SECOND , TIMESTAMPADD , TIMESTAMPDIFF , DAYNAME , MONTHNAME , CURRENT_DATE , CURRENT_TIME , CURRENT_TIMESTAMP , EXTRACT |
REPLICATION_SKIP_TABLES | Indicates tables skipped during replication. | |
REPLICATION_TIMECHECK_COLUMNS | A string array containing a list of columns which will be checked, in the given order, for use as a modified column during replication. | |
IDENTIFIER_PATTERN | String value indicating what string is valid for an identifier. | |
SUPPORT_TRANSACTION | Indicates if the provider supports transactions such as commit and rollback. | YES , NO |
DIALECT | Indicates the SQL dialect to use. | |
KEY_PROPERTIES | Indicates the properties which identify the uniform database. | |
SUPPORTS_MULTIPLE_SCHEMAS | Indicates if multiple schemas may exist for the provider. | YES , NO |
SUPPORTS_MULTIPLE_CATALOGS | Indicates if multiple catalogs may exist for the provider. | YES , NO |
DATASYNCVERSION | The Data Sync version needed to access this driver. | Standard , Starter , Professional , Enterprise |
DATASYNCCATEGORY | The Data Sync category of this driver. | Source , Destination , Cloud Destination |
SUPPORTSENHANCEDSQL | Whether enhanced SQL functionality beyond what is offered by the API is supported. | TRUE , FALSE |
SUPPORTS_BATCH_OPERATIONS | Whether batch operations are supported. | YES , NO |
SQL_CAP | All supported SQL capabilities for this driver. | SELECT , INSERT , DELETE , UPDATE , TRANSACTIONS , ORDERBY , OAUTH , ASSIGNEDID , LIMIT , LIKE , BULKINSERT , COUNT , BULKDELETE , BULKUPDATE , GROUPBY , HAVING , AGGS , OFFSET , REPLICATE , COUNTDISTINCT , JOINS , DROP , CREATE , DISTINCT , INNERJOINS , SUBQUERIES , ALTER , MULTIPLESCHEMAS , GROUPBYNORELATION , OUTERJOINS , UNIONALL , UNION , UPSERT , GETDELETED , CROSSJOINS , GROUPBYCOLLATE , MULTIPLECATS , FULLOUTERJOIN , MERGE , JSONEXTRACT , BULKUPSERT , SUM , SUBQUERIESFULL , MIN , MAX , JOINSFULL , XMLEXTRACT , AVG , MULTISTATEMENTS , FOREIGNKEYS , CASE , LEFTJOINS , COMMAJOINS , WITH , LITERALS , RENAME , NESTEDTABLES , EXECUTE , BATCH , BASIC , INDEX |
PREFERRED_CACHE_OPTIONS | A string value specifies the preferred cacheOptions. | |
ENABLE_EF_ADVANCED_QUERY | Indicates if the driver directly supports advanced queries coming from Entity Framework. If not, queries will be handled client side. | YES , NO |
PSEUDO_COLUMNS | A string array indicating the available pseudo columns. | |
MERGE_ALWAYS | If the value is true, the Merge Mode is forcibly executed in Data Sync. | TRUE , FALSE |
REPLICATION_MIN_DATE_QUERY | A select query to return the replicate start datetime. | |
REPLICATION_MIN_FUNCTION | Allows a provider to specify the formula name to use for executing a server side min. | |
REPLICATION_START_DATE | Allows a provider to specify a replicate startdate. | |
REPLICATION_MAX_DATE_QUERY | A select query to return the replicate end datetime. | |
REPLICATION_MAX_FUNCTION | Allows a provider to specify the formula name to use for executing a server side max. | |
IGNORE_INTERVALS_ON_INITIAL_REPLICATE | A list of tables which will skip dividing the replicate into chunks on the initial replicate. | |
CHECKCACHE_USE_PARENTID | Indicates whether the CheckCache statement should be done against the parent key column. | TRUE , FALSE |
CREATE_SCHEMA_PROCEDURES | Indicates stored procedures that can be used for generating schema files. |
The following query retrieves the operators that can be used in the WHERE clause:
SELECT * FROM sys_sqlinfo WHERE Name='SUPPORTED_OPERATORS'
Note that individual tables may have different limitations or requirements on the WHERE clause; refer to the Data Model section for more information.
Columns
Name | Type | Description |
---|---|---|
NAME | String | A component of SQL syntax, or a capability that can be processed on the server. |
VALUE | String | Detail on the supported SQL or SQL syntax. |
sys_identity
Returns information about attempted modifications.
The following query retrieves the Ids of the modified rows in a batch operation:
SELECT * FROM sys_identity
Columns
Name | Type | Description |
---|---|---|
Id | String | The database-generated ID returned from a data modification operation. |
Batch | String | An identifier for the batch. 1 for a single operation. |
Operation | String | The result of the operation in the batch: INSERTED, UPDATED, or DELETED. |
Message | String | SUCCESS or an error message if the update in the batch failed. |
Data Type Mapping
Data Type Mappings
The connector maps types from the data source to the corresponding data type available in the schema. The table below documents these mappings.
Azure Table Storage (OData V3) | Schema |
---|---|
Edm.Binary | binary |
Edm.Boolean | bool |
Edm.DateTime | datetime |
Edm.Double | double |
Edm.Guid | guid |
Edm.Int32 | int |
Edm.Int64 | long |
Edm.String | string |
Edm.TimeOfDay | time |
Advanced Configurations Properties
The advanced configurations properties are the various options that can be used to establish a connection. This section provides a complete list of the options you can configure. Click the links for further details.
Property | Description |
---|---|
Account | The Windows Azure Storage account name. |
UseSSL | This field sets whether SSL is enabled. The default is true. |
AccessKey | The key for the storage account. |
AuthScheme | The scheme used for authentication. Accepted entries are AccessToken and SharedAccessSignature. |
Backend | The backend where data is stored. |
SharedAccessSignature | A shared access key signature that may be used for authentication. |
Property | Description |
---|---|
SSLServerCert | The certificate to be accepted from the server when connecting using TLS/SSL. |
Property | Description |
---|---|
Location | A path to the directory that contains the schema files defining tables, views, and stored procedures. |
BrowsableSchemas | This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA, SchemaB, SchemaC. |
Views | Restricts the views reported to a subset of the available tables. For example, Views=ViewA, ViewB, ViewC. |
Tables | This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA, TableB, TableC. |
Property | Description |
---|---|
GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
MaxRows | Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time. |
Other | These hidden properties are used only in specific use cases. |
PseudoColumns | This property indicates whether or not to include pseudo columns as columns to the table. |
RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
Timeout | The value in seconds until the timeout error is thrown, canceling the operation. |
TypeDetectionScheme | Determines how the data types of columns are detected. |
Authentication
This section provides a complete list of authentication properties you can configure.
Property | Description |
---|---|
Account | The Windows Azure Storage account name. |
UseSSL | This field sets whether SSL is enabled. The default is true. |
AccessKey | The key for the storage account. |
AuthScheme | The scheme used for authentication. Accepted entries are AccessToken and SharedAccessSignature. |
Backend | The backend where data is stored. |
SharedAccessSignature | A shared access key signature that may be used for authentication. |
Account
The Windows Azure Storage account name.
Data Type
string
Default Value
""
Remarks
The Windows Azure storage account name. To access your Azure storage accounts or create one, log into the Azure portal and click Storage Accounts in the services menu on the left.
If Backend is set to AzureStack, set the Account as the complete host for your data. For example: <account>.nm.azs.sandia.gov.
If Backend is set to Emulator, set the Account as the complete host. For example: http://127.0.0.1:10002/devstoreaccount1.
UseSSL
This field sets whether SSL is enabled. The default is true.
Data Type
bool
Default Value
true
Remarks
This field sets whether SSL is enabled. The default is true.
AccessKey
The key for the storage account.
Data Type
string
Default Value
""
Remarks
To obtain the Access Key, log into the Azure portal and click Storage Accounts in the services menu on the left to list the storage accounts. Select a storage account and click Access Keys in the Settings section. Either key1 or key2 can be used.
If using CosmosDB as the Backend, instead navigate to Cosmos DB service and select your Azure Cosmos DB Account. From the resource menu, go to the Connection String page. Find the PRIMARY KEY value and set AccessKey to this value.
AuthScheme
The scheme used for authentication. Accepted entries are AccessToken and SharedAccessSignature.
Possible Values
AccessToken
, SharedAccessSignature
Data Type
string
Default Value
SharedAccessSignature
Remarks
Together with Account, this field determines how the connector authenticates against the service:
- AccessToken: Indicates authentication will be done via AccessKey and Account.
- SharedAccessSignature: Indicates authentication will be done via SharedAccessSignature and Account. Generally Tables must also be set.
Backend
The backend where data is stored.
Possible Values
AzureStack
, CosmosDB
, Storage
, Emulator
Data Type
string
Default Value
Storage
Remarks
Data may be retrieved from the classic Azure Table Storage repository, the Azure Storage Emulator, or CosmosDB. Alternatively, you may specify AzureStack if you are using Azure Stack Hub. If you choose to do so, specify the Account as the complete host for your data. For example: <account>.nm.azs.sandia.gov.
If you choose Emulator, specify the Account and SharedAccessSignature connection properties.
SharedAccessSignature
A shared access key signature that may be used for authentication.
Data Type
string
Default Value
""
Remarks
A shared access signature. May be used instead of an AccessKey to access resources.
SSL
This section provides a complete list of SSL properties you can configure.
Property | Description |
---|---|
SSLServerCert | The certificate to be accepted from the server when connecting using TLS/SSL. |
SSLServerCert
The certificate to be accepted from the server when connecting using TLS/SSL.
Data Type
string
Default Value
""
Remarks
If using a TLS/SSL connection, this property can be used to specify the TLS/SSL certificate to be accepted from the server. Any other certificate that is not trusted by the machine is rejected.
This property can take the following forms:
Description | Example |
---|---|
A full PEM Certificate (example shortened for brevity) | -----BEGIN CERTIFICATE----- MIIChTCCAe4CAQAwDQYJKoZIhv......Qw== -----END CERTIFICATE----- |
A path to a local file containing the certificate | C:\cert.cer |
The public key (example shortened for brevity) | -----BEGIN RSA PUBLIC KEY----- MIGfMA0GCSq......AQAB -----END RSA PUBLIC KEY----- |
The MD5 Thumbprint (hex values can also be either space or colon separated) | ecadbdda5a1529c58a1e9e09828d70e4 |
The SHA1 Thumbprint (hex values can also be either space or colon separated) | 34a929226ae0819f2ec14b4a3d904f801cbb150d |
If not specified, any certificate trusted by the machine is accepted.
Certificates are validated as trusted by the machine based on the System's trust store. The trust store used is the 'javax.net.ssl.trustStore' value specified for the system. If no value is specified for this property, Java's default trust store is used (for example, JAVA_HOME\lib\security\cacerts).
Use '*' to accept all certificates. Note that this is not recommended due to security concerns.
Schema
This section provides a complete list of schema properties you can configure.
Property | Description |
---|---|
Location | A path to the directory that contains the schema files defining tables, views, and stored procedures. |
BrowsableSchemas | This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA, SchemaB, SchemaC. |
Views | Restricts the views reported to a subset of the available tables. For example, Views=ViewA, ViewB, ViewC. |
Tables | This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA, TableB, TableC. |
Location
A path to the directory that contains the schema files defining tables, views, and stored procedures.
Data Type
string
Default Value
%APPDATA%\AzureTables Data Provider\Schema
Remarks
The path to a directory which contains the schema files for the connector (.rsd files for tables and views, .rsb files for stored procedures). The folder location can be a relative path from the location of the executable. The Location property is only needed if you want to customize definitions (for example, change a column name, ignore a column, and so on) or extend the data model with new tables, views, or stored procedures.
If left unspecified, the default location is "%APPDATA%\AzureTables Data Provider\Schema" with %APPDATA% being set to the user's configuration directory:
Platform | %APPDATA% |
---|---|
Windows | The value of the APPDATA environment variable |
Mac | ~/Library/Application Support |
Linux | ~/.config |
BrowsableSchemas
This property restricts the schemas reported to a subset of the available schemas. For example, BrowsableSchemas=SchemaA,SchemaB,SchemaC.
Data Type
string
Default Value
""
Remarks
Listing the schemas from databases can be expensive. Providing a list of schemas in the connection string improves the performance.
Views
Restricts the views reported to a subset of the available tables. For example, Views=ViewA,ViewB,ViewC.
Data Type
string
Default Value
""
Remarks
Listing the views from some databases can be expensive. Providing a list of views in the connection string improves the performance of the connector.
This property can also be used as an alternative to automatically listing views if you already know which ones you want to work with and there would otherwise be too many to work with.
Specify the views you want in a comma-separated list. Each view should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Views=ViewA,[ViewB/WithSlash],WithCatalog.WithSchema.`ViewC With Space`.
Note that when connecting to a data source with multiple schemas or catalogs, you will need to provide the fully qualified name of the table in this property, as in the last example here, to avoid ambiguity between tables that exist in multiple catalogs or schemas.
Tables
This property restricts the tables reported to a subset of the available tables. For example, Tables=TableA,TableB,TableC.
Data Type
string
Default Value
""
Remarks
When using SharedAccessSignature, in general Tables should be set to specify which resource the signature is for.
This property can also be used as an alternative to automatically listing tables if you already know which ones you want to work with and there would otherwise be too many to work with.
Specify the tables you want in a comma-separated list. Each table should be a valid SQL identifier with any special characters escaped using square brackets, double-quotes or backticks. For example, Tables=TableA,[TableB/WithSlash],WithCatalog.WithSchema.`TableC With Space`.
Miscellaneous
This section provides a complete list of miscellaneous properties you can configure.
Property | Description |
---|---|
GenerateSchemaFiles | Indicates the user preference as to when schemas should be generated and saved. |
MaxRows | Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time. |
Other | These hidden properties are used only in specific use cases. |
PseudoColumns | This property indicates whether or not to include pseudo columns as columns to the table. |
RowScanDepth | The maximum number of rows to scan to look for the columns available in a table. |
Timeout | The value in seconds until the timeout error is thrown, canceling the operation. |
TypeDetectionScheme | Determines how the data types of columns are detected. |
GenerateSchemaFiles
Indicates the user preference as to when schemas should be generated and saved.
Possible Values
Never
, OnUse
, OnStart
, OnCreate
Data Type
string
Default Value
Never
Remarks
This property outputs schemas to .rsd files in the path specified by Location.
Available settings are the following:
- Never: A schema file will never be generated.
- OnUse: A schema file will be generated the first time a table is referenced, provided the schema file for the table does not already exist.
- OnStart: A schema file will be generated at connection time for any tables that do not currently have a schema file.
- OnCreate: A schema file will be generated when a CREATE TABLE SQL query is run.
Note that if you want to regenerate a file, you will first need to delete it.
Generate Schemas with SQL
When you set GenerateSchemaFiles to OnUse, the connector generates schemas as you execute SELECT queries. Schemas are generated for each table referenced in the query.
When you set GenerateSchemaFiles to OnCreate, schemas are only generated when a CREATE TABLE query is executed.
Generate Schemas on Connection
Another way to use this property is to obtain schemas for every table in your database when you connect. To do so, set GenerateSchemaFiles to OnStart and connect.
MaxRows
Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time.
Data Type
int
Default Value
-1
Remarks
Limits the number of rows returned when no aggregation or group by is used in the query. This helps avoid performance issues at design time.
Other
These hidden properties are used only in specific use cases.
Data Type
string
Default Value
""
Remarks
The properties listed below are available for specific use cases. Normal driver use cases and functionality should not require these properties.
Specify multiple properties in a semicolon-separated list.
Integration and Formatting
Property | Description |
---|---|
DefaultColumnSize | Sets the default length of string fields when the data source does not provide column length in the metadata. The default value is 2000. |
ConvertDateTimeToGMT | Determines whether to convert date-time values to GMT, instead of the local time of the machine. |
RecordToFile=filename | Records the underlying socket data transfer to the specified file. |
PseudoColumns
This property indicates whether or not to include pseudo columns as columns to the table.
Data Type
string
Default Value
""
Remarks
This setting is particularly helpful in Entity Framework, which does not allow you to set a value for a pseudo column unless it is a table column. The value of this connection setting is of the format "Table1=Column1, Table1=Column2, Table2=Column3". You can use the "*" character to include all tables and all columns; for example, "*=*".
RowScanDepth
The maximum number of rows to scan to look for the columns available in a table.
Data Type
int
Default Value
50
Remarks
The columns in a table must be determined by scanning table rows. This value determines the maximum number of rows that will be scanned.
Setting a high value may decrease performance. Setting a low value may prevent the data type from being determined properly, especially when there is null data.
Timeout
The value in seconds until the timeout error is thrown, canceling the operation.
Data Type
int
Default Value
60
Remarks
If Timeout = 0, operations do not time out. The operations run until they complete successfully or until they encounter an error condition.
If Timeout expires and the operation is not yet complete, the connector throws an exception.
TypeDetectionScheme
Determines how the data types of columns are detected.
Possible Values
None
, RowScan
Data Type
string
Default Value
RowScan
Remarks
Property | Description |
---|---|
None | Setting TypeDetectionScheme to None will return all columns as the string type. |
RowScan | Setting TypeDetectionScheme to RowScan will scan rows to heuristically determine the data type. The RowScanDepth determines the number of rows to be scanned. |