File Settings for SQL Connectors¶
When a recipe node is configured with a SQL database source or sink that reads from or writes to a file, the system needs the file format and instructions on how to map and format the data.
Note
There are no settings for JSON output files used with SQL data sources.
How to¶
The Node Editor has file settings to define mappings for container nodes and DataMapper nodes.
SQL data source¶
- Container node: Navigate to the Inputs tab. Click Add Mapping. Fill in the required fields and the Output File Settings section.
- DataMapper node: Navigate to the Mappings tab. Click Add Mapping. Fill in the required fields and the Output File Settings section.
SQL data sink¶
- Container node: Navigate to the Outputs tab. Click Add Mapping > Sink. Fill in the required fields and the Sink > Input File Settings section.
- DataMapper node: Navigate to the Mappings tab. Click Add Mapping. Fill in the required fields and the File Settings section.
Column names file settings¶
| Option | Source output or sink input | Description |
|---|---|---|
| Insert column names as first row (insert-column-names) | source output | If checked, this setting instructs the system to create a header row from the database column names when writing data to the delimited file. |
| Skip first row as column names (first-row-column-names) | sink input | If checked, this setting instructs the system to ignore the first line of the delimited file when writing its contents to the SQL database. |
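The two options above mirror each other. The following minimal sketch, using Python's standard csv module (not Automation's internal code), illustrates the behavior: the source-output setting adds a header row, and the sink-input setting discards it before loading.

```python
import csv
import io

columns = ["id", "name"]
rows = [[1, "Ada"], [2, "Grace"]]

# Source output: insert-column-names writes a header row first.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(columns)   # header row from database column names
writer.writerows(rows)

# Sink input: first-row-column-names skips the header before loading.
reader = csv.reader(io.StringIO(buf.getvalue()))
next(reader)               # discard the header row
data = [row for row in reader]
print(data)  # → [['1', 'Ada'], ['2', 'Grace']]
```

If the sink-input setting were left unchecked for a file that has a header row, the column names themselves would be loaded into the database as a data row.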
Delimited text file settings¶
For a SQL data source, Automation retrieves data from the database and stores it in Python objects. When the system writes that data to a delimited file, it translates the values according to the settings below.
Similarly, when Automation reads data from a delimited file to write to a SQL data sink, the system must translate the file's text back into Python objects.
| Field | Source output or sink input | Description |
|---|---|---|
| Column Delimiter (col-delimiter) | both | The character used in the delimited file to separate the data fields for each database table column. The default value , separates fields with commas. |
| Row Delimiter (row-delimiter) | source output | The character sequence used in the delimited file to separate the lines for each database table row. The default value \r\n separates rows with a carriage return and line feed. |
| Date Format (date-format) | both | Python datetime format for dates in the delimited file. The default value %Y-%m-%d might be written to a file as 2020-08-27. See https://strftime.org/ for syntax. |
| Datetime Format (datetime-format) | both | Python datetime format for dates and times in the delimited file. The default value %Y-%m-%d %H:%M:%S might be written to a file as 2020-08-27 14:30:22. See https://strftime.org/ for syntax. |
| Time Format (time-format) | both | Python datetime format for times in the delimited file. The default value %H:%M:%S might be written to a file as 14:30:22. See https://strftime.org/ for syntax. |
| Integer Format (int-format) | both | Python string format for integers in the delimited file. The default value is %d. |
| Decimal Format (decimal-format) | both | Python string format for decimal numbers in the delimited file. The default value is %f for floating-point numbers. |
| Null Value (null-value) | both | How null values from a SQL source are written to a CSV file, and which strings from a CSV are stored as null in a SQL sink. The default value is an empty string, but users could specify NULL or None instead. |
| True Value (bool-format) | both | How a value stored as True in a SQL source is written to a CSV file, and which strings from a CSV are stored as True in a SQL sink. The default value is True, but users could specify Yes or 1 instead. |
| False Value (bool-format) | both | How a value stored as False in a SQL source is written to a CSV file, and which strings from a CSV are stored as False in a SQL sink. The default value is False, but users could specify No or 0 instead. |
Note
The settings above are available for sources and sinks using DB2, Microsoft SQL Server/Azure Synapse, MySQL, Oracle, PostgreSQL, Salesforce, Snowflake, and Teradata. Note that BigQuery only supports the Column Delimiter setting.
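To see how the default values in the table combine, the sketch below applies them to a hypothetical row of SQL values. This is plain Python illustrating the strftime and printf-style formats the settings use, not Automation's own implementation; the to_field helper and the sample row are invented for the example.

```python
from datetime import date, datetime, time

# Defaults from the table above.
date_format = "%Y-%m-%d"
datetime_format = "%Y-%m-%d %H:%M:%S"
time_format = "%H:%M:%S"
int_format = "%d"
decimal_format = "%f"
null_value = ""                      # empty string for SQL NULL
true_value, false_value = "True", "False"

def to_field(value):
    """Render one SQL value as a delimited-file field per the settings."""
    if value is None:
        return null_value
    if isinstance(value, bool):      # check bool before int: bool subclasses int
        return true_value if value else false_value
    if isinstance(value, datetime):  # check datetime before date: datetime subclasses date
        return value.strftime(datetime_format)
    if isinstance(value, date):
        return value.strftime(date_format)
    if isinstance(value, time):
        return value.strftime(time_format)
    if isinstance(value, int):
        return int_format % value
    if isinstance(value, float):
        return decimal_format % value
    return str(value)

row = [42, 3.5, date(2020, 8, 27), datetime(2020, 8, 27, 14, 30, 22), None, True]
print(",".join(to_field(v) for v in row))
# → 42,3.500000,2020-08-27,2020-08-27 14:30:22,,True
```

Note that the default %f renders 3.5 as 3.500000; a format such as %.2f would produce 3.50 instead.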
Visual example in Automation¶
A container node can be configured with a mapping on the Inputs tab that defines settings for the source's output file.
When Delimited Text is chosen for the Output File Settings, a SQL data source can create a comma-delimited text file (CSV) as the output:

Binary file input to SQL data sink¶
Note
There are no data mapping settings for binary files used as output from SQL data sources and input to SQL data sinks.
You can select Binary as the File Settings file format. Binary files are non-text, typically machine-readable data files like executables, configuration files, drivers, and libraries; application data can also be stored in binary files.
Visual example in Automation¶
A DataMapper node can be configured with a mapping on the Mappings tab that defines settings for the sink's input file:
