# Table of contents

* [`Spec`](#Spec)
  * [`Spec`](#Spec-1)
  * [`Spec`](#Spec-2)
  * [`Spec`](#Spec-3)
  * [`Duration`](#Duration)

## <a name="Spec"></a>Spec

* `format` (`string`) (required) (possible values: `csv`, `json`, `parquet`)

  Output format.

* `format_spec` ([`Spec`](#Spec-1), [`Spec`](#Spec-2) or [`Spec`](#Spec-3)) (nullable)

* `compression` (`string`) (possible values: ` `, `gzip`)

  Compression type.
  An empty or missing value means no compression.

* `path` (`string`) (required)

  Path template string that determines where files will be written.

  The path supports the following placeholder variables:
  - `{{TABLE}}` will be replaced with the table name
  - `{{FORMAT}}` will be replaced with the file format, such as `csv`, `json` or `parquet`. If compression is enabled, the format will be `csv.gz`, `json.gz`, etc.
  - `{{UUID}}` will be replaced with a random UUID to uniquely identify each file
  - `{{YEAR}}` will be replaced with the current year in `YYYY` format
  - `{{MONTH}}` will be replaced with the current month in `MM` format
  - `{{DAY}}` will be replaced with the current day in `DD` format
  - `{{HOUR}}` will be replaced with the current hour in `HH` format
  - `{{MINUTE}}` will be replaced with the current minute in `mm` format

  **Note** that timestamps are in `UTC` and reflect the time each file is written, not the time the sync started.

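To illustrate, here is a hypothetical `path` template combining several of the placeholders above (the `cq-sync` prefix and the surrounding `format`/`compression` values are assumptions for the example, not defaults):

```yaml
# Hypothetical path template; with format: csv and compression: gzip,
# {{FORMAT}} resolves to csv.gz, so one resulting file might be
# cq-sync/users/2024/01/15/550e8400-e29b-41d4-a716-446655440000.csv.gz
format: csv
compression: gzip
path: "cq-sync/{{TABLE}}/{{YEAR}}/{{MONTH}}/{{DAY}}/{{UUID}}.{{FORMAT}}"
```
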
* `no_rotate` (`boolean`) (default: `false`)

  If set to `true`, the plugin will write to one file per table.
  Otherwise, a new file with a distinct `.<UUID>` suffix is created for every batch.

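For example, a sketch of a non-rotating configuration (the path layout is an assumption; it omits `{{UUID}}` on the premise that each table maps to a single file):

```yaml
# Sketch: write exactly one file per table, never rotating
no_rotate: true
path: "cq-sync/{{TABLE}}.{{FORMAT}}"
```
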
* `batch_size` (`integer`) (nullable) (range: `[1,+∞)`) (default: `10000`)

  This parameter controls the maximum number of items that may be grouped together in a single write.

  Defaults to `10000` unless `no_rotate` is `true` (in which case it is `0`).

* `batch_size_bytes` (`integer`) (nullable) (range: `[1,+∞)`) (default: `52428800`)

  This parameter controls the maximum total size, in bytes, of items that may be grouped together in a single write.

  Defaults to `52428800` (50 MiB) unless `no_rotate` is `true` (in which case it is `0`).

* `batch_timeout` ([`Duration`](#Duration)) (nullable) (default: `30s`)

  This parameter controls the maximum interval between batch writes.

  Defaults to `30s` unless `no_rotate` is `true` (in which case it is `0s`).

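Putting the three batching knobs together, a hypothetical tuning sketch (the specific values are illustrative, not recommendations):

```yaml
# Sketch: rotate files after 5000 rows, ~10 MiB, or one minute,
# whichever limit is reached first
batch_size: 5000
batch_size_bytes: 10485760  # 10 MiB
batch_timeout: 1m
```
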
### <a name="Spec-1"></a>Spec

CloudQuery CSV file output spec.

* `skip_header` (`boolean`) (default: `false`)

  If set to `true`, the header line is omitted from the output file.

* `delimiter` (`string`) ([pattern](https://json-schema.org/draft/2020-12/json-schema-validation#section-6.3.3): `^.$`) (default: `,`)

  Character that will be used as the delimiter.

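As an illustration, a hypothetical CSV `format_spec` (the semicolon delimiter is an arbitrary single character chosen to satisfy the `^.$` pattern):

```yaml
# Sketch: CSV output with no header row and a semicolon delimiter
format: csv
format_spec:
  skip_header: true
  delimiter: ";"
```
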
### <a name="Spec-2"></a>Spec

CloudQuery JSON file output spec.

(`object`)

### <a name="Spec-3"></a>Spec

CloudQuery Parquet file output spec.

(`object`)

### <a name="Duration"></a>Duration

CloudQuery `configtype.Duration`

(`string`) ([pattern](https://json-schema.org/draft/2020-12/json-schema-validation#section-6.3.3): `^[-+]?([0-9]*(\.[0-9]*)?[a-z]+)+$`)
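
The pattern accepts Go-style duration strings: one or more decimal numbers, each followed by a unit suffix. A few values that satisfy it, shown against the `batch_timeout` field above:

```yaml
batch_timeout: 30s      # thirty seconds
# batch_timeout: 1m30s  # ninety seconds
# batch_timeout: 1.5h   # ninety minutes
```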