Commit f1d902b

Update endpoint docs, add HEAD rows doc, data versioning doc

1 parent 2542407, commit f1d902b

5 files changed: 75 additions, 8 deletions

api-reference/v2/tables/get-table-rows.mdx

Lines changed: 4 additions & 0 deletions
@@ -2,3 +2,7 @@
 title: Get Rows
 openapi: get /tables/{tableID}/rows
 ---
+
+You can use the `limit` query parameter to specify the maximum number of rows to return for each request.
+
+Whether or not you supply a `limit`, you may need to make multiple requests to retrieve all the rows in the table. If there are more rows to fetch, the `continuation` field will be set in the response. To retrieve the next page of rows, make another request with the `continuation` as a query parameter.
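The pagination flow described above can be sketched as follows. Here `fetch_page` stands in for a hypothetical client wrapper around `GET /tables/{tableID}/rows`, and the response is assumed to carry the page of rows in a `data` field alongside the optional `continuation`:

```python
def fetch_all_rows(fetch_page, limit=500):
    """Collect every row by following `continuation` tokens.

    `fetch_page(limit=..., continuation=...)` is a hypothetical helper
    that issues the GET request with those query parameters and returns
    the decoded JSON response. The `data` field name is an assumption.
    """
    rows = []
    continuation = None
    while True:
        page = fetch_page(limit=limit, continuation=continuation)
        rows.extend(page["data"])
        continuation = page.get("continuation")
        if not continuation:  # no more pages to fetch
            return rows
```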
api-reference/v2/tables/head-table-rows.mdx

Lines changed: 12 additions & 0 deletions

@@ -0,0 +1,12 @@
+---
+title: Get Rows Version
+openapi: head /tables/{tableID}/rows
+---
+
+Returns an `ETag` header containing the current version of the table data.
+
+This endpoint may be polled to detect changes in the table's data.
+
+<Tip>
+To learn more about versioning and how to detect changes, please see our guide on [data versioning](/api-reference/v2/tables/versioning).
+</Tip>
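A polling loop over this endpoint might look like the following sketch, assuming a hypothetical `get_version` helper that issues the HEAD request and returns the `ETag` response header:

```python
def wait_for_change(get_version, last_etag, poll, max_polls=60):
    """Poll until the table's version differs from `last_etag`.

    `get_version` is a hypothetical wrapper around
    HEAD /tables/{tableID}/rows returning the `ETag` header;
    `poll` is called between attempts (e.g. time.sleep wrapped
    with a fixed delay). Returns the new ETag, or None if no
    change was observed within `max_polls` attempts.
    """
    for _ in range(max_polls):
        etag = get_version()
        if etag != last_etag:
            return etag  # the table's data has changed
        poll()
    return None
```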

api-reference/v2/tables/put-tables.mdx

Lines changed: 36 additions & 8 deletions
@@ -3,18 +3,27 @@ title: Overwrite Table
 openapi: put /tables/{tableID}
 ---
 
-Overwrite an existing Big Table by clearing all rows and adding new data. You can also update the table schema.
+Overwrite an existing Big Table by clearing all rows and adding new data.
 
-When using a CSV or TSV request body, you cannot pass a schema. The current schema will be used. If you need to update the schema, use the `onSchemaError=updateSchema` query parameter, or [stash](/api-reference/v2/stashing/introduction) the CSV/TSV data and pass a JSON request body.
+### Updating the Schema
 
-<Warning>
-This is a destructive operation that cannot be undone.
-</Warning>
+You can optionally update the table schema at the same time by providing a new schema in the JSON request body. If you do not provide a schema, the existing schema will be used.
+
+When using a CSV or TSV request body, you cannot pass a schema. If you need to update the schema, use the `onSchemaError=updateSchema` query parameter, or [stash](/api-reference/v2/stashing/introduction) the CSV/TSV data and pass a JSON request body referencing the stash ID.
 
 ## Examples
 
 <AccordionGroup>
-<Accordion title="Overwrite Table w/ Row Data">
+<Accordion title="Clear table data">
+To clear all rows from a table, send a PUT request with an empty array in the `rows` field:
+
+```json
+{
+  "rows": []
+}
+```
+</Accordion>
+<Accordion title="Reset table data">
 If you want to reset a table's data with a small number of rows, you can do so by providing the data inline in the `rows` field (being sure that the row object structure matches the table schema):
 
 ```json
@@ -30,15 +39,34 @@ This is a destructive operation that cannot be undone.
 However, this is only appropriate for relatively small initial datasets (around a few hundred rows or less, depending on schema complexity). If you need to work with a larger dataset you should utilize stashing.
 </Accordion>
 
-<Accordion title="Overwrite Table from Stash">
+<Accordion title="Reset table data from Stash">
 [Stashing](/api-reference/v2/stashing/introduction) is our process for handling the upload of large datasets. Break down your dataset into smaller, more manageable, pieces and [upload them to a single stash ID](/api-reference/v2/stashing/post-stashes-serial).
 
 Then, to reset a table's data from the stash, use the `$stashID` reference in the `rows` field instead of providing the data inline:
 
 ```json
 {
-  "$stashID": "20240215-job32"
+  "rows": { "$stashID": "20240215-job32" }
 }
 ```
 </Accordion>
+<Accordion title="Batch update table rows">
+This endpoint can be combined with the [Get Rows](/api-reference/v2/tables/get-table-rows) endpoint and [stashing](/api-reference/v2/stashing/introduction) to fetch the current table data, modify it, and then overwrite the table with the updated data:
+
+1. Call the [Get Rows](/api-reference/v2/tables/get-table-rows) endpoint to fetch the current table data. Pass a reasonably high `limit` query parameter, because you want to fetch all rows in as few requests as possible.
+2. Modify the data as desired, and then stash the modified rows using the [Stash Data](/api-reference/v2/stashing/put-stashes-serial) endpoint.
+3. If a `continuation` was returned in step 1, repeat steps 1-3, passing the `continuation` query parameter to [Get Rows](/api-reference/v2/tables/get-table-rows) until all rows have been fetched, modified, and stashed.
+4. Finally, call this endpoint with the same stash ID used in step 2. This will overwrite the table with the updated data:
+
+```json
+{
+  "rows": { "$stashID": "20240215-job32" }
+}
+```
+<Warning>
+**Risk of Data Loss**
+
+We strongly recommend you use the `If-Match` header to ensure the table is not modified between steps 1 and 4. See the [Data Versioning](/api-reference/v2/tables/versioning) guide for more information.
+</Warning>
+</Accordion>
 </AccordionGroup>
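
The four batch-update steps above can be sketched as a read-modify-write loop. Every callable here is a hypothetical wrapper around the documented endpoints, and the response shape (a `data` field plus optional `continuation`, with the version arriving as an `ETag` header) is an assumption for the sketch:

```python
def batch_update(get_page, stash_page, overwrite, transform, stash_id):
    """Read-modify-write sketch of the four documented steps.

    Hypothetical wrappers: `get_page(continuation)` wraps Get Rows and
    returns (decoded response, ETag header); `stash_page(stash_id, rows)`
    wraps Stash Data; `overwrite(body, if_match=...)` wraps Overwrite
    Table and is expected to send `If-Match`, so a concurrent write
    fails with 412 Precondition Failed instead of silently losing data.
    """
    continuation, etag = None, None
    while True:
        page, page_etag = get_page(continuation)
        etag = etag or page_etag  # remember the version seen on first read
        stash_page(stash_id, [transform(row) for row in page["data"]])
        continuation = page.get("continuation")
        if not continuation:
            break
    # Overwrite the table from the stash, guarded by the first ETag.
    overwrite({"rows": {"$stashID": stash_id}}, if_match=etag)
```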

api-reference/v2/tables/versioning.mdx

Lines changed: 21 additions & 0 deletions

@@ -0,0 +1,21 @@
+---
+title: Data Versioning
+description: Detect updates and prevent data loss with versioning
+---
+
+Every Glide Big Table has a version number that increases whenever its data changes. Additionally, each row in a Big Table keeps track of the table version at which it was last modified. This information can be used to detect changes to data and prevent mid-air collisions that can result in data loss.
+
+## Getting the Current Version
+
+Most API endpoints that work with Big Tables return the current version of the table in the `ETag` response header.
+
+If you only want to get the current version of a table without performing any other operation, use the [Get Rows Version](/api-reference/v2/tables/head-table-rows) endpoint. You can poll this endpoint to detect changes to the table.
+
+## Preventing Data Loss
+
+When updating data in a Big Table, you can specify the version of the table you expect to be working with using the `If-Match` header. How this value is interpreted depends on the endpoint:
+
+- The [Overwrite Table](/api-reference/v2/tables/put-tables) endpoint rejects the request if the current table version is newer than the version specified in the `If-Match` header.
+- The [Update Row](/api-reference/v2/tables/patch-table-row) endpoint rejects the request if the row has been modified since the version specified in the `If-Match` header. The update is still allowed if another row in the table was modified since the specified version.
+
+In both cases, an HTTP `412 Precondition Failed` response is returned if the condition is not met.
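
The two precondition rules above can be modeled with a small illustrative function. Real `ETag` values are opaque strings; the monotonically increasing integer versions and direct comparisons below are a simplifying assumption for the sketch:

```python
def check_if_match(endpoint, if_match, table_version, row_version=None):
    """Illustrative model of the If-Match precondition rules.

    Returns True when the request would be accepted, False when the
    server would answer 412 Precondition Failed. Versions are modeled
    as integers that increase with every change (an assumption; real
    ETags are opaque).
    """
    if endpoint == "overwrite_table":
        # Rejected if the whole table has moved past the expected version.
        return table_version <= if_match
    if endpoint == "update_row":
        # Only this row's last-modified version matters; changes to
        # other rows do not cause a rejection.
        return row_version <= if_match
    raise ValueError(f"unknown endpoint: {endpoint}")
```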

mint.json

Lines changed: 2 additions & 0 deletions
@@ -34,8 +34,10 @@
     {
       "group": "Tables",
       "pages": [
+        "api-reference/v2/tables/versioning",
         "api-reference/v2/tables/get-tables",
         "api-reference/v2/tables/get-table-rows",
+        "api-reference/v2/tables/head-table-rows",
         "api-reference/v2/tables/post-tables",
         "api-reference/v2/tables/post-table-rows",
         "api-reference/v2/tables/put-tables",

0 commit comments
