Support Redshift COPY for bulk loads #24546
When loading data into Redshift using a CTAS statement, the Redshift connector defaults to the BaseJdbcClient behaviour of issuing batched INSERT statements. While this works well for most database systems, Redshift handles INSERT statements very slowly and, according to this article by AWS, it is considered an anti-pattern; the same is true of most OLAP systems. In real-world use, we see roughly 300 rows per second per Redshift node.
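For context, here is a minimal sketch of the JDBC batching pattern that the current write path effectively boils down to; the table, columns, and `Row` type are hypothetical stand-ins, and the real code lives in BaseJdbcClient and its page sink rather than looking exactly like this:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Illustrative only: roughly the JDBC batching the current write path uses.
// Redshift processes these as row-by-row INSERTs internally, which is why
// throughput is so low compared to a bulk load.
class BatchedInsertExample
{
    record Row(long id, String name) {} // hypothetical stand-in for a Trino Page

    static void write(String jdbcUrl, List<Row> rows)
            throws SQLException
    {
        try (Connection connection = DriverManager.getConnection(jdbcUrl);
                PreparedStatement statement = connection.prepareStatement(
                        "INSERT INTO my_table (id, name) VALUES (?, ?)")) {
            for (Row row : rows) {
                statement.setLong(1, row.id());
                statement.setString(2, row.name());
                statement.addBatch();
            }
            statement.executeBatch(); // flushed every N rows in practice
        }
    }
}
```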
I was wondering if there was an appetite to improve this using the Redshift COPY statement? I think it would work as follows: the connector writes the data for each query out to S3 under a prefix such as s3://my-bucket/my-prefix/trino_query_id/parts, then issues a single COPY ... FROM ... statement to load those staged files into the target table.
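To make that concrete, here is a minimal sketch of the load step, assuming the part files have already been written to S3. The schema, table, bucket, and IAM role ARN are placeholders, and FORMAT AS PARQUET is just one of the formats COPY accepts:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Sketch of the final load step: a single COPY pulls in every part file under
// the query's S3 prefix, letting Redshift parallelize the load across slices.
class RedshiftCopyExample
{
    static void copyFromS3(String jdbcUrl, String queryId)
            throws SQLException
    {
        String sql = """
                COPY my_schema.my_table
                FROM 's3://my-bucket/my-prefix/%s/parts/'
                IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-load-role'
                FORMAT AS PARQUET
                """.formatted(queryId);
        try (Connection connection = DriverManager.getConnection(jdbcUrl);
                Statement statement = connection.createStatement()) {
            statement.execute(sql);
        }
    }
}
```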
I can put my hand up to implement this if it is felt that it would be useful to others. Based on looking through similar implementations, the way to do it seems to be to implement the PageSink and then the PageSinkProvider for this operation, as well as any surrounding credential providers. A sketch of the shape I have in mind follows. Please let me know if I am overlooking anything in thinking about it this way, or missing anything major.
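Here is a minimal sketch of a ConnectorPageSink built around this idea. S3PartWriter and CopyRunner are hypothetical helpers (one serializes pages to part files under the query's S3 prefix, the other issues the COPY over JDBC), buffering and error handling are elided, and the real change would also need a matching ConnectorPageSinkProvider:

```java
import io.airlift.slice.Slice;
import io.trino.spi.Page;
import io.trino.spi.connector.ConnectorPageSink;

import java.util.Collection;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Minimal sketch of a COPY-based page sink. Pages are staged to S3 as part
// files instead of being batched into INSERTs; finish() triggers one COPY.
public class RedshiftCopyPageSink
        implements ConnectorPageSink
{
    private final S3PartWriter partWriter; // hypothetical: writes pages under s3://.../<queryId>/parts/
    private final CopyRunner copyRunner;   // hypothetical: executes COPY ... FROM ... via JDBC

    public RedshiftCopyPageSink(S3PartWriter partWriter, CopyRunner copyRunner)
    {
        this.partWriter = partWriter;
        this.copyRunner = copyRunner;
    }

    @Override
    public CompletableFuture<?> appendPage(Page page)
    {
        // Stage the page on S3 rather than issuing batched INSERTs
        partWriter.write(page);
        return NOT_BLOCKED;
    }

    @Override
    public CompletableFuture<Collection<Slice>> finish()
    {
        // All parts are on S3; one COPY loads them into the target table
        partWriter.close();
        copyRunner.copyIntoTargetTable();
        return CompletableFuture.completedFuture(List.of());
    }

    @Override
    public void abort()
    {
        // Best effort: remove staged part files so nothing gets loaded
        partWriter.deleteStagedParts();
    }

    interface S3PartWriter
    {
        void write(Page page);
        void close();
        void deleteStagedParts();
    }

    interface CopyRunner
    {
        void copyIntoTargetTable();
    }
}
```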
I have noticed, though, that @mayankvadariya is implementing an adjacent feature in #24117, so I don't want to step on any toes or duplicate effort if this is already being worked on.
Comments

hi @brendanstennett, #24117 just improves the read path by implementing Redshift UNLOAD.

What's the use case for moving data into Redshift using Trino?

Thanks @mayankvadariya, this is great functionality. Thank you for implementing!

@raunaqmorarka For multi-cloud ETL jobs coming out of a relational database into Redshift to support other operational use cases. I definitely understand you typically see this going the other direction, but all sorts of interesting use cases open up with easy movement of data in either direction. I'll see where I can devote some cycles to look into this.