
Commit 7a07b75

docs: Added Keys Source Connector documentation
1 parent 93bd8cc commit 7a07b75

5 files changed: +78, -66 lines changed
Lines changed: 2 additions & 1 deletion
@@ -1,4 +1,5 @@
 :link_redis_enterprise: link:https://redis.com/redis-enterprise-software/overview/[Redis Enterprise]
 :link_lettuce_uri: link:https://github.com/lettuce-io/lettuce-core/wiki/Redis-URI-and-connection-details#uri-syntax[Redis URI Syntax]
 :link_redis_notif: link:https://redis.io/docs/manual/keyspace-notifications[Redis Keyspace Notifications]
-:link_manual_install: link:https://docs.confluent.io/home/connect/community.html#manually-installing-community-connectors/[Manually Installing Community Connectors]
+:link_manual_install: link:https://docs.confluent.io/home/connect/community.html#manually-installing-community-connectors/[Manually Installing Community Connectors]
+:link_redis_keys: https://redis.io/commands/keys/[Redis KEYS]

docs/guide/src/docs/asciidoc/docker.adoc

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 [[_docker]]
 = Quick Start with Docker
 
-This guide provides a hands-on look at the functionality of the Redis Kafka Source and Sink Connectors:
+This section provides a hands-on look at the functionality of the Redis Kafka Source and Sink Connectors:
 
 * The *redis-sink* connector reads data from a Kafka topic and writes it to a Redis stream
 * The *redis-source* connector reads data from a Redis stream and writes it to a Kafka topic

docs/guide/src/docs/asciidoc/keyreader.adoc

Lines changed: 0 additions & 26 deletions
This file was deleted.

docs/guide/src/docs/asciidoc/sink.adoc

Lines changed: 13 additions & 14 deletions
@@ -43,7 +43,7 @@ Use the following properties to write Kafka records as Redis hashes:
 
 [source,properties]
 ----
-redis.type=HASH
+redis.command=HSET
 key.converter=<string or bytes> <1>
 value.converter=<Avro or JSON> <2>
 ----
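
For reference, a filled-in configuration under the new `redis.command` style might look like the sketch below. The `RedisSinkConnector` class name, the `redis.uri` property, and the topic name `people` are illustrative assumptions; only the properties shown in the hunk above come from this commit.

[source,properties]
----
# Hypothetical hash sink: write records from topic "people" as Redis hashes
connector.class=com.redis.kafka.connect.RedisSinkConnector
topics=people
redis.uri=redis://localhost:6379
redis.command=HSET
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
----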
@@ -58,7 +58,7 @@ Use the following properties to write Kafka records as Redis strings:
 
 [source,properties]
 ----
-redis.type=STRING
+redis.command=SET
 key.converter=<string or bytes> <1>
 value.converter=<string or bytes> <2>
 ----
@@ -73,7 +73,7 @@ Use the following properties to write Kafka records as RedisJSON documents:
 
 [source,properties]
 ----
-redis.type=JSON
+redis.command=JSONSET
 key.converter=<string, bytes, or Avro> <1>
 value.converter=<string, bytes, or Avro> <2>
 ----
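
By analogy, a minimal RedisJSON sketch, with the same caveats as the hash example (class name, `redis.uri`, and topic name are assumptions), assuming record values already contain raw JSON strings:

[source,properties]
----
# Hypothetical JSON sink: store records from topic "products" as RedisJSON documents
connector.class=com.redis.kafka.connect.RedisSinkConnector
topics=products
redis.uri=redis://localhost:6379
redis.command=JSONSET
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.storage.StringConverter
----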
@@ -88,7 +88,7 @@ Use the following properties to store Kafka records as Redis stream messages:
 
 [source,properties]
 ----
-redis.type=STREAM
+redis.command=XADD
 redis.key=<stream key> <1>
 value.converter=<Avro or JSON> <2>
 ----
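
A stream sink sketch under the same assumptions; here `redis.key` names the target stream, per callout <1> above:

[source,properties]
----
# Hypothetical stream sink: append records from topic "orders" to the Redis stream "orders-stream"
connector.class=com.redis.kafka.connect.RedisSinkConnector
topics=orders
redis.uri=redis://localhost:6379
redis.command=XADD
redis.key=orders-stream
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
----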
@@ -102,15 +102,14 @@ Use the following properties to add Kafka record keys to a Redis list:
 
 [source,properties]
 ----
-redis.type=LIST
-redis.key=<key name> <1>
-key.converter=<string or bytes> <2>
-redis.push.direction=<LEFT or RIGHT> <3>
+redis.command=<LPUSH or RPUSH> <1>
+redis.key=<key name> <2>
+key.converter=<string or bytes> <3>
 ----
 
-<1> <<_collection_key,List key>>
-<2> <<_key_string,String>> or <<_key_bytes,bytes>>: Kafka record keys to push to the list
-<3> `LEFT`: LPUSH (default), `RIGHT`: RPUSH
+<1> `LPUSH` or `RPUSH`
+<2> <<_collection_key,List key>>
+<3> <<_key_string,String>> or <<_key_bytes,bytes>>: Kafka record keys to push to the list
 
 The Kafka record value can be any format.
 If a value is null then the member is removed from the list (instead of pushed to the list).
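
Note the migration implied by this hunk: the old `redis.push.direction` property disappears, and the push direction is now chosen by the command itself. A sketch of the new form, with the same illustrative assumptions as the earlier examples:

[source,properties]
----
# Hypothetical list sink: LPUSH record keys from topic "events" onto the list "recent-events"
connector.class=com.redis.kafka.connect.RedisSinkConnector
topics=events
redis.uri=redis://localhost:6379
redis.command=LPUSH
redis.key=recent-events
key.converter=org.apache.kafka.connect.storage.StringConverter
----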
@@ -121,7 +120,7 @@ Use the following properties to add Kafka record keys to a Redis set:
 
 [source,properties]
 ----
-redis.type=SET
+redis.command=SADD
 redis.key=<key name> <1>
 key.converter=<string or bytes> <2>
 ----
@@ -138,7 +137,7 @@ Use the following properties to add Kafka record keys to a Redis sorted set:
 
 [source,properties]
 ----
-redis.type=ZSET
+redis.command=ZADD
 redis.key=<key name> <1>
 key.converter=<string or bytes> <2>
 ----
@@ -155,7 +154,7 @@ Use the following properties to write Kafka records as RedisTimeSeries samples:
 
 [source,properties]
 ----
-redis.type=TIMESERIES
+redis.command=TSADD
 redis.key=<key name> <1>
 ----
 
docs/guide/src/docs/asciidoc/source.adoc

Lines changed: 62 additions & 24 deletions
@@ -2,21 +2,30 @@
 = Source Connector Guide
 :name: Redis Kafka Source Connector
 
-The {name} reads from a Redis stream and publishes messages to a Kafka topic.
+The {name} includes two source connectors:
+
+* <<_stream_source,Stream>>
+* <<_keys_source,Keys>>
+
+[[_stream_source]]
+== Stream
+
+The stream source connector reads from a Redis stream and publishes messages to a Kafka topic.
 It includes the following features:
 
-* <<_source_at_least_once_delivery,At least once delivery>>
-* <<_source_at_most_once_delivery,At most once delivery>>
-* <<_source_tasks,Multiple tasks>>
-* <<_stream_reader,Stream Reader>>
+* <<_stream_source_at_least_once_delivery,At least once delivery>>
+* <<_stream_source_at_most_once_delivery,At most once delivery>>
+* <<_stream_source_tasks,Multiple tasks>>
+* <<_stream_source_schema,Schema>>
+* <<_stream_source_config,Configuration>>
 
-== Delivery Guarantees
+=== Delivery Guarantees
 
 The {name} can be configured to ack stream messages either automatically (at-most-once delivery) or explicitly (at-least-once delivery).
 The default is at-least-once delivery.
 
-[[_source_at_least_once_delivery]]
-=== At-Least-Once
+[[_stream_source_at_least_once_delivery]]
+==== At-Least-Once
 
 In this mode, each stream message is acknowledged after it has been written to the corresponding topic.
 
@@ -25,8 +34,8 @@ In this mode, each stream message is acknowledged after it has been written to the corresponding topic.
 redis.stream.delivery=at-least-once
 ----
 
-[[_source_at_most_once_delivery]]
-=== At-Most-Once
+[[_stream_source_at_most_once_delivery]]
+==== At-Most-Once
 
 In this mode, stream messages are acknowledged as soon as they are read.
 
@@ -35,19 +44,14 @@ In this mode, stream messages are acknowledged as soon as they are read.
 redis.stream.delivery=at-most-once
 ----
 
-[[_source_tasks]]
-== Multiple Tasks
-Use configuration property `tasks.max` to have the change stream handled by multiple tasks.
+[[_stream_source_tasks]]
+=== Multiple Tasks
+Reading from the stream is done through a consumer group, so multiple instances of the connector configured via `tasks.max` can consume messages in a round-robin fashion.
+
 The connector splits the work based on the number of configured key patterns.
 When the number of tasks is greater than the number of patterns, the number of patterns will be used instead.
 
-
-[[_stream_reader]]
-== Stream Reader
-The {name} reads messages from a stream and publishes to a Kafka topic.
-Reading is done through a consumer group so that <<_source_tasks,multiple instances>> of the connector configured via the `tasks.max` can consume messages in a round-robin fashion.
-
-
+[[_stream_source_schema]]
 === Message Schema
 
 ==== Key Schema
@@ -66,16 +70,19 @@ The value schema defines the following fields:
 |body |Map of STRING|Stream message body
 |====
 
+[[_stream_source_config]]
 === Configuration
 
 [source,properties]
 ----
+connector.class=com.redis.kafka.connect.RedisStreamSourceConnector
 redis.stream.name=<name> <1>
 redis.stream.offset=<offset> <2>
 redis.stream.block=<millis> <3>
 redis.stream.consumer.group=<group> <4>
 redis.stream.consumer.name=<name> <5>
-topic=<name> <6>
+redis.stream.delivery=<mode> <6>
+topic=<name> <7>
 ----
 
 <1> Name of the stream to read from.
@@ -85,10 +92,41 @@ topic=<name> <6>
 <5> Name of the stream consumer (default: `consumer-${task}`).
 May contain `${task}` as a placeholder for the task id.
 For example, `foo${task}` and task `123` => consumer `foo123`.
-<6> Destination topic (default: `${stream}`).
+<6> Delivery mode: `at-least-once` or `at-most-once` (default: `at-least-once`).
+<7> Destination topic (default: `${stream}`).
 May contain `${stream}` as a placeholder for the originating stream name.
 For example, `redis_${stream}` and stream `orders` => topic `redis_orders`.
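
Putting callouts <1> through <7> together, a minimal stream source configuration could look like the sketch below. The stream, group, and topic names are illustrative, and `redis.uri` is an assumption, since the hunk does not show connection properties:

[source,properties]
----
# Hypothetical stream source: read the Redis stream "orders" and publish to topic "redis_orders"
connector.class=com.redis.kafka.connect.RedisStreamSourceConnector
redis.uri=redis://localhost:6379
redis.stream.name=orders
redis.stream.offset=0-0
redis.stream.block=300
redis.stream.consumer.group=kafka-connect
redis.stream.consumer.name=consumer-${task}
redis.stream.delivery=at-least-once
topic=redis_${stream}
tasks.max=1
----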
-//[[key-reader]]
-//include::_keyreader.adoc[]
+[[_keys_source]]
+== Keys
+
+The keys source connector captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic.
+The data structure key is mapped to the record key, and the value is mapped to the record value.
+
+[WARNING]
+====
+The keys source connector does not guarantee data consistency because it relies on Redis keyspace notifications, which have no delivery guarantees.
+It is possible for some notifications to be missed, for example in case of network failures.
+
+Also, depending on the type, size, and rate of change of data structures on the source, the connector may not be able to keep up with the change stream.
+For example, if a large set is repeatedly updated, the connector must read the whole set on each update and transfer it to the target database.
+With a large enough set, the connector could fall behind and its internal queue could fill up, leading to updates being dropped.
+Some preliminary sizing using Redis statistics and `bigkeys`/`memkeys` is recommended.
+If you need assistance, please contact your Redis account team.
+====
+
+[[_keys_source_config]]
+=== Configuration
+[source,properties]
+----
+connector.class=com.redis.kafka.connect.RedisKeysSourceConnector
+redis.keys.patterns=<glob> <1>
+redis.keys.timeout=<millis> <2>
+topic=<name> <3>
+----
+<1> Key pattern to subscribe to. This is the key portion of the pattern used to listen for keyspace events. See {link_redis_keys} for pattern details.
+For example, `foo:*` translates to pub/sub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc.
+Use comma-separated values for multiple patterns (`foo:*,bar:*`).
+<2> Idle timeout in milliseconds. Duration after which the connector stops if no activity is encountered.
+<3> Name of the destination topic.
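
Two practical notes on the keys connector, neither stated explicitly in this commit. First, keyspace notifications are disabled in Redis by default, so the source database must enable them; `KEA` (all event classes) is a conservative assumption for the flags this connector needs:

[source,properties]
----
# redis.conf sketch: enable keyspace notifications for all event classes
notify-keyspace-events KEA
----

Second, a filled-in configuration might look like this sketch (pattern, timeout, and topic are illustrative; `redis.uri` is an assumption):

[source,properties]
----
# Hypothetical keys source: capture changes to keys matching "foo:*" and publish to topic "redis-keys"
connector.class=com.redis.kafka.connect.RedisKeysSourceConnector
redis.uri=redis://localhost:6379
redis.keys.patterns=foo:*
redis.keys.timeout=300000
topic=redis-keys
----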