Commit 9293082

Author: Julien Ruaux

docs: separated key reader doc and fixed sync to sink

1 parent a71c08b · commit 9293082

File tree (3 files changed: 37 additions, 36 deletions)

src/docs/asciidoc/_keyreader.adoc
src/docs/asciidoc/_sink.adoc
src/docs/asciidoc/_source.adoc


src/docs/asciidoc/_keyreader.adoc (24 additions, 0 deletions)
@@ -0,0 +1,24 @@
+=== Key Reader
+In key reader mode, the source connector captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic.
+The data structure key will be mapped to the record key, and the value will be mapped to the record value.
+
+[IMPORTANT]
+.Supported Data Structures
+====
+The source connector supports the following data structures:
+
+* String: the Kafka record values will be strings
+* Hash: the Kafka record values will be maps (string key/value pairs)
+
+====
+
+[source,properties]
+----
+redis.keys.patterns=<glob> <1>
+topic=<topic> <2>
+----
+
+<1> Key portion of the pattern that will be used to listen to keyspace events.
+For example `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc.
+Use comma-separated values for multiple patterns (`foo:*,bar:*`).
+<2> Name of the destination topic.
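Read together, the two properties above slot into an ordinary source-connector config. A minimal sketch, assuming a connector class named `com.redis.kafka.connect.RedisSourceConnector` (an assumption, not stated in this diff):

[source,properties]
----
# Minimal key reader sketch; connector class and topic names are illustrative
name=redis-keys-source
connector.class=com.redis.kafka.connect.RedisSourceConnector
tasks.max=1
# Listen to keyspace events for keys matching either pattern
redis.keys.patterns=foo:*,bar:*
# Captured keys and values are published to this topic
topic=redis-keys
----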

src/docs/asciidoc/_sink.adoc (10 additions, 10 deletions)
@@ -28,19 +28,19 @@ You can specify the number of tasks with the `tasks.max` configuration property.
 The {name} supports the following Redis data-structure types as targets:
 
 [[collection-key]]
-* Collections: <<sync-stream,stream>>, <<sync-list,list>>, <<sync-set,set>>, <<sync-zset,sorted set>>, <<sync-timeseries,time series>>
+* Collections: <<sink-stream,stream>>, <<sink-list,list>>, <<sink-set,set>>, <<sink-zset,sorted set>>, <<sink-timeseries,time series>>
 +
 Collection keys are generated using the `redis.key` configuration property which may contain `${topic}` (default) as a placeholder for the originating topic name.
 +
 For example with `redis.key = set:${topic}` and topic `orders` the Redis key is `set:orders`.
 
-* <<sync-hash,Hash>>, <<sync-string,string>>, <<sync-json,JSON>>
+* <<sink-hash,Hash>>, <<sink-string,string>>, <<sink-json,JSON>>
 +
 For other data-structures the key is in the form `<keyspace>:<record_key>` where `keyspace` is generated using the `redis.key` configuration property like above and `record_key` is the sink record key.
 +
 For example with `redis.key = ${topic}`, topic `orders`, and sink record key `123` the Redis key is `orders:123`.
 
-[[sync-hash]]
+[[sink-hash]]
 ==== Hash
 Use the following properties to write Kafka records as Redis hashes:
 
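The two naming rules above, condensed into one sketch using only the `redis.key` property documented in this file:

[source,properties]
----
# Collection targets (stream, list, set, sorted set, time series):
#   the key is the redis.key expansion itself, e.g. set:orders
# Other targets (hash, string, JSON):
#   the key is <keyspace>:<record_key>, e.g. orders:123
redis.key=${topic}
----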
@@ -55,7 +55,7 @@ value.converter=<Avro or JSON> <2>
 <2> <<avro,Avro>> or <<kafka-json,JSON>>.
 If value is null the key is deleted.
 
-[[sync-string]]
+[[sink-string]]
 ==== String
 Use the following properties to write Kafka records as Redis strings:
 
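For the Hash section above, an illustrative fragment assuming stock Kafka Connect converter classes:

[source,properties]
----
# Write each record as a Redis hash at <topic>:<record key>
redis.key=${topic}
key.converter=org.apache.kafka.connect.storage.StringConverter
# Schemaless JSON values become hash fields; a null value deletes the key
value.converter=org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable=false
----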
@@ -70,7 +70,7 @@ value.converter=<string or bytes> <2>
 <2> <<value-string,String>> or <<value-bytes,bytes>>.
 If value is null the key is deleted.
 
-[[sync-json]]
+[[sink-json]]
 ==== JSON
 Use the following properties to write Kafka records as RedisJSON documents:
 
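For the String section, a comparable fragment under the same converter assumptions:

[source,properties]
----
# Write each record as a plain Redis string at <topic>:<record key>
redis.key=${topic}
key.converter=org.apache.kafka.connect.storage.StringConverter
# String values; a null value deletes the key
value.converter=org.apache.kafka.connect.storage.StringConverter
----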
@@ -85,7 +85,7 @@ value.converter=<string, bytes, or Avro> <2>
 <2> <<value-string,String>>, <<value-bytes,bytes>>, or <<avro,Avro>>.
 If value is null the key is deleted.
 
-[[sync-stream]]
+[[sink-stream]]
 ==== Stream
 Use the following properties to store Kafka records as Redis stream messages:
 
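For the JSON section, a fragment along the same lines:

[source,properties]
----
# Store each record value as a RedisJSON document at <topic>:<record key>
redis.key=${topic}
key.converter=org.apache.kafka.connect.storage.StringConverter
# String-encoded JSON; bytes or Avro also work per callout <2> above
value.converter=org.apache.kafka.connect.storage.StringConverter
----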
@@ -99,7 +99,7 @@ value.converter=<Avro or JSON> <2>
 <1> <<collection-key,Stream key>>
 <2> <<avro,Avro>> or <<kafka-json,JSON>>
 
-[[sync-list]]
+[[sink-list]]
 ==== List
 Use the following properties to add Kafka record keys to a Redis list:
 
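For the Stream section, a fragment assuming Confluent's `AvroConverter` and an illustrative registry URL:

[source,properties]
----
# Append each record as a message on the stream named after the topic
redis.key=${topic}
# Avro values; the registry URL is illustrative
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
----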
@@ -118,7 +118,7 @@ redis.push.direction=<LEFT or RIGHT> <3>
 The Kafka record value can be any format.
 If a value is null then the member is removed from the list (instead of pushed to the list).
 
-[[sync-set]]
+[[sink-set]]
 ==== Set
 Use the following properties to add Kafka record keys to a Redis set:
 
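For the List section, a fragment using the `redis.push.direction` property shown in the hunk header:

[source,properties]
----
# Push record keys onto the list named after the topic
redis.key=${topic}
key.converter=org.apache.kafka.connect.storage.StringConverter
# Push to the LEFT or RIGHT end of the list; null record values remove the member
redis.push.direction=LEFT
----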
@@ -135,7 +135,7 @@ key.converter=<string or bytes> <2>
 The Kafka record value can be any format.
 If a value is null then the member is removed from the set (instead of added to the set).
 
-[[sync-zset]]
+[[sink-zset]]
 ==== Sorted Set
 Use the following properties to add Kafka record keys to a Redis sorted set:
 
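For the Set section, a fragment under the same assumptions:

[source,properties]
----
# Add record keys to the set named after the topic
redis.key=${topic}
key.converter=org.apache.kafka.connect.storage.StringConverter
# Record values may be any format; a null value removes the member
----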
@@ -152,7 +152,7 @@ key.converter=<string or bytes> <2>
 The Kafka record value should be `float64` and is used for the score.
 If the score is null then the member is removed from the sorted set (instead of added to the sorted set).
 
-[[sync-timeseries]]
+[[sink-timeseries]]
 ==== Time Series
 
 Use the following properties to write Kafka records as RedisTimeSeries samples:
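For the Sorted Set section, a fragment; the `DoubleConverter` here is an assumption for carrying the `float64` score:

[source,properties]
----
# Add record keys to the sorted set named after the topic
redis.key=${topic}
key.converter=org.apache.kafka.connect.storage.StringConverter
# float64 record values supply the score; a null score removes the member
value.converter=org.apache.kafka.connect.converters.DoubleConverter
----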

src/docs/asciidoc/_source.adoc (3 additions, 26 deletions)
@@ -42,32 +42,6 @@ Use configuration property `tasks.max` to have the change stream handled by mult
 The connector splits the work based on the number of configured key patterns.
 When the number of tasks is greater than the number of patterns, the number of patterns will be used instead.
 
-//
-//[[key-reader]]
-//=== Key Reader
-//In key reader mode, the {name} captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic.
-//The data structure key will be mapped to the record key, and the value will be mapped to the record value.
-//
-//[IMPORTANT]
-//.Supported Data Structures
-//====
-//The {name} supports the following data structures:
-//
-//* String: the Kafka record values will be strings
-//* Hash: the Kafka record values will be maps (string key/value pairs)
-//
-//====
-//
-//[source,properties]
-//----
-//redis.keys.patterns=<glob> <1>
-//topic=<topic> <2>
-//----
-//
-//<1> Key portion of the pattern that will be used to listen to keyspace events.
-For example `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc.
-Use comma-separated values for multiple patterns (`foo:*,bar:*`)
-//<2> Name of the destination topic.
 
 [[stream-reader]]
 === Stream Reader
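Concretely, the effective task count in the hunk above is the smaller of `tasks.max` and the number of patterns. A sketch:

[source,properties]
----
# Four tasks requested, two patterns configured:
# the connector starts only two tasks, one per pattern
tasks.max=4
redis.keys.patterns=foo:*,bar:*
----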
@@ -115,3 +89,6 @@ For example, `foo${task}` and task `123` => consumer `foo123`.
 <6> Destination topic (default: `${stream}`).
 May contain `${stream}` as a placeholder for the originating stream name.
 For example, `redis_${stream}` and stream `orders` => topic `redis_orders`.
+
+//[[key-reader]]
+//include::_keyreader.adoc[]
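How the documented placeholders expand, sketched with hypothetical property names (the real keys live in the properties block this hunk elides):

[source,properties]
----
# Hypothetical property names, for illustration only:
# consumer foo${task} with task 123        -> consumer foo123
# topic redis_${stream} with stream orders -> topic redis_orders
redis.stream.consumer.name=foo${task}
topic=redis_${stream}
----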
