:link_manual_install: link:https://docs.confluent.io/home/connect/community.html#manually-installing-community-connectors/[Manually Installing Community Connectors]
= Source Connector Guide
:name: Redis Kafka Source Connector

The {name} includes 2 source connectors:

* <<_stream_source,Stream>>
* <<_keys_source,Keys>>

[[_stream_source]]
== Stream

The stream source connector reads from a Redis stream and publishes messages to a Kafka topic.
It includes the following features:

* <<_stream_source_at_least_once_delivery,At least once delivery>>
* <<_stream_source_at_most_once_delivery,At most once delivery>>
* <<_stream_source_tasks,Multiple tasks>>
* <<_stream_source_schema,Schema>>
* <<_stream_source_config,Configuration>>

=== Delivery Guarantees

The {name} can be configured to ack stream messages either automatically (at-most-once delivery) or explicitly (at-least-once delivery).
The default is at-least-once delivery.

[[_stream_source_at_least_once_delivery]]
==== At-Least-Once

In this mode, each stream message is acknowledged after it has been written to the corresponding topic.

----
redis.stream.delivery=at-least-once
----

[[_stream_source_at_most_once_delivery]]
==== At-Most-Once

In this mode, stream messages are acknowledged as soon as they are read.

----
redis.stream.delivery=at-most-once
----

[[_stream_source_tasks]]
=== Multiple Tasks

Reading from the stream is done through a consumer group so that multiple instances of the connector, configured via the `tasks.max` property, can consume messages in a round-robin fashion.

The connector splits the work based on the number of configured key patterns.
When the number of tasks is greater than the number of patterns, the number of patterns is used instead.
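
For illustration, a minimal worker configuration for running the stream source with multiple tasks might look like the sketch below. The connector class name and the `name` value are assumptions for illustration; only `tasks.max` and `redis.stream.delivery` appear in this guide.

----
# Hypothetical example: two tasks sharing the stream through a consumer group
name=redis-stream-source
connector.class=com.redis.kafka.connect.RedisStreamSourceConnector
tasks.max=2
redis.stream.delivery=at-least-once
----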

[[_stream_source_schema]]
=== Message Schema

==== Key Schema

May contain `${stream}` as a placeholder for the originating stream name.
For example, `redis_${stream}` and stream `orders` => topic `redis_orders`.
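
As a sketch of the placeholder behavior, assuming the destination topic is set through a property named `topic` (the property name is an assumption; the text above only describes the `${stream}` placeholder):

----
# Hypothetical example: messages read from stream "orders" are published to topic "redis_orders"
topic=redis_${stream}
----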

[[_keys_source]]
== Keys

The keys source connector captures changes happening to keys in a Redis database and publishes keys and values to a Kafka topic.
The data structure key will be mapped to the record key, and the value will be mapped to the record value.
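
For context, a hypothetical keys source configuration might look like the sketch below. The connector class and property names other than `tasks.max` are assumptions for illustration; the pattern and idle-timeout semantics are described in the configuration callouts later in this section.

----
# Hypothetical example: capture changes to keys matching foo:* and bar:*
connector.class=com.redis.kafka.connect.RedisKeysSourceConnector
tasks.max=1
redis.keys.pattern=foo:*,bar:*
topic=mytopic
----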

[WARNING]
====
The keys source connector does not guarantee data consistency because it relies on Redis keyspace notifications, which have no delivery guarantees.
It is possible for some notifications to be missed, for example in case of network failures.

Also, depending on the type, size, and rate of change of data structures on the source, it is possible that the source connector cannot keep up with the change stream.
For example, if a big set is repeatedly updated, the connector will need to read the whole set on each update and transfer it over to Kafka.
With a big enough set, the connector could fall behind and the internal queue could fill up, leading to updates being dropped.
Some preliminary sizing using Redis statistics and `bigkeys`/`memkeys` is recommended.
If you need assistance, please contact your Redis account team.
====

<1> Key pattern to subscribe to. This is the key portion of the pattern that will be used to listen to keyspace events. See {link_redis_keys} for pattern details.
For example, `foo:*` translates to pubsub channel `$$__$$keyspace@0$$__$$:foo:*` and will capture changes to keys `foo:1`, `foo:2`, etc.
Use comma-separated values for multiple patterns (`foo:*,bar:*`).
<2> Idle timeout in milliseconds. Duration after which the connector will stop if no activity is encountered.