[DP-1901] - Convert Wurstmeister Kafka image to Bitnami for Kafka-go #1255

Merged
36 commits merged on Jan 24, 2024

Changes from all commits (36 commits)
5ea0c8b
[DP-1901] - Convert Wurstmeister Kafka image to Bitnami for Kafka-go
ssingudasu Jan 12, 2024
6120d22
[DP-1901] - removing duplicate env in config
ssingudasu Jan 12, 2024
eda1043
[DP-1901] - adding KAFKA_VERSION
ssingudasu Jan 12, 2024
4484bfe
[DP-1901] - FIXING KAFKA_VERSION
ssingudasu Jan 12, 2024
c327f90
[DP-1901] - minor fixtures to KAFKA_VERSION
ssingudasu Jan 12, 2024
d58dc69
[DP-1901] - minor fixtures in lint
ssingudasu Jan 12, 2024
d72d85d
[DP-1901] - fixing KAFKA_VERSION to 0.10.2.1
ssingudasu Jan 12, 2024
a5b4c89
[DP-1901] - minor fixtures to KAFKA_VERSION
ssingudasu Jan 12, 2024
9c2155b
[DP-1901] - fixing zookeeper connect
ssingudasu Jan 12, 2024
d807919
[DP-1901] - fixing KAFKA_VERSION to 0.10.2.1
ssingudasu Jan 12, 2024
675047b
[DP-1901] - fixing kafka-011
ssingudasu Jan 12, 2024
acebcf6
[DP-1901] - fixing kafka-011 environment
ssingudasu Jan 12, 2024
ea9cccc
[DP-1901] - fixing zookeeper kafka-011
ssingudasu Jan 12, 2024
dbaaebc
[DP-1901] - fixing KAFKA_VERSION kafka-011
ssingudasu Jan 12, 2024
12e43f0
[DP-1901] - fixing KAFKA_VERSION kafka-011
ssingudasu Jan 12, 2024
8ed2955
[DP-1901] - fixing KAFKA_VERSION kafka-011
ssingudasu Jan 12, 2024
2a1ac3c
[DP-1901] - Adding AUTHORIZER kafka-011
ssingudasu Jan 12, 2024
bdbce25
[DP-1901] - reset kafka-011
ssingudasu Jan 12, 2024
ac8a6e0
[DP-1901] - bitnami for kafka-011
ssingudasu Jan 13, 2024
9e0b64c
[DP-1901] - bitnami for kafka-011 zookeeper fixtures
ssingudasu Jan 13, 2024
79f01ff
[DP-1901] - fixtures to circleci and creating docker_compose_versions…
ssingudasu Jan 14, 2024
e4616b2
[DP-1901] - zookeeper fix
ssingudasu Jan 14, 2024
6615962
[DP-1901] - fixtures to circleci. removed unsupported kafka
ssingudasu Jan 18, 2024
95bd46c
[DP-1901] - fixtures to circleci 2.3.1. fixing examples folder
ssingudasu Jan 18, 2024
bd4650a
[DP-1901] - examples docker-compose fix to bitnami
ssingudasu Jan 18, 2024
97cb9c6
[DP-1901] - minor README.md fixtures
ssingudasu Jan 18, 2024
ca3bbca
[DP-1901] - minor README.md fixtures
ssingudasu Jan 18, 2024
a419923
[DP-1901] - minor README.md fixtures
ssingudasu Jan 18, 2024
17c74fc
[DP-1901] - minor README.md fixtures
ssingudasu Jan 19, 2024
ac1a205
[DP-1901] - Grammatical fixtures in README.md
ssingudasu Jan 19, 2024
d8b4cdc
[DP-1901] - Adding support for v281 and v361 in circleci
ssingudasu Jan 22, 2024
a1d728d
[DP-1901] - touch README.md for circleci trigger
ssingudasu Jan 22, 2024
5f02387
[DP-1901] - Creating v361docker and modify circleci
ssingudasu Jan 22, 2024
7e9228b
[DP-1901] - Creating v361 docker and modify circleci
ssingudasu Jan 22, 2024
73938ba
[DP-1901] - touch README.md for circleci trigger
ssingudasu Jan 22, 2024
4ec2448
[DP-1901] - removing v361 from circleci
ssingudasu Jan 22, 2024
267 changes: 125 additions & 142 deletions .circleci/config.yml

Large diffs are not rendered by default.

7 changes: 6 additions & 1 deletion README.md
@@ -108,7 +108,7 @@ if err := conn.Close(); err != nil {
```

### To Create Topics
By default kafka has the `auto.create.topics.enable='true'` (`KAFKA_AUTO_CREATE_TOPICS_ENABLE='true'` in the wurstmeister/kafka docker image). If this value is set to `'true'` then topics will be created as a side effect of `kafka.DialLeader` like so:
By default kafka has the `auto.create.topics.enable='true'` (`KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE='true'` in the bitnami/kafka docker image). If this value is set to `'true'` then topics will be created as a side effect of `kafka.DialLeader` like so:
```go
// to create topics when auto.create.topics.enable='true'
conn, err := kafka.DialLeader(context.Background(), "tcp", "localhost:9092", "my-topic", 0)
@@ -797,3 +797,8 @@ KAFKA_VERSION=2.3.1 \
KAFKA_SKIP_NETTEST=1 \
go test -race ./...
```

Or, to clean up the cached test results and run the tests:
```
go clean -cache && make test
```
32 changes: 0 additions & 32 deletions docker-compose-241.yml

This file was deleted.

29 changes: 0 additions & 29 deletions docker-compose.010.yml

This file was deleted.

64 changes: 36 additions & 28 deletions docker-compose.yml
@@ -1,34 +1,42 @@
version: "3"
# See https://hub.docker.com/r/bitnami/kafka/tags for the complete list.
version: '3'
services:
zookeeper:
container_name: zookeeper
hostname: zookeeper
image: bitnami/zookeeper:latest
ports:
- 2181:2181
environment:
ALLOW_ANONYMOUS_LOGIN: yes
kafka:
image: wurstmeister/kafka:2.12-2.3.1
container_name: kafka
image: bitnami/kafka:2.3.1-ol-7-r61
restart: on-failure:3
links:
- zookeeper
- zookeeper
ports:
- 9092:9092
- 9093:9093
- 9092:9092
- 9093:9093
environment:
KAFKA_VERSION: '2.3.1'
KAFKA_BROKER_ID: '1'
KAFKA_CREATE_TOPICS: 'test-writer-0:3:1,test-writer-1:3:1'
KAFKA_DELETE_TOPIC_ENABLE: 'true'
KAFKA_ADVERTISED_HOST_NAME: 'localhost'
KAFKA_ADVERTISED_PORT: '9092'
KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
KAFKA_MESSAGE_MAX_BYTES: '200000000'
KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN,SCRAM-SHA-256,SCRAM-SHA-512'
KAFKA_AUTHORIZER_CLASS_NAME: 'kafka.security.auth.SimpleAclAuthorizer'
KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: 'true'
KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
CUSTOM_INIT_SCRIPT: |-
echo -e 'KafkaServer {\norg.apache.kafka.common.security.scram.ScramLoginModule required\n username="adminscram"\n password="admin-secret";\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/kafka/config/kafka_server_jaas.conf;
/opt/kafka/bin/kafka-configs.sh --zookeeper zookeeper:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret-256],SCRAM-SHA-512=[password=admin-secret-512]' --entity-type users --entity-name adminscram

zookeeper:
image: wurstmeister/zookeeper
ports:
- 2181:2181
KAFKA_CFG_BROKER_ID: 1
KAFKA_CFG_DELETE_TOPIC_ENABLE: 'true'
KAFKA_CFG_ADVERTISED_HOST_NAME: 'localhost'
KAFKA_CFG_ADVERTISED_PORT: '9092'
KAFKA_CFG_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: 'true'
KAFKA_CFG_MESSAGE_MAX_BYTES: '200000000'
KAFKA_CFG_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
KAFKA_CFG_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
KAFKA_CFG_SASL_ENABLED_MECHANISMS: 'PLAIN,SCRAM-SHA-256,SCRAM-SHA-512'
KAFKA_CFG_AUTHORIZER_CLASS_NAME: 'kafka.security.auth.SimpleAclAuthorizer'
KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND: 'true'
KAFKA_INTER_BROKER_USER: adminplain
KAFKA_INTER_BROKER_PASSWORD: admin-secret
KAFKA_BROKER_USER: adminplain
KAFKA_BROKER_PASSWORD: admin-secret
ALLOW_PLAINTEXT_LISTENER: yes
entrypoint:
- "/bin/bash"
- "-c"
- /opt/bitnami/kafka/bin/kafka-configs.sh --zookeeper zookeeper:2181 --alter --add-config "SCRAM-SHA-256=[password=admin-secret-256],SCRAM-SHA-512=[password=admin-secret-512]" --entity-type users --entity-name adminscram; exec /entrypoint.sh /run.sh
152 changes: 152 additions & 0 deletions docker_compose_versions/README.md
@@ -0,0 +1,152 @@
# Bitnami Kafka

This document outlines how to create a docker-compose file for a specific Bitnami Kafka version.


## Steps to create docker-compose

- Refer to [docker-hub Bitnami Kafka tags](https://hub.docker.com/r/bitnami/kafka/tags) and sort by NEWEST to locate the preferred image, for example `2.7.0`.
- There is documentation in the [main branch](https://github.com/bitnami/containers/blob/main/bitnami/kafka/README.md) covering environment config setup; refer to the `Notable Changes` section.
- Sometimes there is a need to understand how the setup is done. To locate the appropriate Kafka release in the [bitnami/containers](https://github.com/bitnami/containers) repo, go through the [kafka commit history](https://github.com/bitnami/containers/commits/main/bitnami/kafka).
- Once a commit is located, refer to its README.md, Dockerfile, entrypoint and init scripts to understand how environment variables map to `server.properties` configs. Alternatively, you can spin up the required Kafka image and inspect the mapping inside the container.
- Ensure you follow the environment variable conventions in your docker-compose. Without the proper environment variables, the Kafka cluster cannot start, or starts with undesired configs. For example, since Kafka version 2.3, all `server.properties` docker-compose environment configs start with `KAFKA_CFG_<config_with_underscore>` (see the sketch after this list).
- Older versions of Bitnami Kafka follow different conventions and expose only a limited set of docker-compose environment variables for the configs needed in `server.properties`.
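
As a minimal sketch of this convention (illustrative only; the image tag and values are taken from the compose files in this repo), each `KAFKA_CFG_<NAME>` variable is rewritten into the matching dotted key in `server.properties`:

```
services:
  kafka:
    image: bitnami/kafka:2.3.1-ol-7-r61
    environment:
      # becomes message.max.bytes=200000000 in server.properties
      KAFKA_CFG_MESSAGE_MAX_BYTES: '200000000'
      # becomes auto.create.topics.enable=true in server.properties
      KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE: 'true'
```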


In kafka-go, for all the test cases to succeed, the Kafka cluster should have the following `server.properties`, along with a relevant `kafka_jaas.conf` referenced via `KAFKA_OPTS`. The goal is to ensure that the docker-compose file generates the `server.properties` below.


server.properties
```
advertised.host.name=localhost
advertised.listeners=PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093
advertised.port=9092
auto.create.topics.enable=true
broker.id=1
delete.topic.enable=true
group.initial.rebalance.delay.ms=0
listeners=PLAINTEXT://:9092,SASL_PLAINTEXT://:9093
log.dirs=/kafka/kafka-logs-1d5951569d78
log.retention.check.interval.ms=300000
log.retention.hours=168
log.segment.bytes=1073741824
message.max.bytes=200000000
num.io.threads=8
num.network.threads=3
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
port=9092
sasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
transaction.state.log.min.isr=1
transaction.state.log.replication.factor=1
zookeeper.connect=zookeeper:2181
zookeeper.connection.timeout.ms=6000
```
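
For reference, the `kafka_server_jaas.conf` written by the entrypoints of the 0.10/0.11 compose files in this folder looks roughly as follows (reformatted here for readability; the `adminscram`/`adminplain` users and `admin-secret` passwords match the SCRAM credentials added via `kafka-configs.sh`):

```
KafkaServer {
  org.apache.kafka.common.security.scram.ScramLoginModule required
  username="adminscram"
  password="admin-secret";
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="adminplain"
  password="admin-secret"
  user_adminplain="admin-secret";
};
```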


## Run docker-compose and test cases

Run docker-compose:
```
# docker-compose -f ./docker_compose_versions/docker-compose-<kafka_version>.yml up -d
```


Run the test cases:
```
# go clean -cache; KAFKA_SKIP_NETTEST=1 KAFKA_VERSION=<a.b.c> go test -race -cover ./...;
```
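
For example, a run against the 2.3.1 compose file in this folder might look like:

```
# docker-compose -f ./docker_compose_versions/docker-compose-231.yml up -d
# go clean -cache; KAFKA_SKIP_NETTEST=1 KAFKA_VERSION=2.3.1 go test -race -cover ./...;
```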


## Various Bitnami Kafka version issues observed in circleci


### Kafka v101, v111, v201, v211 and v221


In the kafka-go repo, all the tests require `sasl.enabled.mechanisms` to be `PLAIN,SCRAM-SHA-256,SCRAM-SHA-512` on the Kafka cluster.


It has been observed that Kafka v101, v111, v201, v211 and v221, which are used in the circleci build, have issues with SCRAM.


There is no way to override the `sasl.enabled.mechanisms` config, causing the Kafka cluster to start up with PLAIN only.


There have been some attempts made to override `sasl.enabled.mechanisms`:
- Modified the entrypoint in docker-compose to append the relevant `sasl.enabled.mechanisms` config to `server.properties` before running `entrypoint.sh` (see the sketch after this list). This resulted in failures for Kafka v101, v111, v201, v211 and v221: once the Kafka server starts, `server.properties` gets appended with the default value of `sasl.enabled.mechanisms`, so the cluster does not start with PLAIN,SCRAM-SHA-256,SCRAM-SHA-512.
- Mounted a docker-compose volume for `server.properties`. This also resulted in failures for Kafka v101, v111, v201, v211 and v221 for the same reason: once the Kafka server starts, `server.properties` gets appended with the default value of `sasl.enabled.mechanisms`, so the cluster does not start with PLAIN,SCRAM-SHA-256,SCRAM-SHA-512.
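
A sketch of the first attempt (the image tag and the entrypoint/run-script paths below are assumptions that vary per image version; the same pattern does work for the 0.11 image in `docker-compose-011.yml`):

```
kafka:
  image: bitnami/kafka:2.2.1   # one of the affected versions (assumed tag)
  entrypoint:
    - "/bin/bash"
    - "-c"
    # append the desired mechanisms, then hand off to the stock entrypoint;
    # on v101 through v221 the default sasl.enabled.mechanisms is appended back at startup
    - echo -e '\nsasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512' >> /opt/bitnami/kafka/config/server.properties; exec /app-entrypoint.sh /run.sh
```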


NOTE:
- Kafka v101, v111, v201, v211 and v221 have no docker-compose files, since SCRAM is required for the kafka-go test cases to succeed.
- There is no Bitnami Kafka image for v222, hence testing has been performed on v221.


### Kafka v231

In Bitnami Kafka v2.3, all `server.properties` docker-compose environment configs start with `KAFKA_CFG_<config_with_underscore>`. However, it does not pick up the custom populated kafka_jaas.conf.


After a lot of debugging, it was noticed that there aren't enough privileges to create the kafka_jaas.conf. Hence the environment variables below need to be added to docker-compose to generate the kafka_jaas.conf. This issue is not observed after Kafka v2.3.


```
KAFKA_INTER_BROKER_USER: adminplain
KAFKA_INTER_BROKER_PASSWORD: admin-secret
KAFKA_BROKER_USER: adminplain
KAFKA_BROKER_PASSWORD: admin-secret
```

There is a docker-compose file `docker-compose-231.yml` in the folder `kafka-go/docker_compose_versions` for reference.


## References


For user reference, please find some of the older Kafka version commits from the [kafka commit history](https://github.com/bitnami/containers/commits/main/bitnami/kafka) below. For Kafka versions with no commit history, data is populated with the latest version available for the tag.


### Kafka v010: docker-compose reference: `kafka-go/docker_compose_versions/docker-compose-010.yml`
- [tag](https://hub.docker.com/r/bitnami/kafka/tags?page=1&ordering=last_updated&name=0.10.2.1)
- [kafka commit](https://github.com/bitnami/containers/tree/c4240f0525916a418245c7ef46d9534a7a212c92/bitnami/kafka)


### Kafka v011: docker-compose reference: `kafka-go/docker_compose_versions/docker-compose-011.yml`
- [tag](https://hub.docker.com/r/bitnami/kafka/tags?page=1&ordering=last_updated&name=0.11.0)
- [kafka commit](https://github.com/bitnami/containers/tree/7724adf655e4ca9aac69d606d41ad329ef31eeca/bitnami/kafka)


### Kafka v101: docker-compose reference: N/A
- [tag](https://hub.docker.com/r/bitnami/kafka/tags?page=1&ordering=last_updated&name=1.0.1)
- [kafka commit](https://github.com/bitnami/containers/tree/44cc8f4c43ead6edebd3758c8df878f4f9da82c2/bitnami/kafka)


### Kafka v111: docker-compose reference: N/A
- [tag](https://hub.docker.com/r/bitnami/kafka/tags?page=1&ordering=last_updated&name=1.1.1)
- [kafka commit](https://github.com/bitnami/containers/tree/cb593dc98c2eb7a39f2792641e741d395dbe50e7/bitnami/kafka)


### Kafka v201: docker-compose reference: N/A
- [tag](https://hub.docker.com/r/bitnami/kafka/tags?page=1&ordering=last_updated&name=2.0.1)
- [kafka commit](https://github.com/bitnami/containers/tree/9ff8763df265c87c8b59f8d7ff0cf69299d636c9/bitnami/kafka)


### Kafka v211: docker-compose reference: N/A
- [tag](https://hub.docker.com/r/bitnami/kafka/tags?page=1&ordering=last_updated&name=2.1.1)
- [kafka commit](https://github.com/bitnami/containers/tree/d3a9d40afc2b7e7de53486538a63084c1a565d43/bitnami/kafka)


### Kafka v221: docker-compose reference: N/A
- [tag](https://hub.docker.com/r/bitnami/kafka/tags?page=1&ordering=last_updated&name=2.2.1)
- [kafka commit](https://github.com/bitnami/containers/tree/f132ef830d1ba9b78392ec4619174b4640c276c9/bitnami/kafka)


### Kafka v231: docker-compose reference: `kafka-go/docker_compose_versions/docker-compose-231.yml`
- [tag](https://hub.docker.com/r/bitnami/kafka/tags?page=1&ordering=last_updated&name=2.3.1)
- [kafka commit](https://github.com/bitnami/containers/tree/ae572036b5281456b0086345fec0bdb74f7cf3a3/bitnami/kafka)

39 changes: 39 additions & 0 deletions docker_compose_versions/docker-compose-010.yml
@@ -0,0 +1,39 @@
# See https://hub.docker.com/r/bitnami/kafka/tags for the complete list.
version: '3'
services:
  zookeeper:
    container_name: zookeeper
    hostname: zookeeper
    image: bitnami/zookeeper:latest
    ports:
      - 2181:2181
    environment:
      ALLOW_ANONYMOUS_LOGIN: yes
  kafka:
    container_name: kafka
    image: bitnami/kafka:0.10.2.1
    restart: on-failure:3
    links:
      - zookeeper
    ports:
      - 9092:9092
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_DELETE_TOPIC_ENABLE: 'true'
      KAFKA_ADVERTISED_HOST_NAME: 'localhost'
      KAFKA_ADVERTISED_PORT: '9092'
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
      KAFKA_MESSAGE_MAX_BYTES: '200000000'
      KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
      KAFKA_SASL_ENABLED_MECHANISMS: 'PLAIN,SCRAM-SHA-256,SCRAM-SHA-512'
      KAFKA_AUTHORIZER_CLASS_NAME: 'kafka.security.auth.SimpleAclAuthorizer'
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: 'true'
      KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/bitnami/kafka/config/kafka_server_jaas.conf"
      ALLOW_PLAINTEXT_LISTENER: yes
    entrypoint:
      - "/bin/bash"
      - "-c"
      - echo -e 'KafkaServer {\norg.apache.kafka.common.security.scram.ScramLoginModule required\n username="adminscram"\n password="admin-secret";\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/bitnami/kafka/config/kafka_server_jaas.conf; /opt/bitnami/kafka/bin/kafka-configs.sh --zookeeper zookeeper:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret-256],SCRAM-SHA-512=[password=admin-secret-512]' --entity-type users --entity-name adminscram; exec /app-entrypoint.sh /start-kafka.sh
36 changes: 36 additions & 0 deletions docker_compose_versions/docker-compose-011.yml
@@ -0,0 +1,36 @@
# See https://hub.docker.com/r/bitnami/kafka/tags for the complete list.
version: '3'
services:
  zookeeper:
    container_name: zookeeper
    hostname: zookeeper
    image: bitnami/zookeeper:latest
    ports:
      - 2181:2181
    environment:
      ALLOW_ANONYMOUS_LOGIN: yes
  kafka:
    container_name: kafka
    image: bitnami/kafka:0.11.0-1-r1
    restart: on-failure:3
    links:
      - zookeeper
    ports:
      - 9092:9092
      - 9093:9093
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_DELETE_TOPIC_ENABLE: 'true'
      KAFKA_ADVERTISED_HOST_NAME: 'localhost'
      KAFKA_ADVERTISED_PORT: '9092'
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENERS: 'PLAINTEXT://:9092,SASL_PLAINTEXT://:9093'
      KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://localhost:9092,SASL_PLAINTEXT://localhost:9093'
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: 'true'
      KAFKA_OPTS: "-Djava.security.auth.login.config=/opt/bitnami/kafka/config/kafka_server_jaas.conf"
      ALLOW_PLAINTEXT_LISTENER: "yes"
    entrypoint:
      - "/bin/bash"
      - "-c"
      # 0.11.0 image is not honoring some configs required in server.properties
      - echo -e '\nsasl.enabled.mechanisms=PLAIN,SCRAM-SHA-256,SCRAM-SHA-512\nmessage.max.bytes=200000000\nauto.create.topics.enable=true\nport=9092' >> /opt/bitnami/kafka/config/server.properties; echo -e 'KafkaServer {\norg.apache.kafka.common.security.scram.ScramLoginModule required\n username="adminscram"\n password="admin-secret";\n org.apache.kafka.common.security.plain.PlainLoginModule required\n username="adminplain"\n password="admin-secret"\n user_adminplain="admin-secret";\n };' > /opt/bitnami/kafka/config/kafka_server_jaas.conf; /opt/bitnami/kafka/bin/kafka-configs.sh --zookeeper zookeeper:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret-256],SCRAM-SHA-512=[password=admin-secret-512]' --entity-type users --entity-name adminscram; exec /app-entrypoint.sh /run.sh