Setting up Zookeeper and Kafka Clusters on Kubernetes

This mainly follows the post at https://www.cnblogs.com/00986014w/p/9561901.html, although that post does not use the official Zookeeper image.

I use a 3-node Zookeeper cluster here; adjust the manifests accordingly if yours differs.

Setting up the Zookeeper cluster

zookeeper-svc.yaml for the cluster Services

apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-1
  name: zookeeper-cluster1
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-1
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-2
  name: zookeeper-cluster2
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-2
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: zookeeper-cluster-service-3
  name: zookeeper-cluster3
spec:
  ports:
  - name: client
    port: 2181
    protocol: TCP
  - name: follower
    port: 2888
    protocol: TCP
  - name: leader
    port: 3888
    protocol: TCP
  selector:
    app: zookeeper-cluster-service-3

Create the 3 Services with sudo kubectl create -f zookeeper-svc.yaml.
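
To confirm the Services were created and note their ClusterIPs (Service names as defined above; the IPs will differ per cluster):

# The CLUSTER-IP column is what later steps refer to
sudo kubectl get svc zookeeper-cluster1 zookeeper-cluster2 zookeeper-cluster3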

zookeeper-deployment.yaml for the cluster Deployments

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-1
  name: zookeeper-cluster-1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-1
        name: zookeeper-cluster-1
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-1
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "1"
        - name: ZOO_SERVERS
          value: "server.1=0.0.0.0:2888:3888 server.2=zookeeper-cluster2:2888:3888 server.3=zookeeper-cluster3:2888:3888"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-2
  name: zookeeper-cluster-2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-2
        name: zookeeper-cluster-2
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-2
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "2"
        - name: ZOO_SERVERS
          value: "server.1=zookeeper-cluster1:2888:3888 server.2=0.0.0.0:2888:3888 server.3=zookeeper-cluster3:2888:3888"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: zookeeper-cluster-service-3
  name: zookeeper-cluster-3
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper-cluster-service-3
        name: zookeeper-cluster-3
    spec:
      containers:
      - image: zookeeper
        imagePullPolicy: IfNotPresent
        name: zookeeper-cluster-3
        ports:
        - containerPort: 2181
        env:
        - name: ZOO_MY_ID
          value: "3"
        - name: ZOO_SERVERS
          value: "server.1=zookeeper-cluster1:2888:3888 server.2=zookeeper-cluster2:2888:3888 server.3=0.0.0.0:2888:3888"

Create the 3 Deployments with sudo kubectl create -f zookeeper-deployment.yaml.
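
To watch the rollout before moving on (pod names get generated suffixes):

# All three Pods should reach the Running state
sudo kubectl get pods | grep zookeeper-cluster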

Check that the cluster started successfully

Once all 3 Pods are in the Running state, check the logs for errors with sudo kubectl logs zookeeper-cluster-1-xxxxxx.

Then exec into two of the Pods (e.g. sudo kubectl exec -it zookeeper-cluster-1-676df4686f-c7b6d /bin/bash), run /bin/zkCli.sh in each, and try creating a znode in one and reading it in the other to verify the cluster works.
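
For example, a minimal replication check inside zkCli.sh (the znode /test and its value here are arbitrary choices):

# In zkCli.sh on the first Pod: create a znode
create /test hello

# In zkCli.sh on a second Pod: the same znode should be readable
get /test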

Setting up the Kafka cluster

kafka-svc.yaml for the cluster Services

apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster1
  labels:
    app: kafka-cluster-1
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-1
    targetPort: 9092
    nodePort: 30091
    protocol: TCP
  selector:
    app: kafka-cluster-1
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster2
  labels:
    app: kafka-cluster-2
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-2
    targetPort: 9092
    nodePort: 30092
    protocol: TCP
  selector:
    app: kafka-cluster-2
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-cluster3
  labels:
    app: kafka-cluster-3
spec:
  type: NodePort
  ports:
  - port: 9092
    name: kafka-cluster-3
    targetPort: 9092
    nodePort: 30093
    protocol: TCP
  selector:
    app: kafka-cluster-3

Create the 3 Services with sudo kubectl create -f kafka-svc.yaml.

kafka-deployment.yaml for the cluster Deployments

Note: the KAFKA_ADVERTISED_HOST_NAME env variable must be changed to the ClusterIP of the Service corresponding to each Pod.

PS: if your Zookeeper Services are named differently from mine above, adjust KAFKA_ZOOKEEPER_CONNECT to match.
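
The ClusterIPs to fill in can be read off with kubectl, e.g.:

# The CLUSTER-IP column gives the value for each KAFKA_ADVERTISED_HOST_NAME
sudo kubectl get svc kafka-cluster1 kafka-cluster2 kafka-cluster3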

kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-cluster-1
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-1
  template:
    metadata:
      labels:
        name: kafka-cluster-1
        app: kafka-cluster-1
    spec:
      containers:
      - name: kafka-cluster-1
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster1]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "1"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-cluster-2
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-2
  template:
    metadata:
      labels:
        name: kafka-cluster-2
        app: kafka-cluster-2
    spec:
      containers:
      - name: kafka-cluster-2
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster2]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "2"
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: kafka-cluster-3
spec:
  replicas: 1
  selector:
    matchLabels:
      name: kafka-cluster-3
  template:
    metadata:
      labels:
        name: kafka-cluster-3
        app: kafka-cluster-3
    spec:
      containers:
      - name: kafka-cluster-3
        image: wurstmeister/kafka
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9092
        env:
        - name: KAFKA_ADVERTISED_PORT
          value: "9092"
        - name: KAFKA_ADVERTISED_HOST_NAME
          value: "[ClusterIP of kafka-cluster3]"
        - name: KAFKA_ZOOKEEPER_CONNECT
          value: zookeeper-cluster1:2181,zookeeper-cluster2:2181,zookeeper-cluster3:2181
        - name: KAFKA_BROKER_ID
          value: "3"

Create the 3 Deployments with sudo kubectl create -f kafka-deployment.yaml.

Check that the cluster started successfully

Once all 3 Pods are in the Running state, check the logs for errors with sudo kubectl logs kafka-cluster-1-xxxxxx.

Then enter a Pod with sudo kubectl exec -it kafka-cluster-1-558747bc7d-5n94p /bin/bash and run kafka-console-producer.sh --broker-list [ClusterIP of kafka-cluster1]:9092 --topic test, which creates the topic test.

Enter another Pod with sudo kubectl exec -it kafka-cluster-2-66c88f759b-8wlvp /bin/bash and run kafka-console-consumer.sh --bootstrap-server [ClusterIP of kafka-cluster2]:9092 --topic test --from-beginning to receive messages on the topic test.

Then try sending messages from cluster-1 and see whether they are received on cluster-2.
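
The round trip then looks roughly like this (producer on cluster-1, consumer on cluster-2, ClusterIPs filled in as above):

# Producer terminal: type a message at the prompt
> hello from cluster-1

# Consumer terminal: the same message should be echoed
hello from cluster-1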

  • 2018-11-23 PS: ha, the NodePort for Kafka turns out to be useless after all, because the broker address list the client ultimately fetches contains the Pod IPs, so it can still only be used from machines inside the cluster.
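
If external access is needed, a possible workaround (a sketch I have not verified with this setup: it assumes the wurstmeister/kafka image maps KAFKA_ADVERTISED_HOST_NAME / KAFKA_ADVERTISED_PORT onto the broker's advertised.host.name / advertised.port, and 192.168.0.10 is a stand-in for a node IP reachable from outside) is to advertise a node address plus the NodePort instead of the ClusterIP:

# Hypothetical env for kafka-cluster-1
- name: KAFKA_ADVERTISED_HOST_NAME
  value: "192.168.0.10"   # a node IP reachable by external clients
- name: KAFKA_ADVERTISED_PORT
  value: "30091"          # the nodePort of Service kafka-cluster1

With that, the broker address list handed to clients points at the node rather than the Pod.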