仁智实验室


1. Install MySQL
# Set the namespace to install into
kubens infra

# Add the Helm repository
helm repo add bitnami https://charts.bitnami.com/bitnami

# Install MySQL
helm install mysql bitnami/mysql

# output
WARNING: This chart is deprecated
NAME: mysql
LAST DEPLOYED: Mon May 31 15:04:17 2021
NAMESPACE: infra
STATUS: deployed
REVISION: 1
NOTES:
MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
mysql.infra.svc.cluster.local

To get your root password run:

MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace infra mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

$ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
$ mysql -h mysql -p

To connect to your database directly from outside the K8s cluster:
MYSQL_HOST=127.0.0.1
MYSQL_PORT=3306

# Execute the following command to route the connection:
kubectl port-forward svc/mysql 3306

mysql -h ${MYSQL_HOST} -P${MYSQL_PORT} -u root -p${MYSQL_ROOT_PASSWORD}
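The password-retrieval pipeline above works because Kubernetes stores Secret values base64-encoded: the jsonpath query extracts the encoded string, and `base64 --decode` recovers the plaintext. A minimal local sketch of just the decode step (the encoded value is a made-up placeholder, not a real password):

```shell
# Kubernetes Secrets hold base64-encoded values; this simulates the
# decode step of the MYSQL_ROOT_PASSWORD pipeline with a placeholder.
encoded='c3VwZXItc2VjcmV0LXB3'   # base64 of "super-secret-pw" (made up)
MYSQL_ROOT_PASSWORD=$(printf '%s' "$encoded" | base64 --decode)
echo "$MYSQL_ROOT_PASSWORD"   # super-secret-pw
```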

Configure the MySQL proxy gateway

traefik-mysql.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: ingressmysql
  namespace: infra
spec:
  entryPoints:
    - mysql
  routes:
    - match: HostSNI(`*`)
      services:
        - name: mysql
          port: 3306

Create the Traefik route for MySQL

kubectl apply -f traefik-mysql.yaml
6. Install Redis
# Install Redis
helm install redis bitnami/redis

# output
NAME: redis
LAST DEPLOYED: Mon May 31 15:07:07 2021
NAMESPACE: infra
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Redis(TM) can be accessed on the following DNS names from within your cluster:

redis-master.infra.svc.cluster.local for read/write operations (port 6379)
redis-replicas.infra.svc.cluster.local for read-only operations (port 6379)



To get your password run:

export REDIS_PASSWORD=$(kubectl get secret --namespace infra redis -o jsonpath="{.data.redis-password}" | base64 --decode)

To connect to your Redis(TM) server:

1. Run a Redis(TM) pod that you can use as a client:

kubectl run --namespace infra redis-client --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:6.2.3-debian-10-r22 --command -- sleep infinity

Use the following command to attach to the pod:

kubectl exec --tty -i redis-client \
--namespace infra -- bash

2. Connect using the Redis(TM) CLI:
redis-cli -h redis-master -a $REDIS_PASSWORD
redis-cli -h redis-replicas -a $REDIS_PASSWORD

To connect to your database from outside the cluster execute the following commands:

kubectl port-forward --namespace infra svc/redis-master 6379:6379 &
redis-cli -h 127.0.0.1 -p 6379 -a $REDIS_PASSWORD
7. Install Kafka
# Install Kafka
helm install kafka bitnami/kafka

# output
NAME: kafka
LAST DEPLOYED: Mon May 31 15:09:13 2021
NAMESPACE: infra
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

kafka.infra.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

kafka-0.kafka-headless.infra.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:2.8.0-debian-10-r27 --namespace infra --command -- sleep infinity
kubectl exec --tty -i kafka-client --namespace infra -- bash

PRODUCER:
kafka-console-producer.sh \
--broker-list kafka-0.kafka-headless.infra.svc.cluster.local:9092 \
--topic test

CONSUMER:
kafka-console-consumer.sh \
--bootstrap-server kafka.infra.svc.cluster.local:9092 \
--topic sms_callback_aliyun \
--from-beginning
8. Install CMAK
helm repo add cmak https://eshepelyuk.github.io/cmak-operator

1. Set environment variables
echo 'export GOPRIVATE="gitlab.imind.tech"' >> ~/.zprofile  # macOS; similar on other systems
source ~/.zprofile
2. Force git to fetch private package code over SSH by rewriting the domain
1
git config --global url."git@gitlab.imind.tech:".insteadOf https://gitlab.imind.tech
3. Configure an SSH key for your git account
  1. If you do not have an SSH key yet, generate a public/private key pair with:

    ssh-keygen -t rsa -C 'xxx@imind.tech' # -C sets the comment, typically your email address
  2. Open ~/.ssh/id_rsa.pub and copy its contents

  3. Log in to GitLab and add the key under Edit profile -> SSH Keys

    (screenshot of the GitLab SSH Keys settings page omitted)

    Paste the public key copied in the previous step into the key field, then click Add key


4. Fetch the private repository package
go get gitlab.imind.tech/micro/pkg@latest
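Before running `go get`, you can confirm the rewrite rule is active by reading it back with `git config --get`. A small sketch using a throwaway HOME so it does not touch your real global config:

```shell
# Use a throwaway HOME so this demo leaves your real ~/.gitconfig alone
export HOME=$(mktemp -d)
export GOPRIVATE="gitlab.imind.tech"

# The insteadOf rule makes git fetch the private host over SSH
git config --global url."git@gitlab.imind.tech:".insteadOf https://gitlab.imind.tech

# Read the rule back to verify it took effect
git config --global --get url."git@gitlab.imind.tech:".insteadOf   # prints https://gitlab.imind.tech
```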

1. Install Docker

sudo apt-get update && sudo apt-get install -y apt-transport-https
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker

2. Disable swap

# Disable swap for the current session
sudo swapoff -a
# Permanently disable the swap partition (comments out the swap line in /etc/fstab)
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
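In the `sed` expression above, `&` in the replacement stands for the entire matched line, so every line mentioning swap gets a `#` prepended while other lines are untouched. A dry run against a sample fstab copy (the UUIDs are made up):

```shell
# Build a sample fstab (made-up UUIDs) to preview what the sed expression does
cat > /tmp/fstab.demo <<'EOF'
UUID=1111-2222 / ext4 defaults 0 1
UUID=3333-4444 none swap sw 0 0
EOF

# '&' is the whole match, so lines containing "swap" get commented out
sed -i 's/.*swap.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
```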

3. Add the Kubernetes apt source

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

4. Import the Kubernetes key, update the package index, and install kubeadm, kubelet, and kubectl

curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo apt-get update
sudo apt-get install kubelet kubeadm kubectl

5. Configure the Aliyun registry mirror

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://754jn7no.mirror.aliyuncs.com"]
}
EOF

6. Pull the images

# Pull each image from the Aliyun mirror and retag it for k8s.gcr.io
for i in `kubeadm config images list`; do
  imageName=${i#k8s.gcr.io/}
  docker pull registry.aliyuncs.com/google_containers/$imageName
  docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.aliyuncs.com/google_containers/$imageName
done

# Workaround for the coredns image path
docker pull registry.aliyuncs.com/google_containers/coredns:1.8.0
docker tag registry.aliyuncs.com/google_containers/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
docker rmi registry.aliyuncs.com/google_containers/coredns:1.8.0
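The retag loop depends on the shell parameter expansion `${i#k8s.gcr.io/}`, which strips the shortest leading match of `k8s.gcr.io/` from the image reference. That string step in isolation:

```shell
# ${i#prefix} strips the shortest leading match of "prefix" from $i,
# leaving a bare name:tag that can be re-prefixed with the mirror registry.
i="k8s.gcr.io/kube-apiserver:v1.21.1"
imageName=${i#k8s.gcr.io/}
echo "$imageName"   # kube-apiserver:v1.21.1
echo "registry.aliyuncs.com/google_containers/$imageName"
```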

7. Initialize the cluster with kubeadm

kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU
# --ignore-preflight-errors=NumCPU skips the CPU-count preflight check; omit it if you have enough cores.
# --pod-network-cidr=10.244.0.0/16 is required, or setting up the pod network will fail later

# If initialization fails or you want to start over, reset with:
kubeadm reset

# "Your Kubernetes master has initialized successfully!" means the install succeeded
# After a successful install, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

8. Install the network plugin (Flannel)

kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml

9. Make the master node schedulable (by default Kubernetes does not run Pods on the master)

kubectl taint nodes --all node-role.kubernetes.io/master-

1. Download
wget https://mirrors.tuna.tsinghua.edu.cn/apache/flume/1.9.0/apache-flume-1.9.0-bin.tar.gz
2. Extract
tar zxvf apache-flume-1.9.0-bin.tar.gz -C /opt/modules
3. Rename
mv apache-flume-1.9.0-bin flume-1.9.0
4. Resolve the jar conflict
cd /opt/modules/flume-1.9.0/lib

mv guava-11.0.2.jar guava-11.0.2.jar.bak
5. Set environment variables
cd ../conf/

cp flume-env.sh.template flume-env.sh

vi flume-env.sh

Set the JAVA_HOME environment variable in flume-env.sh
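"Setting JAVA_HOME" amounts to one export line in flume-env.sh. A sketch, assuming the JDK lives under /opt/modules/jdk1.8.0 (a placeholder path; substitute your actual install directory):

```shell
# Append JAVA_HOME to flume-env.sh (run inside the conf/ directory).
# /opt/modules/jdk1.8.0 is a placeholder -- point it at your real JDK.
echo 'export JAVA_HOME=/opt/modules/jdk1.8.0' >> flume-env.sh
grep JAVA_HOME flume-env.sh
```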

1. Prepare the environment

1. Edit the hosts file on every node to add the inter-node hostname mappings.
vi /etc/hosts
172.16.50.2 hadoop01
172.16.50.3 hadoop02
172.16.50.4 hadoop03
2. Configure the Java environment (on every node)

For how to configure the Java environment, see here.

3. Set up the ZooKeeper cluster

For how to set up a ZooKeeper cluster, see here.

2. Set up the Kafka cluster

1. Download the Kafka package, extract it, and configure the Kafka environment variables

For how to download the Kafka package, see here.

This post uses kafka_2.12-2.6.0.tgz. Extract it to a directory of your choice (e.g. /opt/modules), configure the Kafka environment variables, and make them take effect:

tar -zxvf kafka_2.12-2.6.0.tgz -C /opt/modules

vi /etc/profile.d/env.sh

#set kafka environment
export KAFKA_HOME=/opt/modules/kafka_2.12-2.6.0
export PATH=$PATH:$KAFKA_HOME/bin

source /etc/profile.d/env.sh
2. Edit the configuration file
vi /opt/modules/kafka_2.12-2.6.0/config/server.properties

broker.id=0
listeners=PLAINTEXT://172.16.50.2:9092
log.dirs=/opt/modules/kafka_2.12-2.6.0/logs
zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181
3. Sync Kafka to the other servers
xsync /opt/modules/kafka_2.12-2.6.0
xsync /etc/profile.d/env.sh

# run on hadoop02
vi /opt/modules/kafka_2.12-2.6.0/config/server.properties

broker.id=1
listeners=PLAINTEXT://172.16.50.3:9092
log.dirs=/opt/modules/kafka_2.12-2.6.0/logs
zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181

source /etc/profile.d/env.sh

# run on hadoop03
vi /opt/modules/kafka_2.12-2.6.0/config/server.properties

broker.id=2
listeners=PLAINTEXT://172.16.50.4:9092
log.dirs=/opt/modules/kafka_2.12-2.6.0/logs
zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181

source /etc/profile.d/env.sh
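Editing broker.id and listeners by hand on every node is easy to get wrong; the per-host edits above can be generated from the id/IP pairs instead. A sketch that stamps out one server.properties per broker, writing under /tmp so it is safe to run anywhere:

```shell
# Generate a server.properties per broker from the id/IP pairs above.
# Output goes under /tmp/kafka-conf so this is safe to run locally.
mkdir -p /tmp/kafka-conf
id=0
for ip in 172.16.50.2 172.16.50.3 172.16.50.4; do
  cat > /tmp/kafka-conf/server-$id.properties <<EOF
broker.id=$id
listeners=PLAINTEXT://$ip:9092
log.dirs=/opt/modules/kafka_2.12-2.6.0/logs
zookeeper.connect=hadoop01:2181,hadoop02:2181,hadoop03:2181
EOF
  id=$((id + 1))
done
grep broker.id /tmp/kafka-conf/server-*.properties
```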

1. Create the Traefik custom resource definition (CRD) file

traefik-crd.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: infra

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutes.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRoute
    plural: ingressroutes
    singular: ingressroute
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: middlewares.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: Middleware
    plural: middlewares
    singular: middleware
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressroutetcps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteTCP
    plural: ingressroutetcps
    singular: ingressroutetcp
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: ingressrouteudps.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: IngressRouteUDP
    plural: ingressrouteudps
    singular: ingressrouteudp
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsoptions.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSOption
    plural: tlsoptions
    singular: tlsoption
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: tlsstores.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TLSStore
    plural: tlsstores
    singular: tlsstore
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: traefikservices.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: TraefikService
    plural: traefikservices
    singular: traefikservice
  scope: Namespaced

---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: serverstransports.traefik.containo.us
spec:
  group: traefik.containo.us
  version: v1alpha1
  names:
    kind: ServersTransport
    plural: serverstransports
    singular: serverstransport
  scope: Namespaced

---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.containo.us
    resources:
      - middlewares
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
    verbs:
      - get
      - list
      - watch

---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: infra

Create the Traefik custom resources

kubectl apply -f traefik-crd.yaml

2. Create the Traefik Deployment

traefik-deployment.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: infra
  name: traefik-ingress-controller

---
kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: infra
  name: traefik
  labels:
    app: traefik

spec:
  replicas: 1
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v2.4
          args:
            - --accesslog
            - --entrypoints.web.Address=:8000
            - --entrypoints.websecure.Address=:4443
            - --entrypoints.mysql.Address=:3306
            - --providers.kubernetescrd
            - --certificatesresolvers.myresolver.acme.tlschallenge
            - --certificatesresolvers.myresolver.acme.email=foo@you.com
            - --certificatesresolvers.myresolver.acme.storage=acme.json
            # Please note that this is the staging Let's Encrypt server.
            # Once you get things working, you should remove that whole line altogether.
            - --certificatesresolvers.myresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory
          ports:
            - name: web
              containerPort: 8000
            - name: websecure
              containerPort: 4443
            - name: admin
              containerPort: 8080
            - name: mysql
              containerPort: 3306

Create the Traefik Deployment

kubectl apply -f traefik-deployment.yaml

3. Create the Traefik Service

traefik-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: traefik
  namespace: infra
spec:
  type: NodePort
  ports:
    - protocol: TCP
      name: web
      port: 8000
      nodePort: 80
    - protocol: TCP
      name: admin
      port: 8080
      nodePort: 8080
    - protocol: TCP
      name: websecure
      port: 443
      nodePort: 443
    - protocol: TCP
      name: mysql
      port: 3306
      nodePort: 3306
  selector:
    app: traefik

Create the Traefik Service in Kubernetes

kubectl apply -f traefik-svc.yaml

Error message:

The Service "traefik" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767

4. Change the Kubernetes NodePort range:

1) Log in to the Docker VM:

docker run --rm -it --privileged --pid=host walkerlee/nsenter -t 1 -m -u -i -n sh

2) Edit kube-apiserver.yaml

vi /etc/kubernetes/manifests/kube-apiserver.yaml

3) Add the flag to the Pod spec

spec:
  containers:
    - command:
        - kube-apiserver
        ...
        - --service-cluster-ip-range=10.96.0.0/12
        - --service-node-port-range=1-65535   # add this line
        - --tls-cert-file=/run/config/pki/apiserver.crt
        ...

4) Save and exit

5) Restart Kubernetes

Re-apply the Traefik Service

kubectl apply -f traefik-svc.yaml

Visit the Traefik web UI

http://lk.julive.com:8080

Traefik is now installed and configured.

1. Install JDK 1.8
2. Download ZooKeeper

Go to https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/ and download a suitable version.
Note: download the Linux build. This post uses apache-zookeeper-3.6.2-bin.tar.gz as the example.

3. Extract ZooKeeper

Extract the downloaded apache-zookeeper-3.6.2-bin.tar.gz into the /opt/modules directory.

tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz -C /opt/modules/
cd /opt/modules/
mv apache-zookeeper-3.6.2-bin zookeeper-3.6.2
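The `-C` flag is what sends the extraction into /opt/modules instead of the current working directory. A self-contained demo of the same flag on a throwaway archive under /tmp:

```shell
# Demo of tar's -C flag on a throwaway archive: -C changes into the
# target directory before extracting, so files land under dest/ and
# not in the current working directory.
rm -rf /tmp/zk-demo
mkdir -p /tmp/zk-demo/src/apache-demo-bin /tmp/zk-demo/dest
echo hello > /tmp/zk-demo/src/apache-demo-bin/README
tar -czf /tmp/zk-demo/demo.tar.gz -C /tmp/zk-demo/src apache-demo-bin
tar -zxf /tmp/zk-demo/demo.tar.gz -C /tmp/zk-demo/dest
ls /tmp/zk-demo/dest   # apache-demo-bin
```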
4. Edit the zoo.cfg configuration
cd /opt/modules/zookeeper-3.6.2/conf
cp zoo_sample.cfg zoo.cfg

mkdir /opt/modules/zookeeper-3.6.2/data
mkdir /opt/modules/zookeeper-3.6.2/logs

vi zoo.cfg

dataDir=/opt/modules/zookeeper-3.6.2/data
maxClientCnxns=1024
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
# append at the end of the file
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
5. Create the myid file
echo 1 > /opt/modules/zookeeper-3.6.2/data/myid
6. Set the system environment variables
vi /etc/profile.d/env.sh

# Add the following configuration
# ZooKeeper
export ZK_HOME=/opt/modules/zookeeper-3.6.2
export PATH=$PATH:$ZK_HOME/bin

# make the configuration take effect
source /etc/profile.d/env.sh
7. Sync ZooKeeper to the other servers
xsync /opt/modules/zookeeper-3.6.2
xsync /etc/profile.d/env.sh

# run on hadoop02
echo 2 > /opt/modules/zookeeper-3.6.2/data/myid
source /etc/profile.d/env.sh

# run on hadoop03
echo 3 > /opt/modules/zookeeper-3.6.2/data/myid
source /etc/profile.d/env.sh
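The myid on each host must agree with its server.N line in zoo.cfg (server.1=hadoop01 means hadoop01's myid is 1, and so on). With the hadoopNN naming used in this post, the id can be derived from the hostname rather than typed per machine. A sketch (writes under /tmp for safety):

```shell
# Derive each ZooKeeper myid from the hadoopNN hostname: the numeric
# suffix maps to server.N in zoo.cfg (hadoop01 -> 1, hadoop02 -> 2, ...).
# Writes under /tmp here; on a real node the target file would be
# /opt/modules/zookeeper-3.6.2/data/myid.
for host in hadoop01 hadoop02 hadoop03; do
  id=$(echo "$host" | sed 's/^hadoop0*//')
  mkdir -p /tmp/zk-data/$host
  echo "$id" > /tmp/zk-data/$host/myid
done
cat /tmp/zk-data/hadoop02/myid   # 2
```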

1. Set the namespace used by all subsequent kubectl commands in the current context
$ kubectl config set-context $(kubectl config current-context) --namespace=micro
2. List all Pods, with their status, in all namespaces
$ kubectl get pods --all-namespaces
3. List all Pods, with their status, in specific namespaces
$ kubectl get po -o wide -n <namespace1> -n <namespace2> -n <namespace3>
4. Show the current default namespace
$ kubectl config view --minify | grep namespace
5. Handy kubectl aliases
alias k='kubectl'
alias kc='kubectl config view --minify | grep name'
alias kdp='kubectl describe pod'
alias krh='kubectl run --help | more'
alias ugh='kubectl get --help | more'
alias c='clear'
alias ke='kubectl explain'
alias kf='kubectl create -f'
alias kg='kubectl get pods --show-labels'
alias kr='kubectl replace -f'
alias kh='kubectl --help | more'
alias krh='kubectl run --help | more'
alias ks='kubectl get namespaces'
alias kga='k get pod --all-namespaces'
alias kgaa='kubectl get all --show-labels'

6. Vim settings, to make editing YAML with vi easier

Create ~/.vimrc and add the following content

set smarttab
set expandtab
set shiftwidth=4
set tabstop=4
set number
7. Create YAML template files from kubectl commands
kubectl run busybox --image=busybox --dry-run=client -o yaml --restart=Never > yamlfile.yaml

kubectl create job my-job --dry-run=client -o yaml --image=busybox -- date > yamlfile.yaml

kubectl get -o yaml deploy/nginx > 1.yaml (ensure you have a Deployment named nginx)

kubectl run busybox --image=busybox --dry-run=client -o yaml --restart=Never -- /bin/sh -c "while true; do echo hello; echo hello again;done" > yamlfile.yaml

kubectl run wordpress --image=wordpress --expose --port=8989 --restart=Never -o yaml

kubectl run test --image=busybox --restart=Never --dry-run=client -o yaml -- /bin/sh -c 'echo test;sleep 100' > yamlfile.yaml (note the trailing "-- /bin/sh ..." part; this is what creates the YAML file)

Another good way to create YAML files is to fetch them directly from the Internet with the wget command.

8. Use kubectx and kubens to manage contexts and namespaces, respectively

https://github.com/ahmetb/kubectx