Kubernetes Cluster Installation

With a Docker Swarm background, Kubernetes is relatively easy to pick up, but installing it is still fairly involved: the parts of the official documentation that reach Google-hosted servers need to be replaced with mirrors reachable from mainland China. If the goal is simply to learn, you can also use minikube or Vagrant to spin up a cluster quickly.

1   Environment

Swarm places few demands on hosts (nodes), but Kubernetes runs preflight checks; in general, watch out for the following:

  • the master node needs at least 2 CPUs and at least 2 GB of memory
  • swap must be turned off (see the commands below)

For the exact environment requirements, the official document install-kubeadm is authoritative.
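A quick sketch of checking and satisfying these two points on CentOS 7 (the sed edit assumes swap is mounted through an ordinary /etc/fstab entry):

# check against the preflight minimums
nproc     # CPU count, should be >= 2 on the master
free -h   # memory, should be >= 2 GB on the master

# turn swap off now and keep it off across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab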

2   Preparing the Tools

2.1   Installing Docker

yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2
yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io

usermod -aG docker yhdodo19
systemctl start docker
systemctl enable docker
# log out and log back in so the docker group membership takes effect
docker version
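kubeadm's preflight check will warn that Docker's cgroup driver is cgroupfs and recommend systemd (visible in the init output in section 3.1). Optionally switch the driver before initializing; a sketch, assuming /etc/docker/daemon.json does not already exist:

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i cgroup   # should now report the systemd driver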

2.2   Kubernetes Tools

Use the aliyun mirror:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

yum install -y kubelet kubeadm kubectl

systemctl enable --now kubelet

At this point kubelet sits in the activating (auto-restart) state; it only becomes active (running) once kubeadm init or kubeadm join is run.
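This is easy to observe directly:

systemctl status kubelet   # Active: activating (auto-restart) until init/join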

Note that SELinux must be put into permissive mode (which effectively disables it) so that containers can access the host filesystem. setenforce only accepts [ Enforcing | Permissive | 1 | 0 ]; the value disabled can only be set by editing /etc/selinux/config. Also, some RHEL/CentOS 7 users have seen network requests routed incorrectly because iptables was bypassed; make sure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl configuration:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
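On a fresh host these net.bridge.* keys may not exist until the br_netfilter kernel module is loaded; loading it explicitly (and persisting it across reboots) avoids a "No such file or directory" error from sysctl:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf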

3   Starting the Cluster

3.1   Initializing the Cluster

[root@instance-2 ~]# kubeadm init
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
  [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [instance-2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.150.0.2]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [instance-2 localhost] and IPs [10.150.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [instance-2 localhost] and IPs [10.150.0.2 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.002951 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node instance-2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node instance-2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rtbkg0.b6xgzvz2x9xekegi
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.150.0.2:6443 --token rtbkg0.b6xgzvz2x9xekegi \
    --discovery-token-ca-cert-hash sha256:b4659d6b5d8e6015a5a31da172225b625336c095760191b51f246bf34b113864
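
The run above assumes the host can pull from k8s.gcr.io directly. Where Google's registry is unreachable, kubeadm (v1.13 and later) can pull the control-plane images from an alternative registry; a sketch using the well-known aliyun mirror:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers
kubeadm init --image-repository registry.aliyuncs.com/google_containers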

3.2   Accessing the Cluster

With the Kubernetes cluster up, the next step is to access it. The official document "Access Clusters Using the Kubernetes API" lists several approaches; all of them need the Kubernetes API endpoint (IP and port), a token, and certificates, which can be obtained as follows:

$ APISERVER=$(kubectl config view | grep server | cut -f 2- -d ":" | tr -d " ")
$ TOKEN=$(kubectl describe secret $(kubectl get secrets | grep default | cut -f1 -d ' ') | grep -E '^token' | cut -f2 -d':' | tr -d '\t')
  • Call the REST API directly with curl, wget, etc., e.g. curl $APISERVER/api --header "Authorization: Bearer $TOKEN" --insecure
  • Access the API programmatically, e.g. through the Go or Python client libraries
  • Use the kubectl command-line tool, which reads the API endpoint, token, and certificates from a kubeconfig file. Both the master and the worker nodes can then operate the cluster; on a worker node, first distribute /etc/kubernetes/admin.conf to it (see the sketch after this list), then:
  • for a regular user:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  • for root: export KUBECONFIG=/etc/kubernetes/admin.conf
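
A sketch of distributing the admin kubeconfig to a worker (user name and paths are chosen for illustration):

# on the master
scp /etc/kubernetes/admin.conf yhdodo19@instance-4:/tmp/admin.conf
# on the worker, as the regular user
mkdir -p $HOME/.kube
cp /tmp/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config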

3.3   Installing a Network Add-on

Weave Net is chosen here, so the kubectl apply -f [podnetwork].yaml step from the init output becomes:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

3.4   Inspecting the Cluster

> kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-fb8b8dccf-6r95r              1/1     Running   0          113m
kube-system   coredns-fb8b8dccf-lhmvl              1/1     Running   0          113m
kube-system   etcd-instance-2                      1/1     Running   0          112m
kube-system   kube-apiserver-instance-2            1/1     Running   0          112m
kube-system   kube-controller-manager-instance-2   1/1     Running   0          112m
kube-system   kube-proxy-pnc9h                     1/1     Running   0          113m
kube-system   kube-scheduler-instance-2            1/1     Running   0          112m
kube-system   weave-net-l9hps                      2/2     Running   0          61s

> kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
instance-2   Ready    master   113m   v1.14.1

Generally speaking, coredns is a critical core service of a Kubernetes cluster; it is managed by a Deployment, which handles its replica count and scaling. kube-proxy and weave-net, by contrast, run on every node as DaemonSets.
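The controller types behind these pods can be confirmed directly:

> kubectl -n kube-system get deployments,daemonsets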

4   Creating Worker Nodes

Worker nodes have far fewer prerequisites: no CPU or memory minimums, no SELinux requirement, no network add-on to choose, and normally no need to install kubectl. In other words, only a Docker environment, kubelet, and kubeadm are required. As on the master, kubelet enters the activating (auto-restart) state after start-up and only switches to active (running) after kubeadm join.
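Note that the bootstrap token embedded in the join command expires after 24 hours by default; if it has expired by the time a node joins, generate a fresh join command on the master:

kubeadm token create --print-join-command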

[root@instance-4 ~]# kubeadm join 10.150.0.2:6443 --token rtbkg0.b6xgzvz2x9xekegi \
>     --discovery-token-ca-cert-hash sha256:b4659d6b5d8e6015a5a31da172225b625336c095760191b51f246bf34b113864
[preflight] Running pre-flight checks
  [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

5   Creating a Cluster with minikube

See the official Tutorials for details. The principle is simple: start an Ubuntu VM with Docker and the minikube CLI pre-installed, and a single minikube start performs all of the steps above.

$ minikube start
o   minikube v0.34.1 on linux (amd64)
>   Configuring local host environment ...
>   Creating none VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
-   "minikube" IP address is 172.17.0.86
-   Configuring Docker as the container runtime ...
-   Preparing Kubernetes environment ...
@   Downloading kubeadm v1.13.3
@   Downloading kubelet v1.13.3
-   Pulling images required by Kubernetes v1.13.3 ...
-   Launching Kubernetes v1.13.3 using kubeadm ...
-   Configuring cluster permissions ...
-   Verifying component health .....
+   kubectl is now configured to use "minikube"
=   Done! Thank you for using minikube!
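
Once it finishes, the local cluster can be inspected the same way as before:

minikube status
kubectl get nodes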

6   Creating a Cluster with Vagrant
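
Vagrant automates the VM side of the manual installation: bring up one VM per node, then repeat sections 2 through 4 inside them. A minimal sketch, assuming VirtualBox and the public centos/7 box (all names here are illustrative):

# on the host: create, boot, and enter a CentOS 7 VM
vagrant init centos/7
vagrant up
vagrant ssh
# inside the VM: repeat sections 2-4 (Docker, the kube tools, kubeadm init/join)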
