k8s-Based Deployment

Note:

A. This part requires that you have completed the material preceding this section.

B. The She platform server must support hardware virtualization (ask your instructor if unsure).

  1. Remove the petclinic containers to ensure a clean environment
docker stop petclinic1 && docker rm petclinic1
docker stop petclinic2 && docker rm petclinic2
  2. Remove the mysql container
docker stop mysql && docker rm mysql
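Optionally, confirm that the environment is clean by listing any remaining containers (docker ps -a shows both running and stopped containers):

docker ps -a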
  3. Start the k8s cluster
cd /opt/tools/installK8s

Then run the startup script:

./installK8s.sh

Make sure the command above has finished before closing the SSH connection. It may only be run once; if it fails, recreate this workspace and start over from scratch.

The execution proceeds as follows:

root@ssxy:/opt/tools/installK8s# cd /opt/tools/installK8s
root@ssxy:/opt/tools/installK8s# ls
installK8s.sh  srcyamls
root@ssxy:/opt/tools/installK8s# ./installK8s.sh 
W0809 15:24:15.314228    3125 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.3
[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ssxy kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.15]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [ssxy localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [ssxy localhost] and IPs [192.168.1.15 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0809 15:24:24.656946    3125 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0809 15:24:24.658420    3125 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 30.011551 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ssxy as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ssxy as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 72cix0.diw72o8y0kmrlgua
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.15:6443 --token 72cix0.diw72o8y0kmrlgua \
    --discovery-token-ca-cert-hash sha256:a3e1223f80c1bd9e21866276f5cdd86c5e2c15842443b1ba3316fc30427c8cd1 
node/ssxy untainted
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
node/ssxy condition met
deployment.apps/coredns scaled
pod/coredns-66bff467f8-9gmft condition met
pod/coredns-66bff467f8-kdvk9 condition met
deployment.apps/coredns scaled
pod/coredns-66bff467f8-wjpsd condition met
pod/calico-kube-controllers-598fbdf98b-nhmmq condition met
pod/calico-node-pnl5h condition met
namespace/local-path-storage created
serviceaccount/local-path-provisioner-service-account created
clusterrole.rbac.authorization.k8s.io/local-path-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/local-path-provisioner-bind created
deployment.apps/local-path-provisioner created
storageclass.storage.k8s.io/local-path created
configmap/local-path-config created
storageclass.storage.k8s.io/local-path patched
namespace/ingress-nginx created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created
limitrange/ingress-nginx created
service/ingress-nginx created
namespace/metallb-system created
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
secret/memberlist created
configmap/config created
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   LoadBalancer   10.108.8.115   <pending>     80:31151/TCP,443:30487/TCP   5s
All Finished.
root@ssxy:/opt/tools/installK8s# 
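Before continuing, you can verify that the single-node cluster is ready; these are standard kubectl checks, and the node name ssxy matches the output above:

kubectl get nodes
kubectl get pods -A

The node should report Ready, and the system pods (calico, coredns, ingress-nginx, metallb, local-path-provisioner) should all reach Running.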
  4. Deploy

A. First, deploy the petclinic application.

cd ~/petclinic
touch petclinic.yaml

Enter the following into petclinic.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.7-debian
        name: mysql
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3306
          name: dbport
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: Yhf@1018
        - name: MYSQL_DATABASE
          value: petclinic

---

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - name: mysqlport
    protocol: TCP
    port: 3306
    targetPort: dbport

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic1
spec:
  selector:
    matchLabels:
      run: petclinic1
  template:
    metadata:
      labels:
        run: petclinic1
    spec:
      initContainers:
      - name: db-init
        image: registry.kinginsai.com/busybox:1.33.1
        command: ['sh', '-c', 'echo -e "Checking MySQL"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e "  >> MySQL DB Server has started";']
      containers:
      - name: petclinic1
        image: r.kinginsai.com/petclinic:7.8.0
        imagePullPolicy: IfNotPresent
        env:
        - name: spring.profiles.active
          value: "mysql"
        - name: spring_datasource_password
          value: "Yhf@1018"
        - name: database_url
          value: "jdbc:mysql://mysql:3306/petclinic"
        - name: app_id
          value: "1"
        ports:
        - name: http
          containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: petclinic1
spec:
  selector:
    run: petclinic1
  ports:
  - name: petclinic1port
    protocol: TCP
    port: 8080
    targetPort: 8080

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: petclinic2
spec:
  selector:
    matchLabels:
      run: petclinic2
  template:
    metadata:
      labels:
        run: petclinic2
    spec:
      initContainers:
      - name: db-init
        image: registry.kinginsai.com/busybox:1.33.1
        command: ['sh', '-c', 'echo -e "Checking MySQL"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e "  >> MySQL DB Server has started";']
      containers:
      - name: petclinic2
        image: r.kinginsai.com/petclinic:7.8.0
        imagePullPolicy: IfNotPresent
        env:
        - name: spring.profiles.active
          value: "mysql"
        - name: spring_datasource_password
          value: "Yhf@1018"
        - name: database_url
          value: "jdbc:mysql://mysql:3306/petclinic"
        - name: app_id
          value: "2"
        ports:
        - name: http
          containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: petclinic2
spec:
  selector:
    run: petclinic2
  ports:
  - name: petclinic2port
    protocol: TCP
    port: 8080
    targetPort: 8080
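
Before applying, you can optionally validate the manifest with a client-side dry run (assuming kubectl 1.18 or later, where --dry-run=client is available):

kubectl apply -f petclinic.yaml -n default --dry-run=client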

Then run the deployment command:

kubectl apply -f petclinic.yaml -n default

Note: if the deployment fails, remove it with the following command (diagnostic commands for a failing pod are shown right after):

kubectl delete -f petclinic.yaml -n default
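If a pod fails to start, it can help to inspect it before deleting the deployment; the pod name below is a placeholder taken from the kubectl get pods output:

kubectl describe pod <pod-name> -n default
kubectl logs <pod-name> -n default
kubectl logs <pod-name> -c db-init -n default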

Use the following command to check that the deployment has completed; the petclinic pods stay in the Init phase until their init container can reach MySQL, so rerun the command until every pod shows Running (an optional kubectl wait sketch follows the output below):

root@ssxy:~/petclinic# kubectl get pods -n default
NAME                          READY   STATUS     RESTARTS   AGE
mysql-846d894c6c-qlqxq        1/1     Running    0          2m11s
petclinic1-b774d4d4c-vnmw5    0/1     Init:0/1   0          2m11s
petclinic2-5f8db4885d-9scm6   0/1     Init:0/1   0          2m11s
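Instead of polling manually, you can optionally block until the application pods are ready; this is a sketch using kubectl wait with a 5-minute timeout, and the label selectors match the Deployments defined above:

kubectl wait --for=condition=Ready pod -l run=petclinic1 -n default --timeout=300s
kubectl wait --for=condition=Ready pod -l run=petclinic2 -n default --timeout=300s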

B. Deploy nginx

cd ~/petclinic
touch nginx.yaml

Enter the following into nginx.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.25.1
        name: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: nginxport
        volumeMounts:
        - name: web-nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
      volumes:
        - name: web-nginx-config
          configMap:
            name: web-nginx-config
            items:
            - key: nginx.conf
              path: nginx.conf

---

apiVersion: v1
kind: ConfigMap
metadata:
  name: web-nginx-config
data:
  nginx.conf: |
    user  nginx;
    worker_processes  1;

    error_log  /var/log/nginx/error.log warn;
    pid        /var/run/nginx.pid;


    events {
        worker_connections  1024;
    }


    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;

        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

        access_log  /var/log/nginx/access.log  main;

        sendfile        on;
        #tcp_nopush     on;

        keepalive_timeout  65;

        #gzip  on;

        upstream backend {
            #server 192.168.1.101:8080 down;

            server petclinic1:8080;
            server petclinic2:8080;
        }

        server {
            listen       80;
            
            location / {
                proxy_pass http://backend/;
            }

        }
    }

---
apiVersion: v1
kind: Service
metadata:
  name: web-nginx
  labels:
    app: web-nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web-nginx-out
spec:
  type: NodePort
  sessionAffinity: ClientIP
  ports:
    - name: web-nginx-out
      port: 80
      targetPort: 80
      nodePort: 30080
  selector:
    app: nginx

Then run the deployment command:

kubectl apply -f nginx.yaml -n default

Note: if the deployment fails, remove it with the following command:

kubectl delete -f nginx.yaml -n default
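Before opening the browser, you can optionally confirm that nginx is running and exposed on NodePort 30080; the label selector and Service name come from nginx.yaml above:

kubectl get pods -n default -l app=nginx
kubectl get svc web-nginx-out -n default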

In the remote browser, enter the Ubuntu18TextVMI host IP address followed by :30080 to view the page.
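
Alternatively, if curl is available on the host, you can check from the command line; the address below is a placeholder for the Ubuntu18TextVMI host IP:

curl -I http://<host-ip>:30080/

An HTTP 200 response should indicate that nginx is proxying requests to the petclinic backends.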