K8s Daily Operations

I. Common Commands

Notes:

  • kubeadm —— the tool for bootstrapping a k8s cluster
  • kubelet —— the agent that runs on every node and manages the pods/containers on it
  • kubectl —— the command-line tool for operating the cluster

1. Get all nodes

# Get all nodes
[root@centos03 ~]# kubectl get nodes
NAME       STATUS   ROLES                         AGE   VERSION
centos03   Ready    control-plane,master,worker   44d   v1.20.4

2. View namespaces

kubectl get ns                      # list namespaces
kubectl get pods -n kube-system     # list pods in a specific namespace
kubectl get pods --all-namespaces   # list pods in all namespaces

Pods across all namespaces:

[root@centos03 ~]# kubectl get pods --all-namespaces
NAMESPACE                      NAME                                               READY   STATUS    RESTARTS   AGE
kube-system                    calico-kube-controllers-8f59968d4-mjdzb            1/1     Running   5          44d
kube-system                    calico-node-54sn5                                  0/1     Running   293        44d
kube-system                    coredns-65944cbcb8-rhbw8                           1/1     Running   5          44d
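To work inside a dedicated namespace, a minimal sketch (the namespace name demo is illustrative):

```shell
# Create a namespace for experiments (the name "demo" is just an example)
kubectl create namespace demo

# List pods in it (empty at first)
kubectl get pods -n demo

# Optionally make it the default namespace for the current kubectl context,
# so subsequent commands no longer need -n demo
kubectl config set-context --current --namespace=demo
```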

3. View deployed services

[root@centos03 ~]# kubectl get svc -o wide
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE    SELECTOR
kubernetes   ClusterIP   10.233.0.1      <none>        443/TCP        44d    <none>
tomcat6      NodePort    10.233.29.246   <none>        80:30112/TCP   111s   app=tomcat6

[root@centos03 ~]# kubectl get pods -o wide
NAME                       READY   STATUS              RESTARTS   AGE     IP       NODE       NOMINATED NODE   READINESS GATES
tomcat6-56fcc999cb-w564f   0/1     ContainerCreating   0    

[root@centos03 ~]# kubectl get deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
tomcat6   0/1     1            0           7m45s

[root@centos03 ~]# kubectl get all
NAME                           READY   STATUS              RESTARTS   AGE
pod/tomcat6-56fcc999cb-w564f   0/1     ContainerCreating   0          8m44s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.233.0.1      <none>        443/TCP        44d
service/tomcat6      NodePort    10.233.29.246   <none>        80:30112/TCP   5m48s

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tomcat6   0/1     1            0           8m44s

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/tomcat6-56fcc999cb   1         1         0       8m44s

4. Generate a YAML file from a command

Next, let's see how to generate the deployment as YAML:

# Generate the YAML with --dry-run: a test run, nothing is actually created.
# (On newer kubectl versions use --dry-run=client, as the warning below notes.)
kubectl create deployment tomcat6  --image=tomcat:6.0.53-jre8 --dry-run -o yaml

Running the command prints the YAML:

[root@centos03 k8s]# kubectl create deployment tomcat6  --image=tomcat:6.0.53-jre8 --dry-run -o yaml
W1123 08:19:09.524860  105440 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat6
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tomcat6
    spec:
      containers:
      - image: tomcat:6.0.53-jre8
        name: tomcat
        resources: {}
status: {}

The output above is the full YAML for the tomcat6 Deployment we want to create. Alternatively, redirect the output to a YAML file, edit that file, and then apply it:

kubectl create deployment tomcat6  --image=tomcat:6.0.53-jre8 --dry-run -o yaml > tomcat6.yaml   # redirect to a file
kubectl apply -f tomcat6.yaml   # apply the yaml file

Once you have a YAML file, it can replace those very long kubectl command lines, so getting comfortable with YAML files is very important.
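As a sketch of this edit-and-apply workflow (the replica-count change is illustrative):

```shell
# Generate the manifest, bump replicas from 1 to 3, then apply
kubectl create deployment tomcat6 --image=tomcat:6.0.53-jre8 --dry-run=client -o yaml > tomcat6.yaml
sed -i 's/replicas: 1/replicas: 3/' tomcat6.yaml
kubectl apply -f tomcat6.yaml

# READY should converge toward 3/3 once the pods come up
kubectl get deployment tomcat6
```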

5. Expose a port (Service)

The port-exposing command can likewise be expressed as a YAML file.

[root@centos03 k8s]# kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort --dry-run -o yaml 
W1123 08:23:05.561432  108701 helpers.go:553] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: tomcat6
  name: tomcat6
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat6
  type: NodePort
status:
  loadBalancer: {}
[root@centos03 k8s]# 

As you can see, the kind is Service; this object is what exposes the deployment.
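The dry-run output can be redirected to a file and applied, mirroring the Deployment workflow (the file name is illustrative):

```shell
# Generate the Service manifest, then apply it
kubectl expose deployment tomcat6 --port=80 --target-port=8080 --type=NodePort \
  --dry-run=client -o yaml > tomcat6-service.yaml
kubectl apply -f tomcat6-service.yaml

# Verify the Service and its assigned NodePort
kubectl get svc tomcat6
```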

About Service:

Because pods are ephemeral, a pod's ip:port changes dynamically. In a k8s cluster this raises a question: if a group of backend pods provides a service consumed by a group of frontend pods, how do the consumers automatically discover the providers? This is what another core k8s concept, the Service, is for.
A Service is an object instance created through the apiserver. For example:

kind: Service
apiVersion: v1
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376

This config creates a new Service object named my-service, backed by all pods labeled app=MyApp, with target port 9376. The Service is also assigned an IP, called the cluster IP, on port 80. If targetPort is not specified, it defaults to the same value as port. A more flexible option is that targetPort can be a String name; the real port behind that name is defined by each backend pod itself, so pods in the same group need not all listen on the same port number.
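A sketch of that named-targetPort variant (the pod name my-app and the image tag are illustrative; only the port name http-web has to match on both sides):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app            # illustrative name
  labels:
    app: MyApp
spec:
  containers:
  - name: app
    image: myapp:1.0      # illustrative image
    ports:
    - name: http-web      # each pod decides the real number behind this name
      containerPort: 9376
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - protocol: TCP
    port: 80
    targetPort: http-web  # a String name instead of a fixed number
```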

As mentioned above, when a Service is created the system assigns it a cluster-wide virtual IP and port; service consumers reach the real backends through this vip:port. The vip is implemented by kube-proxy.
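Using the tomcat6 Service above as an example, access looks roughly like this (the cluster IP 10.233.29.246, NodePort 30112, and node IP 192.168.222.12 are taken from the earlier outputs; yours will differ):

```shell
# From inside the cluster: via the cluster IP (vip) on the service port
curl http://10.233.29.246:80

# From outside the cluster: via any node's IP on the NodePort
curl http://192.168.222.12:30112
```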

Viewing a Pod's full definition

[root@centos03 k8s]# kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-56fcc999cb-w564f   1/1     Running   0          23m
[root@centos03 k8s]# kubectl get pod tomcat6-56fcc999cb-w564f
NAME                       READY   STATUS    RESTARTS   AGE
tomcat6-56fcc999cb-w564f   1/1     Running   0          24m
[root@centos03 k8s]# kubectl get pod tomcat6-56fcc999cb-w564f -o yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.233.72.119/32
    cni.projectcalico.org/podIPs: 10.233.72.119/32
  creationTimestamp: "2021-11-23T00:01:16Z"
  generateName: tomcat6-56fcc999cb-
  labels:
    app: tomcat6
    pod-template-hash: 56fcc999cb
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:generateName: {}
        f:labels:
          .: {}
          f:app: {}
          f:pod-template-hash: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"a95f3d89-4c8b-4b69-b113-cc5e31e7f8ff"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:spec:
        f:containers:
          k:{"name":"tomcat"}:
            .: {}
            f:image: {}
            f:imagePullPolicy: {}
            f:name: {}
            f:resources: {}
            f:terminationMessagePath: {}
            f:terminationMessagePolicy: {}
        f:dnsPolicy: {}
        f:enableServiceLinks: {}
        f:restartPolicy: {}
        f:schedulerName: {}
        f:securityContext: {}
        f:terminationGracePeriodSeconds: {}
    manager: kube-controller-manager
    operation: Update
    time: "2021-11-23T00:01:16Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:cni.projectcalico.org/podIP: {}
          f:cni.projectcalico.org/podIPs: {}
    manager: calico
    operation: Update
    time: "2021-11-23T00:01:17Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:status:
        f:conditions:
          k:{"type":"ContainersReady"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Initialized"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
          k:{"type":"Ready"}:
            .: {}
            f:lastProbeTime: {}
            f:lastTransitionTime: {}
            f:status: {}
            f:type: {}
        f:containerStatuses: {}
        f:hostIP: {}
        f:phase: {}
        f:podIP: {}
        f:podIPs:
          .: {}
          k:{"ip":"10.233.72.119"}:
            .: {}
            f:ip: {}
        f:startTime: {}
    manager: kubelet
    operation: Update
    time: "2021-11-23T00:23:26Z"
  name: tomcat6-56fcc999cb-w564f
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: tomcat6-56fcc999cb
    uid: a95f3d89-4c8b-4b69-b113-cc5e31e7f8ff
  resourceVersion: "169826"
  uid: a2f6c90c-3ddb-4507-870d-3a61fd3da3cf
spec:
  containers:
  - image: tomcat:6.0.53-jre8
    imagePullPolicy: IfNotPresent
    name: tomcat
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-chlf7
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: centos03
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: default-token-chlf7
    secret:
      defaultMode: 420
      secretName: default-token-chlf7
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2021-11-23T00:01:16Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2021-11-23T00:23:26Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2021-11-23T00:23:26Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2021-11-23T00:01:16Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://df4ea674adc3d513ed5f19888b0b2ab113d1a4fdd382d320bba3bb9a1782dd9a
    image: tomcat:6.0.53-jre8
    imageID: docker-pullable://tomcat@sha256:8c643303012290f89c6f6852fa133b7c36ea6fbb8eb8b8c9588a432beb24dc5d
    lastState: {}
    name: tomcat
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2021-11-23T00:23:25Z"
  hostIP: 192.168.222.12
  phase: Running
  podIP: 10.233.72.119
  podIPs:
  - ip: 10.233.72.119
  qosClass: BestEffort
  startTime: "2021-11-23T00:01:16Z"

II. Ingress

What exactly is Ingress? There is plenty of material online (the official docs are recommended), so feel free to dig deeper yourself. In short, it is a load-balancing component, mainly used to solve the problem that when a Service is exposed via NodePort, the Node IP behind it can drift. Moreover, if you expose a large number of host ports via NodePort, management becomes very messy.

A better solution is to let the outside world reach a Service by domain name, without caring about its Node IP or port. So why not just use Nginx directly? Because in a K8s cluster, every time a service is added we would have to hand-edit the Nginx config; that is repetitive manual work, and repetitive manual work is exactly what we should eliminate with tooling.

Ingress solves the problems above. It consists of two parts, the Ingress resource and the Ingress Controller:

  • Ingress
    Abstracts the Nginx configuration into an Ingress object; adding a new service only requires writing a new Ingress YAML file

  • Ingress Controller
    Converts newly added Ingress objects into Nginx configuration and makes it take effect
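A minimal Ingress sketch for the tomcat6 Service created earlier (the host name is illustrative; the networking.k8s.io/v1 API is available on this cluster's v1.20.4):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tomcat6-ingress
spec:
  rules:
  - host: tomcat6.example.com       # illustrative domain
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: tomcat6           # the Service created earlier
            port:
              number: 80
```

Note that this object does nothing by itself: an Ingress Controller (for example ingress-nginx) must be running in the cluster to turn it into live proxy configuration.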


(Figure: K8s -- Ingress)

Those who act often succeed; those who keep walking often arrive.