Cluster Reproducibility
Let's take a look at the situation from the previous chapter, in which we created a cluster, deleted a replica, and watched it recover. The point is that we do not manage the cluster with imperative commands directly: the commands only create descriptions of the desired cluster configuration and place them in the distributed storage, after which the state of the nodes is continuously brought into line with those descriptions. We can also retrieve and edit these descriptions, or write them ourselves and upload them to the distributed storage. This allows us to save the state to disk as YAML files and restore it later, as is often done when moving from a production server to a test one. In addition, we get the ability to customize the state more flexibly, since we are no longer limited to what the commands expose.
esschtolts@cloudshell:~ (essch)$ kubectl get deployment/nginx --output=yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2018-12-16T10:23:26Z
  generation: 1
  labels:
    run: nginx
  name: nginx
  namespace: default
  resourceVersion: "1612985"
  selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/nginx
  uid: 9fb3ad6a-011c-11e9-bfaa-42010aa60088
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nginx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: nginx
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:26Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: 2018-12-16T10:23:26Z
    lastUpdateTime: 2018-12-16T10:23:28Z
    message: ReplicaSet "nginx-64f497f8fd" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Most of this is superfluous for us: when creating the deployment we specified only the name and the image, and everything else was filled in with default values, so I will remove the unnecessary fields:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  selector:
    matchLabels:
      run: nginx
  template:
    metadata:
      labels:
        run: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
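With the manifest trimmed down to the essentials, the save-and-restore workflow described at the beginning of the chapter can be sketched as follows (the file name nginx-deployment.yaml is my own choice for illustration, not from the transcript):
kubectl get deployment/nginx --output=yaml > nginx-deployment.yaml   # save the current description to disk
# edit nginx-deployment.yaml as needed, or copy it to another cluster
kubectl apply -f nginx-deployment.yaml                                # restore the state described in the file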
You can also describe the infrastructure itself as a template: the following gcloud commands create an instance template and a managed instance group of the given size from it:
gcloud services enable compute.googleapis.com --project=${PROJECT}
gcloud beta compute instance-templates create-with-container ${TEMPLATE} \
  --machine-type=custom-1-4096 \
  --image-family=cos-stable \
  --image-project=cos-cloud \
  --container-image=gcr.io/kuar-demo/kuard-amd64:1 \
  --container-restart-policy=always \
  --preemptible \
  --region=${REGION} \
  --project=${PROJECT}
gcloud compute instance-groups managed create ${TEMPLATE} \
  --base-instance-name=${TEMPLATE} \
  --template=${TEMPLATE} \
  --size=${CLONES} \
  --region=${REGION} \
  --project=${PROJECT}
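These commands assume that several shell variables have already been set; a minimal sketch with illustrative values (the variable names come from the commands above, the values are hypothetical):
PROJECT=my-gcp-project     # hypothetical project id
REGION=europe-north1       # hypothetical region for the managed instance group
CLONES=3                   # number of instances the managed group should keep running
TEMPLATE=kuard-template    # single name reused for the instance template and the group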
High Service Availability
To ensure high availability, traffic needs to be redirected to a standby instance if the application crashes. It is also often important to distribute the load evenly, since a single instance of the application cannot handle all the traffic. For this, a cluster of replicas is created; as an example, let's take a more complex image so that we can examine more of the nuances:
esschtolts@cloudshell:~/bitrix (essch)$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxlamp
spec:
  selector:
    matchLabels:
      app: lamp
  replicas: 1
  template:
    metadata:
      labels:
        app: lamp
    spec:
      containers:
      - name: lamp
        image: mattrayner/lamp:latest-1604-php5
        ports:
        - containerPort: 80
esschtolts@cloudshell:~/bitrix (essch)$ cat loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - name: front
    port: 80
    targetPort: 80
  selector:
    app: lamp
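Before checking the state, both manifests have to be applied; the transcript omits this step, so here is a sketch of the commands assumed to have been run:
kubectl apply -f deployment.yaml     # create the Deployment with one replica
kubectl apply -f loadbalancer.yaml   # create the Service of type LoadBalancer in front of it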
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
nginxlamp-7fb6fdd47b-jttl8   2/2     Running   0          3m
esschtolts@cloudshell:~/bitrix (essch)$ kubectl get svc
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)                       AGE
frontend   LoadBalancer   10.55.242.137   35.228.73.217   80:32701/TCP,8080:32568/TCP   4m
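A single replica cannot spread the load, so once the balancer is in place the Deployment can be scaled out; a sketch (the replica count of 3 is arbitrary):
kubectl scale deployment nginxlamp --replicas=3   # run three identical pods behind the balancer
kubectl get pods                                  # all three replicas should reach the Running state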