IT Cloud
Docker Swarm has its own load balancer, ingress load balancing, which distributes load between replicas on the port declared when the service is created, in our case port 80. The entry point can be any server with this port open, but the response will come from the server to which the balancer forwarded the request.
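For example, a minimal sketch of a service whose three replicas the ingress balancer will spread requests across (the service name web is an illustration):
docker service create --name web --replicas 3 -p 80:80 nginx
A request to port 80 on any node of the cluster will then be routed to one of the three replicas.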
We can also persist data to the host machine, as with a regular container; there are two options for this:
docker service create --mount type=bind,src=...,dst=... --name ... ... # bind-mount a host directory
docker service create --mount type=volume,src=...,dst=... --name ... ... # mount a named volume
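A concrete sketch with assumed paths and names (the directories and images are purely illustrative; note that for type=bind the source path must already exist on every node where a replica may land):
docker service create --name web --mount type=bind,src=/srv/www,dst=/usr/share/nginx/html nginx
docker service create --name cache --mount type=volume,src=cache_data,dst=/data redis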
The application is described by a Docker Compose file and deployed to the nodes as a stack of services (replicas). When the Docker Compose configuration changes, you need to update the Docker stack, and the cluster will be updated sequentially: one replica is deleted and a new one is created in its place in accordance with the new config, then the next one. If an error occurs, a rollback to the previous configuration is made. Well, let's get started:
docker stack deploy -c docker-compose.yml test_stack
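A minimal docker-compose.yml for such a stack might look like this (a sketch; the service name and image are assumptions):
# docker-compose.yml
version: "3.7"
services:
  web:
    image: nginx
    ports:
      - 80:80
    deploy:
      replicas: 3
      update_config:
        parallelism: 1 # update one replica at a time
        failure_action: rollback # roll back to the previous config on failure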
docker service update --label-add foo=bar Nginx
docker service update --label-rm foo Nginx
docker service update --publish-rm 80 Nginx
docker node update --availability=drain swarm-node-3
Docker swarm
$ sudo docker pull swarm
$ sudo docker run --rm swarm create
docker secret create test_secret ...
docker service create --secret test_secret ...
cat /run/secrets/test_secret # read the secret inside the container
health check example: hello-check-cobbalt
example pipeline: TravisCI -> Jenkins -> config
https://www.youtube.com/watch?v=UgUuF_qZmWc
https://www.youtube.com/watch?v=6uVgR9WPjYM
Docker company-wide
Let's take a company-wide view: we have containers and we have servers. It doesn't matter whether these are two virtual machines and a few containers, or hundreds of servers and thousands of containers; the problem is to distribute the containers across the servers, which takes a system administrator and time. If there is little time and many containers, you need many system administrators, otherwise the containers will be distributed suboptimally: a server (virtual machine) runs, but not at full capacity, and resources are wasted. In this situation, container orchestration systems are designed to optimize distribution and save human resources.
Consider the evolution of this process:
* The developer creates the necessary containers by hand.
* The developer creates the necessary containers using previously prepared scripts.
* The system administrator, using a configuration management and deployment system such as Chef, Puppet, Ansible, or Salt, sets the topology of the system. The topology specifies which container is located in which place.
* Orchestration (schedulers) – semi-automatic distribution, maintenance of state, and reaction to changes in the system. For example: Google Kubernetes, Apache Mesos, Hashicorp Nomad, Docker Swarm mode, and YARN, which we'll cover. New ones keep appearing as well: Flocker, Helios.
There is the native Docker Swarm solution. Of the mature systems, Kubernetes and Mesos have proved the most popular: the former is a universal, completely ready-to-use system, while the latter is a complex of various projects combined into a single package, allowing you to replace or change its components. There is also a huge number of less popular solutions covering scheduling, scaling, upgrades, and service discovery, such as Nomad, that are not promoted by giants like Google and Twitter, but we will not consider them. Here we will consider the most ready-made solution, Kubernetes, which has gained great popularity for its low entry threshold, good support, and sufficient flexibility in most cases, pushing Mesos into the niche of customizable solutions where customization and development are economically justified.
Kubernetes has several ready-made configurations:
* MiniKube – a single-node cluster on a local machine, designed to lower the entry threshold and for experimentation;
* kubeadm;
* kops;
* Kubernetes-Ansible;
* microKubernetes;
* OKD;
* MicroK8s.
To start a cluster yourself, you can use KubeSai – free Kubernetes.
The smallest structural unit is called a POD, which corresponds to a YML block in a Docker Compose file. The process of creating a POD, like other entities, is declarative: you write or change a configuration YML file and apply it to the cluster. So, let's create a POD:
# test_pod.yml
# kubectl create -f test_pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: test
spec:
  containers:
    - name: test
      image: debian
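Assuming the cluster is up, the result can be checked with standard kubectl commands:
kubectl get pods
kubectl describe pod test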
To run multiple replicas:
# test_replica_controller.yml
# kubectl create -f test_replica_controller.yml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 3
  selector:
    app: nginx # label by which the controller determines the presence of running containers
  template:
    metadata:
      labels:
        app: nginx # must match the selector above
    spec:
      containers:
        - name: test
          image: debian
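The number of replicas can later be changed on the fly with a standard kubectl command:
kubectl scale replicationcontroller nginx --replicas=5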
For balancing, a Service (a logical entity) of type LoadBalancer is used; besides it there are also the ClusterIP and NodePort types:
apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: WEB
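After applying the file, the service can be inspected with a standard kubectl command; on cloud providers the LoadBalancer type also provisions an external IP:
kubectl get service test-service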
Overlay network plugins (created and configured automatically): Contiv, Flannel, GCE networking, Linux bridging, Calico, Kube-DNS, SkyDNS. A ConfigMap is declared in the same way:
# configmap.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-name
data:
  ....
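A sketch of how a POD could consume this ConfigMap as environment variables (the POD name config-demo is illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: test
      image: debian
      envFrom:
        - configMapRef:
            name: config-name # keys of the ConfigMap become environment variables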
Similar to secrets in Docker Swarm, Kubernetes has a Secret entity, which can hold, for example, NGINX settings:
# secrets.yml
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
data:
  password: .... # values under data must be base64-encoded
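The same Secret can also be created imperatively with a standard kubectl command (the literal value here is just an illustration; kubectl base64-encodes it itself):
kubectl create secret generic test-secret --from-literal=password=my_password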
And to add the secret to a POD, you need to specify it in the POD config:
....
  volumes:
    - name: test-secret-volume # an arbitrary volume name
      secret:
        secretName: test-secret
…
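Put together, a minimal sketch of a POD that mounts this secret as files (the volume name and mount path are illustrative):
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
    - name: test
      image: debian
      volumeMounts:
        - name: test-secret-volume
          mountPath: /etc/secret # the password key appears as the file /etc/secret/password
  volumes:
    - name: test-secret-volume
      secret:
        secretName: test-secret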
Kubernetes has more flavors of volumes:
* emptyDir;
* hostPath;
* gcePersistentDisk – a disk on Google Cloud;