IT Cloud
Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-ks.
essh@kubernetes-master:~/node-cluster/dev$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
terraform-nodejs-6fd8498cb5-29dzx 1/1 Running 0 2m57s 10.12.3.2 gke-node-ks-node-ks-pool-134dada1-c476 <none>
terraform-nodejs-6fd8498cb5-jcbj6 0/1 Pending 0 2m58s <none> <none> <none>
terraform-nodejs-6fd8498cb5-lvfjf 1/1 Running 0 2m58s 10.12.1.3 gke-node-ks-node-ks-pool-134dada1-3cdf <none>
As you can see, the Pods were distributed across the node pool and did not land on the node running the Kubernetes system components, since it had no free capacity. It is important to note that the number of nodes in the pool was scaled up automatically; only the configured limit prevented a third node from being created. If we set remove_default_node_pool to true, the Kubernetes system Pods and our Pods end up sharing the same pool. Judging by the resource requests, Kubernetes itself takes a little more than one core and our Pod takes half a core, so the remaining Pods were not scheduled, but we saved on resources (the node-pool settings responsible for this behaviour are sketched after the listing below):
essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-node-ks-node-ks-pool-495b75fa-08q2 europe-north1-a n1-standard-1 10.166.0.57 35.228.117.98 RUNNING
gke-node-ks-node-ks-pool-495b75fa-wsf5 europe-north1-a n1-standard-1 10.166.0.59 35.228.96.97 RUNNING
essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters get-credentials node-ks
Fetching cluster endpoint and auth data.
kubeconfig entry generated for node-ks.
essh@kubernetes-master:~/node-cluster/Kubernetes$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
terraform-nodejs-6fd8498cb5-97svs 1/1 Running 0 14m 10.12.2.2 gke-node-ks-node-ks-pool-495b75fa-wsf5 <none>
terraform-nodejs-6fd8498cb5-d9zkr 0/1 Pending 0 14m <none> <none> <none>
terraform-nodejs-6fd8498cb5-phk8x 0/1 Pending 0 14m <none> <none> <none>
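The scheduling behaviour above is driven by the cluster and node-pool arguments in the ../Kubernetes module. A minimal sketch of the relevant settings, with illustrative resource names and limits rather than the module's actual code:

# Sketch only: remove_default_node_pool drops the default pool, so the
# Kubernetes system Pods and our Pods share the nodes of the managed pool below.
resource "google_container_cluster" "node_ks" {
  name     = "node-ks"
  location = "europe-north1-a"

  remove_default_node_pool = true
  initial_node_count       = 1
}

resource "google_container_node_pool" "node_ks_pool" {
  name     = "node-ks-pool"
  cluster  = google_container_cluster.node_ks.name
  location = google_container_cluster.node_ks.location

  node_config {
    machine_type = "n1-standard-1"
  }

  # The pool grows automatically, but never beyond max_node_count;
  # this is the limit that prevented a third node from being created.
  autoscaling {
    min_node_count = 1
    max_node_count = 2
  }
}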
After creating a service account, add the key and check it (a Terraform sketch of the same account follows the listing below):
essh@kubernetes-master:~/node-cluster/dev$ gcloud auth login
essh@kubernetes-master:~/node-cluster/dev$ gcloud projects create node-cluster-prod3
Create in progress for [https://cloudresourcemanager.googleapis.com/v1/projects/node-cluster-prod3].
Waiting for [operations/cp.7153345484959140898] to finish … done.
https://medium.com/@pnatraj/how-to-run-gcloud-command-line-using-a-service-account-f39043d515b9
essh@kubernetes-master:~/node-cluster$ gcloud auth application-default login
essh@kubernetes-master:~/node-cluster$ cp ~/Downloads/node-cluster-prod-244519-6fd863dd4d38.json ./kubernetes_prod_key.json
essh@kubernetes-master:~/node-cluster$ echo "kubernetes_prod_key.json" >> .gitignore
essh@kubernetes-master:~/node-cluster$ gcloud iam service-accounts list
NAME EMAIL DISABLED
Compute Engine default service account 1008874319751-compute@developer.gserviceaccount.com False
terraform-prod terraform-prod@node-cluster-prod-244519.iam.gserviceaccount.com False
essh@kubernetes-master:~/node-cluster$ gcloud projects list | grep node-cluster
node-cluster-243923 node-cluster 26345118671
node-cluster-prod-244519 node-cluster-prod 1008874319751
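The terraform-prod account and its key shown in the listing were created through the Cloud Console. The same account could also be declared in Terraform; a minimal sketch, assuming the project ID from the listing and a purely illustrative roles/editor binding:

resource "google_service_account" "terraform_prod" {
  project      = "node-cluster-prod-244519"
  account_id   = "terraform-prod"
  display_name = "terraform-prod"
}

# The role binding is illustrative; grant only what the prod environment really needs.
resource "google_project_iam_member" "terraform_prod_editor" {
  project = "node-cluster-prod-244519"
  role    = "roles/editor"
  member  = "serviceAccount:${google_service_account.terraform_prod.email}"
}

resource "google_service_account_key" "terraform_prod" {
  service_account_id = google_service_account.terraform_prod.name
}

# The generated private key is the base64-encoded JSON credentials file.
output "terraform_prod_key_json" {
  value     = base64decode(google_service_account_key.terraform_prod.private_key)
  sensitive = true
}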
Let's create a prod environment:
essh@kubernetes-master:~/node-cluster$ mkdir prod
essh@kubernetes-master:~/node-cluster$ cd prod/
essh@kubernetes-master:~/node-cluster/prod$ cp ../main.tf ../kubernetes_prod_key.json .
essh@kubernetes-master:~/node-cluster/prod$ gcloud config set project node-cluster-prod-244519
Updated property [core/project].
essh@kubernetes-master:~/node-cluster/prod$ gcloud config list project
[core]
project = node-cluster-prod-244519
Your active configuration is: [default]
essh@kubernetes-master:~/node-cluster/prod$ cat main.tf
provider "google" {
  alias       = "prod"
  credentials = file("./kubernetes_prod_key.json")
  project     = "node-cluster-prod-244519"
  region      = "us-west2"
}

module "kubernetes_prod" {
  source = "../Kubernetes"
  providers = {
    google = google.prod
  }
}

data "google_client_config" "default" {}

module "Nginx" {
  source = "../nodejs"
  providers = {
    google = google.prod
  }
  image                  = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
  endpoint               = module.kubernetes_prod.endpoint
  access_token           = data.google_client_config.default.access_token
  cluster_ca_certificate = module.kubernetes_prod.cluster_ca_certificate
}
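The endpoint and cluster_ca_certificate arguments above are expected to be outputs of the ../Kubernetes module. A minimal sketch of such outputs, assuming the module's cluster resource is named node_ks as in the earlier sketch (the real name inside the module may differ):

# Outputs the ../Kubernetes module is assumed to expose for consumers.
output "endpoint" {
  value = google_container_cluster.node_ks.endpoint
}

output "cluster_ca_certificate" {
  # master_auth exposes the CA certificate base64-encoded.
  value = google_container_cluster.node_ks.master_auth[0].cluster_ca_certificate
}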
essh@kubernetes-master:~/node-cluster/prod$ ../terraform init
essh@kubernetes-master:~/node-cluster/prod$ ../terraform apply
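Inside the ../nodejs module, the image, endpoint, access_token and cluster_ca_certificate arguments passed from prod/main.tf have to be declared as variables and fed into the kubernetes provider. A minimal sketch of that wiring; the actual provider configuration in the module may differ:

# Variables matching the arguments passed from prod/main.tf.
variable "image" {}
variable "endpoint" {}
variable "access_token" {}
variable "cluster_ca_certificate" {}

# The kubernetes provider connects to the freshly created cluster; the CA
# certificate is decoded here because master_auth exposes it base64-encoded.
provider "kubernetes" {
  host                   = "https://${var.endpoint}"
  token                  = var.access_token
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}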