Notes for "Scalable Microservices with Kubernetes"
November 22, 2017 | Kubernetes, Docker

My notes for the course “Scalable Microservices with Kubernetes” on Udacity.
- Resources
- Introduction to Microservices
- Building the Containers with Docker
- Kubernetes
- Deploying Microservices
Resources
- Books
- Articles
- Martin Fowler on the Pros and Cons of Microservices
- 12-factor manifesto
- 12-Fractured Apps
- Tools
Introduction to Microservices
Activate Google Cloud Shell.
List available compute zones:
gcloud compute zones list
Set a compute zone:
gcloud config set compute/zone us-west1-b
Set GOPATH:
echo "export GOPATH=~/go" >> ~/.bashrc
source ~/.bashrc
Get the code:
mkdir -p $GOPATH/src/github.com/udacity
cd $GOPATH/src/github.com/udacity
git clone https://github.com/udacity/ud615
cd ud615/app
On shell 1 - build the monolith app:
cd $GOPATH/src/github.com/udacity/ud615/app
mkdir bin
go build -o ./bin/monolith ./monolith
On shell 1 - run the monolith server:
$ sudo ./bin/monolith -http :10080
2017/11/22 11:40:51 Starting server...
2017/11/22 11:40:51 Health service listening on 0.0.0.0:81
2017/11/22 11:40:51 HTTP service listening on :10080
On shell 2 - test the app:
$ curl http://127.0.0.1:10080
{"message":"Hello"}
$ curl http://127.0.0.1:10080/secure
authorization failed
On shell 2 - authenticate (password is password):
curl http://127.0.0.1:10080/login -u user
It prints out the token. You can copy and paste the long token into the next command manually, but copying long, wrapped lines in Cloud Shell is broken. To work around this, either copy the JWT in pieces or, more easily, assign the token to a shell variable as follows.
On shell 2 - log in and assign the value of the JWT to a variable:
TOKEN=$(curl http://127.0.0.1:10080/login -u user | jq -r '.token')
echo $TOKEN
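Since the token is just a JWT, you can also peek at its claims directly from the shell. Here is a small sketch (`decode_jwt_payload` is a name made up for this note; it assumes the standard unpadded base64url encoding that JWT segments use):

```shell
# decode_jwt_payload: print the JSON claims of the JWT passed as $1.
# The payload is the second dot-separated segment; JWT segments are
# unpadded base64url, so restore padding and swap the alphabet first.
decode_jwt_payload() {
  local payload
  payload=$(printf '%s' "$1" | cut -d '.' -f2)
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | tr '_-' '/+' | base64 -d
}

# Example (assumes TOKEN was set by the login step above):
# decode_jwt_payload "$TOKEN"
```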
On shell 2 - access the secure endpoint using the JWT:
$ curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10080/secure
{"message":"Hello"}
On shell 2 - check out dependencies
ls vendor
cat vendor/vendor.json
On shell 1 - build and run the hello service
go build -o ./bin/hello ./hello
sudo ./bin/hello -http 0.0.0.0:10082 -health 0.0.0.0:10081
On shell 2 - build and run the auth service
go build -o ./bin/auth ./auth
sudo ./bin/auth -http :10090 -health :10091
On shell 3 - interact with the auth and hello microservices
TOKEN=$(curl 127.0.0.1:10090/login -u user | jq -r '.token')
curl -H "Authorization: Bearer $TOKEN" http://127.0.0.1:10082/secure
Building the Containers with Docker
Cloud shell - set compute/zone (Note: Google Cloud Shell is an ephemeral instance and will reset if you don’t use it for more than 30 minutes, which is why you might have to set some configuration values again)
gcloud compute zones list
gcloud config set compute/zone us-west1-c
Cloud shell - launch a new VM instance
$ gcloud compute instances create ubuntu --image-project ubuntu-os-cloud --image-family ubuntu-1604-lts
Created [https://www.googleapis.com/compute/v1/projects/artful-aardvark/zones/us-west1-c/instances/ubuntu].
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
ubuntu us-west1-c n1-standard-1 10.138.0.2 35.197.47.143 RUNNING
Cloud shell - log into the VM instance
gcloud compute ssh ubuntu
VM instance - update packages and install nginx
sudo apt-get update
sudo apt-get install nginx
nginx -v
VM instance - start nginx
sudo systemctl start nginx
Check that it’s running
sudo systemctl status nginx
curl http://127.0.0.1
Install Docker
sudo apt-get install docker.io
Check Docker images
sudo docker images
Pull nginx image
sudo docker pull nginx:1.10.0
sudo docker images
Verify the versions match
sudo dpkg -l | grep nginx
Run the first instance
sudo docker run -d nginx:1.10.0
Check if it’s up
sudo docker ps
Run a different version of nginx
sudo docker run -d nginx:1.9.3
Run another version of nginx
sudo docker run -d nginx:1.10.0
Check how many instances are running
sudo docker ps
sudo ps aux | grep nginx
What’s with the container names? If you don’t specify a name, Docker gives a container a random name, such as “stoic_williams”, “sharp_bartik”, “awesome_murdock”, or “evil_hawking”. These are generated from a list of adjectives and names of famous scientists and hackers. The combination of the names and adjectives is random, except for one case. Want to see what the exception is? Check it out in the Docker source code!
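If you would rather not get a generated name, Docker lets you pick one with the --name flag. A quick sketch (`mynginx` is an arbitrary name chosen for this example):

```shell
# Run a container with an explicit name instead of a generated one.
sudo docker run -d --name mynginx nginx:1.10.0

# The name can then be used anywhere an ID is accepted:
sudo docker inspect mynginx
sudo docker stop mynginx
```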
List all running container processes
sudo docker ps
For use in shell scripts you might want to just get a list of container IDs (-a stands for all instances, not just running ones, and -q is for “quiet” - show just the numeric IDs):
sudo docker ps -aq
Inspect the container. You can use either the CONTAINER ID or the NAMES field, for example for a sudo docker ps output like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f86cf066c304 nginx:1.10.0 "nginx -g 'daemon off" 8 minutes ago Up 8 minutes 80/tcp, 443/tcp sharp_bartik
You can use either of the following commands:
sudo docker inspect f86cf066c304
or
sudo docker inspect sharp_bartik
Connect to the nginx using the internal IP. Get the internal IP address either by copying it from the full inspect output or by assigning it to a shell variable:
CN="sharp_bartik"
CIP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' $CN)
curl http://$CIP
You can also get all instance IDs and their corresponding IP addresses by doing this:
sudo docker inspect -f '{{.Name}} - {{.NetworkSettings.IPAddress }}' $(sudo docker ps -aq)
Stop an instance
sudo docker stop <cid>
or
sudo docker stop $(sudo docker ps -aq)
Verify no more instances running
sudo docker ps
Remove the docker containers from the system
sudo docker rm <cid>
or
sudo docker rm $(sudo docker ps -aq)
On the VM Instance, build a static binary of the monolith app
$ cd $GOPATH/src/github.com/udacity/ud615/app/monolith
$ go get -u
$ go build --tags netgo --ldflags '-extldflags "-lm -lstdc++ -static"'
$ ldd monolith
not a dynamic executable
Why did you have to build the binary with such an ugly command line? You have to explicitly make the binary static. This is really important in the Docker community right now because Alpine has a different implementation of libc, so your Go binary wouldn’t have had the libc it needed if it wasn’t static. You created a static binary so that your application could be self-contained.
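An alternative way to get a static Go binary, assuming the app needs no cgo, is to disable cgo entirely rather than forcing static linking through the external linker (a sketch, not what the course uses):

```shell
# With cgo disabled, the Go toolchain produces a fully static binary
# on its own (provided the code doesn't actually require cgo).
CGO_ENABLED=0 go build -o monolith .

# On Linux, ldd should then report "not a dynamic executable".
ldd monolith
```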
Create a container for the app. Look at the Dockerfile
cat Dockerfile
Build the app container
sudo docker build -t monolith:1.0.0 .
List the monolith image
sudo docker images monolith:1.0.0
Run the monolith container and get its IP
sudo docker run -d monolith:1.0.0
sudo docker inspect <container name or cid>
or
CID=$(sudo docker run -d monolith:1.0.0)
CIP=$(sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${CID})
Test the container
curl <the container IP>
or
curl $CIP
Important note on security: If you are tired of typing “sudo” in front of all Docker commands, and confused why a lot of examples don’t have that, please read the following article about implications on security - Why we don’t let non-root users run Docker in CentOS, Fedora, or RHEL
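If you do decide to drop sudo on a machine you control, the usual approach is to add your user to the docker group; note that this grants root-equivalent access to the host, which is exactly the article’s point:

```shell
# Adding a user to the docker group gives passwordless access to the
# Docker daemon -- effectively root on the host, so do this knowingly.
sudo usermod -aG docker $USER

# Log out and back in (or run `newgrp docker`) for the change to apply.
```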
Create docker images for the remaining microservices - auth and hello.
Build the auth app
cd $GOPATH/src/github.com/udacity/ud615/app
cd auth
go build --tags netgo --ldflags '-extldflags "-lm -lstdc++ -static"'
sudo docker build -t auth:1.0.0 .
CID2=$(sudo docker run -d auth:1.0.0)
Build the hello app
cd $GOPATH/src/github.com/udacity/ud615/app
cd hello
go build --tags netgo --ldflags '-extldflags "-lm -lstdc++ -static"'
sudo docker build -t hello:1.0.0 .
CID3=$(sudo docker run -d hello:1.0.0)
See the running containers
sudo docker ps -a
Public Vs Private Registries (Comparing Four Hosted Docker Registries):
- Docker Hub
- Quay is another popular registry because of its rich automated workflow for building containers from GitHub.
- Google Cloud Registry (GCR) is a strong option for large enterprises.
See all images
sudo docker images
Docker tag command help
docker tag -h
Add your own tag
sudo docker tag monolith:1.0.0 <your username>/monolith:1.0.0
For example (you can rename too!)
sudo docker tag monolith:1.0.0 udacity/example-monolith:1.0.0
Create an account on Docker Hub. To be able to push images to Docker Hub you need to create an account there
Login and use the docker push command
sudo docker login
sudo docker push udacity/example-monolith:1.0.0
Repeat for all images you created - monolith, auth and hello!
Kubernetes
- Kubernetes command cheat sheet
- Pods
- Configure Liveness and Readiness Probes
- Configure Containers Using a ConfigMap
- Secrets
- Services
Use project directory
cd $GOPATH/src/github.com/udacity/ud615/kubernetes
Note: At any time you can clean up by running the cleanup.sh script
Provision a Kubernetes Cluster with GKE using gcloud (GKE is hosted Kubernetes by Google; GKE clusters can be customized and support different machine types, numbers of nodes, and network settings)
$ gcloud container clusters create k0 --zone=us-west1-c
$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-k0-default-pool-1f7bf2a3-3gjp us-west1-c n1-standard-1 10.138.0.4 35.203.166.36 RUNNING
gke-k0-default-pool-1f7bf2a3-3twg us-west1-c n1-standard-1 10.138.0.2 35.199.167.224 RUNNING
gke-k0-default-pool-1f7bf2a3-80gb us-west1-c n1-standard-1 10.138.0.3 35.203.166.97 RUNNING
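As a sketch of the customization mentioned above, the same command accepts flags for node count and machine type (the values here are illustrative, not required by the course):

```shell
# Create a cluster with an explicit node count and machine type.
gcloud container clusters create k0 \
  --zone us-west1-c \
  --num-nodes 3 \
  --machine-type n1-standard-1
```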
Launch a single instance:
kubectl run nginx --image=nginx:1.10.0
Get pods
kubectl get pods
Expose nginx
kubectl expose deployment nginx --port 80 --type LoadBalancer
List services
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.47.240.1 <none> 443/TCP 8m
nginx LoadBalancer 10.47.253.101 35.197.110.73 80:30244/TCP 1m
$ curl http://35.197.110.73/
Explore config file
cat pods/monolith.yaml
Create the monolith pod
kubectl create -f pods/monolith.yaml
Examine pods
kubectl get pods
It may take a few seconds before the monolith pod is up and running, as the monolith container image needs to be pulled from Docker Hub before we can run it.
Use the kubectl describe command to get more information about the monolith pod
kubectl describe pods monolith
On cloud shell 1 - set up port-forwarding
kubectl port-forward monolith 10080:80
On cloud shell 2
curl http://127.0.0.1:10080
curl http://127.0.0.1:10080/secure
On cloud shell 2 - log in
curl -u user http://127.0.0.1:10080/login
curl -H "Authorization: Bearer <token>" http://127.0.0.1:10080/secure
On cloud shell 2 - view logs
kubectl logs monolith
kubectl logs -f monolith
On cloud shell 3
curl http://127.0.0.1:10080
On cloud shell 2 - exit log watching (Ctrl-C)
You can use the kubectl exec command to run an interactive shell inside the monolith Pod. This can come in handy when you want to troubleshoot from within a container:
kubectl exec monolith --stdin --tty -c monolith /bin/sh
For example, once we have a shell into the monolith container we can test external connectivity using the ping command.
ping -c 3 google.com
When you’re done with the interactive shell, be sure to log out.
exit
Creating Secrets
ls tls
The cert.pem and key.pem files will be used to secure traffic on the monolith server, and ca.pem will be used by HTTP clients as the CA to trust. Since the certs used by the monolith server were signed by the CA represented by ca.pem, HTTP clients that trust ca.pem will be able to validate the SSL connection to the monolith server.
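You can check this signing relationship with openssl before creating the secret (a sketch; it assumes the files under tls/ are PEM-encoded, and is guarded so it is a no-op if you haven’t fetched the repo’s tls directory):

```shell
# Verify that cert.pem was really signed by the CA in ca.pem;
# openssl prints "tls/cert.pem: OK" when the chain checks out.
if [ -f tls/cert.pem ]; then
  openssl verify -CAfile tls/ca.pem tls/cert.pem
fi
```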
Use kubectl to create the tls-certs secret from the TLS certificates stored under the tls directory:
kubectl create secret generic tls-certs --from-file=tls/
kubectl will create a key for each file in the tls directory under the tls-certs secret. Use the kubectl describe command to verify that:
kubectl describe secrets tls-certs
Next we need to create a configmap entry for the proxy.conf nginx configuration file using the kubectl create configmap command:
kubectl create configmap nginx-proxy-conf --from-file=nginx/proxy.conf
Use the kubectl describe configmap command to get more details about the nginx-proxy-conf configmap entry:
kubectl describe configmap nginx-proxy-conf
Accessing A Secure HTTPS Endpoint
cat pods/secure-monolith.yaml
Create the secure-monolith Pod using kubectl:
kubectl create -f pods/secure-monolith.yaml
kubectl get pods secure-monolith
kubectl port-forward secure-monolith 10443:443
curl --cacert tls/ca.pem https://127.0.0.1:10443
kubectl logs -c nginx secure-monolith
Create a service:
cat services/monolith.yaml
kubectl create -f services/monolith.yaml
Set up firewall rules for the service port(s)
gcloud compute firewall-rules create allow-monolith-nodeport --allow=tcp:31000
Get IP addresses of the nodes
gcloud compute instances list
curl -k https://104.197.223.141:31000
Not working yet!
Add labels to Pods:
kubectl get pods -l "app=monolith"
kubectl get pods -l "app=monolith,secure=enabled"
kubectl describe pods secure-monolith | grep Labels
kubectl label pods secure-monolith "secure=enabled"
kubectl describe pods secure-monolith | grep Labels
kubectl describe services monolith | grep Endpoints
curl -k https://104.197.223.141:31000
Now it works!
Deploying Microservices
Create deployments:
cat deployments/auth.yaml
kubectl create -f deployments/auth.yaml
kubectl describe deployments auth
kubectl create -f services/auth.yaml
kubectl create -f deployments/hello.yaml
kubectl create -f services/hello.yaml
kubectl create configmap nginx-frontend-conf --from-file=nginx/frontend.conf
kubectl create -f deployments/frontend.yaml
kubectl create -f services/frontend.yaml
kubectl get services frontend
curl -k https://104.197.163.161
Scale deployments:
kubectl get replicasets
kubectl get pods -l "app=hello,track=stable"
vim deployments/hello.yaml
kubectl apply -f deployments/hello.yaml
kubectl get replicasets
kubectl get pods
kubectl describe deployment hello
kubectl get services
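Instead of editing the manifest and re-applying it, you can also scale imperatively (a sketch; the deployment name matches the hello deployment created above):

```shell
# Scale the hello deployment to 4 replicas without touching the YAML.
kubectl scale deployment hello --replicas=4

# Confirm the new pods come up.
kubectl get pods -l "app=hello"
```

Note that a later kubectl apply of the unchanged YAML will scale it back to whatever replica count the file declares.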
Rolling updates:
vim deployments/auth.yaml
kubectl apply -f deployments/auth.yaml
kubectl describe deployments auth
kubectl get pods
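Beyond watching pods appear and disappear, kubectl has dedicated rollout subcommands for observing and reverting an update (a sketch using the auth deployment from above):

```shell
# Block until the rollout completes (or fails).
kubectl rollout status deployment/auth

# Show previous revisions of the deployment.
kubectl rollout history deployment/auth

# If the new version misbehaves, roll back to the previous revision:
# kubectl rollout undo deployment/auth
```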
Finally, delete the Kubernetes cluster:
$ gcloud container clusters list
NAME ZONE MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
k0 us-west1-c 1.7.8-gke.0 35.199.190.169 n1-standard-1 1.7.8-gke.0 3 RUNNING
$ gcloud container clusters delete k0 --zone=us-west1-c