Based on the official description of VMware Container Service Extension (CSE), the container-service-extension is a vCloud Director add-on that manages the life cycle of Kubernetes clusters for tenants.

In this blog, I’ll walk you through how to use this service from the end user’s perspective. Let’s say your company already holds a virtual datacenter in the cloud, offered by a VMware Cloud Provider (the right-hand side of the red line in the graph below), and you have IaaS administrator and developer roles in your company (the left-hand side of the red line). At a high level, your Tenant Admin (IaaS Admin) first creates the Kubernetes cluster for your developers, and the developers then take that resource and use the Kubernetes tools natively.

overview

Tenant Admin

As an IaaS administrator, your job is to create the IaaS resources for your K8S users or developers to consume. First, you need to prepare your desktop environment; in this example we use VMware Photon OS, which you can download as an OVA from http://dl.bintray.com/vmware/photon/2.0/GA/ova/photon-custom-hw11-2.0-304b817.ova. In this VM, follow the CSE tenant installation guide to install Python and the CSE package, which also includes the vCloud Director CLI (vcd-cli), before you can actually provision resources in the cloud.
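As a rough sketch of what that guide covers (the package name and profile path below are the commonly documented ones, so treat them as assumptions for your environment), the client installation boils down to installing the Python package and enabling the CSE extension in vcd-cli:

# pip3 install container-service-extension

Then add the entry "- container_service_extension.client.cse" under the "extensions:" key in ~/.vcd-cli/profiles.yaml so that vcd-cli picks up the cse commands.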

Once the CSE package has been installed, you can use vcd-cli to create a Kubernetes cluster for your developers in your vCloud Director organization virtual datacenter. It’s a good idea to issue the following command first so that Photon OS knows where to find the vcd command:

# export PATH=$PATH:/root/.local/bin
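To verify that vcd-cli is now on the path, a quick sanity check (not part of the official guide) is to print its version:

# vcd version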

Create Kubernetes cluster resource

For example, you can see I have an organization virtual datacenter in the cloud: my tenant name is “GCP”, the organization VDC name is “GCPOVDC”, and there are two organization networks, as shown in the graph below:

ovdc

You log in to the cloud virtual datacenter where you want your Kubernetes resources to be provisioned.

# vcd login <Cloud site IP> <Org Name> <Administrator Name> --password <password> -w -i

login
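For example, with the tenant shown above (the cloud site address and password here are hypothetical placeholders):

# vcd login vcd.example.com GCP administrator --password 'VMware1!' -w -i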

# vcd cse template list

template list

 

# vcd cse cluster create gcp-demo1 --network GCPNW --nodes 2

CSE will first create the K8S master node; after the master node has been successfully created, it will initiate the creation of the worker nodes.

initialize

add nodes

vCD GUI1
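If the template list earlier showed more than one template, you can usually select one explicitly at creation time. The --template flag and the template name below are assumptions, so check vcd cse cluster create --help for the options your CSE version actually supports:

# vcd cse cluster create gcp-demo1 --network GCPNW --nodes 2 --template photon-v2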

 

# vcd cse cluster info gcp-demo1

cluster info

# vcd cse node list gcp-demo1

node list

Developer Usage

Developers prefer to keep using the native K8S command line interface (i.e. kubectl) for their work. They just need to know the cluster name and the K8S master’s IP address, which their IaaS admin has already prepared for them. They can either connect directly to the cluster’s master node, or use a desktop proxy to manage the master node remotely.

# vcd cse cluster list

This returns the master node’s IP address, so they can connect to it directly.

cluster list
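For the direct connection, a plain SSH session to that IP is enough, assuming the CSE templates in your cloud allow root SSH with the password set when the template was built:

# ssh root@<master node IP>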


Or they can just export the K8S cluster configuration file and perform the management tasks remotely.

# vcd cse cluster config gcp-demo1

config

# vcd cse cluster config gcp-demo1 > /gcp/kubectl.conf

# export KUBECONFIG=/gcp/kubectl.conf

# kubectl get nodes

kubectl get node
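With the kubeconfig exported, any standard kubectl command now runs against the cluster remotely; for example, a generic connectivity check:

# kubectl cluster-info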

 

Deploy the Sock Shop demo microservices

# kubectl create namespace sock-shop

# kubectl apply -n sock-shop -f "https://github.com/microservices-demo/microservices-demo/blob/master/deploy/kubernetes/complete-demo.yaml?raw=true"

# watch -n 1 kubectl get pods --namespace sock-shop

watch
Open the URL "http://${IP}:30001" in a browser, where IP can be any node’s IP address (30001 is the NodePort that the Sock Shop front-end service exposes).
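If you want to check from the command line before opening a browser, a plain HTTP request against any node should return the front-end page (a generic check, not part of the original walkthrough):

# curl -I http://<node IP>:30001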

socks

 

# kubectl get pods --namespace sock-shop -o wide

get pods

Additional node creation and deletion

  • Add node

add nodes 2

  • Delete node

delete node
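The screenshots above correspond roughly to the following vcd-cli commands; this is a sketch, as the node subcommands and their flags vary across CSE versions, so check vcd cse node --help on your install:

# vcd cse node create gcp-demo1 --nodes 1 --network GCPNW

# vcd cse node delete gcp-demo1 <node name>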

Kubernetes Dashboard

Developers can also use the traditional Kubernetes dashboard to check the running status of their deployed services.

# kubectl proxy --address 0.0.0.0 --accept-hosts '.*'

proxy

You can browse to http://<master node IP>:8001/ui to log in to the K8S dashboard.

Dashboard

nodes

resource allocation

overview1

 

  • Delete Cluster

You can delete the cluster by running the following command.

# vcd cse cluster delete <cluster name>

delete
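To confirm the cluster is gone, you can list the clusters again:

# vcd cse cluster list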
