This blog is part of a multi-part blog series that shows how to run your applications on Kubernetes. It uses Couchbase, an open source NoSQL distributed document database, as the Docker container.
The first part (Couchbase on Kubernetes) explained how to start a Kubernetes cluster using Vagrant. That is a simple and easy way to develop, test, and deploy a Kubernetes cluster on your local machine. But this quickly becomes limiting because the resources are constrained by the local machine. So what do you do?
A Kubernetes cluster can be installed on Amazon as well. This second part shows how to:
- Set up and start a Kubernetes cluster on Amazon Web Services
- Run a Docker container in the Kubernetes cluster
- Expose a Pod on Kubernetes as a Service
- Shut down the cluster
Here is a quick overview:
Let’s dig into the details!
Setup Kubernetes Cluster on Amazon Web Services
Getting Started on AWS EC2 provides complete instructions to start a Kubernetes cluster on Amazon. Make sure the prerequisites (AWS account, AWS CLI, full EC2 access) are met before you follow those instructions.
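A quick way to confirm that the AWS CLI is installed and configured with working credentials is to make any trivial call; a minimal sketch:

```
# Show the configured credentials/region, then make a simple EC2 call to confirm access
aws configure list
aws ec2 describe-regions --query "Regions[].RegionName" --output text
```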
A Kubernetes cluster can be created on Amazon as:
```
export KUBERNETES_PROVIDER=aws
./cluster/kube-up.sh
```
By default, this provisions a new VPC and a 4-node Kubernetes cluster in us-west-2a (Oregon) with t2.micro instances running Ubuntu. This means five EC2 instances are created: one for the master and four for the worker nodes. Some properties that are worth updating:
- Set the NUM_MINIONS environment variable to the number of worker nodes required in the cluster. Set it to 2 if you want only two worker nodes to be created.
- The default instance size in 1.1.x is t2.micro. Set the MASTER_SIZE and MINION_SIZE environment variables to m3.medium, otherwise the nodes are going to crawl (see the sizing sketch below).
If you downloaded Kubernetes from github.com/kubernetes/kubernetes/releases, then all the values can be changed in cluster/aws/config-default.sh.
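For example, a two-node cluster with m3.medium instances could be started by exporting these variables before running kube-up.sh (a sketch using the variables named above; exact names may differ between Kubernetes releases):

```
# Cluster sizing for kube-up.sh on AWS (Kubernetes 1.1.x-era variable names)
export KUBERNETES_PROVIDER=aws
export NUM_MINIONS=2
export MASTER_SIZE=m3.medium
export MINION_SIZE=m3.medium
./cluster/kube-up.sh
```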
Starting Kubernetes on Amazon shows the following log:
```
./kubernetes/cluster/kube-up.sh
... Starting cluster using provider: aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: vivid
Uploading to Amazon S3
+++ Staging server tars to S3 Storage: kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/devel
{
"InstanceProfile": {
"InstanceProfileId": "AIPAJMNMKZSXNWXQBHXHI",
"Roles": [
{
"RoleName": "kubernetes-master",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
},
"CreateDate": "2016-02-29T23:19:17Z",
"Path": "/",
"RoleId": "AROAJW7ER37BPXX5KFTFS",
"Arn": "arn:aws:iam::598307997273:role/kubernetes-master"
}
],
"Arn": "arn:aws:iam::598307997273:instance-profile/kubernetes-master",
"CreateDate": "2016-02-29T23:19:19Z",
"Path": "/",
"InstanceProfileName": "kubernetes-master"
}
}
{
"InstanceProfile": {
"InstanceProfileId": "AIPAILRAU7RF4R2SDCULG",
"Path": "/",
"Arn": "arn:aws:iam::598307997273:instance-profile/kubernetes-minion",
"Roles": [
{
"Path": "/",
"AssumeRolePolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
],
"Version": "2012-10-17"
},
"RoleName": "kubernetes-minion",
"Arn": "arn:aws:iam::598307997273:role/kubernetes-minion",
"RoleId": "AROAIBEPV6VW4IEE6MRHS",
"CreateDate": "2016-02-29T23:19:21Z"
}
],
"InstanceProfileName": "kubernetes-minion",
"CreateDate": "2016-02-29T23:19:22Z"
}
}
Using SSH key with (AWS) fingerprint: 39:b3:cb:c1:af:6a:86:de:98:95:01:3d:9a:56:bb:8b
Creating vpc.
Adding tag to vpc-7b46ac1f: Name=kubernetes-vpc
Adding tag to vpc-7b46ac1f: KubernetesCluster=kubernetes
Using VPC vpc-7b46ac1f
Creating subnet.
Adding tag to subnet-cc906fa8: KubernetesCluster=kubernetes
Using subnet subnet-cc906fa8
Creating Internet Gateway.
Using Internet Gateway igw-40055525
Associating route table.
Creating route table
Adding tag to rtb-f2dc1596: KubernetesCluster=kubernetes
Associating route table rtb-f2dc1596 to subnet subnet-cc906fa8
Adding route to route table rtb-f2dc1596
Using Route Table rtb-f2dc1596
Creating master security group.
Creating security group kubernetes-master-kubernetes.
Adding tag to sg-308b3357: KubernetesCluster=kubernetes
Creating minion security group.
Creating security group kubernetes-minion-kubernetes.
Adding tag to sg-3b8b335c: KubernetesCluster=kubernetes
Using master security group: kubernetes-master-kubernetes sg-308b3357
Using minion security group: kubernetes-minion-kubernetes sg-3b8b335c
Starting Master
Adding tag to i-b71a6f70: Name=kubernetes-master
Adding tag to i-b71a6f70: Role=kubernetes-master
Adding tag to i-b71a6f70: KubernetesCluster=kubernetes
Waiting for master to be ready
Attempt 1 to check for master nodeWaiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
[master running @52.34.244.195]
Attaching persistent data volume (vol-e072d316) to master
{
"Device": "/dev/sdb",
"State": "attaching",
"InstanceId": "i-b71a6f70",
"VolumeId": "vol-e072d316",
"AttachTime": "2016-03-02T18:10:15.985Z"
}
Attempt 1 to check for SSH to master [ssh to master working]
Attempt 1 to check for salt-master [salt-master not working yet]
Attempt 2 to check for salt-master [salt-master not working yet]
Attempt 3 to check for salt-master [salt-master not working yet]
Attempt 4 to check for salt-master [salt-master not working yet]
Attempt 5 to check for salt-master [salt-master not working yet]
Attempt 6 to check for salt-master [salt-master not working yet]
Attempt 7 to check for salt-master [salt-master not working yet]
Attempt 8 to check for salt-master [salt-master not working yet]
Attempt 9 to check for salt-master [salt-master not working yet]
Attempt 10 to check for salt-master [salt-master not working yet]
Attempt 11 to check for salt-master [salt-master not working yet]
Attempt 12 to check for salt-master [salt-master not working yet]
Attempt 13 to check for salt-master [salt-master not working yet]
Attempt 14 to check for salt-master [salt-master running]
Creating minion configuration
Creating autoscaling group
0 minions started; waiting
0 minions started; waiting
0 minions started; waiting
0 minions started; waiting
2 minions started; ready
Waiting 3 minutes for cluster to settle
..................Re-running salt highstate
Waiting for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This might loop forever if there was some uncaught error during start
up.
Kubernetes cluster created.
cluster "aws_kubernetes" set.
user "aws_kubernetes" set.
context "aws_kubernetes" set.
switched to context "aws_kubernetes".
Wrote config for aws_kubernetes to /Users/arungupta/.kube/config
Sanity checking cluster...
Attempt 1 to check Docker on node @ 52.37.172.215 ...not working yet
Attempt 2 to check Docker on node @ 52.37.172.215 ...not working yet
Attempt 3 to check Docker on node @ 52.37.172.215 ...working
Attempt 1 to check Docker on node @ 52.27.90.19 ...working
Kubernetes cluster is running. The master is running at:
https://52.34.244.195
The user name and password to use is located in /Users/arungupta/.kube/config.
... calling validate-cluster
Waiting for 2 ready nodes. 1 ready nodes, 2 registered. Retrying.
Found 2 node(s).
NAME LABELS STATUS AGE
ip-172-20-0-92.us-west-2.compute.internal kubernetes.io/hostname=ip-172-20-0-92.us-west-2.compute.internal Ready 56s
ip-172-20-0-93.us-west-2.compute.internal kubernetes.io/hostname=ip-172-20-0-93.us-west-2.compute.internal Ready 35s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok nil
scheduler Healthy ok nil
etcd-0 Healthy {"health": "true"} nil
etcd-1 Healthy {"health": "true"} nil
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.34.244.195
Elasticsearch is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
```
Amazon Console shows:
Three instances are created as shown: one for the master node and two for the worker nodes.
The username and password for the Kubernetes master are stored in /Users/arungupta/.kube/config. Look for a section like:
```
- name: aws_kubernetes
  user:
    client-certificate-data: DATA
    client-key-data: DATA
    password: 3FkxcAURLCWBXc9H
    username: admin
```
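If you prefer the command line, the same credentials can be pulled out of the generated config file, for example (using the default config location written by kube-up; the grep also matches the cluster and context entries that share this name):

```
# Print the entries named aws_kubernetes from the kube config written by kube-up.sh
grep -A 5 "aws_kubernetes" ~/.kube/config
```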
Run Docker container in Kubernetes Cluster on Amazon
Now that the cluster is up and running, get a list of all the nodes:
```
./kubernetes/cluster/kubectl.sh get no
NAME LABELS STATUS AGE
ip-172-20-0-92.us-west-2.compute.internal kubernetes.io/hostname=ip-172-20-0-92.us-west-2.compute.internal Ready 18m
ip-172-20-0-93.us-west-2.compute.internal kubernetes.io/hostname=ip-172-20-0-93.us-west-2.compute.internal Ready 18m
```
It shows two worker nodes.
Create a new Couchbase pod:
```
./kubernetes/cluster/kubectl.sh run couchbase --image=arungupta/couchbase
replicationcontroller "couchbase" created
```
Notice how the image name can be specified on the CLI. This command creates a Replication Controller with a single pod. The pod uses the arungupta/couchbase Docker image, which provides a pre-configured Couchbase server. Any Docker image can be specified here.
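kubectl run also accepts a few useful flags; for instance, the container port and the replica count can be set at creation time (shown only as an illustration, with flag names as in kubectl of that era; it is not used in the rest of this walkthrough):

```
# Create the Replication Controller with an explicit container port and replica count
./kubernetes/cluster/kubectl.sh run couchbase --image=arungupta/couchbase --port=8091 --replicas=1
```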
Get all the RC resources:
```
./kubernetes/cluster/kubectl.sh get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
couchbase couchbase arungupta/couchbase run=couchbase 1 12m
```
This shows the Replication Controller that is created for you.
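The Replication Controller is what keeps the desired number of pod replicas running; for example, it could later be scaled with a single command (not needed for this walkthrough, just a sketch of what the controller is for):

```
# Ask the Replication Controller to maintain two pod replicas
./kubernetes/cluster/kubectl.sh scale rc couchbase --replicas=2
```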
Get all the Pods:
```
./kubernetes/cluster/kubectl.sh get po
NAME READY STATUS RESTARTS AGE
couchbase-kil4y 1/1 Running 0 12m
```
The output shows the Pod that is created as part of the Replication Controller.
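To peek at what the Couchbase container is doing, check the pod logs (the pod name comes from the output above and will differ in your cluster):

```
# Show the logs of the Couchbase pod created by the Replication Controller
./kubernetes/cluster/kubectl.sh logs couchbase-kil4y
```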
Get more details about the Pod:
```
./kubernetes/cluster/kubectl.sh describe po couchbase-kil4y
Name: couchbase-kil4y
Namespace: default
Image(s): arungupta/couchbase
Node: ip-172-20-0-93.us-west-2.compute.internal/172.20.0.93
Start Time: Wed, 02 Mar 2016 10:25:47 -0800
Labels: run=couchbase
Status: Running
Reason:
Message:
IP: 10.244.1.4
Replication Controllers: couchbase (1/1 replicas created)
Containers:
couchbase:
Container ID: docker://1c33e4f28978a5169a5d166add7c763de59839ed1f12865f4643456efdc0c60e
Image: arungupta/couchbase
Image ID: docker://080e2e96b3fc22964f3dec079713cdf314e15942d6eb135395134d629e965062
QoS Tier:
cpu: Burstable
Requests:
cpu: 100m
State: Running
Started: Wed, 02 Mar 2016 10:26:18 -0800
Ready: True
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready True
Volumes:
default-token-xuxn5:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-xuxn5
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
13m 13m 1 {scheduler } Scheduled Successfully assigned couchbase-kil4y to ip-172-20-0-93.us-west-2.compute.internal
13m 13m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
13m 13m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} implicitly required container POD Created Created with docker id 3830f504a7b6
13m 13m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} implicitly required container POD Started Started with docker id 3830f504a7b6
13m 13m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} spec.containers{couchbase} Pulling Pulling image "arungupta/couchbase"
12m 12m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} spec.containers{couchbase} Pulled Successfully pulled image "arungupta/couchbase"
12m 12m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} spec.containers{couchbase} Created Created with docker id 1c33e4f28978
12m 12m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} spec.containers{couchbase} Started Started with docker id 1c33e4f28978
```
Expose Pod on Kubernetes as Service
Now that the pod is running, how do you access the Couchbase server?
You need to expose it outside the Kubernetes cluster.
The kubectl expose command takes a pod, service, or replication controller and exposes it as a Kubernetes Service. Let's expose the Replication Controller created earlier:
```
./kubernetes/cluster/kubectl.sh expose rc couchbase --target-port=8091 --port=8091 --type=LoadBalancer
service "couchbase" exposed
```
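You can also list the Service to confirm it was created and to see the assigned cluster IP (the load balancer hostname may take a little while to appear):

```
# List the newly created Service
./kubernetes/cluster/kubectl.sh get svc couchbase
```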
Get more details about the Service:
```
./kubernetes/cluster/kubectl.sh describe svc couchbase
Name: couchbase
Namespace: default
Labels: run=couchbase
Selector: run=couchbase
Type: LoadBalancer
IP: 10.0.158.93
LoadBalancer Ingress: a44d3f016e0a411e5888f0206c9933da-1869988881.us-west-2.elb.amazonaws.com
Port: <unnamed> 8091/TCP
NodePort: <unnamed> 32415/TCP
Endpoints: 10.244.1.4:8091
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
7s 7s 1 {service-controller } CreatingLoadBalancer Creating load balancer
5s 5s 1 {service-controller } CreatedLoadBalancer Created load balancer
```
The LoadBalancer Ingress attribute gives you the address of the load balancer, which is now publicly accessible.
Wait about 3 minutes for the load balancer to settle down. Access it on port 8091 and the login page for the Couchbase Web Console shows up:
Enter the credentials as “Administrator” and “password” to see the Web Console:
And so you just accessed your pod outside the Kubernetes cluster.
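If you want to check the endpoint from the command line first, a simple request against the load balancer address shown in the Service description (yours will differ) should return an HTTP status once the ELB and Couchbase are up:

```
# Expect an HTTP status code (e.g. 200 or a redirect) from the Couchbase Web Console port
curl -s -o /dev/null -w "%{http_code}\n" \
  http://a44d3f016e0a411e5888f0206c9933da-1869988881.us-west-2.elb.amazonaws.com:8091/
```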
Shutdown Kubernetes Cluster
Finally, shut down the cluster using the cluster/kube-down.sh script.
```
./kubernetes/cluster/kube-down.sh
Bringing down cluster using provider: aws
Deleting ELBs in: vpc-7b46ac1f
Waiting for ELBs to be deleted
All ELBs deleted
Deleting auto-scaling group: kubernetes-minion-group
Deleting auto-scaling launch configuration: kubernetes-minion-group
Deleting instances in VPC: vpc-7b46ac1f
Waiting for instances to be deleted
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-44077283 i-b71a6f70
Sleeping for 3 seconds...
All instances deleted
Deleting VPC: vpc-7b46ac1f
Cleaning up security group: sg-308b3357
Cleaning up security group: sg-3b8b335c
Cleaning up security group: sg-e3813984
Deleting security group: sg-308b3357
Deleting security group: sg-3b8b335c
Deleting security group: sg-e3813984
Done
```
For a complete clean up, you still need to explicitly delete the S3 bucket where Kubernetes binaries are stored.
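The staging bucket name appears in the kube-up log above; it can be removed with the AWS CLI, for example (double-check the bucket name first, since --force deletes all of its contents):

```
# Delete the S3 bucket used for staging the Kubernetes server tars, including its contents
aws s3 rb s3://kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149 --force
```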
Enjoy!
Source: http://blog.couchbase.com/2016/march/kubernetes-cluster-amazon-expose-service