Kubernetes on AWS

 

As we saw in the earlier post on Kubernetes concepts,

a Kubernetes cluster consists of one master node communicating with multiple worker nodes.

So first we need to set up the servers that will play the master-node and worker-node roles!

 

 

You can build a Kubernetes cluster in a number of ways, such as kops or Kubespray;

here I'll build one using kops.

 

Assuming Docker is already installed, the steps are as follows.

 

1. Install kubectl

2. Install kops

3. Build the cluster
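Before step 1, a quick sanity check can save some time. This is just a sketch that confirms the tools this post assumes (docker as the stated precondition, plus the aws CLI used in later steps) are actually on the PATH:

```shell
# Sanity check (sketch): confirm assumed prerequisites are on the PATH.
# "docker" is the post's stated precondition; "aws" is used in later steps.
for cmd in docker aws; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: OK"
  else
    echo "$cmd: missing"
  fi
done
```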

 

 

1. Installing kubectl - from the binary release

curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

 

To download a specific version instead, specify it in the URL like this:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.0/bin/linux/amd64/kubectl
#  install kubectl 
[ec2-user@ip-172-31-37-116 bin]$ curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl

# Make the binary executable
[ec2-user@ip-172-31-37-116 bin]$ chmod +x ./kubectl

# Move it onto the PATH
[ec2-user@ip-172-31-37-116 bin]$ sudo mv ./kubectl /usr/local/bin/kubectl

# Verify the installation
[ec2-user@ip-172-31-37-116 bin]$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:56:40Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

 

You can see the installation completed successfully.

 

As an aside, if you'd rather install from a package manager, or are on a different OS, see the documentation below.

https://kubernetes.io/docs/tasks/tools/install-kubectl/

 

2. Installing kops

Likewise, I'll install kops from its binary release.

curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64

 

To download a specific version instead, specify it in the URL like this:

curl -LO  https://github.com/kubernetes/kops/releases/download/1.15.0/kops-linux-amd64

 

# 1. install Kops
[root@ip-172-31-37-116 ~]# curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64

# Make the binary executable
[root@ip-172-31-37-116 ~]# chmod +x kops-linux-amd64

# Move it onto the PATH
[root@ip-172-31-37-116 ~]# mv kops-linux-amd64 /usr/local/bin/kops

# Verify the installation
[root@ip-172-31-37-116 ~]# kops version
Version 1.16.1 (git-d1b07f7fd6)

 

 

3. Building the cluster

 

Building a cluster involves several steps.

On AWS, the cluster communicates via DNS rather than raw IPs, so we first set up DNS, and then configure storage to hold the state and key material that kops uses to manage the cluster.

 

Since everything will be managed on AWS, I'll use

Route 53 for DNS and S3 for the storage.

 

3-1. Setting up Route 53

(My draft for this part was lost, so I'll update it later ㅜㅜ)

 

3-2. Setting up S3

# Create the state-store bucket
# (outside us-east-1, create-bucket also needs a LocationConstraint)
aws s3api create-bucket \
    --bucket prefix-example-com-state-store \
    --region ap-northeast-2 \
    --create-bucket-configuration LocationConstraint=ap-northeast-2

# Enable S3 versioning so kops can revert to a previous state if needed
aws s3api put-bucket-versioning \
    --bucket prefix-example-com-state-store \
    --versioning-configuration Status=Enabled
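The later commands reference ${NAME} and expect kops to know where its state store lives. A minimal sketch of those variables, assuming the bucket above and this post's example cluster name (the .k8s.local suffix makes kops use gossip DNS instead of Route 53):

```shell
# ${NAME} and KOPS_STATE_STORE are read by kops in the following steps.
# The values here are this post's examples; substitute your own.
export NAME=kbseo.k8s.local
export KOPS_STATE_STORE=s3://prefix-example-com-state-store
echo "cluster=${NAME} state=${KOPS_STATE_STORE}"
```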

 

3-3. Setting environment variables and creating an SSH key pair

Set the variables needed to call the AWS API:

[root@ip-172-31-37-116 ~]# export AWS_ACCESS_KEY_ID=$(aws configure get aws_access_key_id)
[root@ip-172-31-37-116 ~]# export AWS_SECRET_ACCESS_KEY=$(aws configure get aws_secret_access_key)

# Generate an SSH key in advance with ssh-keygen.
[root@ip-172-31-37-116 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:BTjgg7iUAAIQRTCbpBikyL1b6mQfNZoRwTPE1lR/fQQ root@ip-172-31-37-116.ap-northeast-2.compute.internal
The key's randomart image is:


# If you look in /root/.ssh/, you can see the generated key!
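If you want this step scriptable, here is a non-interactive sketch (empty passphrase; it only generates a key when none exists yet):

```shell
# Generate an RSA key non-interactively, but only if one doesn't exist yet.
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"
ls -l "$HOME/.ssh/id_rsa" "$HOME/.ssh/id_rsa.pub"
```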

 

 

3-4. Composing the cluster configuration

# Let's create a cluster configuration with kops.

[root@ip-172-31-37-116 ~]# kops create cluster --zones ap-northeast-2c ${NAME}
I0430 05:04:27.012121    7656 create_cluster.go:562] Inferred --cloud=aws from zone "ap-northeast-2c"
I0430 05:04:27.057539    7656 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet ap-northeast-2c
I0430 05:04:28.477517    7656 create_cluster.go:1568] Using SSH public key: /root/.ssh/id_rsa.pub
Previewing changes that will be made:


I0430 05:04:31.177598    7656 apply_cluster.go:556] Gossip DNS: skipping DNS validation
I0430 05:04:31.229426    7656 executor.go:103] Tasks: 0 done / 92 total; 44 can run
I0430 05:04:32.016602    7656 executor.go:103] Tasks: 44 done / 92 total; 24 can run
I0430 05:04:32.428758    7656 executor.go:103] Tasks: 68 done / 92 total; 20 can run
I0430 05:04:32.503308    7656 executor.go:103] Tasks: 88 done / 92 total; 3 can run
W0430 05:04:32.570352    7656 keypair.go:140] Task did not have an address: *awstasks.LoadBalancer {"Name":"api.kbseo.k8s.local","Lifecycle":"Sync","LoadBalancerName":"api-kbseo-k8s-local-a1vmch","DNSName":null,"HostedZoneId":null,"Subnets":[{"Name":"ap-northeast-2c.kbseo.k8s.local","ShortName":"ap-northeast-2c","Lifecycle":"Sync","ID":null,"VPC":{"Name":"kbseo.k8s.local","Lifecycle":"Sync","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"kbseo.k8s.local","Name":"kbseo.k8s.local","kubernetes.io/cluster/kbseo.k8s.local":"owned"}},"AvailabilityZone":"ap-northeast-2c","CIDR":"172.20.32.0/19","Shared":false,"Tags":{"KubernetesCluster":"kbseo.k8s.local","Name":"ap-northeast-2c.kbseo.k8s.local","SubnetType":"Public","kubernetes.io/cluster/kbseo.k8s.local":"owned","kubernetes.io/role/elb":"1"}}],"SecurityGroups":[{"Name":"api-elb.kbseo.k8s.local","Lifecycle":"Sync","ID":null,"Description":"Security group for api ELB","VPC":{"Name":"kbseo.k8s.local","Lifecycle":"Sync","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"kbseo.k8s.local","Name":"kbseo.k8s.local","kubernetes.io/cluster/kbseo.k8s.local":"owned"}},"RemoveExtraRules":["port=443"],"Shared":null,"Tags":{"KubernetesCluster":"kbseo.k8s.local","Name":"api-elb.kbseo.k8s.local","kubernetes.io/cluster/kbseo.k8s.local":"owned"}}],"Listeners":{"443":{"InstancePort":443,"SSLCertificateID":""}},"Scheme":null,"HealthCheck":{"Target":"SSL:443","HealthyThreshold":2,"UnhealthyThreshold":2,"Interval":10,"Timeout":5},"AccessLog":null,"ConnectionDraining":null,"ConnectionSettings":{"IdleTimeout":300},"CrossZoneLoadBalancing":{"Enabled":false},"SSLCertificateID":"","Tags":{"KubernetesCluster":"kbseo.k8s.local","Name":"api.kbseo.k8s.local","kubernetes.io/cluster/kbseo.k8s.local":"owned"}}
I0430 05:04:32.755402    7656 executor.go:103] Tasks: 91 done / 92 total; 1 can run
I0430 05:04:32.840253    7656 executor.go:103] Tasks: 92 done / 92 total; 0 can run
Will create resources:
  AutoscalingGroup/master-ap-northeast-2c.masters.kbseo.k8s.local
      Granularity             1Minute
      LaunchConfiguration     name:master-ap-northeast-2c.masters.kbseo.k8s.local
      MaxSize                 1
      Metrics                 [GroupDesiredCapacity, GroupInServiceInstances, GroupMaxSize, GroupMinSize, GroupPendingInstances, GroupStandbyInstances, GroupTerminatingInstances, GroupTotalInstances]
      MinSize                 1
      Subnets                 [name:ap-northeast-2c.kbseo.k8s.local]
      SuspendProcesses        []
      Tags                    {Name: master-ap-northeast-2c.masters.kbseo.k8s.local, k8s.io/role/master: 1, kops.k8s.io/instancegroup: master-ap-northeast-2c, k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: master-ap-northeast-2c, KubernetesCluster: kbseo.k8s.local}


  AutoscalingGroup/nodes.kbseo.k8s.local
      Granularity             1Minute
      LaunchConfiguration     name:nodes.kbseo.k8s.local
      MaxSize                 2
      Metrics                 [GroupDesiredCapacity, GroupInServiceInstances, GroupMaxSize, GroupMinSize, GroupPendingInstances, GroupStandbyInstances, GroupTerminatingInstances, GroupTotalInstances]
      MinSize                 2
      Subnets                 [name:ap-northeast-2c.kbseo.k8s.local]
      SuspendProcesses        []
      Tags                    {k8s.io/cluster-autoscaler/node-template/label/kops.k8s.io/instancegroup: nodes, KubernetesCluster: kbseo.k8s.local, Name: nodes.kbseo.k8s.local, k8s.io/role/node: 1, kops.k8s.io/instancegroup: nodes}


  DHCPOptions/kbseo.k8s.local
      DomainName              ap-northeast-2.compute.internal
      DomainNameServers       AmazonProvidedDNS
      Shared                  false
      Tags                    {Name: kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local, kubernetes.io/cluster/kbseo.k8s.local: owned}


  EBSVolume/c.etcd-events.kbseo.k8s.local
      AvailabilityZone        ap-northeast-2c
      Encrypted               false
      SizeGB                  20
      Tags                    {k8s.io/etcd/events: c/c, k8s.io/role/master: 1, kubernetes.io/cluster/kbseo.k8s.local: owned, Name: c.etcd-events.kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local}
      VolumeType              gp2


  EBSVolume/c.etcd-main.kbseo.k8s.local
      AvailabilityZone        ap-northeast-2c
      Encrypted               false
      SizeGB                  20
      Tags                    {k8s.io/etcd/main: c/c, k8s.io/role/master: 1, kubernetes.io/cluster/kbseo.k8s.local: owned, Name: c.etcd-main.kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local}
      VolumeType              gp2


  IAMInstanceProfile/masters.kbseo.k8s.local
      Shared                  false


  IAMInstanceProfile/nodes.kbseo.k8s.local
      Shared                  false


  IAMInstanceProfileRole/masters.kbseo.k8s.local
      InstanceProfile         name:masters.kbseo.k8s.local id:masters.kbseo.k8s.local
      Role                    name:masters.kbseo.k8s.local


  IAMInstanceProfileRole/nodes.kbseo.k8s.local
      InstanceProfile         name:nodes.kbseo.k8s.local id:nodes.kbseo.k8s.local
      Role                    name:nodes.kbseo.k8s.local


  IAMRole/masters.kbseo.k8s.local
      ExportWithID            masters


  IAMRole/nodes.kbseo.k8s.local
      ExportWithID            nodes


  IAMRolePolicy/masters.kbseo.k8s.local
      Role                    name:masters.kbseo.k8s.local


  IAMRolePolicy/nodes.kbseo.k8s.local
      Role                    name:nodes.kbseo.k8s.local


  InternetGateway/kbseo.k8s.local
      VPC                     name:kbseo.k8s.local
      Shared                  false
      Tags                    {Name: kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local, kubernetes.io/cluster/kbseo.k8s.local: owned}


  Keypair/apiserver-aggregator
      Signer                  name:apiserver-aggregator-ca id:cn=apiserver-aggregator-ca
      Subject                 cn=aggregator
      Type                    client
      Format                  v1alpha2


  Keypair/apiserver-aggregator-ca
      Subject                 cn=apiserver-aggregator-ca
      Type                    ca
      Format                  v1alpha2


  Keypair/apiserver-proxy-client
      Signer                  name:ca id:cn=kubernetes
      Subject                 cn=apiserver-proxy-client
      Type                    client
      Format                  v1alpha2


  Keypair/ca
      Subject                 cn=kubernetes
      Type                    ca
      Format                  v1alpha2


  Keypair/etcd-clients-ca
      Subject                 cn=etcd-clients-ca
      Type                    ca
      Format                  v1alpha2


  Keypair/etcd-manager-ca-events
      Subject                 cn=etcd-manager-ca-events
      Type                    ca
      Format                  v1alpha2


  Keypair/etcd-manager-ca-main
      Subject                 cn=etcd-manager-ca-main
      Type                    ca
      Format                  v1alpha2


  Keypair/etcd-peers-ca-events
      Subject                 cn=etcd-peers-ca-events
      Type                    ca
      Format                  v1alpha2


  Keypair/etcd-peers-ca-main
      Subject                 cn=etcd-peers-ca-main
      Type                    ca
      Format                  v1alpha2


  Keypair/kops
      Signer                  name:ca id:cn=kubernetes
      Subject                 o=system:masters,cn=kops
      Type                    client
      Format                  v1alpha2


  Keypair/kube-controller-manager
      Signer                  name:ca id:cn=kubernetes
      Subject                 cn=system:kube-controller-manager
      Type                    client
      Format                  v1alpha2


  Keypair/kube-proxy
      Signer                  name:ca id:cn=kubernetes
      Subject                 cn=system:kube-proxy
      Type                    client
      Format                  v1alpha2


  Keypair/kube-scheduler
      Signer                  name:ca id:cn=kubernetes
      Subject                 cn=system:kube-scheduler
      Type                    client
      Format                  v1alpha2


  Keypair/kubecfg
      Signer                  name:ca id:cn=kubernetes
      Subject                 o=system:masters,cn=kubecfg
      Type                    client
      Format                  v1alpha2


  Keypair/kubelet
      Signer                  name:ca id:cn=kubernetes
      Subject                 o=system:nodes,cn=kubelet
      Type                    client
      Format                  v1alpha2


  Keypair/kubelet-api
      Signer                  name:ca id:cn=kubernetes
      Subject                 cn=kubelet-api
      Type                    client
      Format                  v1alpha2


  Keypair/master
      AlternateNames          [100.64.0.1, 127.0.0.1, api.internal.kbseo.k8s.local, api.kbseo.k8s.local, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]
      Signer                  name:ca id:cn=kubernetes
      Subject                 cn=kubernetes-master
      Type                    server
      Format                  v1alpha2


  LaunchConfiguration/master-ap-northeast-2c.masters.kbseo.k8s.local
      AssociatePublicIP       true
      IAMInstanceProfile      name:masters.kbseo.k8s.local id:masters.kbseo.k8s.local
      ImageID                 kope.io/k8s-1.16-debian-stretch-amd64-hvm-ebs-2020-01-17
      InstanceType            c4.large
      RootVolumeDeleteOnTermination    true
      RootVolumeSize          64
      RootVolumeType          gp2
      SSHKey                  name:kubernetes.kbseo.k8s.local-12:b8:34:12:0c:2d:97:b3:2e:3c:5b:98:2c:5c:70:a8 id:kubernetes.kbseo.k8s.local-12:b8:34:12:0c:2d:97:b3:2e:3c:5b:98:2c:5c:70:a8
      SecurityGroups          [name:masters.kbseo.k8s.local]
      SpotPrice               


  LaunchConfiguration/nodes.kbseo.k8s.local
      AssociatePublicIP       true
      IAMInstanceProfile      name:nodes.kbseo.k8s.local id:nodes.kbseo.k8s.local
      ImageID                 kope.io/k8s-1.16-debian-stretch-amd64-hvm-ebs-2020-01-17
      InstanceType            t2.medium
      RootVolumeDeleteOnTermination    true
      RootVolumeSize          128
      RootVolumeType          gp2
      SSHKey                  name:kubernetes.kbseo.k8s.local-12:b8:34:12:0c:2d:97:b3:2e:3c:5b:98:2c:5c:70:a8 id:kubernetes.kbseo.k8s.local-12:b8:34:12:0c:2d:97:b3:2e:3c:5b:98:2c:5c:70:a8
      SecurityGroups          [name:nodes.kbseo.k8s.local]
      SpotPrice               


  LoadBalancer/api.kbseo.k8s.local
      LoadBalancerName        api-kbseo-k8s-local-a1vmch
      Subnets                 [name:ap-northeast-2c.kbseo.k8s.local]
      SecurityGroups          [name:api-elb.kbseo.k8s.local]
      Listeners               {443: {"InstancePort":443,"SSLCertificateID":""}}
      HealthCheck             {"Target":"SSL:443","HealthyThreshold":2,"UnhealthyThreshold":2,"Interval":10,"Timeout":5}
      ConnectionSettings      {"IdleTimeout":300}
      CrossZoneLoadBalancing    {"Enabled":false}
      SSLCertificateID        
      Tags                    {Name: api.kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local, kubernetes.io/cluster/kbseo.k8s.local: owned}


  LoadBalancerAttachment/api-master-ap-northeast-2c
      LoadBalancer            name:api.kbseo.k8s.local id:api.kbseo.k8s.local
      AutoscalingGroup        name:master-ap-northeast-2c.masters.kbseo.k8s.local id:master-ap-northeast-2c.masters.kbseo.k8s.local


  ManagedFile/etcd-cluster-spec-events
      Location                backups/etcd/events/control/etcd-cluster-spec


  ManagedFile/etcd-cluster-spec-main
      Location                backups/etcd/main/control/etcd-cluster-spec


  ManagedFile/kbseo.k8s.local-addons-bootstrap
      Location                addons/bootstrap-channel.yaml


  ManagedFile/kbseo.k8s.local-addons-core.addons.k8s.io
      Location                addons/core.addons.k8s.io/v1.4.0.yaml


  ManagedFile/kbseo.k8s.local-addons-dns-controller.addons.k8s.io-k8s-1.12
      Location                addons/dns-controller.addons.k8s.io/k8s-1.12.yaml


  ManagedFile/kbseo.k8s.local-addons-dns-controller.addons.k8s.io-k8s-1.6
      Location                addons/dns-controller.addons.k8s.io/k8s-1.6.yaml


  ManagedFile/kbseo.k8s.local-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
      Location                addons/dns-controller.addons.k8s.io/pre-k8s-1.6.yaml


  ManagedFile/kbseo.k8s.local-addons-kops-controller.addons.k8s.io-k8s-1.16
      Location                addons/kops-controller.addons.k8s.io/k8s-1.16.yaml


  ManagedFile/kbseo.k8s.local-addons-kube-dns.addons.k8s.io-k8s-1.12
      Location                addons/kube-dns.addons.k8s.io/k8s-1.12.yaml


  ManagedFile/kbseo.k8s.local-addons-kube-dns.addons.k8s.io-k8s-1.6
      Location                addons/kube-dns.addons.k8s.io/k8s-1.6.yaml


  ManagedFile/kbseo.k8s.local-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
      Location                addons/kube-dns.addons.k8s.io/pre-k8s-1.6.yaml


  ManagedFile/kbseo.k8s.local-addons-kubelet-api.rbac.addons.k8s.io-k8s-1.9
      Location                addons/kubelet-api.rbac.addons.k8s.io/k8s-1.9.yaml


  ManagedFile/kbseo.k8s.local-addons-limit-range.addons.k8s.io
      Location                addons/limit-range.addons.k8s.io/v1.5.0.yaml


  ManagedFile/kbseo.k8s.local-addons-rbac.addons.k8s.io-k8s-1.8
      Location                addons/rbac.addons.k8s.io/k8s-1.8.yaml


  ManagedFile/kbseo.k8s.local-addons-storage-aws.addons.k8s.io-v1.15.0
      Location                addons/storage-aws.addons.k8s.io/v1.15.0.yaml


  ManagedFile/kbseo.k8s.local-addons-storage-aws.addons.k8s.io-v1.6.0
      Location                addons/storage-aws.addons.k8s.io/v1.6.0.yaml


  ManagedFile/kbseo.k8s.local-addons-storage-aws.addons.k8s.io-v1.7.0
      Location                addons/storage-aws.addons.k8s.io/v1.7.0.yaml


  ManagedFile/manifests-etcdmanager-events
      Location                manifests/etcd/events.yaml


  ManagedFile/manifests-etcdmanager-main
      Location                manifests/etcd/main.yaml


  Route/0.0.0.0/0
      RouteTable              name:kbseo.k8s.local
      CIDR                    0.0.0.0/0
      InternetGateway         name:kbseo.k8s.local


  RouteTable/kbseo.k8s.local
      VPC                     name:kbseo.k8s.local
      Shared                  false
      Tags                    {Name: kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local, kubernetes.io/cluster/kbseo.k8s.local: owned, kubernetes.io/kops/role: public}


  RouteTableAssociation/ap-northeast-2c.kbseo.k8s.local
      RouteTable              name:kbseo.k8s.local
      Subnet                  name:ap-northeast-2c.kbseo.k8s.local


  SSHKey/kubernetes.kbseo.k8s.local-12:b8:34:12:0c:2d:97:b3:2e:3c:5b:98:2c:5c:70:a8
      KeyFingerprint          cd:b0:00:ed:d3:5a:e4:b3:17:51:80:87:9c:50:48:31


  Secret/admin


  Secret/kube


  Secret/kube-proxy


  Secret/kubelet


  Secret/system:controller_manager


  Secret/system:dns


  Secret/system:logging


  Secret/system:monitoring


  Secret/system:scheduler


  SecurityGroup/api-elb.kbseo.k8s.local
      Description             Security group for api ELB
      VPC                     name:kbseo.k8s.local
      RemoveExtraRules        [port=443]
      Tags                    {Name: api-elb.kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local, kubernetes.io/cluster/kbseo.k8s.local: owned}


  SecurityGroup/masters.kbseo.k8s.local
      Description             Security group for masters
      VPC                     name:kbseo.k8s.local
      RemoveExtraRules        [port=22, port=443, port=2380, port=2381, port=4001, port=4002, port=4789, port=179]
      Tags                    {KubernetesCluster: kbseo.k8s.local, kubernetes.io/cluster/kbseo.k8s.local: owned, Name: masters.kbseo.k8s.local}


  SecurityGroup/nodes.kbseo.k8s.local
      Description             Security group for nodes
      VPC                     name:kbseo.k8s.local
      RemoveExtraRules        [port=22]
      Tags                    {kubernetes.io/cluster/kbseo.k8s.local: owned, Name: nodes.kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local}


  SecurityGroupRule/all-master-to-master
      SecurityGroup           name:masters.kbseo.k8s.local
      SourceGroup             name:masters.kbseo.k8s.local


  SecurityGroupRule/all-master-to-node
      SecurityGroup           name:nodes.kbseo.k8s.local
      SourceGroup             name:masters.kbseo.k8s.local


  SecurityGroupRule/all-node-to-node
      SecurityGroup           name:nodes.kbseo.k8s.local
      SourceGroup             name:nodes.kbseo.k8s.local


  SecurityGroupRule/api-elb-egress
      SecurityGroup           name:api-elb.kbseo.k8s.local
      CIDR                    0.0.0.0/0
      Egress                  true


  SecurityGroupRule/https-api-elb-0.0.0.0/0
      SecurityGroup           name:api-elb.kbseo.k8s.local
      CIDR                    0.0.0.0/0
      Protocol                tcp
      FromPort                443
      ToPort                  443


  SecurityGroupRule/https-elb-to-master
      SecurityGroup           name:masters.kbseo.k8s.local
      Protocol                tcp
      FromPort                443
      ToPort                  443
      SourceGroup             name:api-elb.kbseo.k8s.local


  SecurityGroupRule/icmp-pmtu-api-elb-0.0.0.0/0
      SecurityGroup           name:api-elb.kbseo.k8s.local
      CIDR                    0.0.0.0/0
      Protocol                icmp
      FromPort                3
      ToPort                  4


  SecurityGroupRule/master-egress
      SecurityGroup           name:masters.kbseo.k8s.local
      CIDR                    0.0.0.0/0
      Egress                  true


  SecurityGroupRule/node-egress
      SecurityGroup           name:nodes.kbseo.k8s.local
      CIDR                    0.0.0.0/0
      Egress                  true


  SecurityGroupRule/node-to-master-tcp-1-2379
      SecurityGroup           name:masters.kbseo.k8s.local
      Protocol                tcp
      FromPort                1
      ToPort                  2379
      SourceGroup             name:nodes.kbseo.k8s.local


  SecurityGroupRule/node-to-master-tcp-2382-4000
      SecurityGroup           name:masters.kbseo.k8s.local
      Protocol                tcp
      FromPort                2382
      ToPort                  4000
      SourceGroup             name:nodes.kbseo.k8s.local


  SecurityGroupRule/node-to-master-tcp-4003-65535
      SecurityGroup           name:masters.kbseo.k8s.local
      Protocol                tcp
      FromPort                4003
      ToPort                  65535
      SourceGroup             name:nodes.kbseo.k8s.local


  SecurityGroupRule/node-to-master-udp-1-65535
      SecurityGroup           name:masters.kbseo.k8s.local
      Protocol                udp
      FromPort                1
      ToPort                  65535
      SourceGroup             name:nodes.kbseo.k8s.local


  SecurityGroupRule/ssh-external-to-master-0.0.0.0/0
      SecurityGroup           name:masters.kbseo.k8s.local
      CIDR                    0.0.0.0/0
      Protocol                tcp
      FromPort                22
      ToPort                  22


  SecurityGroupRule/ssh-external-to-node-0.0.0.0/0
      SecurityGroup           name:nodes.kbseo.k8s.local
      CIDR                    0.0.0.0/0
      Protocol                tcp
      FromPort                22
      ToPort                  22


  Subnet/ap-northeast-2c.kbseo.k8s.local
      ShortName               ap-northeast-2c
      VPC                     name:kbseo.k8s.local
      AvailabilityZone        ap-northeast-2c
      CIDR                    172.20.32.0/19
      Shared                  false
      Tags                    {KubernetesCluster: kbseo.k8s.local, kubernetes.io/cluster/kbseo.k8s.local: owned, SubnetType: Public, kubernetes.io/role/elb: 1, Name: ap-northeast-2c.kbseo.k8s.local}


  VPC/kbseo.k8s.local
      CIDR                    172.20.0.0/16
      EnableDNSHostnames      true
      EnableDNSSupport        true
      Shared                  false
      Tags                    {Name: kbseo.k8s.local, KubernetesCluster: kbseo.k8s.local, kubernetes.io/cluster/kbseo.k8s.local: owned}


  VPCDHCPOptionsAssociation/kbseo.k8s.local
      VPC                     name:kbseo.k8s.local
      DHCPOptions             name:kbseo.k8s.local


Must specify --yes to apply changes


Cluster configuration has been created.


Suggestions:
* list clusters with: kops get cluster
* edit this cluster with: kops edit cluster kbseo.k8s.local
* edit your node instance group: kops edit ig --name=kbseo.k8s.local nodes
* edit your master instance group: kops edit ig --name=kbseo.k8s.local master-ap-northeast-2c


Finally configure your cluster with: kops update cluster --name kbseo.k8s.local --yes

 

Now, shall we check what was created?

[root@ip-172-31-37-116 ~]# kops get ig --name ${NAME}
NAME            ROLE    MACHINETYPE    MIN    MAX    ZONES
master-ap-northeast-2c    Master    c4.large    1    1    ap-northeast-2c
nodes            Node    t2.medium    2    2    ap-northeast-2c

# Note: this step creates only the configuration, not the actual resources.
# kops update cluster is what actually provisions them.

 

By default you get a c4.large master and t2.medium nodes,

and since that gets expensive, let's change the instance sizes.

 

[root@ip-172-31-37-116 ~]# kops edit ig master-ap-northeast-2c --name ${NAME}


apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-04-30T05:04:28Z"
  labels:
    kops.k8s.io/cluster: kbseo.k8s.local
  name: master-ap-northeast-2c
spec:
  image: kope.io/k8s-1.16-debian-stretch-amd64-hvm-ebs-2020-01-17
  machineType: c4.large     <------- change this to the instance type you want
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-ap-northeast-2c
  role: Master
  subnets:
  - ap-northeast-2c




[root@ip-172-31-37-116 ~]# kops edit ig nodes --name ${NAME}

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2020-04-30T05:04:28Z"
  generation: 1
  labels:
    kops.k8s.io/cluster: kbseo.k8s.local
  name: nodes
spec:
  image: kope.io/k8s-1.16-debian-stretch-amd64-hvm-ebs-2020-01-17
  machineType: t3.small
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - ap-northeast-2c



# Verify
[root@ip-172-31-37-116 ~]# kops get ig --name ${NAME}
NAME            ROLE    MACHINETYPE    MIN    MAX    ZONES
master-ap-northeast-2c    Master    t2.small    1    1    ap-northeast-2c
nodes            Node    t2.small    2    2    ap-northeast-2c

[root@ip-172-31-37-116 ~]# kops update cluster ${NAME} --yes
I0430 06:26:24.977771    8160 apply_cluster.go:556] Gossip DNS: skipping DNS validation
I0430 06:26:25.625424    8160 executor.go:103] Tasks: 0 done / 92 total; 44 can run
I0430 06:26:26.447564    8160 executor.go:103] Tasks: 44 done / 92 total; 24 can run
I0430 06:26:27.165011    8160 executor.go:103] Tasks: 68 done / 92 total; 20 can run
I0430 06:26:27.415603    8160 executor.go:103] Tasks: 88 done / 92 total; 3 can run
I0430 06:26:27.586632    8160 executor.go:103] Tasks: 91 done / 92 total; 1 can run
I0430 06:26:27.675721    8160 executor.go:103] Tasks: 92 done / 92 total; 0 can run
I0430 06:26:27.768309    8160 update_cluster.go:305] Exporting kubecfg for cluster
kops has set your kubectl context to kbseo.k8s.local


Cluster changes have been applied to the cloud.




Changes may require instances to restart: kops rolling-update cluster


[root@ip-172-31-37-116 ~]#
[root@ip-172-31-37-116 ~]# kops rolling-update cluster
Using cluster from kubectl context: kbseo.k8s.local


NAME            STATUS        NEEDUPDATE    READY    MIN    MAX    NODES
master-ap-northeast-2c    NeedsUpdate    1        0    1    1    1
nodes            NeedsUpdate    2        0    2    2    2

 

 

You can modify these resources with the edit command.

 

[root@ip-172-31-37-116 ~]# kops edit cluster ${NAME}

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2020-04-30T05:04:28Z"
  name: kbseo.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://kubernetes-aws-kbseo-io/kbseo.k8s.local
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-ap-northeast-2c
      name: c
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-ap-northeast-2c
      name: c
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.16.8
  masterInternalName: api.internal.kbseo.k8s.local
  masterPublicName: api.kbseo.k8s.local
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: ap-northeast-2c
    type: Public
    zone: ap-northeast-2c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

 

3-5. Creating the cluster

 

Now let's actually create the resources!

 

[root@ip-172-31-37-116 ~]# kops update cluster ${NAME} --yes

Cluster is starting.  It should be ready in a few minutes.


Suggestions:
* validate cluster: kops validate cluster
* list nodes: kubectl get nodes --show-labels
* ssh to the master: ssh -i ~/.ssh/id_rsa admin@api.kbseo.k8s.local
* the admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
* read about installing addons at: https://github.com/kubernetes/kops/blob/master/docs/operations/addons.md.

 

It takes a little while to complete!
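Rather than just waiting, you can poll with the kops validate cluster command from the Suggestions above. A sketch (it needs kops and valid AWS credentials, so it isn't run here):

```shell
# Poll until kops reports the cluster as healthy (sketch, not run here).
until kops validate cluster --name ${NAME}; do
  echo "cluster not ready yet, retrying in 30s..."
  sleep 30
done
```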

 

As a side note,

kops delete cluster --name ${NAME} --yes

you can delete the cluster with the command above.
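Since deletion is destructive, it can be worth previewing first; without --yes, kops only lists what it would remove:

```shell
# Dry run: list the resources kops would delete (no --yes).
kops delete cluster --name ${NAME}

# Then actually delete them:
kops delete cluster --name ${NAME} --yes
```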

 

 

 

References

https://kubernetes.io/ko/docs/setup/production-environment/tools/kops/

 

 
