Working through Kubernetes the Hard Way
ch3
Set up the VPC, subnet, Firewall Rules, and a Public IP Address, then create the Compute Instances
This part is all prep work
code:bash
NAME SUBNET_MODE BGP_ROUTING_MODE IPV4_RANGE GATEWAY_IPV4
kubernetes-the-hard-way CUSTOM REGIONAL
Instances on this network will not be reachable until firewall rules
are created. As an example, you can allow all internal traffic between
instances as well as SSH, RDP, and ICMP by running:
Seems like a freshly created VPC network blocks all ports by default
In this tutorial the flow appears to be: allow only specific ingress traffic and keep everything else closed
code:bash
gcloud compute networks subnets create kubernetes \
--network kubernetes-the-hard-way \
--range 10.240.0.0/24
10.*.*.* is a private IP range
With the subnet configured, it looks like resources created later get an IP assigned automatically from this subnet
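As a rough sketch of that (my own aside, not a tutorial step; the instance name test-auto-ip is made up): creating an instance on this subnet without --private-network-ip should get an address picked from 10.240.0.0/24 automatically.
code:bash
# Hypothetical example: no --private-network-ip, so GCE assigns a free address
# from the kubernetes subnet (10.240.0.0/24) on its own
gcloud compute instances create test-auto-ip \
--subnet kubernetes \
--machine-type e2-micro \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud
gcloud compute instances describe test-auto-ip \
--format 'value(networkInterfaces[0].networkIP)'   # prints something like 10.240.0.x
gcloud compute instances delete test-auto-ip --quiet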
Setting up the Firewall Rules
code:bash
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-internal \
--allow tcp,udp,icmp \
--network kubernetes-the-hard-way \
--source-ranges 10.240.0.0/24,10.200.0.0/16
A firewall rule that allows all protocols for internal traffic
code:bash
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-external \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes-the-hard-way \
--source-ranges 0.0.0.0/0
A firewall rule that allows SSH, ICMP, and HTTPS from outside
6443 is the kube-apiserver port
List the firewall rules
code:bash
gcloud compute firewall-rules list --filter="network:kubernetes-the-hard-way"
NAME NETWORK DIRECTION PRIORITY ALLOW DENY DISABLED
kubernetes-the-hard-way-allow-external kubernetes-the-hard-way INGRESS 1000 tcp:22,tcp:6443,icmp False
kubernetes-the-hard-way-allow-internal kubernetes-the-hard-way INGRESS 1000 tcp,udp,icmp False
To show all fields of the firewall, please show in JSON format: --format=json
To show all fields in table format, please see the examples in --help.
Looks good
Next, allocate the public IP for Kubernetes
It will be attached to the load balancer that sits in front of the kube-apiserver
code:bash
gcloud compute addresses create kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region)
Check the result
gcloud compute addresses list --filter="name=('kubernetes-the-hard-way')"
Looks like it was allocated (output omitted)
Next, create the compute instances
They run Ubuntu Server 20.04
code:bash
for i in 0 1 2; do
gcloud compute instances create controller-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--private-network-ip 10.240.0.1${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,controller
done
Three of them are created
They live on Compute Engine
Next, create the worker instances
Worker instances apparently need a pod subnet allocation
Not entirely sure what a pod subnet allocation is
What exactly is a "worker" here?
Probably the instances that actually run the Pods: each worker's kubelet creates the Pods that get scheduled onto it
code:bash
for i in 0 1 2; do
gcloud compute instances create worker-${i} \
--async \
--boot-disk-size 200GB \
--can-ip-forward \
--image-family ubuntu-2004-lts \
--image-project ubuntu-os-cloud \
--machine-type e2-standard-2 \
--metadata pod-cidr=10.200.${i}.0/24 \
--private-network-ip 10.240.0.2${i} \
--scopes compute-rw,storage-ro,service-management,service-control,logging-write,monitoring \
--subnet kubernetes \
--tags kubernetes-the-hard-way,worker
done
Workers are Ubuntu 20.04 as well
gcloud compute instances list --filter="tags.items=kubernetes-the-hard-way"
Confirm they were created
Next, set up SSH access
Configure it for both the controllers and the workers
code:bash
❯ gcloud compute ssh controller-0
WARNING: The private SSH key file for gcloud does not exist.
WARNING: The public SSH key file for gcloud does not exist.
WARNING: You do not have an SSH key for gcloud.
WARNING: SSH keygen will be executed to generate a key.
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
The generated SSH key is uploaded automatically and stored in the project (handy)
ch4
ch4 is about provisioning the CA and TLS certificates
Certificate Authority
Set up the Certificate Authority that will be used to issue the TLS certificates
Generate the CA configuration file, certificate, and private key
code:bash
{
cat > ca-config.json <<EOF
{
"signing": {
"default": {
"expiry": "8760h"
},
"profiles": {
"kubernetes": {
"expiry": "8760h"
}
}
}
}
EOF
cat > ca-csr.json <<EOF
{
"CN": "Kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "CA",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
}
ca-key.pem
ca.pem
are generated
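Quick sanity check I added (plain openssl, not part of the tutorial) to see what cfssl actually produced:
code:bash
# Inspect the generated CA certificate; the subject should show
# CN = Kubernetes, O = Kubernetes, OU = CA, along with the validity dates
openssl x509 -in ca.pem -noout -subject -issuer -dates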
Next, generate the client and server certificates
This one is for the Kubernetes admin user
code:bash
{
cat > admin-csr.json <<EOF
{
"CN": "admin",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:masters",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin
}
admin-key.pem
admin.pem
are generated
Kubelet Client Certificates
There is an authorization mode called the Node Authorizer
It is used to authorize API requests made by kubelets
For a kubelet to be authorized, it must use a credential that identifies it as a member of the system:nodes group
That is, a credential with a username of system:node:<nodeName>
code:bash
for instance in worker-0 worker-1 worker-2; do
cat > ${instance}-csr.json <<EOF
{
"CN": "system:node:${instance}",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:nodes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
EXTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
INTERNAL_IP=$(gcloud compute instances describe ${instance} \
--format 'value(networkInterfaces[0].networkIP)')
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=${instance},${EXTERNAL_IP},${INTERNAL_IP} \
-profile=kubernetes \
${instance}-csr.json | cfssljson -bare ${instance}
done
gencert is what creates the certificate files
worker-0-key.pem
worker-0.pem
worker-1-key.pem
worker-1.pem
worker-2-key.pem
worker-2.pem
are generated
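The Node Authorizer bits are visible right in the cert subject (again just openssl, my own check):
code:bash
# Should print a subject with O = system:nodes and CN = system:node:worker-0,
# which is exactly what the Node Authorizer keys off of
openssl x509 -in worker-0.pem -noout -subject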
Next: the Controller Manager client certificate
code:bash
{
cat > kube-controller-manager-csr.json <<EOF
{
"CN": "system:kube-controller-manager",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-controller-manager",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
}
kube-controller-manager-key.pem
kube-controller-manager.pem
are generated
So many generation steps
Next: the kube-proxy client certificate and private key
code:bash
{
cat > kube-proxy-csr.json <<EOF
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:node-proxier",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-proxy-csr.json | cfssljson -bare kube-proxy
}
kube-proxy-key.pem
kube-proxy.pem
Next: the kube-scheduler certificate
code:bash
{
cat > kube-scheduler-csr.json <<EOF
{
"CN": "system:kube-scheduler",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "system:kube-scheduler",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
kube-scheduler-csr.json | cfssljson -bare kube-scheduler
}
kube-scheduler-key.pem
kube-scheduler.pem
Next: the kube-apiserver certificate
The static IP created in the VPC network section gets included in the kube-apiserver certificate
code:bash
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
KUBERNETES_HOSTNAMES=kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.svc.cluster.local
cat > kubernetes-csr.json <<EOF
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.32.0.1,10.240.0.10,10.240.0.11,10.240.0.12,${KUBERNETES_PUBLIC_ADDRESS},127.0.0.1,${KUBERNETES_HOSTNAMES} \
-profile=kubernetes \
kubernetes-csr.json | cfssljson -bare kubernetes
}
KUBERNETES_PUBLIC_ADDRESS is expanded by the shell before cfssl ever sees it
kubernetes-key.pem
kubernetes.pem
are generated
Next: the service account key pair
code:bash
{
cat > service-account-csr.json <<EOF
{
"CN": "service-accounts",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "US",
"L": "Portland",
"O": "Kubernetes",
"OU": "Kubernetes The Hard Way",
"ST": "Oregon"
}
]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
service-account-csr.json | cfssljson -bare service-account
}
service-account-key.pem
service-account.pem
Copy the appropriate certificates and private keys to each worker instance
code:bash
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ca.pem ${instance}-key.pem ${instance}.pem ${instance}:~/
done
Do the same for the controllers
code:bash
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp ca.pem ca-key.pem kubernetes-key.pem kubernetes.pem \
service-account-key.pem service-account.pem ${instance}:~/
done
Distributed to worker-0, worker-1, worker-2
and controller-0, controller-1, controller-2 respectively
The controllers get the ca, kubernetes, and service-account key pairs
The admin key pair doesn't seem to be used yet
The key pairs created in ch4 are as follows
- CA(controller)
- Admin Client
- Kubelet Client (worker)
- kube-controller-manager Client
- kube-proxy Client
- kube-scheduler Client
- kube-apiserver
- Service Account(controller)
ca-key.pem isn't placed on the workers because they only need to verify certificates against the CA; only the CA's private key can sign new ones, so it stays with the controllers
I wonder if the pieces that weren't distributed this time get used later
The kube-proxy, kube-controller-manager, kube-scheduler, and kubelet client certificates will be used to generate client authentication configuration files in the next lab.
Called it!
End of ch4
ch5
Overview
The Node authorizer allows a kubelet to perform API operations. This includes:
Read operations:
services
endpoints
nodes
pods
secrets, configmaps, persistent volume claims and persistent volumes related to pods bound to the kubelet's node
Kubernetes
Using Node Authorization
Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
So it's a special authorization mode for requests the kubelet sends to the API server, but is the decision really made just from what's in the certificate?
Apparently it works a bit like a namespace
The idea is to prevent accidents by keeping each node from accessing anything outside its own scope
And all it takes is the group being system:nodes and the username being system:node:<nodeName>
which makes it convenient to set up (on the node side at least)
code:bash
❯ cat worker-1.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: xxx
name: kubernetes-the-hard-way
contexts:
- context:
cluster: kubernetes-the-hard-way
user: system:node:worker-1
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:node:worker-1
user:
client-certificate-data: xxx
client-key-data: xxx
Looking at the generated kubeconfig, the context's user is indeed set to that username
Next: the kube-proxy config file
code:bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}
Come to think of it, these key pairs were signed with ca.pem / ca-key.pem, which is why the server (which trusts the CA) will accept them
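A quick way to see that (my own check, not a tutorial step):
code:bash
# Each client cert should verify against the CA the server trusts
openssl verify -CAfile ca.pem kube-proxy.pem
# expected output: kube-proxy.pem: OK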
kube-controller-manager config file
code:bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.pem \
--client-key=kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig
kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}
kube-scheduler config file
code:bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.pem \
--client-key=kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig
kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}
admin user config file
code:bash
{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=admin.kubeconfig
kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig
kubectl config use-context default --kubeconfig=admin.kubeconfig
}
Then copy these generated config files to the instances
code:bash
for instance in worker-0 worker-1 worker-2; do
gcloud compute scp ${instance}.kubeconfig kube-proxy.kubeconfig ${instance}:~/
done
kube-controller-manager and kube-scheduler go to the controller instances
code:bash
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp admin.kubeconfig kube-controller-manager.kubeconfig kube-scheduler.kubeconfig ${instance}:~/
done
ch6 Generating the Data Encryption Config and Key
Kubernetes stores all sorts of things (cluster state, application configuration, Secrets) and it can encrypt this data at rest
Generate an encryption key
ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)
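Tiny sanity check I added: 32 random bytes base64-encode to 44 characters, so the key should be 44 characters long.
code:bash
# 32 bytes -> ceil(32/3)*4 = 44 base64 characters (including one '=' of padding)
echo -n "$ENCRYPTION_KEY" | wc -c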
Then create encryption-config.yaml
code:bash
cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
- resources:
- secrets
providers:
- aescbc:
keys:
- name: key1
secret: ${ENCRYPTION_KEY}
- identity: {}
EOF
Place this on the controller instances
code:bash
for instance in controller-0 controller-1 controller-2; do
gcloud compute scp encryption-config.yaml ${instance}:~/
done
ch7 Bootstrapping the etcd Cluster
The Kubernetes components themselves are stateless; all cluster state is kept in etcd
This chapter is done by SSHing into every controller and working there
Fair enough
Since the etcd cluster needs its members to reach quorum, each member waits at startup until the others come up
First, download etcd on each controller
code:bash
wget -q --show-progress --https-only --timestamping \
Extract it and put the etcd binaries in place
code:bash
{
tar -xvf etcd-v3.4.10-linux-amd64.tar.gz
sudo mv etcd-v3.4.10-linux-amd64/etcd* /usr/local/bin/
}
$ which etcd
/usr/local/bin/etcd
Configure the etcd server
code:bash
{
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo chmod 700 /var/lib/etcd
sudo cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/
}
The controller instance's internal IP is used to serve client requests and to communicate with the etcd cluster peers
code:bash
INTERNAL_IP=$(curl -s -H "Metadata-Flavor: Google" \
Set the environment variable
Didn't know this existed
Requesting it from inside the instance returns the IP
From anywhere else, naturally, nothing happens
Or rather, the request never reaches the metadata server
code:bash
curl: (6) Could not resolve host: metadata.google.internal
metadata.google.internal, I see
$ echo $INTERNAL_IP
10.240.0.10
You get the private IP
Neat
Each etcd member needs a unique ETCD_NAME
Just use whatever hostname -s prints
ETCD_NAME=$(hostname -s)
This prints something like controller-0
Once the environment variables are set, create the etcd.service systemd unit file
code:bash
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/kubernetes.pem \\
--key-file=/etc/etcd/kubernetes-key.pem \\
--peer-cert-file=/etc/etcd/kubernetes.pem \\
--peer-key-file=/etc/etcd/kubernetes-key.pem \\
--trusted-ca-file=/etc/etcd/ca.pem \\
--peer-trusted-ca-file=/etc/etcd/ca.pem \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Totally beside the point, but etcd sounds like the name of some ancient daemon (was surprised it's actually fairly recent)
Once that's done, start etcd
code:bash
{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
There's no output, so it's worth checking the status
code:bash
$ sudo systemctl status etcd
● etcd.service - etcd
Loaded: loaded (/etc/systemd/system/etcd.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2021-04-25 08:31:07 UTC; 11s ago
Main PID: 2396 (etcd)
Tasks: 10 (limit: 9544)
Memory: 26.7M
CGroup: /system.slice/etcd.service
└─2396 /usr/local/bin/etcd --name controller-0 --cert-file=/etc/etcd/kubernetes.pem --key-file=/etc/etcd/kubernetes-key.pem --peer-cert-file=/etc/etcd/kubernet>
Apr 25 08:31:07 controller-0 etcd[2396]: set the initial cluster version to 3.0
Print the etcd members
code:bash
$ sudo ETCDCTL_API=3 etcdctl member list \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem
ch8 Bootstrapping the k8s Control Plane
Verification
kubectl get componentstatuses --kubeconfig admin.kubeconfig
Confirms that the health checks look fine
And here's the check that it responds over HTTP
code:bash
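# (assumption: the command itself wasn't captured in these notes; in the tutorial
# this is the nginx health-check proxy test, run on a controller over SSH)
curl -H "Host: kubernetes.default.svc.cluster.local" -i http://127.0.0.1/healthz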
HTTP/1.1 200 OK
Server: nginx/1.18.0 (Ubuntu)
Date: Sun, 25 Apr 2021 09:00:12 GMT
Content-Type: text/plain; charset=utf-8
Content-Length: 2
Connection: keep-alive
Cache-Control: no-cache, private
X-Content-Type-Options: nosniff
Next: RBAC for Kubernetes authorization
RBAC stands for Role Based Access Control
Things like "admin" become roles
Set up the RBAC permissions that let the kube-apiserver access the Kubelet API
The kube-apiserver receives requests from kubectl and the like, and sends instructions to the kubelet running on each node
In this tutorial the kubelet's --authorization-mode is set to Webhook
Webhook mode delegates authorization decisions to the SubjectAccessReview API (not sure what that means yet)
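To poke at that machinery a little (my own aside, not a tutorial step), kubectl can ask the API server an access-review style question; run it on a controller like the RBAC commands below:
code:bash
# Ask whether the impersonated user may perform an action; the API server answers
# by running its authorization checks, the same mechanism SubjectAccessReview exposes
kubectl auth can-i get nodes --as system:kube-proxy --kubeconfig admin.kubeconfig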
First, create a ClusterRole
Create system:kube-apiserver-to-kubelet, which grants the permissions needed to access the Kubelet API and do the usual Pod-management tasks
code:bash
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:kube-apiserver-to-kubelet
rules:
- apiGroups:
- ""
resources:
- nodes/proxy
- nodes/stats
- nodes/log
- nodes/spec
- nodes/metrics
verbs:
- "*"
EOF
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
This applies cluster-wide, so running the command once is enough (it seems idempotent, so running it again doesn't hurt)
When the kube-apiserver accesses the kubelet, it authenticates as the kubernetes user
That's because it uses the client certificate specified by --kubelet-client-certificate
Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user
code:bash
cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: system:kube-apiserver
namespace: ""
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:kube-apiserver-to-kubelet
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: User
name: kubernetes
EOF
This one also only needs to be run once
Next: the Kubernetes frontend load balancer
It uses the kubernetes-the-hard-way static IP address
This section can't be run from inside an SSH session on the instances
Create the network load balancer
code:bash
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
gcloud compute http-health-checks create kubernetes \
--description "Kubernetes Health Check" \
--host "kubernetes.default.svc.cluster.local" \
--request-path "/healthz"
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-health-check \
--network kubernetes-the-hard-way \
--source-ranges 209.85.152.0/22,209.85.204.0/22,35.191.0.0/16 \
--allow tcp
gcloud compute target-pools create kubernetes-target-pool \
--http-health-check kubernetes
gcloud compute target-pools add-instances kubernetes-target-pool \
--instances controller-0,controller-1,controller-2
gcloud compute forwarding-rules create kubernetes-forwarding-rule \
--address ${KUBERNETES_PUBLIC_ADDRESS} \
--ports 6443 \
--region $(gcloud config get-value compute/region) \
--target-pool kubernetes-target-pool
}
This created the http-health-checks, firewall-rules, target-pools, and forwarding-rules resources
code:bash
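# (assumption: the request wasn't captured in these notes; this is the tutorial's
# verification call against the load balancer's public address)
curl --cacert ca.pem https://${KUBERNETES_PUBLIC_ADDRESS}:6443/version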
{
"major": "1",
"minor": "18",
"gitVersion": "v1.18.6",
"gitCommit": "dff82dc0de47299ab66c83c626e08b245ab19037",
"gitTreeState": "clean",
"buildDate": "2020-07-15T16:51:04Z",
"goVersion": "go1.13.9",
"compiler": "gc",
"platform": "linux/amd64"
}%
ch9 Bootstrapping the k8s Worker Nodes
With the controllers configured, the workers are next
Install runc, the container networking plugins, containerd, kubelet, and kube-proxy
First, install some prerequisites
code:bash
{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}
socat is what enables support for the kubectl port-forward command
Next, disable swap
sudo swapoff -a
(swap is an OS-level feature)
$ swapoff --help
code:bash
Usage:
Disable devices and files for paging and swapping.
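To confirm swap really is off (my own check; the tutorial has a similar verification step):
code:bash
# No output means there are no active swap devices
sudo swapon --show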
Next, download the worker binaries
code:bash
wget -q --show-progress --https-only --timestamping \
Create a whole bunch of directories
code:bash
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
Then install them
code:bash
{
mkdir containerd
tar -xvf crictl-v1.18.0-linux-amd64.tar.gz
tar -xvf containerd-1.3.6-linux-amd64.tar.gz -C containerd
sudo tar -xvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin/
sudo mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo mv crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo mv containerd/bin/* /bin/
}
Next, configure CNI networking
The pod-cidr can be fetched from the GCP metadata API
code:bash
POD_CIDR=$(curl -s -H "Metadata-Flavor: Google" \
$ echo $POD_CIDR
10.200.0.0/24
Create the bridge network config file
code:bash
cat <<EOF | sudo tee /etc/cni/net.d/10-bridge.conf
{
"cniVersion": "0.3.1",
"name": "bridge",
"type": "bridge",
"bridge": "cnio0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"ranges": [
],
}
}
EOF
Create the loopback config file
code:bash
cat <<EOF | sudo tee /etc/cni/net.d/99-loopback.conf
{
"cniVersion": "0.3.1",
"name": "lo",
"type": "loopback"
}
EOF
Next, create the containerd config file
code:bash
sudo mkdir -p /etc/containerd/
cat << EOF | sudo tee /etc/containerd/config.toml
[plugins]
  [plugins.cri.containerd]
    snapshotter = "overlayfs"
    [plugins.cri.containerd.default_runtime]
      runtime_type = "io.containerd.runtime.v1.linux"
      runtime_engine = "/usr/local/bin/runc"
      runtime_root = ""
EOF
Create the containerd.service systemd unit file
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
After=network.target
[Service]
ExecStartPre=/sbin/modprobe overlay
ExecStart=/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
For containerd it's probably worth reading up around here
CRI stands for Container Runtime Interface
That's it for containerd
Next: configure the kubelet
code:bash
{
sudo mv ${HOSTNAME}-key.pem ${HOSTNAME}.pem /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.pem /var/lib/kubernetes/
}
HOSTNAME here is something like worker-0
Create kubelet-config.yaml
code:bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
anonymous:
enabled: false
webhook:
enabled: true
x509:
clientCAFile: "/var/lib/kubernetes/ca.pem"
authorization:
mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
- "10.32.0.10"
podCIDR: "${POD_CIDR}"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
tlsCertFile: "/var/lib/kubelet/${HOSTNAME}.pem"
tlsPrivateKeyFile: "/var/lib/kubelet/${HOSTNAME}-key.pem"
EOF
resolvConf avoids DNS resolution loops when using CoreDNS for service discovery on systems running systemd-resolved
(don't really get this yet)
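My understanding after a bit of digging (assumptions flagged): on Ubuntu, /etc/resolv.conf normally points at systemd-resolved's stub listener on 127.0.0.53, and if kubelet handed that to Pods, CoreDNS would end up forwarding queries back to itself. Pointing resolvConf at the real upstream file avoids that loop. A quick look on a worker:
code:bash
# /etc/resolv.conf is usually a symlink to the systemd-resolved stub config
readlink -f /etc/resolv.conf
cat /etc/resolv.conf                     # nameserver 127.0.0.53 (the local stub)
# the file kubelet is told to use lists the real upstream nameservers instead
cat /run/systemd/resolve/resolv.conf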
Then create the kubelet.service systemd unit file
code:bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--container-runtime=remote \\
--container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Next: configure kube-proxy
sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig
Create kube-proxy-config.yaml
code:bash
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "10.200.0.0/16"
EOF
Then create the kube-proxy.service systemd unit file
code:bash
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
Start all the services
code:bash
{
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
}
code:bash
❯ gcloud compute ssh controller-0 \
--command "kubectl get nodes --kubeconfig admin.kubeconfig"
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 34s v1.18.6
worker-1 Ready <none> 32s v1.18.6
worker-2 Ready <none> 30s v1.18.6
Looks like the nodes are up
ch10 Configuring kubectl for Remote Access
Create the configuration for running kubectl with the admin user's credentials
Each kubeconfig needs a kube-apiserver to point at
For high availability, use the address of the external load balancer in front of the API servers (as usual)
First, generate a kubeconfig suitable for authenticating as the admin user
code:bash
{
KUBERNETES_PUBLIC_ADDRESS=$(gcloud compute addresses describe kubernetes-the-hard-way \
--region $(gcloud config get-value compute/region) \
--format 'value(address)')
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=https://${KUBERNETES_PUBLIC_ADDRESS}:6443
kubectl config set-credentials admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem
kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin
kubectl config use-context kubernetes-the-hard-way
}
This one is run locally
Now kubectl works from the local machine
code:bash
❯ kubectl get componentstatuses
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
etcd-2 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}
❯ kubectl get nodes
NAME STATUS ROLES AGE VERSION
worker-0 Ready <none> 5m3s v1.18.6
worker-1 Ready <none> 5m1s v1.18.6
worker-2 Ready <none> 4m59s v1.18.6
ch11 Provisioning Pod Network Routes
Pods scheduled onto a node get an IP from that node's Pod CIDR range
At this point, Pods can't talk to Pods running on other nodes because there are no routes
(requests from a Pod can't leave its node)
Create the routing table entries
First, gather the necessary information from the kubernetes-the-hard-way VPC network
code:bash
for instance in worker-0 worker-1 worker-2; do
gcloud compute instances describe ${instance} \
--format 'value[separator=" "](networkInterfaces[0].networkIP,metadata.items[0].value)'
done
10.240.0.20 10.200.0.0/24
10.240.0.21 10.200.1.0/24
10.240.0.22 10.200.2.0/24
Then create the network routes
code:bash
for i in 0 1 2; do
gcloud compute routes create kubernetes-route-10-200-${i}-0-24 \
--network kubernetes-the-hard-way \
--next-hop-address 10.240.0.2${i} \
--destination-range 10.200.${i}.0/24
done
Once done, list the routes
code:bash
gcloud compute routes list --filter "network: kubernetes-the-hard-way"
NAME NETWORK DEST_RANGE NEXT_HOP PRIORITY
default-route-09b081e5d085bc0d kubernetes-the-hard-way 10.240.0.0/24 kubernetes-the-hard-way 0
default-route-97fa0417e8996ab6 kubernetes-the-hard-way 0.0.0.0/0 default-internet-gateway 1000
kubernetes-route-10-200-0-0-24 kubernetes-the-hard-way 10.200.0.0/24 10.240.0.20 1000
kubernetes-route-10-200-1-0-24 kubernetes-the-hard-way 10.200.1.0/24 10.240.0.21 1000
kubernetes-route-10-200-2-0-24 kubernetes-the-hard-way 10.200.2.0/24 10.240.0.22 1000
ch12 Deploying the DNS Cluster Add-on
Next, deploy the DNS add-on
This provides DNS-based service discovery
It's backed by CoreDNS
First, deploy the coredns cluster add-on
kubectl apply -f https://storage.googleapis.com/kubernetes-the-hard-way/coredns-1.7.0.yaml
Didn't know you could do it like that
code:bash
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
That's all it took
Check that everything was created
code:bash
❯ kubectl get pods -l k8s-app=kube-dns -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5677dc4cdb-d57s9 1/1 Running 0 20s
coredns-5677dc4cdb-s58dl 1/1 Running 0 20s
Next, create a busybox deployment
code:bash
kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
❯ kubectl get pods -l run=busybox
NAME READY STATUS RESTARTS AGE
busybox 1/1 Running 0 7s
Get the busybox Pod's full name
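The command itself didn't make it into these notes; presumably it's the tutorial's jsonpath lookup, something like:
code:bash
# (assumption) grab the full name of the busybox Pod
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")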
❯ kubectl exec -ti $POD_NAME -- nslookup kubernetes
Server: 10.32.0.10
Address 1: 10.32.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.32.0.1 kubernetes.default.svc.cluster.local
❯ echo $POD_NAME
busybox
DNS resolution works from inside the Pod: the kubernetes Service name resolves to its cluster IP
ch13 Smoke Test
Various checks that the cluster actually works
First: is Secret data encrypted at rest?
Create a throwaway Secret
code:bash
kubectl create secret generic kubernetes-the-hard-way \
--from-literal="mykey=mydata"
Hexdump the Secret as stored in etcd
code:bash
gcloud compute ssh controller-0 \
--command "sudo ETCDCTL_API=3 etcdctl get \
--cacert=/etc/etcd/ca.pem \
--cert=/etc/etcd/kubernetes.pem \
--key=/etc/etcd/kubernetes-key.pem \
/registry/secrets/default/kubernetes-the-hard-way | hexdump -C"
00000000 2f 72 65 67 69 73 74 72 79 2f 73 65 63 72 65 74 |/registry/secret|
00000010 73 2f 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e |s/default/kubern|
00000020 65 74 65 73 2d 74 68 65 2d 68 61 72 64 2d 77 61 |etes-the-hard-wa|
00000030 79 0a 6b 38 73 3a 65 6e 63 3a 61 65 73 63 62 63 |y.k8s:enc:aescbc|
00000040 3a 76 31 3a 6b 65 79 31 3a 8c 7b 16 f3 26 59 d5 |:v1:key1:.{..&Y.|
00000050 c9 65 1c f0 3a 04 e7 66 2a f6 50 93 4e d4 d7 8c |.e..:..f*.P.N...|
00000060 ca 24 ab 68 54 5f 31 f6 5c e5 5c c6 29 1d cc da |.$.hT_1.\.\.)...|
00000070 22 fc c9 be 23 8a 26 b4 9b 38 1d 57 65 87 2a ac |"...#.&..8.We.*.|
00000080 70 11 ea 06 93 b7 de ba 12 83 42 94 9d 27 8f ee |p.........B..'..|
00000090 95 05 b0 77 31 ab 66 3d d9 e2 38 85 f9 a5 59 3a |...w1.f=..8...Y:|
000000a0 90 c1 46 ae b4 9d 13 05 82 58 71 4e 5b cb ac e2 |..F......XqN[...|
000000b0 3b 6e d7 10 ab 7c fc fe dd f0 e6 0a 7b 24 2e 68 |;n...|......{$.h|
000000c0 5e 78 98 5f 33 40 f8 d2 10 30 1f de 17 3f 06 a1 |^x._3@...0...?..|
000000d0 81 bd 1f 2e be e9 35 26 2c be 39 16 cf ac c2 6d |......5&,.9....m|
000000e0 32 56 05 7d 80 39 5d c0 a4 43 46 75 96 0c 87 49 |2V.}.9]..CFu...I|
000000f0 3c 17 1a 1c 8e 52 b1 e8 42 6b a5 e8 b2 b3 27 bc |<....R..Bk....'.|
00000100 80 a6 53 2a 9f 57 d2 de a3 f8 7f 84 2c 01 c9 d9 |..S*.W......,...|
00000110 4f e0 3f e7 a7 1e 46 b7 47 dc f0 53 d2 d2 e1 99 |O.?...F.G..S....|
00000120 0b b7 b3 49 d0 3c a5 e8 26 ce 2c 51 42 2c 0f 48 |...I.<..&.,QB,.H|
00000130 b1 9a 1a dd 24 d1 06 d8 34 bf 09 2e 20 cc 3d 3d |....$...4... .==|
00000140 e2 5a e5 e4 44 b7 ae 57 49 0a |.Z..D..WI.|
0000014a
k8s:enc:aescbc:v1:key1
You can see the aescbc provider was used to encrypt the data with the key1 encryption key
aescbc refers to the AES-CBC encryption algorithm
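And to confirm the data is still usable through the API (my own check, not a tutorial step):
code:bash
# The API server transparently decrypts the Secret; this should print "mydata"
kubectl get secret kubernetes-the-hard-way -o jsonpath='{.data.mykey}' | base64 --decode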
Next: check Deployments
Create an nginx deployment
code:bash
kubectl create deployment nginx --image=nginx
❯ kubectl get pods -l app=nginx
NAME READY STATUS RESTARTS AGE
nginx-f89759699-59p2l 1/1 Running 0 8s
The Pod was created properly
Next: port forwarding
Get the name of the nginx Pod created above, and
code:bash
POD_NAME=$(kubectl get pods -l app=nginx -o jsonpath="{.items[0].metadata.name}")
❯ kubectl port-forward $POD_NAME 8080:80
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from ::1:8080 -> 80
Port forwarding is now set up
Then send a request from the local machine
code:bash
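# (assumption: the command wasn't captured; a HEAD request against the forwarded port)
curl --head http://127.0.0.1:8080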
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 25 Apr 2021 10:05:08 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes
Requests to localhost:8080 are successfully forwarded to port 80 on the nginx Pod
First time using kubectl port-forward; nice that you can set up port forwarding on the fly
Next: logs
code:bash
❯ kubectl logs $POD_NAME
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
The output comes through fine
Next: the exec command
Run nginx -v inside the Pod
code:bash
❯ kubectl exec -ti $POD_NAME -- nginx -v
nginx version: nginx/1.19.10
E0425 19:07:05.186738 36028 v3.go:79] EOF
Next: Services
Expose the nginx deployment with a NodePort Service
kubectl expose deployment nginx --port 80 --type NodePort
A LoadBalancer Service can't be used in this tutorial
because the cluster isn't configured with cloud provider integration
code:bash
NODE_PORT=$(kubectl get svc nginx \
--output=jsonpath='{range .spec.ports[0]}{.nodePort}')
Get the node port assigned to the nginx service
Then create a firewall rule for it
code:bash
gcloud compute firewall-rules create kubernetes-the-hard-way-allow-nginx-service \
--allow=tcp:${NODE_PORT} \
--network kubernetes-the-hard-way
Get a worker instance's external IP, and
code:bash
EXTERNAL_IP=$(gcloud compute instances describe worker-0 \
--format 'value(networkInterfaces[0].accessConfigs[0].natIP)')
then send a request
code:bash
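# (assumption: the command wasn't captured; a HEAD request to the NodePort on the worker's external IP)
curl -I http://${EXTERNAL_IP}:${NODE_PORT}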
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Sun, 25 Apr 2021 10:09:50 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes