2020/11/05
Done this time
Created a Scrapbox for summarizing results.
Confirmed starting and stopping the Kubernetes environment.
Added references (VTJ materials, Docker on FreeBSD using bhyve).
Unresolved
Installing the Go libraries needed to build the minikube environment.
Note
This document summarizes work done through the working-adult internship program of VirtualTech Japan Inc. (日本仮想化技術株式会社).
Creating the Scrapbox for summarizing results
To hold the diary (daily reports) and the write-ups, created the following pages.
Installing the Go libraries for building the minikube environment
Unresolved: building #minikube fails with a #Go library error.
code:shell
$ go get github.com/jteeuwen/go-bindata/
$ go install github.com/jteeuwen/go-bindata/
code:shell
$ make
package github.com/go-bindata/go-bindata/v3: cannot find package "github.com/go-bindata/go-bindata/v3" in any of:
/usr/local/go/src/github.com/go-bindata/go-bindata/v3 (from $GOROOT)
/home/pi/go/src/github.com/go-bindata/go-bindata/v3 (from $GOPATH)
make: *** Makefile:335: pkg/minikube/assets/assets.go エラー 1
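A guess at a fix, not yet tried: the error above says the Makefile imports github.com/go-bindata/go-bindata/v3, while the commands run earlier fetched the old jteeuwen fork. Fetching the import path the build actually asks for might resolve it:

```shell
# Untested hypothesis: the Makefile wants github.com/go-bindata/go-bindata/v3
# (per the error above), not the jteeuwen fork installed earlier.
# Fetch the path the build actually imports (GOPATH-mode Go, as on the Pi):
go get -u github.com/go-bindata/go-bindata/v3/go-bindata
go install github.com/go-bindata/go-bindata/v3/go-bindata
```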
Starting and stopping Kubernetes on the Raspberry Pi
To start Kubernetes, use kubeadm init.
Once started, it stays enabled, so it comes back up after an OS reboot.
code:shell
$ sudo kubeadm init
W1105 21:44:50.803472 4905 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups kubelet.config.k8s.io kubeproxy.config.k8s.io
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: missing optional cgroups: hugetlb
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local raspberrypi] and IPs [10.96.0.1 192.168.3.175]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost raspberrypi] and IPs [192.168.3.175 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost raspberrypi] and IPs [192.168.3.175 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.512475 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node raspberrypi as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node raspberrypi as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: u2m9jn.obvgfgvl3v7sqrbh
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f podnetwork.yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.3.175:6443 --token u2m9jn.obvgfgvl3v7sqrbh \
--discovery-token-ca-cert-hash sha256:845f7ad9b857218661494a0232c874883198f6645dfe180e5812929576f4fde6
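As a side note, the --discovery-token-ca-cert-hash value in the join command above can be recomputed from the cluster CA with the standard openssl pipeline from the kubeadm documentation. On the node you would point CERT at /etc/kubernetes/pki/ca.crt; the CERT variable and throwaway-CA generation below are my additions so the snippet runs anywhere:

```shell
# Recompute the discovery-token-ca-cert-hash that kubeadm init printed.
# On the control-plane node: CERT=/etc/kubernetes/pki/ca.crt
# Here a throwaway self-signed CA is generated so the snippet is self-contained.
CERT="${CERT:-/tmp/demo-ca.crt}"
if [ ! -f "$CERT" ]; then
  openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
    -out "$CERT" -days 1 -subj "/CN=demo" 2>/dev/null
fi
# Standard pipeline from the kubeadm docs: extract the public key,
# DER-encode it, and take its SHA-256 digest.
openssl x509 -pubkey -in "$CERT" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```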
To stop Kubernetes, use kubeadm reset.
After a reset, rebooting the OS brings the Raspberry Pi up without Kubernetes running.
code:shell
$ sudo kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
[reset] Removing info for node "raspberrypi" from the ConfigMap "kubeadm-config" in the "kube-system" Namespace
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
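The messages above mean some state has to be removed by hand. A sketch of that cleanup, with paths taken from the reset output itself; the exact iptables flush sequence is my assumption (the output only says to use the iptables command), and those parts must run as root on the node:

```shell
# Manual cleanup that `kubeadm reset` leaves to the operator,
# per its own output above.
rm -rf /etc/cni/net.d              # CNI configuration is not removed by reset
rm -f "$HOME/.kube/config"         # stale kubeconfig copied during kubeadm init
# iptables/IPVS rules also survive a reset; on the node, as root
# (assumed flush sequence, adjust to your setup):
#   iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
#   ipvsadm --clear   # only if the cluster was set up to use IPVS
```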
Other references
I had forgotten about the resources right under my nose orz
仮想化通信 (Kubernetes): many hands-on articles that actually walk through building environments.
How to run Docker on FreeBSD 12: getting Docker to run on FreeBSD by brute force.
Set up bhyve, FreeBSD's hypervisor,
install Debian 9 on it,
install Docker inside that,
expose the Docker remote API,
and install Portainer to manage the containers from outside.
#diary
#Kubernetes #minikube
#Linux #FreeBSD
#RaspberryPi