Environment setup

☁  ~  sudo sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
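The sysctl above only lasts until reboot. A sketch for persisting it (the file name 99-ip-forward.conf is an arbitrary choice):

```shell
# systemd-sysctl applies /etc/sysctl.d/*.conf at every boot.
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system   # re-apply all configured sysctl values now
```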

Install minikube

☁  ~  sudo pacman -Syu minikube kubeadm kubelet
☁  ~  sudo pacman -Syu cri-o
☁  ~  sudo pacman -Syu apparmor btrfs-progs
☁  ~  sudo systemctl start crio.service
☁  ~  sudo systemctl enable crio.service
☁  ~  kubectl config use-context minikube
Switched to context "minikube".

Docker driver

☁  ~  sudo usermod -aG docker $USER && newgrp docker
☁  ~  sudo systemctl enable docker.service 
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
☁  ~  sudo systemctl enable docker.socket                                                                                                          
Created symlink /etc/systemd/system/sockets.target.wants/docker.socket → /usr/lib/systemd/system/docker.socket.
☁  ~  sudo systemctl restart docker.service 
☁  ~  sudo systemctl restart docker.socket 

KVM driver

☁  ~  sudo pacman -Syu libvirt
☁  ~  sudo pacman -S libvirt-dbus
☁  ~  sudo pacman -S libvirt-glib
☁  ~  sudo systemctl enable libvirt-dbus.service
☁  ~  sudo systemctl enable libvirt-guests.service
☁  ~  sudo systemctl enable libvirtd.service
☁  ~ sudo usermod -aG kvm $USER && newgrp kvm
Create /etc/polkit-1/rules.d/50-libvirt.rules:

/* Allow users in the kvm group to manage the libvirt daemon
   without authentication */
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.isInGroup("kvm")) {
            return polkit.Result.YES;
    }
});
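Group changes only apply to sessions started after usermod (newgrp covers the current shell). A quick sketch to confirm before starting the kvm2 driver; the virsh check assumes libvirtd is already running:

```shell
# Confirm the kvm group is active in this shell.
if id -nG | tr ' ' '\n' | grep -qx kvm; then
  echo "kvm group active"
else
  echo "kvm group not active yet; log out/in or run: newgrp kvm"
fi
# With the polkit rule in place, this should answer without a password prompt:
#   virsh -c qemu:///system list --all
```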

Initialize minikube

These domains must be reachable (directly or through the proxy configured below) for image and chart downloads:

production.cloudflare.docker.com
registry-1.docker.io
auth.docker.io
storage.googleapis.com
k8s.gcr.io
github.com
objects.githubusercontent.com
charts.jetstack.io
releases.rancher.com
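A quick sketch to confirm the domains above resolve from the host (uses getent from glibc); any that fail here will have to go through the proxy:

```shell
# Resolve each required domain and print ok/FAIL per host.
check() { getent hosts "$1" > /dev/null && echo "ok   $1" || echo "FAIL $1"; }
for d in registry-1.docker.io k8s.gcr.io github.com charts.jetstack.io; do
  check "$d"
done
```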
☁  ~  minikube start --driver=docker --image-mirror-country=cn
😄  minikube v1.26.1 on Arch
✨  Using the docker driver based on existing profile
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🏃  Updating the running docker "minikube" container ...
🌐  Found network options:
    ▪ HTTP_PROXY=http://0.0.0.0:8000
    ▪ HTTPS_PROXY=http://0.0.0.0:8000
    ▪ NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
❗  This container is having trouble accessing https://k8s.gcr.io
💡  To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
🐳  Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
    ▪ env HTTP_PROXY=http://0.0.0.0:8000
    ▪ env HTTPS_PROXY=http://0.0.0.0:8000
    ▪ env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24

🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
☁  ~  minikube start --driver=kvm2 --image-mirror-country=cn --cpus 4 --memory 8192
😄  minikube v1.26.1 on Arch
✨  Using the kvm2 driver based on user configuration
💾  Downloading driver docker-machine-driver-kvm2:
🎉  minikube 1.27.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.27.0
💡  To disable this notice, run: 'minikube config set WantUpdateNotification false'

Install Rancher

☁  ~  helm repo add rancher-latest https://releases.rancher.com/server-charts/latest
"rancher-latest" already exists with the same configuration, skipping                                                                                          
☁  ~  kubectl create namespace cattle-system                                                                                                                   
namespace/cattle-system created                                                                                                                                
☁  ~  kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.7.1/cert-manager.crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created                                                                      
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created                                                                             
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created                                                                          
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created                                                                           
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created                                                                                  
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created                                                                              
☁  ~  helm repo add jetstack https://charts.jetstack.io
"jetstack" already exists with the same configuration, skipping
☁  ~  helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "jetstack" chart repository
...Successfully got an update from the "rancher-latest" chart repository
Update Complete. ⎈Happy Helming!⎈
☁  ~  helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.7.1
NAME: cert-manager
LAST DEPLOYED: Thu Sep 15 23:43:03 2022
NAMESPACE: cert-manager
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
cert-manager v1.7.1 has been deployed successfully!

In order to begin issuing certificates, you will need to set up a ClusterIssuer
or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).

More information on the different types of issuers and how to configure them
can be found in our documentation:

https://cert-manager.io/docs/configuration/

For information on how to configure cert-manager to automatically provision
Certificates for Ingress resources, take a look at the `ingress-shim`
documentation:

https://cert-manager.io/docs/usage/ingress/
☁  ~
☁  ~  cat .kube/config|grep server
    server: https://192.168.49.2:8443
☁  ~
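As an aside, awk can extract just the URL from a kubeconfig. A self-contained sketch against a throwaway sample file (the real config lives at ~/.kube/config):

```shell
# Build a minimal kubeconfig-shaped sample, then pull the server URL.
cat > /tmp/sample-kubeconfig <<'EOF'
clusters:
- cluster:
    server: https://192.168.49.2:8443
  name: minikube
EOF
awk '/server:/ {print $2}' /tmp/sample-kubeconfig   # prints https://192.168.49.2:8443
```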
☁  ~  helm install rancher rancher-latest/rancher \
  --namespace cattle-system \
  --set hostname=rancher.kali-team.cn \
  --set bootstrapPassword=admin \
  --set ingress.tls.source=letsEncrypt \
  --set [email protected] \
  --set letsEncrypt.ingress.class=nginx

NAME: rancher
LAST DEPLOYED: Thu Sep 15 23:54:06 2022
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued, Containers are started and the Ingress rule comes up.

Check out our docs at https://rancher.com/docs/

If you provided your own bootstrap password during installation, browse to https://192.168.49.2.sslip.io to get started.

If this is the first time you installed Rancher, get started by running this command and clicking the URL it generates:

echo https://192.168.49.2.sslip.io/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')


To get just the bootstrap password on its own, run:

kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'


Happy Containering!
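The go-template above works because Secret values are stored base64-encoded, and base64decode reverses that. A plain-shell illustration of the same round trip, using 'admin' (the bootstrapPassword set above) as a stand-in:

```shell
# Round-trip a value the way Kubernetes stores Secret data.
encoded=$(printf 'admin' | base64)
echo "$encoded"                        # YWRtaW4=
printf '%s' "$encoded" | base64 -d     # admin
```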

Verification

☁  ~  kubectl -n cattle-system rollout status deploy/rancher 
deployment "rancher" successfully rolled out
☁  ~  kubectl get pods --namespace cattle-system                                                                                                               
NAME                               READY   STATUS    RESTARTS   AGE                                                                                            
cm-acme-http-solver-vxrsx          1/1     Running   0          5h51m                                                                                          
rancher-75bc989d89-kmzrp           1/1     Running   0          5h51m                                                                                          
rancher-75bc989d89-n6fzk           1/1     Running   0          5h51m                                                                                          
rancher-75bc989d89-q6sxp           1/1     Running   0          5h51m                                                                                          
rancher-webhook-576c5b6859-nzpfg   1/1     Running   0          128m
☁  ~  kubectl get deployments -n cattle-system
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
rancher           3/3     3            3           6h7m
rancher-webhook   1/1     1            1           144m
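The hostname set during install (rancher.kali-team.cn) has to resolve to the cluster before the dashboard loads in a browser. One sketch is a hosts-file entry pointing at the minikube node IP:

```shell
# Map the Rancher ingress hostname to the minikube node; appends a
# line such as "192.168.49.2 rancher.kali-team.cn" to /etc/hosts.
echo "$(minikube ip) rancher.kali-team.cn" | sudo tee -a /etc/hosts
```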

Troubleshooting

☁  ~  kubectl -n cattle-system get pod                                                                                                                         
NAME                        READY   STATUS             RESTARTS   AGE                                                                                          
cm-acme-http-solver-p9g8j   1/1     Running            0          23m                                                                                          
rancher-75bc989d89-5g67r    0/1     ImagePullBackOff   0          23m                                                                                          
rancher-75bc989d89-jst5n    0/1     ImagePullBackOff   0          23m                                                                                          
rancher-75bc989d89-qqw89    0/1     ImagePullBackOff   0          23m
☁  ~  kubectl describe pod rancher-75bc989d89-5g67r -n cattle-system                                                                                           
Name:             rancher-75bc989d89-5g67r                                                                                                                     
Namespace:        cattle-system                                                                                                                                
Priority:         0                                                                                                                                            
Service Account:  rancher
Node:             minikube/192.168.49.2 
Start Time:       Fri, 16 Sep 2022 00:32:49 +0800
Labels:           app=rancher
                  pod-template-hash=75bc989d89
                  release=rancher
Annotations:      <none>
Status:           Pending
IP:               172.17.0.8
IPs:
  IP:           172.17.0.8
Controlled By:  ReplicaSet/rancher-75bc989d89
Containers:
  rancher:
    Container ID:  
    Image:         rancher/rancher:v2.6.8
    Image ID:      
    Port:          80/TCP
    Host Port:     0/TCP
    Args:
      --no-cacerts
      --http-listen-port=80
      --https-listen-port=443
      --add-local=true
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:80/healthz delay=60s timeout=1s period=30s #success=1 #failure=3
    Readiness:      http-get http://:80/healthz delay=5s timeout=1s period=30s #success=1 #failure=3
    Environment:
      CATTLE_NAMESPACE:           cattle-system
      CATTLE_PEER_SERVICE:        rancher
      CATTLE_BOOTSTRAP_PASSWORD:  <set to the key 'bootstrapPassword' in secret 'bootstrap-secret'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cvs6v (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  kube-api-access-cvs6v:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort 
Node-Selectors:              <none>
Tolerations:                 cattle.io/os=linux:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  26m                   default-scheduler  Successfully assigned cattle-system/rancher-75bc989d89-5g67r to minikube
  Normal   Pulling    12m (x4 over 26m)     kubelet            Pulling image "rancher/rancher:v2.6.8"
  Warning  Failed     9m7s (x4 over 20m)    kubelet            Error: ErrImagePull
  Warning  Failed     8m30s (x8 over 20m)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    5m24s (x17 over 20m)  kubelet            Back-off pulling image "rancher/rancher:v2.6.8"
  Warning  Failed     6s (x5 over 20m)      kubelet            Failed to pull image "rancher/rancher:v2.6.8": rpc error: code = Unknown desc = context deadline exceeded
☁  ~
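The "context deadline exceeded" here means the node could not pull the image through the proxy in time. One workaround (a sketch, assuming the docker driver; the minikube guard just makes the snippet safe to paste on hosts without a cluster) is to pull on the host, where the proxy works, and side-load the image:

```shell
# Side-load the failing image into the minikube node.
img="rancher/rancher:v2.6.8"          # tag taken from the pod spec above
if command -v minikube >/dev/null; then
  docker pull "$img"                   # pull on the host, via the working proxy
  minikube image load "$img"           # copy the image into the cluster node
  kubectl -n cattle-system rollout restart deploy/rancher
fi
```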


References

https://docs.ranchermanager.rancher.io/zh/getting-started/quick-start-guides/deploy-rancher-manager/helm-cli

https://docs.ranchermanager.rancher.io/zh/pages-for-subheaders/install-upgrade-on-a-kubernetes-cluster

Powered by Kali-Team