Topics
This section is about how to deploy Cluster API Provider Outscale.
Pre-built Kubernetes OMIs
New OMIs are built and published every two weeks for each supported OS distribution in each supported Outscale region.
Supported OS distributions
- Ubuntu (ubuntu-20.04)
- Ubuntu (ubuntu-22.04)
Supported Outscale Regions
- eu-west-2
- us-east-2
- CloudGov
Supported images on eu-west-2:
ubuntu:
- ubuntu-2204-2204-kubernetes-v1.28.5-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.27.9-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.26.12-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.25.16-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.24.16-2024-01-23
- ubuntu-2204-2204-kubernetes-v1.23.17-2024-01-14
- ubuntu-2204-2204-kubernetes-v1.22.11-2024-01-17
Supported images on CloudGov:
- ubuntu-2204-2204-kubernetes-v1.28.5-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.27.9-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.26.12-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.25.16-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.24.16-2024-01-23
- ubuntu-2204-2204-kubernetes-v1.24.16-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.23.17-2024-01-14
- ubuntu-2204-2204-kubernetes-v1.22.11-2024-01-17
Supported images on us-east-2:
- ubuntu-2204-2204-kubernetes-v1.28.5-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.27.9-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.26.12-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.25.16-2024-01-17
- ubuntu-2204-2204-kubernetes-v1.24.16-2024-01-23
- ubuntu-2204-2204-kubernetes-v1.23.17-2024-01-14
- ubuntu-2204-2204-kubernetes-v1.22.11-2024-01-17
Prerequisite
- Install kubectl
- An Outscale account with an Access Key and Secret Key (AK/SK)
- A Kubernetes cluster (see the cluster-api note)
Deploy Cluster API
Clone
Please clone the project:
git clone https://github.com/outscale-dev/cluster-api-provider-outscale
If you use your own cluster for production (with backup, disaster recovery, …), export its kubeconfig:
export KUBECONFIG=<...>
Or you can use kind (for local development only). Create a kind cluster:
kind create cluster
Check that the cluster is ready:
kubectl cluster-info
Install clusterctl
:warning: To install tools (clusterctl, …) with the Makefile, you need Golang installed in order to download the binaries. You can install clusterctl for Linux with:
make install-clusterctl
Or you can install clusterctl by following the clusterctl section (cluster-api). Then check the installed version:
./bin/clusterctl version
clusterctl version: &version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4", GitCommit:"8b5cd363e11b023c2b67a1937a2af680ead9e35c", GitTreeState:"clean", BuildDate:"2022-10-17T13:37:39Z", GoVersion:"go1.18.7", Compiler:"gc", Platform:"linux/amd64"}
Initialize clusterctl
You can enable the ClusterResourceSet feature with:
export EXP_CLUSTER_RESOURCE_SET=true
Please create $HOME/.cluster-api/clusterctl.yaml:
providers:
  - name: outscale
    type: InfrastructureProvider
    url: https://github.com/outscale/cluster-api-provider-outscale/releases/latest/infrastructure-components.yaml
You can then initialize clusterctl with your credentials:
export OSC_ACCESS_KEY=<your-access-key>
export OSC_SECRET_KEY=<your-secret-access-key>
export OSC_REGION=<your-region>
make credential
./bin/clusterctl init --infrastructure outscale
Create your cluster
Launch your stack with clusterctl
You can create a keypair beforehand if you want; it lets you open a shell on the nodes (with OpenLens, Lens, …). You have to set:
export OSC_IOPS=<osc-iops>
export OSC_VOLUME_SIZE=<osc-volume-size>
export OSC_VOLUME_TYPE=<osc-volume-type>
export OSC_KEYPAIR_NAME=<osc-keypairname>
export OSC_SUBREGION_NAME=<osc-subregion>
export OSC_VM_TYPE=<osc-vm-type>
export OSC_IMAGE_NAME=<osc-image-name>
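For example (illustrative values only; pick values that are valid for your account and subregion, and an image from the list above):
export OSC_IOPS=1000
export OSC_VOLUME_SIZE=30
export OSC_VOLUME_TYPE=io1
export OSC_KEYPAIR_NAME=cluster-api
export OSC_SUBREGION_NAME=eu-west-2a
export OSC_VM_TYPE=tinav6.c4r8p2
export OSC_IMAGE_NAME=ubuntu-2204-2204-kubernetes-v1.28.5-2024-01-17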
Then generate the cluster manifest:
./bin/clusterctl generate cluster <cluster-name> --kubernetes-version <kubernetes-version> --control-plane-machine-count=<control-plane-machine-count> --worker-machine-count=<worker-machine-count> > getstarted.yaml
WARNING: The Kubernetes version must match the Kubernetes version embedded in the OMI image name.
You can then edit the generated manifest to fit your needs, based on the configuration documentation.
Then apply:
kubectl apply -f getstarted.yaml
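While the cluster is being provisioned, you can watch the infrastructure objects being reconciled, for example (resource names as registered by the cluster-api and provider CRDs):
kubectl get cluster,osccluster,machines -A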
Add a security group rule after cluster creation
extraSecurityGroupRule is false by default; set it to true on a cluster you have already created when you want new security group rules to be reconciled:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OscCluster
metadata:
  name: cluster-api
  namespace: default
spec:
  network:
    extraSecurityGroupRule: false
Add a public IP after the bastion is created
If you have already created a cluster with a bastion, you can add a public IP by setting publicIpNameAfterBastion to true:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OscCluster
metadata:
  name: cluster-api
  namespace: default
spec:
  network:
    ...
    bastion:
      ..
      publicIpNameAfterBastion: true
Get Kubeconfig
You can then get the status:
[root@cidev-admin v1beta1]# kubectl get cluster-api -A
NAMESPACE NAME CLUSTER AGE
default kubeadmconfig.bootstrap.cluster.x-k8s.io/cluster-api-control-plane-lzj65 cluster-api 95m
default kubeadmconfig.bootstrap.cluster.x-k8s.io/cluster-api-md-0-zgx4w cluster-api 95m
NAMESPACE NAME AGE
default kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/cluster-api-md-0 95m
NAMESPACE NAME CLUSTER REPLICAS READY AVAILABLE AGE VERSION
default machineset.cluster.x-k8s.io/cluster-api-md-0-7568fb659d cluster-api 1 95m v1.22.11
NAMESPACE NAME CLUSTER REPLICAS READY UPDATED UNAVAILABLE PHASE AGE VERSION
default machinedeployment.cluster.x-k8s.io/cluster-api-md-0 cluster-api 1 1 1 ScalingUp 95m v1.22.11
NAMESPACE NAME CLUSTER NODENAME PROVIDERID PHASE AGE VERSION
default machine.cluster.x-k8s.io/cluster-api-control-plane-4q2s8 cluster-api ip-10-0-0-45.eu-west-2.compute.internal aws:///eu-west-2a/i-3b629324 Running 95m v1.22.11
default machine.cluster.x-k8s.io/cluster-api-md-0-7568fb659d-hnkfw cluster-api ip-10-0-0-144.eu-west-2.compute.internal aws:///eu-west-2a/i-add154be Running 95m v1.22.11
NAMESPACE NAME PHASE AGE VERSION
default cluster.cluster.x-k8s.io/cluster-api Provisioned 95m
NAMESPACE NAME AGE TYPE PROVIDER VERSION
capi-kubeadm-bootstrap-system provider.clusterctl.cluster.x-k8s.io/bootstrap-kubeadm 46h BootstrapProvider kubeadm v1.2.1
capi-kubeadm-control-plane-system provider.clusterctl.cluster.x-k8s.io/control-plane-kubeadm 46h ControlPlaneProvider kubeadm v1.2.1
capi-system provider.clusterctl.cluster.x-k8s.io/cluster-api 46h CoreProvider cluster-api v1.2.1
NAMESPACE NAME CLUSTER INITIALIZED API SERVER AVAILABLE REPLICAS READY UPDATED UNAVAILABLE AGE VERSION
default kubeadmcontrolplane.controlplane.cluster.x-k8s.io/cluster-api-control-plane cluster-api 1 1 1 95m v1.22.11
NAMESPACE NAME AGE
default oscmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-api-control-plane 95m
default oscmachinetemplate.infrastructure.cluster.x-k8s.io/cluster-api-md-0 95m
Get kubeconfig
To retrieve the kubeconfig of the workload cluster, please follow the kubeconfig documentation.
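For instance, clusterctl can fetch it directly (the cluster name is a placeholder):
./bin/clusterctl get kubeconfig <cluster-name> -n default > getstarted.kubeconfig
kubectl --kubeconfig getstarted.kubeconfig get nodes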
Node Ready
For nodes to become Ready, you must install a CNI and a CCM.
You can use a ClusterResourceSet with the label clustername + crs-cni and the label clustername + crs-ccm, where clustername is the name of your cluster (see the sketch below).
To install the CNI, you can use Helm charts or a ClusterResourceSet.
To install Helm, please follow the Helm documentation.
A list of CNIs:
To install the CCM, you can use Helm charts or a ClusterResourceSet.
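As an illustration, a ClusterResourceSet that applies CNI manifests stored in a ConfigMap to matching clusters could look like the following sketch (the label key/value and the ConfigMap name are assumptions; adapt them to the clustername + crs-cni / crs-ccm convention used by your templates):
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: crs-cni
  namespace: default
spec:
  clusterSelector:
    matchLabels:
      cni: hello-osc-crs-cni   # assumed label; clusters carrying it receive the resources
  resources:
    - name: hello-osc-crs-cni  # assumed ConfigMap holding the CNI manifests
      kind: ConfigMap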
Delete Cluster API
Delete cluster
To delete your cluster:
kubectl delete -f getstarted.yaml
To delete cluster-api:
clusterctl delete --all
Prerequisite
- Install kubectl
- Install kustomize v3.1.0+
- An Outscale account with an Access Key and Secret Key (AK/SK)
- A Kubernetes cluster
- A container registry to store the container image
- A registry secret (registry-secret)
Build
Clone
Please clone the project:
git clone https://github.com/outscale-dev/cluster-api-provider-outscale
User Credentials configuration
Put your Access Key and Secret Key (AK/SK) in osc-secret.yaml and apply it:
/usr/local/bin/kubectl apply -f osc-secret.yaml
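For reference, osc-secret.yaml defines a Secret holding your AK/SK in the controller namespace; a minimal sketch could look like the following (the secret name, namespace, and key names here are assumptions for illustration; keep the names used in the file shipped with the project):
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-api-provider-outscale-system
---
apiVersion: v1
kind: Secret
metadata:
  name: cluster-api-provider-outscale   # assumed name
  namespace: cluster-api-provider-outscale-system
type: Opaque
stringData:
  access_key: <your-access-key>
  secret_key: <your-secret-key>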
Registry credentials configuration
Public Outscale Docker Hub
You can use the latest outscale/cluster-api-provider-outscale image from Docker Hub.
Build and push your own image
Or you can build and push the image to your own public or private registry:
IMG=my-registry/controller:my-tag make docker-build
IMG=my-registry/controller:my-tag make docker-push
Deploy
Deploying Cluster Api
Please look at the cluster-api documentation about the deployment of cert-manager and cluster-api.
Or you can use this to deploy cluster-api with cert-manager:
make deploy-clusterapi
Deploying Cluster API Provider Outscale
Deploy Cluster API Outscale controller manager
This step will deploy the Outscale Cluster API controller manager (currently composed only of the Cluster Infrastructure Provider controller):
IMG=my-registry/controller:my-tag make deploy
Check that the controller is deployed
[root@cidev-admin cluster-api-provider-outscale]# kubectl get pod -n cluster-api-provider-outscale-system
NAME READY STATUS RESTARTS AGE
cluster-api-provider-outscale-controller-manager-7d5c48d67t6d7f 2/2 Running 0 22s
Watch controller log
This step will watch the controller logs:
kubectl logs -f cluster-api-provider-outscale-controller-manager-7d5c48d67t6d7f -n cluster-api-provider-outscale-system -c manager
Create your cluster
This step will create your infrastructure cluster.
It will create the Net (VPC), Subnets, Security Groups, Route Tables, Public IPs (EIPs) and NAT Service.
You can change parameters in cluster-template.yaml (please look at the configuration documentation) if you need to:
kubectl apply -f example/cluster-template.yaml
Kubeadmconfig
You can use the bootstrap configuration to customize how nodes are bootstrapped.
Currently, we overwrite the runc binary shipped with containerd because of a containerd issue.
Get kubeconfig
To retrieve the kubeconfig, please follow the kubeconfig documentation.
Node Ready
For nodes to become Ready, you must install a CNI and a CCM.
You can use a ClusterResourceSet with the label clustername + crs-cni and the label clustername + crs-ccm, where clustername is the name of your cluster.
To install the CNI, you can use Helm charts or a ClusterResourceSet.
To install Helm, please follow the Helm documentation.
A list of CNIs:
To install the CCM, you can use Helm charts or a ClusterResourceSet.
CleanUp
Delete cluster
This step will delete your cluster with:
kubectl delete -f example/cluster-template.yaml
Delete Cluster API Outscale controller manager
This step will delete the Cluster API Outscale controller manager with:
IMG=my-registry/controller:my-tag make undeploy
Delete Cluster API
Please look at the cluster-api documentation about the deployment of cert-manager and cluster-api.
Or you can use this to undeploy cluster-api with cert-manager:
make undeploy-clusterapi
Troubleshooting Guide
Common issues that you might see.
Missing credentials
If your credentials are missing, please set them. You can check the controller logs:
kubectl logs -f cluster-api-provider-outscale-controller-manager-9f8dd7d8bqncnb -n cluster-api-provider-outscale-system
You will see an error like:
1.6630978127864842e+09 ERROR controller.oscmachine Reconciler error {"reconciler group": "infrastructure.cluster.x-k8s.io", "reconciler kind": "OscMachine", "name": "capo-quickstart-md-0-tpjgs", "namespace": "default", "error": "environment variable OSC_ACCESS_KEY is required failed to create Osc Client"}
Quota limits
Please check that you have enough cores, RAM, and instance quota.
Otherwise you will get an error like:
controller.osccluster Reconciler error {"reconciler group": "infrastructure.cluster.x-k8s.io", "reconciler kind": "OscCluster", "name": "cluster-api-test", "namespace": "default", "error": "400 Bad Request Can not create net for Osccluster default/cluster-api-test"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.11.2/pkg/internal/controller/controller.go:227
Nodes are not ready
Nodes will not be Ready until a CNI is installed.
Node not running
If your VM never reaches the running phase and stays in the provisioned phase, please look at the cloud-init logs of your VM.
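For example, once connected to the VM (for instance through the bastion), you can inspect the standard cloud-init log files:
sudo less /var/log/cloud-init-output.log
sudo less /var/log/cloud-init.log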
Trouble with the e2e test path
You should clean any previous installation before launching the e2e tests:
make uninstall-clusterapi
kubectl get crd -A | grep x-k8s.
You should delete all cluster-api CRDs. If deletion is blocked by finalizers, you can remove the finalizers by patching the objects one by one:
kubectl patch --namespace=my-namespace <object-kind> <object-name> --patch='{"metadata":{"finalizers":null}}' --type=merge
Clean Stack
If your vm did not reach running state, you can use:
ClusterToClean=my-cluster-name make testclean
to clean your stack.
If some cluster-api Kubernetes objects (such as OscMachineTemplate) are still remaining after running the cleaning script, please run:
kubectl delete oscmachinetemplate --all -A
kubectl patch --namespace=my-namespace oscmachinetemplate my-object-name --patch='{"metadata":{"finalizers":null}}' --type=merge
Using Cilium as a CNI
With Ubuntu 22.04, Cilium is not compatible with cloud-init hotplug.
Console log of the VM:
Jan 16 11:17:35 ip-10-0-0-36 kubelet[644]: I0116 11:17:35.535025 644 log.go:198] http: superfluous response.WriteHeader call from k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Response).WriteHeader (response.go:220)
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: 2024-01-16 11:17:41,273 - hotplug_hook.py[ERROR]: Received fatal exception handling hotplug!
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: Traceback (most recent call last):
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 277, in handle_args
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: handle_hotplug(
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 235, in handle_hotplug
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: raise last_exception
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 224, in handle_hotplug
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: event_handler.detect_hotplugged_device()
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 104, in detect_hotplugged_device
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: raise RuntimeError(
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: RuntimeError: Failed to detect aa:17:dc:e4:6a:8d in updated metadata
Jan 16 11:17:41 ip-10-0-0-36 cloud-init[2589]: [CLOUDINIT]2024-01-16 11:17:41,273 - hotplug_hook.py[ERROR]: Received fatal exception handling hotplug!
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 277, in handle_args
handle_hotplug(
File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 235, in handle_hotplug
raise last_exception
File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 224, in handle_hotplug
event_handler.detect_hotplugged_device()
File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 104, in detect_hotplugged_device
raise RuntimeError(
RuntimeError: Failed to detect aa:17:dc:e4:6a:8d in updated metadata
Jan 16 11:17:41 ip-10-0-0-36 cloud-init[2589]: [CLOUDINIT]2024-01-16 11:17:41,274 - handlers.py[DEBUG]: finish: hotplug-hook: FAIL: Handle reconfiguration on hotplug events.
Jan 16 11:17:41 ip-10-0-0-36 cloud-init[2589]: [CLOUDINIT]2024-01-16 11:17:41,274 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
Jan 16 11:17:41 ip-10-0-0-36 cloud-init[2589]: [CLOUDINIT]2024-01-16 11:17:41,274 - util.py[DEBUG]: Read 14 bytes from /proc/uptime
Jan 16 11:17:41 ip-10-0-0-36 cloud-init[2589]: [CLOUDINIT]2024-01-16 11:17:41,274 - util.py[DEBUG]: cloud-init mode 'hotplug-hook' took 76.643 seconds (76.64)
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: Traceback (most recent call last):
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/bin/cloud-init", line 11, in <module>
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: load_entry_point('cloud-init==22.2', 'console_scripts', 'cloud-init')()
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/main.py", line 1088, in main
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: retval = util.log_time(
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/util.py", line 2621, in log_time
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: ret = func(*args, **kwargs)
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 277, in handle_args
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: handle_hotplug(
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 235, in handle_hotplug
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: raise last_exception
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 224, in handle_hotplug
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: event_handler.detect_hotplugged_device()
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: File "/usr/lib/python3/dist-packages/cloudinit/cmd/devel/hotplug_hook.py", line 104, in detect_hotplugged_device
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: raise RuntimeError(
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2589]: RuntimeError: Failed to detect aa:17:dc:e4:6a:8d in updated metadata
Jan 16 11:17:41 ip-10-0-0-36 systemd[1]: cloud-init-hotplugd.service: Main process exited, code=exited, status=1/FAILURE
Jan 16 11:17:41 ip-10-0-0-36 systemd[1]: cloud-init-hotplugd.service: Failed with result 'exit-code'.
Jan 16 11:17:41 ip-10-0-0-36 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=cloud-init-hotplugd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Jan 16 11:17:41 ip-10-0-0-36 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=unconfined msg='unit=cloud-init-hotplugd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 11:17:41 ip-10-0-0-36 systemd[1]: Started cloud-init hotplug hook daemon.
Jan 16 11:17:41 ip-10-0-0-36 cloud-init-hotplugd[2606]: args=--subsystem=net handle --devpath=/devices/virtual/net/cilium_vxlan --udevaction=add
Jan 16 11:17:41 ip-10-0-0-36 bash[2606]: [CLOUDINIT]2024-01-16 11:17:41,659 - hotplug_hook.py[DEBUG]: hotplug-hook called with the following arguments: {hotplug_action: handle, subsystem: net, udevaction: add, devpath: /devices/virtual/net/cilium_vxlan}
Jan 16 11:17:41 ip-10-0-0-36 bash[2606]: [CLOUDINIT]2024-01-16 11:17:41,659 - handlers.py[DEBUG]: start: hotplug-hook: Handle reconfiguration on hotplug events.
To be able to deploy Cilium on Kubernetes nodes based on Ubuntu 22.04, you have to deactivate hotplug.
Remove hotplug from the when array in /etc/cloud/cloud.cfg.d/06_hotplug.cfg so that only boot remains:
updates:
  network:
    when: ["boot"]
You can reboot your node to run cloud-init again.
Or you can re-run cloud-init without rebooting:
Clean existing config
sudo cloud-init clean --logs
Detect local data source
sudo cloud-init init --local
Detect any datasources which require network up
sudo cloud-init init
Run all cloud_config_modules
sudo cloud-init modules --mode=config
Run all cloud_final_modules
sudo cloud-init modules --mode=final
Cluster Autoscaler Guide
To use the cluster-autoscaler, please look at the cluster-api documentation first.
Install with helm
We will use the kubeconfig-incluster mode, using the upstream cluster-autoscaler Helm chart:
helm install cluster-autoscaler autoscaler/cluster-autoscaler --set 'autoDiscovery.clusterName=hello-osc' --set 'cloudProvider=clusterapi' --set 'clusterAPIKubeconfigSecret=hello-osc-kubeconfig' --set 'clusterAPIMode=kubeconfig-incluster'
hello-osc is the cluster name; hello-osc-kubeconfig is the generated workload cluster kubeconfig secret.
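The command above assumes the upstream cluster-autoscaler chart repository has already been added; if it has not:
helm repo add autoscaler https://kubernetes.github.io/autoscaler
helm repo update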
Add annotations
You need at least these annotations on each MachineDeployment:
annotations:
cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
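For example, on a MachineDeployment (the name below is illustrative):
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: hello-osc-md-0   # illustrative name; use your own MachineDeployment
  namespace: default
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"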
Cluster-template
There is a relationship between the controllers and the custom resources described below.
Configuration
cluster infrastructure controller OscCluster
example without bastion:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OscCluster
metadata:
  name: hello-osc
  namespace: default
spec:
  network:
    bastion:
      enable: false
    clusterName: cluster-api
    subregionName: eu-west-2a
    loadBalancer:
      loadbalancername: OscSdkExample-7
      subregionname: eu-west-2a
    net:
      name: cluster-api-net
      clusterName: cluster-api
      ipRange: "172.19.95.128/25"
    subnets:
      - name: cluster-api-subnet
        ipSubnetRange: "172.19.95.192/27"
    publicIps:
      - name: cluster-api-publicip
    internetService:
      clusterName: cluster-api
      name: cluster-api-internetservice
    natService:
      clusterName: cluster-api
      name: cluster-api-natservice
      publicipname: cluster-api-publicip
      subnetname: cluster-api-subnet
    routeTables:
      - name: cluster-api-routetable
        subnetname: cluster-api-subnet
        routes:
          - name: cluster-api-routes
            targetName: cluster-api-internetservice
            targetType: gateway
            destination: "0.0.0.0/0"
    securityGroups:
      - name: cluster-api-securitygroups
        description: Security Group with cluster-api
        securityGroupRules:
          - name: cluste-api-securitygrouprule
            flow: Inbound
            ipProtocol: tcp
            ipRange: "46.231.147.5/32"
            fromPortRange: 22
            toPortRange: 22
example with bastion:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OscCluster
metadata:
  name: cluster-api
  namespace: default
spec:
  network:
    clusterName: cluster-api
    loadBalancer:
      loadbalancername: cluster-api-lb
      clusterName: cluster-api
      loadbalancertype: internet-facing
      subnetname: cluster-api-subnet
      securitygroupname: cluster-api-securitygroup-lb
    net:
      name: cluster-api-net
      clusterName: cluster-api-az
      ipRange: "10.0.0.0/16"
    internetService:
      name: cluster-api-igw
      clusterName: cluster-api
    controlPlaneSubnets:
      - cluster-api-subnet
    subnets:
      - name: cluster-api-subnet
        ipSubnetRange: "10.0.0.0/24"
        subregionName: eu-west-2a
    natServices:
      - name: cluster-api-nat
        clusterName: cluster-api
        publicipname: cluster-api-publicip
        subnetname: cluster-api-subnet
    publicIps:
      - name: cluster-api-publicip
        clusterName: cluster-api
    routeTables:
      - name: cluster-api-rtb
        subnets:
          - cluster-api-subnet
        routes:
          - name: cluster-api-nat
            targetName: cluster-api-nat
            targetType: nat
            destination: "0.0.0.0/0"
    securityGroups:
      - name: cluster-api-securitygroup-lb
        description: Cluster-api Load Balancer Security Group
        securityGroupRules:
          - name: cluster-api-securitygrouprule-calico-vxlan
            flow: Inbound
            ipProtocol: tcp
            ipRange: "0.0.0.0/0"
            fromPortRange: 6443
            toPortRange: 6443
    bastion:
      clusterName: cluster-api
      enable: true
      name: cluster-api-vm-bastion
      keypairName: cluster-api
      deviceName: /dev/sda1
      imageName: ubuntu-2004-2004-kubernetes-v1.22.11-2022-08-22
      rootDisk:
        rootDiskSize: 15
        rootDiskIops: 1000
        rootDiskType: io1
      subnetName: cluster-api-subnet-public
      subregionName: eu-west-2a
      securityGroupNames:
        - name: cluster-api-securitygroup-lb
      vmType: "tinav6.c4r8p2"
loadBalancer
Name | Default | Required | Description |
---|---|---|---|
loadbalancername | OscClusterApi-1 | false | The Load Balancer unique name |
subregionname | eu-west-2a | false | The SubRegion Name where the Load Balancer will be created |
listener | `` | false | The Listener Spec |
healthcheck | `` | false | The healthcheck Spec |
Listener
Name | Default | Required | Description |
---|---|---|---|
backendport | 6443 | false | The port on which the backend vm will listen |
backendprotocol | TCP | false | The backend protocol ('HTTP' or 'TCP') |
loadbalancerport | 6443 | false | The port on which the loadbalancer will listen |
loadbalancerprotocol | TCP | false | The routing protocol ('HTTP' or 'TCP') |
HealthCheck
Name | Default | Required | Description |
---|---|---|---|
checkinterval | 30 | false | The time in seconds between two pings |
healthythreshold | 10 | false | The number of consecutive successful pings to consider the VM healthy |
unhealthythreshold | 5 | false | The number of consecutive failed pings to consider the VM unhealthy |
port | 6443 | false | The HealthCheck port number |
protocol | TCP | false | The HealthCheck protocol ('HTTP' or 'TCP') |
timeout | 5 | false | The timeout (in seconds) before considering the VM unhealthy |
Bastion
Name | Default | Required | Description |
---|---|---|---|
clusterName | cluster-api | false | The cluster name |
enable | false | false | Enable to have bastion |
name | cluster-api-vm-bastion | false | The name of the bastion |
imageName | `` | false | The OMI (image) name used for the bastion |
keypairName | cluster-api | false | The keypair name used to access bastion |
deviceName | /dev/sda1 | false | The device name |
rootDiskSize | 15 | false | The Root Disk Size |
rootDiskIops | 1000 | false | The Root Disk Iops (only for io1) |
rootDiskType | io1 | false | The Root Disk Type (io1, gp2, standard) |
subnetName | cluster-api-subnet-public | false | The Subnet associated to your bastion |
subregionName | eu-west-2a | false | The subregionName used for bastion and volume |
securityGroupNames | cluster-api-securitygroup-lb | false | The securityGroupName which is associated with bastion |
vmType | tinav6.c2r4p2 | false | The vmType use for the bastion |
Net
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-net | false | the tag name associated with the Net |
ipRange | 172.19.95.128/25 | false | Net Ip range with CIDR notation |
clusterName | cluster-api | false | Name of the cluster |
subregionName | eu-west-2a | false | The subregionName used for vm and volume |
controlPlaneSubnets
List of subnets across which control plane nodes are spread.
Subnet
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-subnet | false | The tag name associated with the Subnet |
ipSubnetRange | 172.19.95.192/27 | false | Subnet Ip range with CIDR notation |
publicIps
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-publicip | false | The tag name associated with the Public Ip |
internetService
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-internetservice | false | The tag name associated with the Internet Service |
clusterName | cluster-api | false | Name of the cluster |
natService
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-natservice | false | The tag name associated with the Nat Service |
publicIpName | cluster-api-publicip | false | The Public Ip tag name associated with a Public Ip |
subnetName | cluster-api-subnet | false | The subnet tag name associated with a Subnet |
clusterName | cluster-api | false | Name of the cluster |
natServices
List of NAT services.
You can have either a list of NAT services (natServices) or a single NAT service (natService).
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-natservice | false | The tag name associated with the Nat Service |
publicIpName | cluster-api-publicip | false | The Public Ip tag name associated with a Public Ip |
subnetName | cluster-api-subnet | false | The subnet tag name associated with a Subnet |
clusterName | cluster-api | false | Name of the cluster |
routeTables
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-routetable | false | The tag name associated with the Route Table |
subnetName | cluster-api-subnet | false | The subnet tag name associated with a Subnet |
route | `` | false | The route configuration |
route
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-route | false | The tag name associated with the Route |
targetName | cluster-api-internetservice | false | The tag name associated with the target resource type |
targetType | gateway | false | The target resource type which can be Internet Service (gateway) or Nat Service (nat-service) |
destination | 0.0.0.0/0 | false | the destination match Ip range with CIDR notation |
securityGroup
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-securitygroup | false | The tag name associated with the security group |
description | Security Group with cluster-api | false | The description of the security group |
securityGroupRules | `` | false | The securityGroupRules configuration |
securityGroupRule
Name | Default | Required | Description |
---|---|---|---|
name | cluster-api-securitygrouprule | false | The tag name associated with the security group rule |
flow | Inbound | false | The flow of the security group (inbound or outbound) |
ipProtocol | tcp | false | The ip protocol name (tcp, udp, icmp or -1) |
ipRange | 46.231.147.5/32 | false | The ip range of the security group rule |
fromPortRange | 6443 | false | The beginning of the port range |
toPortRange | 6443 | false | The end of the port range |
machine infrastructure controller OscMachineTemplate
example:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OscMachineTemplate
metadata:
  name: "cluster-api-md-0"
  namespace: default
  annotations:
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size: "5"
    cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size: "0"
spec:
  template:
    spec:
      node:
        clusterName: cluster-api
        image:
          name: ubuntu-2004-2004-kubernetes-v1.22.11-2022-08-22
        keypair:
          name: cluster-api
        vm:
          clusterName: cluster-api
          name: cluster-api-vm-kw
          keypairName: cluster-api
          deviceName: /dev/sda1
          rootDisk:
            rootDiskSize: 30
            rootDiskIops: 1500
            rootDiskType: io1
          subnetName: cluster-api-subnet-kw
          subregionName: eu-west-2a
          securityGroupNames:
            - name: cluster-api-securitygroups-kw
          vmType: "tinav6.c2r4p2"
OscImage
Name | Default | Required | Description |
---|---|---|---|
name | `` | false | The image name you will use |
OscKeypair
Name | Default | Required | Description |
---|---|---|---|
keypairName | cluster-api-keypair | false | The keypairname you will use |
destroyKeypair | false | false | Destroy keypair at the end |
OscVm
Name | Default | Required | Description |
---|---|---|---|
clusterName | cluster-api | false | The cluster name |
name | cluster-api-vm-kw | false | The name of the vm |
keypairName | cluster-api | false | The keypair name used to access vm |
deviceName | /dev/sda1 | false | The device path of the root volume |
rootDiskSize | 30 | false | The Root Disk Size |
rootDiskIops | 1500 | false | The Root Disk Iops (only for io1) |
rootDiskType | io1 | false | The Root Disk Type (io1, gp2, standard) |
subnetName | cluster-api-subnet-kw | false | The Subnet associated to your vm |
subregionName | eu-west-2a | false | The subregionName used for vm and volume |
securityGroupNames | cluster-api-securitygroups-kw | false | The securityGroupName which is associated with vm |
vmType | tinav6.c2r4p2 | false | The vmType used for the vm |
imageName | ubuntu-2004-2004-kubernetes-v1.22.11-2022-08-22 | false | The OMI (image) used for the vm |
Upgrade cluster
How to upgrade a cluster and switch versions.
Compatibility matrix
There is a compatibility matrix between versions (see version support). Depending on the version change, you may have to change the Core Provider, the Kubeadm Bootstrap Provider and the Kubeadm Control Plane Provider.
  | v0.3.0 (v1beta1) |
---|---|
Kubernetes v1.22 | ✓ |
Kubernetes v1.23 | ✓ |
Kubernetes v1.24 | ✓ |
Kubernetes v1.25 | ✓ |
Kubernetes v1.26 | ✓ |
Kubernetes v1.27 | ✓ |
Kubernetes v1.28 | ✓ |
Upgrade with clusterctl
Based on the Kubernetes compatibility matrix, you may have to switch the operators to a specific version, especially if you have to move your cluster across multiple versions.
You may also have to change the cluster-api version several times if you upgrade from an old version of Kubernetes (e.g. v1.22) to a recent one (v1.28.5).
To do so, please follow these steps.
- Delete cluster-api controllers
clusterctl delete --all
- Delete some cluster-api crd
kubectl delete crd ipaddressclaims.ipam.cluster.x-k8s.io ipaddresses.ipam.cluster.x-k8s.io
- Specify operator version
Please write in $HOME/.cluster-api/clusterctl.yaml :
providers:
  - name: "kubeadm"
    url: "https://github.com/kubernetes-sigs/cluster-api/releases/v1.3.10/bootstrap-components.yaml"
    type: "BootstrapProvider"
  - name: "kubeadm"
    url: "https://github.com/kubernetes-sigs/cluster-api/releases/v1.3.10/control-plane-components.yaml"
    type: "ControlPlaneProvider"
  - name: "cluster-api"
    url: "https://github.com/kubernetes-sigs/cluster-api/releases/v1.3.10/core-components.yaml"
    type: "CoreProvider"
- Deploy operators
clusterctl init --infrastructure outscale
:warning: It is possible that the cert-manager-test namespace gets stuck in the Terminating state.
To force-clean the cert-manager-test namespace (see "Namespace stuck as Terminating, How I removed it"):
NAMESPACE=cert-manager-test
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
Upgrade control plane
We will first upgrade the control plane (see "Updating Machine Infrastructure and Bootstrap Templates").
- Create new template based on previous one:
kubectl get oscmachinetemplate <name> -o yaml > file.yaml
- Change the metadata name and change the omi version:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OscMachineTemplate
metadata:
  ...
  name: cluster-api-control-plane-1-28
  ...
spec:
  template:
    spec:
      node:
        ...
        image:
          name: ubuntu-2204-2204-kubernetes-v1.28.5-2024-01-10
- Create new templates:
kubectl apply -f file.yaml
- Edit kubeadmcontrolplane
kubectl edit kubeadmcontrolplane <name>
- Change version and infrastructure reference
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
spec:
  ...
  machineTemplate:
    infrastructureRef:
      apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
      kind: OscMachineTemplate
      name: cluster-api-control-plane-1-28
      namespace: default
  ...
  version: v1.28.5
Warning
The old control plane VMs may only be deleted once you upgrade your first worker.
Upgrade worker
We will then upgrade the workers (see "Updating Machine Infrastructure and Bootstrap Templates").
- Create new template based on previous one:
kubectl get oscmachinetemplate <name> -o yaml > file.yaml
- Change the metadata name and change the omi version:
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OscMachineTemplate
metadata:
  ...
  name: cluster-api-md-1-28
  ...
spec:
  template:
    spec:
      node:
        ...
        image:
          name: ubuntu-2204-2204-kubernetes-v1.28.5-2024-01-10
- Create new templates:
kubectl apply -f file.yaml
- Edit machinedeployments
kubectl edit machinedeployments.cluster.x-k8s.io <name>
- Change version and infrastructure reference
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
spec:
  ...
  template:
    spec:
      ...
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: OscMachineTemplate
        name: cluster-api-md-1-28
        namespace: default
      version: v1.28.5
Warning
Depending on your rollout strategy (see "Upgrading management and workload clusters"), you may need to:
- Delete the old MachineSet
kubectl delete machinesets.cluster.x-k8s.io <name>
Topics
This section is about how to develop Cluster API Provider Outscale.
Prerequisite
- Install kubectl
- Install kustomize v3.1.0+
- An Outscale account with an Access Key and Secret Key (AK/SK)
- A Kubernetes cluster
- A container registry to store the container image
- A registry secret (registry-secret)
Configuration
Clone
Please clone the project
git clone https://github.com/outscale-dev/cluster-api-provider-outscale
User Credentials configuration
This step will deploy the user credentials secret. Put your credentials in osc-secret.yaml and apply:
/usr/local/bin/kubectl apply -f osc-secret.yaml
Registry credentials configuration
If you use a private registry (Docker Registry, Harbor, Docker Hub, Quay.io, …) with credentials, the registry credentials must be stored in a secret named regcred deployed in the cluster-api-provider-outscale-system namespace.
kubectl get secret regcred -n cluster-api-provider-outscale-system
NAME TYPE DATA AGE
regcred kubernetes.io/dockerconfigjson 1 52s
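For reference, such a secret can be created with kubectl (the registry URL and credentials are placeholders):
kubectl create secret docker-registry regcred \
  --docker-server=<my-registry> \
  --docker-username=<username> \
  --docker-password=<password> \
  -n cluster-api-provider-outscale-system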
If you want to use another secret name, you can change it in cluster-api-provider-outscale/config/default:
value: [{ name: regcred }]
Build and deploy
Deploying Cluster Api
Please look at the cluster-api documentation about the deployment of cert-manager and cluster-api.
Or you can use this to deploy cluster-api with cert-manager:
make deploy-clusterapi
Build, Push and Deploy
This step will build and push the image to your public or private registry and deploy it.
Environment variables
Set these environment variables with your own values:
export K8S_CONTEXT=phandalin
export CONTROLLER_IMAGE=my-registry/controller
- K8S_CONTEXT is the context in your kubeconfig file.
- CONTROLLER_IMAGE is the registry path where the image will be stored. Tilt will add a tag each time it builds a new image.
CAPM
Please run the following to generate capm.yaml:
IMG=my-registry/controller:latest make capm
- IMG is CONTROLLER_IMAGE combined with CONTROLLER_IMAGE_TAG. Tilt will change the tag each time it builds a new image.
Tilt
Please launch tilt at the project’s root folder:
[root@cidev-admin cluster-api-provider-outscale]# tilt up
Tilt started on http://localhost:10350/
v0.25.3, built 2022-03-04
(space) to open the browser
(s) to stream logs (--stream=true)
(t) to open legacy terminal mode (--legacy=true)
(ctrl-c) to exit
You can track your docker build and controller log in your web browser.
Check that the controller is deployed
[root@cidev-admin cluster-api-provider-outscale]# kubectl get pod -n cluster-api-provider-outscale-system
NAME READY STATUS RESTARTS AGE
cluster-api-provider-outscale-controller-manager-7d5c48d67t6d7f 2/2 Running 0 22s
Update api
To test a change to the API, please run:
make manifest
make generate
make capm
kubectl apply -f capm.yaml
Develop
Install the project in order to develop
:warning: To install tools (clusterctl, …) with the Makefile, you need Golang installed in order to download the binaries.
You must install these prerequisites with:
make install-dev-prerequisites
Optionally, you can install these tools (kind, gh, packer, kubebuilder tools):
make install-packer
make install-gh
make install-kind
make install-kubebuildertool
CleanUp
Delete cluster
This step will delete your cluster
kubectl delete -f example/cluster-template.yaml
Delete Cluster API Outscale controller manager
This step will delete the Outscale controller manager:
IMG=my-registry/controller:my-tag make undeploy
Delete Cluster API
Please look at the cluster-api documentation about the deployment of cert-manager and cluster-api.
Or you can use this to undeploy cluster-api with cert-manager:
make undeploy-clusterapi
Prerequisite
- Install kubectl
- Install kustomize v3.1.0+
- An Outscale account with an Access Key and Secret Key (AK/SK)
- A Kubernetes cluster
- A container registry to store the container image
- A registry secret (registry-secret)
Configuration
Test
:warning: To install tools (clusterctl, …) with the Makefile, you need Golang installed in order to download the binaries.
Lint
Please use format to indent your Go and YAML files:
make format
Lint Go:
make golint-ci
make vet
Lint shell:
make shellcheck
Lint yaml:
make yamllint
Check boilerplate:
make verify-boilerplate
Generate Mock
Use this if you want to generate mocks of the functions in the cloud folder for unit tests:
make mock-generate
Unit test
Use this if you want to run the unit tests:
make unit-test
You can look at the code coverage in covers.txt and covers.html.
Functional test
Use this if you want to run the functional tests:
export OSC_ACCESS_KEY=<your-osc-acces-key>
export OSC_SECRET_KEY=<your-osc-secret-key>
export KUBECONFIG=<your-kubeconfig-path>
make testenv
E2e test
Use this if you want to run the feature e2e tests:
export OSC_ACCESS_KEY=<your-osc-acces-key>
export OSC_SECRET_KEY=<your-osc-secret-key>
export KUBECONFIG=<your-kubeconfig-path>
export IMG=<your-image>
make e2etestexistingcluster
Use this if you want to run the upgrade/remediation e2e tests (they will use kind):
export OSC_ACCESS_KEY=<your-osc-acces-key>
export OSC_SECRET_KEY=<your-osc-secret-key>
export OSC_REGION=<your-osc-region>
export IMG=<your-image>
make e2etestkind
Use this if you want to run the conformance e2e tests (they will use kind):
export OSC_ACCESS_KEY=<your-osc-acces-key>
export OSC_SECRET_KEY=<your-osc-secret-key>
export OSC_REGION=<your-osc-region>
export KUBECONFIG=<your-kubeconfig-path>
export IMG=<your-image>
make e2econformance
Prerequisite
- Install kubectl
- Install kustomize v3.1.0+
- An Outscale account with an Access Key and Secret Key (AK/SK)
- A Kubernetes cluster
- A container registry to store the container image
- A registry secret (registry-secret)
Install Tilt
If you want to install tilt with all the dev tools:
make install-dev-prerequisites
Or if you want to install tilt only:
make install-tilt
Tilt configuration:
You can configure tilt by setting these variables in your bashrc or profile:
export CONTROLLER_IMAGE=myregistry/osc/cluster-api-outscale-controllers:latest
export K8S_CONTEXT=cluster-api-dev
K8S_CONTEXT is the name of the cluster (the k8s context in your kubeconfig).
CONTROLLER_IMAGE is the controller image (myregistry is the URL of your registry, osc is the project, cluster-api-outscale-controllers is the name of the image, latest is the tag of the image).
Or you can set them in tilt.config:
{
  "allowed_contexts": "cluster-api-dev",
  "controller_image": "myregistry/osc/cluster-api-outscale-controllers:latest"
}
Tilt
Please launch tilt at the project’s root folder:
[root@cidev-admin cluster-api-provider-outscale]# tilt up
Tilt started on http://localhost:10350/
v0.25.3, built 2022-03-04
(space) to open the browser
(s) to stream logs (--stream=true)
(t) to open legacy terminal mode (--legacy=true)
(ctrl-c) to exit
You can track your docker build and controller log in your web browser.
Prerequisite
- Install kubectl
- Install kustomize v3.1.0+
- An Outscale account with an Access Key and Secret Key (AK/SK)
- A Kubernetes cluster
- A container registry to store the container image
- A registry secret (registry-secret)
Release
Versioning
Please use semantic versioning:
- Pre-release:
v0.1.1-alpha.1
- Minor release:
v0.1.0
- Patch release:
v0.1.1
- Major release:
v1.0.0
Update metadata.yaml
You should update metadata.yaml to include the new release series for the cluster-api contract version. You don't have to do it for a patch/minor version. Add in metadata.yaml:
apiVersion: clusterctl.cluster.x-k8s.io/v1alpha3
releaseSeries:
  ...
  - major: 1
    minor: 5
    contract: v1beta1
Update config test
Please also update the type: InfrastructureProvider spec of the test config.
Create a tag
Create a new branch for the release. :warning: Never tag from the main branch. Then create the tag.
For patch/major release:
git checkout release-1.x
git fetch upstream
git rebase upstream/release-1.x
Create tag with git:
export RELEASE_TAG=v1.2.3
git tag -s ${RELEASE_TAG} -m "${RELEASE_TAG}"
git push upstream ${RELEASE_TAG}
This will trigger the release GitHub Action, which will build the image and create the new release.
Test locally
If you want to test locally what the GitHub Action does, you can check that you can generate the release artifacts and the changelog:
make release
make release-changelog
Kubernetes OMI Generation
Generation
Kubernetes images are created using image-builder (with Packer and Ansible to generate Kubernetes/containerd images), with a GitHub CI cron job run every month to create new Kubernetes images.
To launch locally:
git clone https://github.com/kubernetes-sigs/image-builder
export OSC_ACCESS_KEY=access
export OSC_SECRET_KEY=secret
export OSC_REGION=region
cd image-builder
./images/capi/scripts/ci-outscale-nightly.sh
Image Deprecation
Images are deprecated after 6 months.
python3 hack/cleanup/cleanup_oapi.py --days 183 --owner my_owner --imageNameFilterPath ./keep_image --imageNamePattern "^(ubuntu|centos)-[0-9.]+-[0-9.]+-kubernetes-v[0-9]+.[0-9]{2}.[0-9]+-[0-9]{4}-[0-9]{2}-[0-9]{2}$"
keep_image is a file listing images to keep even if they match imageNamePattern and are older than 6 months.
example:
ubuntu-2004-2004-kubernetes-v1.25.2-2022-10-13
Kubernetes Custom OMI Generation
OMI
Select the OMI you want to use (we only test and verify with Ubuntu OMIs).
Clone
Please clone the image-builder project in $HOME:
git clone https://github.com/kubernetes-sigs/image-builder.git
New OMI
Please create $HOME/image-builder/images/capi/packer/outscale/ubuntu-2204.json, replace UBUNTU_OMI with the name of your OMI, and remove $HOME/image-builder/images/capi/packer/outscale/ubuntu-2004.json:
{
  "build_name": "ubuntu-2204",
  "distribution": "ubuntu",
  "distribution_release": "ubuntu",
  "distribution_version": "2204",
  "image_name": "UBUNTU_OMI"
}
Makefile
In the Makefile ($HOME/image-builder/images/capi/Makefile), replace osc-ubuntu-2004 with osc-ubuntu-2204.
Select the version
The Kubernetes package repositories changed with Kubernetes 1.26, which is why the values differ before and after that version.
You can also override other values from kubernetes.json.
Before k8s 1.26
Please set the version you want (Replace 1.22.1 with the kubernetes version you want) in $HOME/image-builder/images/capi/overwrite-k8s.json
{
  "build_timestamp": "nightly",
  "kubernetes_deb_gpg_key": "https://packages.cloud.google.com/apt/doc/apt-key.gpg",
  "kubernetes_deb_repo": "\"https://apt.kubernetes.io/ kubernetes-xenial\"",
  "kubernetes_deb_version": "1.22.1-00",
  "kubernetes_rpm_gpg_key": "\"https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg\"",
  "kubernetes_rpm_repo": "https://packages.cloud.google.com/yum/repos/kubernetes-el7-{{user `kubernetes_rpm_repo_arch`}}",
  "kubernetes_rpm_version": "1.22.1",
  "kubernetes_semver": "v1.22.1-0",
  "kubernetes_series": "v1.22"
}
After k8s 1.26
Please set the version you want (Replace 1.22.1 with the kubernetes version you want) in $HOME/image-builder/images/capi/overwrite-k8s.json
{
  "build_timestamp": "nightly",
  "kubernetes_deb_version": "1.22.1-1.1",
  "kubernetes_rpm_version": "1.22.1",
  "kubernetes_semver": "v1.22.1",
  "kubernetes_series": "v1.22"
}
Download dependencies
cd $HOME/image-builder/images/capi
make deps-osc
Build image
Create a packer group and a packer user:
sudo groupadd -r packer && sudo useradd -m -s /bin/bash -r -g packer packer
Set permissions for capi:
cp -rf $HOME/image-builder/images/capi /tmp
sudo chown -R packer:packer /tmp/capi
sudo chmod -R 777 /tmp/capi
Execute packer:
sudo runuser -l packer -c "export LANG=C.UTF-8; export LC_ALL=C.UTF-8; export PACKER_LOG=1; export PATH=$HOME/.local/bin/:/tmp/capi/.local/bin:$PATH; export OSC_ACCESS_KEY=${OSC_ACCESS_KEY}; export OSC_SECRET_KEY=${OSC_SECRET_KEY}; export OSC_REGION=${OSC_REGION}; export OSC_ACCOUNT_ID=${OSC_ACCOUNT_ID}; cd /tmp/capi; PACKER_VAR_FILES=overwrite-k8s.json make build-osc-all"