# Deploy Single-Node OKD
## Requirements
| Name | Core Count | RAM (GB) | Storage (GB) | OS |
|---|---|---|---|---|
| Firewall | 4 | 8 | 100 | pfSense |
| SNO services machine | 8 | 8 | 100 | Fedora |
| SNO OKD node | 16 | 32 | 300 | Fedora CoreOS |
## DNS
DNS is how you map a hostname to an IP address. In my environment I had an OPNsense firewall running the Unbound DNS service. In that service I created records for `api`, `api-int`, and `*.apps` pointing to the IP address of the single node:
| Usage | FQDN | Description |
|---|---|---|
| Kubernetes API | api.<cluster_name>.<base_domain> | Add a DNS A/AAAA or CNAME record. This record must be resolvable by both clients external to the cluster and within the cluster. |
| Internal API | api-int.<cluster_name>.<base_domain> | Add a DNS A/AAAA or CNAME record when creating the ISO manually. This record must be resolvable by nodes within the cluster. |
| Ingress route | *.apps.<cluster_name>.<base_domain> | Add a wildcard DNS A/AAAA or CNAME record that targets the node. This record must be resolvable by both clients external to the cluster and within the cluster. |
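If you are editing `unbound.conf` directly rather than using a firewall GUI, the three records can be sketched like this. The IP `10.0.0.10`, cluster name `okd`, and base domain `tutoriallabcluster.lan` are example values; the `redirect` local-zone is how Unbound expresses a wildcard record:

```
server:
  local-data: "api.okd.tutoriallabcluster.lan. IN A 10.0.0.10"
  local-data: "api-int.okd.tutoriallabcluster.lan. IN A 10.0.0.10"
  # Wildcard *.apps record, expressed as a redirect zone:
  local-zone: "apps.okd.tutoriallabcluster.lan." redirect
  local-data: "apps.okd.tutoriallabcluster.lan. IN A 10.0.0.10"
```

In the pfSense/OPNsense GUI the same effect is achieved with host overrides under the Unbound DNS resolver settings.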
To test DNS you can run the following:

```shell
dig +short api.okd.<base_domain>
# Example: dig +short api.okd.tutoriallabcluster.lan
dig +short api-int.okd.<base_domain>
# Example: dig +short api-int.okd.tutoriallabcluster.lan
dig +short console-openshift-console.apps.okd.<base_domain>
# Example: dig +short console-openshift-console.apps.okd.tutoriallabcluster.lan
```
If these fail, check your DNS settings. If DNS is not properly configured, the node will not complete the bootstrapping process.
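The three checks above can be rolled into a small pre-flight script that reports pass/fail for each record before you start the install. This is a hypothetical helper, not part of the official tooling; `CLUSTER` and `BASE_DOMAIN` use the example values from this guide:

```shell
# Pre-flight DNS check for the three records single-node OKD needs.
CLUSTER=okd
BASE_DOMAIN=tutoriallabcluster.lan
results=""
for fqdn in "api.${CLUSTER}.${BASE_DOMAIN}" \
            "api-int.${CLUSTER}.${BASE_DOMAIN}" \
            "console-openshift-console.apps.${CLUSTER}.${BASE_DOMAIN}"; do
  # dig +short prints nothing when the name does not resolve.
  if [ -n "$(dig +short "$fqdn" 2>/dev/null)" ]; then
    results="${results}OK   ${fqdn}\n"
  else
    results="${results}FAIL ${fqdn}\n"
  fi
done
printf "%b" "$results"
```

All three lines should report `OK` before you proceed; any `FAIL` means the bootstrap will stall later.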
## DHCP
I also used the firewall to reserve an IP address for the single node when it boots. To do this, I took the node's MAC address from my hypervisor and created a DHCP reservation.
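In pfSense/OPNsense this is a "static mapping" under the DHCP server settings. If you run a plain dnsmasq server instead, the equivalent reservation is a one-liner; the MAC address, IP, and hostname below are hypothetical placeholders:

```
# dnsmasq static lease: this MAC always receives this IP and hostname.
dhcp-host=52:54:00:aa:bb:cc,10.0.0.10,sno-node
```

Either way, the goal is the same: the node's IP must stay fixed so the DNS records above keep pointing at it across reboots.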
## Installing single-node OKD manually
### Generating the installation ISO with coreos-installer
- Set the OKD version:

```shell
export OKD_VERSION=<okd_version>
```

- Set the host architecture:

```shell
export ARCH=<architecture>
```

- Download the OKD client (`oc`) and make it available for use by entering the following commands:

```shell
curl -L https://github.com/okd-project/okd/releases/download/$OKD_VERSION/openshift-client-linux-$OKD_VERSION.tar.gz -o oc.tar.gz
tar zxf oc.tar.gz
chmod +x oc
```

- Download the OKD installer and make it available for use by entering the following commands:

```shell
curl -L https://github.com/okd-project/okd/releases/download/$OKD_VERSION/openshift-install-linux-$OKD_VERSION.tar.gz -o openshift-install-linux.tar.gz
tar zxvf openshift-install-linux.tar.gz
chmod +x openshift-install
```

- Retrieve the FCOS ISO URL by running the following command:

```shell
export ISO_URL=$(./openshift-install coreos print-stream-json | grep location | grep $ARCH | grep iso | cut -d\" -f4)
```
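The `grep`/`cut` pipeline above simply pulls the URL out of the quoted `"location"` field in the stream JSON. A self-contained demonstration against a sample line (the URL here is hypothetical, not a real release artifact):

```shell
# A sample "location" line like the ones in the coreos stream JSON.
line='    "location": "https://example.com/fedora-coreos-live.x86_64.iso",'
# Split on double quotes; field 4 is the URL itself.
iso_url=$(echo "$line" | grep location | grep iso | cut -d\" -f4)
echo "$iso_url"
# → https://example.com/fedora-coreos-live.x86_64.iso
```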
- Download the FCOS ISO:

```shell
curl -L $ISO_URL -o fcos-live.iso
```
- Prepare the `install-config.yaml` file:

```yaml
apiVersion: v1
baseDomain: <domain>
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 1
metadata:
  name: <name>
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
bootstrapInPlace:
  installationDisk: /dev/disk/by-id/<disk_id>
pullSecret: '<pull_secret>'
sshKey: |
  <ssh_key>
```
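The `installationDisk` must be a stable `/dev/disk/by-id/` path on the target node, not a name like `/dev/sda` that can change between boots. A quick sketch for listing candidates (device names vary by system; the fallback message is just for machines with no such entries):

```shell
# List stable disk identifiers; entries ending in -partN are partitions,
# so filter them out to see whole-disk IDs.
disks=$(ls /dev/disk/by-id/ 2>/dev/null | grep -v -- '-part' || echo "no /dev/disk/by-id entries visible")
echo "$disks"
```

Run this on the node itself (for example from a live ISO shell) and copy the whole-disk ID into `install-config.yaml`.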
- Generate OKD assets by running the following commands:

```shell
mkdir sno
cp install-config.yaml sno
./openshift-install --dir=sno create single-node-ignition-config
```

- Embed the Ignition data into the FCOS ISO by running the following commands:

```shell
alias coreos-installer='podman run --privileged --pull always --rm -v /dev:/dev -v /run/udev:/run/udev -v $PWD:/data -w /data quay.io/coreos/coreos-installer:release'
coreos-installer iso ignition embed -fi sno/bootstrap-in-place-for-live-iso.ign fcos-live.iso
```
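Before booting the node from the ISO, you can sanity-check the embed by printing the Ignition config back out of the image (this uses the `coreos-installer` alias defined above):

```shell
# Show the first part of the Ignition config embedded in the ISO;
# if nothing prints, the embed step did not take.
coreos-installer iso ignition show fcos-live.iso | head -c 300
```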
### Monitoring the cluster installation using openshift-install

```shell
./openshift-install --dir=sno wait-for install-complete
```
After the installation completes, check the environment by using the following commands:

```shell
export KUBECONFIG=sno/auth/kubeconfig
oc get nodes
```
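Beyond `oc get nodes`, a common follow-up check is to confirm that every cluster operator reports `Available` before you start using the cluster:

```shell
oc get clusteroperators
```

On a healthy single-node cluster, each operator should show `AVAILABLE: True` with `PROGRESSING` and `DEGRADED` both `False`.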