Command Line Interface for your Kubernetes (OpenShift) Home Lab
I have provided a set of shell utilities to simplify many of the provisioning and management tasks for your lab. These utilities also enable you to run multiple OpenShift clusters at the same time.
Note: These utilities are very opinionated toward the equipment that I run in my lab. See the equipment list here: Lab Equipment
You can get the utilities from: https://github.com/cgruver/kamarotos
Install the Utilities
In the spirit of Kubernetes naming, I wanted to give it a nautical name. Since these scripts take on the drudgery of repeated tasks, I chose to name them after the guy that cleans the toilets on a ship… Thus, the project is named: καμαρότος. That is, kamarótos; Greek for Ship’s steward or cabin boy…
- Create a directory for all of your lab manifests and utilities:

  mkdir -p ${HOME}/okd-lab/bin

- Create a temporary working directory:

  WORK_DIR=$(mktemp -d)

- Clone the git repository that I have created with helper scripts:

  git clone https://github.com/cgruver/kamarotos.git ${WORK_DIR}

- Copy the helper scripts to ${HOME}/okd-lab:

  cp ${WORK_DIR}/bin/* ${HOME}/okd-lab/bin
  chmod 700 ${HOME}/okd-lab/bin/*

- Copy the lab configuration example files to ${HOME}/okd-lab/lab-config/examples:

  mkdir -p ${HOME}/okd-lab/lab-config/cluster-configs
  cp -r ${WORK_DIR}/examples ${HOME}/okd-lab/lab-config

- Remove the temporary directory:

  rm -rf ${WORK_DIR}
Add the following to your shell environment:

Your default shell will be something like bash or zsh, although you might have changed it. You need to add the following line to the appropriate shell file in your home directory: .bashrc, .zshrc, etc…

Bash:

echo ". ${HOME}/okd-lab/bin/labEnv.sh" >> ~/.bashrc

Zsh:

echo ". ${HOME}/okd-lab/bin/labEnv.sh" >> ~/.zshrc

Note: Take a look at the file ${HOME}/okd-lab/bin/labEnv.sh. It will set variables in your shell when you log in, so make sure you are comfortable with what it is setting. If you don't want to add it to your shell automatically, then you will need to execute . ${HOME}/okd-lab/bin/labEnv.sh before running any lab commands.

It's always good practice to look at what a downloaded script is doing, since it runs with your logged-in privileges… I know that you NEVER run one of those curl some URL | bash commands without looking at the file first… right? There will be a test later… :-)
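A quick way to confirm the environment is loaded in a new terminal; this sketch assumes, as the later steps in this guide imply, that labEnv.sh exports OKD_LAB_PATH pointing at your ${HOME}/okd-lab directory:

```bash
# Sanity check: is the lab environment loaded in this shell?
if [[ -z "${OKD_LAB_PATH}" ]]; then
  echo "labEnv.sh has not been sourced; run: . ${HOME}/okd-lab/bin/labEnv.sh"
else
  echo "Lab environment loaded: OKD_LAB_PATH=${OKD_LAB_PATH}"
fi
```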
Install yq

We will need the yq utility for YAML file manipulation: https://mikefarah.gitbook.io/yq/

- MacOS:

  brew install yq

- Linux:

  mkdir ${OKD_LAB_PATH}/yq-tmp
  YQ_VER=$(basename $(curl -Ls -o /dev/null -w %{url_effective} https://github.com/mikefarah/yq/releases/latest))
  wget -O ${OKD_LAB_PATH}/yq-tmp/yq.tar.gz https://github.com/mikefarah/yq/releases/download/${YQ_VER}/yq_linux_amd64.tar.gz
  tar -xzf ${OKD_LAB_PATH}/yq-tmp/yq.tar.gz -C ${OKD_LAB_PATH}/yq-tmp
  cp ${OKD_LAB_PATH}/yq-tmp/yq_linux_amd64 ${OKD_LAB_PATH}/bin/yq
  chmod 700 ${OKD_LAB_PATH}/bin/yq
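If you used the Linux steps, you can verify the binary and then clean up the temporary download directory:

```bash
# Verify yq is on the expected path, then remove the download scratch space
${OKD_LAB_PATH}/bin/yq --version
rm -rf ${OKD_LAB_PATH}/yq-tmp
```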
Create an SSH Key Pair:
If you don’t have an SSH key pair configured on your workstation, then create one now:
ssh-keygen -t rsa -b 4096 -N "" -f ${HOME}/.ssh/id_rsa
Copy the SSH public key to the Lab configuration folder:
cp ~/.ssh/id_rsa.pub ${OKD_LAB_PATH}/ssh_key.pub
Open a new terminal to set the variables.
Configuration Files
There are two YAML configuration files that are used by the labcli utilities for deploying the infrastructure for your home lab:
The examples directory in the kamarotos project contains a sample lab.yaml file. This file is the main configuration file for your lab. It contains references to “sub domains” that contain the configuration for a specific OpenShift cluster.
The OpenShift cluster configuration files are in examples/domain-configs
These files correspond to the following cluster configurations:
| Domain Config File | Description |
|---|---|
| kvm-cluster-basic.yaml | 3 node cluster with combined control-plane & worker nodes, deployed on a single KVM host. |
| kvm-cluster-3-worker.yaml | 6 node cluster, 3 control-plane & 3 worker nodes, deployed on 2 KVM hosts. |
| sno-kvm.yaml | Single node cluster, deployed on a KVM host. |
| sno-bm.yaml | Single node cluster, deployed on a bare metal server. |
| bare-metal-basic.yaml | 3 node cluster with combined control-plane & worker nodes, deployed on 3 bare metal servers. |
| bare-metal-3-worker.yaml | 6 node cluster, 3 control-plane & 3 worker nodes, deployed on 6 bare metal servers. |
labctx function
labctx is used to set local environment variables in your shell that the labcli command uses to interact with a given domain in your lab.
It can be executed in two ways:
Interactive
With no argument, labctx will parse your lab configuration files and ask you to select the domain that you want to work with in a given shell terminal:
user@localhost ~ % labctx
1 - cp
2 - dc1
3 - dc2
4 - dc3
5 - metal
Enter the index of the domain that you want to work with:
Key in the index value for the domain that you want to work with and press return:
user@localhost ~ % labctx
1 - cp
2 - dc1
3 - dc2
4 - dc3
5 - metal
Enter the index of the domain that you want to work with:
3
Your shell environment is now set up to control lab domain: dc2.my.awesome.lab
user@localhost ~ %
List your environment variables, and you will see that your shell is now set up with a number of domain-specific variables that the labcli operations will use.
Declarative
By passing an argument indicating the domain that you want to manage, labctx will bypass the interactive selection.
labctx dc2
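For example, to pin a terminal to the dc2 domain and then run a domain-scoped command (the --start flag is documented later in this page):

```bash
# Set the shell environment for dc2, then operate on that domain's cluster
labctx dc2
labcli --start -m   # bring up the control-plane nodes of the dc2 cluster
```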
labenv function
The labenv function is used to set or unset the appropriate shell environment variables for your lab. It takes one of the following options, plus an optional domain argument:

labenv [-e | -k | -c] [-d=lab-domain]

- labenv -e
- labenv -k
- labenv -c
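Following the usage line above, a non-interactive invocation looks like this; what each option sets or clears is determined by the labenv implementation itself, so this is only an example of the syntax:

```bash
# Example only: apply the -e option for the dc2 domain without the interactive prompt
labenv -e -d=dc2
```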
labcli
The labcli command has multiple subcommands and one global, optional argument:
labcli -d=<domain> --subcommand -subcommand-option
The optional -d=domain argument will bypass the interactive invocation of labctx and use the domain specified.
example: labcli -d=dc2 --pi -s
Note: The domain variables set during the execution of labcli do not alter your current shell environment.
Lab Network Configuration
labcli --pi
The pi subcommand is used to configure the Raspberry Pi that is used for host installation, Sonatype Nexus, and other functions.
labcli --pi has four options:
- labcli --pi -i : Initialize the SD-Card for the Raspberry Pi.
- labcli --pi -s : Perform the initial setup of the Raspberry Pi after booting from the SD-Card.
- labcli --pi -n : Install and configure Sonatype Nexus.
- labcli --pi -g : Install and configure Gitea.
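A first-time setup of the Pi typically chains these options in order; this is a sketch assembled from the flags above, not a required sequence:

```bash
# Write the SD-Card image, then finish setup after the Pi boots from it
labcli --pi -i
# ...move the SD-Card to the Raspberry Pi and power it on, then:
labcli --pi -s
labcli --pi -n   # optional: install Sonatype Nexus
labcli --pi -g   # optional: install Gitea
```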
labcli --router
The router subcommand is used to configure the GL-iNet routers that I use in my lab.
labcli --router has the following operations:

- labcli --router -i : Initialize a new or reset edge or domain router to prepare it for your network.

  Add the -e option to initialize the edge network router: labcli --router -e -i

  Add the -wl option to configure a wireless LAN on the GL-iNet MV1000W.

  Add the -ww option to configure a wireless repeater on the GL-iNet MV1000W.

  Example: Initialize the edge network router with a WiFi LAN and a repeater connection to your home WiFi:

  labcli --router -e -i -wl -ww

- labcli --router -s : Configure a domain or edge router that is attached to the lab network.

  This command takes two optional flags:

  - Add the -e option to operate on the edge network router.
  - Add the -aw option to configure a wireless LAN on the GL-iNet MV1000 with a supported WiFi dongle attached.
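For example, to apply the -s configuration to the edge router, combine the flags the same way as in the -i example above:

```bash
# Configure the edge router after it is attached to the lab network
labcli --router -e -s
```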
labcli --disconnect
This command will deny internet access to the selected lab domain.
Use this to simulate a disconnected or air-gapped data center environment.
labcli --connect
This command will restore internet access to the selected lab domain.
Use this when you get really annoyed that most Kubernetes projects don’t consider use in an air-gapped environment. ;-)
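A typical use is wrapping a disconnected-install test for one domain, using the global -d argument described earlier (dc2 is just an example domain name):

```bash
# Cut the dc2 domain off from the internet, test, then restore access
labcli -d=dc2 --disconnect
# ...exercise your disconnected installs and image mirrors here...
labcli -d=dc2 --connect
```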
OpenShift Cluster Provisioning
labcli --pull-secret
The pull-secret subcommand prepares the registry pull secret ahead of an OpenShift cluster install.
labcli --latest
The latest subcommand will update the selected domain to use the latest release of OKD.
labcli --mirror
The mirror subcommand creates a local mirror of the OpenShift images for executing a disconnected network install.
labcli --cli
The cli subcommand will download the OpenShift cli for the selected lab domain, and create symbolic links in the shell path.
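Put together, a plausible preparation sequence for a disconnected install of the selected domain looks like the following; the exact ordering requirements come from the individual subcommands, so treat this as a sketch:

```bash
# Prepare a domain for a disconnected OpenShift install
labcli --latest        # point the domain at the latest OKD release
labcli --pull-secret   # stage the registry pull secret
labcli --mirror        # mirror the release images for the disconnected install
labcli --cli           # download the matching OpenShift CLI and link it into the path
```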
labcli --deploy
The deploy subcommand is used to deploy the compute infrastructure for your home lab.
labcli --deploy has three operations:
- labcli --deploy -c : Creates the deployment configuration and artifacts for the control plane of an OpenShift cluster in the selected domain.
- labcli --deploy -w : Creates the deployment configuration and artifacts for worker nodes of an OpenShift cluster in the selected domain.
- labcli --deploy -k : Creates the deployment configuration and artifacts for CentOS Stream based KVM hosts.
labcli --destroy
The destroy subcommand is used to tear down lab infrastructure.
labcli --destroy has five operations:
- labcli --destroy -b : Removes the bootstrap node during an OpenShift cluster install.
- labcli --destroy -w=<host-name> : Removes the specified worker node from the OpenShift cluster. If -w=all, then it removes all worker nodes from the cluster.
- labcli --destroy -c : Tears down the whole OpenShift cluster.
- labcli --destroy -k=<host-name> : Destroys the specified KVM host in a lab domain. If -k=all, then it removes all KVM hosts from the domain. The KVM hosts are reset to PXE boot on the next power on.
- labcli --destroy -m=<host-name> : Removes a control-plane node from the OpenShift cluster.
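For example, to remove a single worker node (the host name here is hypothetical; use a node name from your own cluster):

```bash
# Remove one worker node from the selected domain's cluster (host name is illustrative)
labcli --destroy -w=okd4-worker-node-0
```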
labcli --start
The start subcommand is used to start the KVM guests that are part of an OpenShift cluster.
labcli --start has four operations:
- labcli --start -b : Brings up the bootstrap node to start an OpenShift cluster install.
- labcli --start -m : Brings up the control-plane nodes of an OpenShift cluster.
- labcli --start -w : Brings up the worker nodes of an OpenShift cluster.
- labcli --start -u : Removes the scheduling cordon from the worker nodes of an OpenShift cluster that has been shut down.
labcli --stop
The stop subcommand is used to shut down OpenShift cluster nodes in a lab domain.
labcli --stop has two operations:
- labcli --stop -w : Gracefully shuts down the worker nodes in an OpenShift cluster.
- labcli --stop -c : Gracefully shuts down an entire OpenShift cluster.
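For example, a graceful shutdown and later restart of a cluster can be assembled from the stop and start operations above:

```bash
# Gracefully shut down the whole cluster
labcli --stop -c
# ...later, after powering the hosts back on:
labcli --start -m   # control-plane nodes
labcli --start -w   # worker nodes
labcli --start -u   # remove the scheduling cordon from the workers
```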
labcli --trust
The trust subcommand will pull the self-signed cert from a cluster and trust it on your workstation.
labcli --config-infra
The config-infra subcommand is used after installing a cluster and adding worker nodes. It will label the control-plane nodes as infrastructure nodes, and move the ingress, registry, and monitoring workloads to the control plane, leaving your worker nodes for application workloads.
labcli --csr
The csr subcommand is used during the installation of worker nodes to approve the certificate signing requests from the new worker nodes.
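Taken together, a rough end-to-end cluster install assembled from the subcommands above looks like this; the waits and exact order depend on your domain configuration, so treat it as a sketch rather than the canonical procedure:

```bash
# Sketch of a full cluster install for the selected domain
labcli --deploy -c      # render the control-plane deployment artifacts
labcli --start -b       # boot the bootstrap node
labcli --start -m       # boot the control-plane nodes
# ...wait for the bootstrap process to complete, then:
labcli --destroy -b     # remove the bootstrap node
labcli --deploy -w      # render the worker node artifacts
labcli --start -w       # boot the worker nodes
labcli --csr            # approve the workers' certificate signing requests
labcli --config-infra   # shift infra workloads onto the control-plane nodes
```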
OpenShift Cluster Tasks
labcli --user
The user subcommand is used to add htpasswd authenticated users to your OpenShift clusters.
labcli --user has one operation with two optional flags:
To initialize the htpasswd OAuth provider and create a cluster admin user:
labcli --user -i -a -u=<user-name>
This will prompt for a password, create the htpasswd secret, patch the OAuth provider for htpasswd, and then grant the new user the cluster-admin role.
To add additional users:
labcli --user -u=<user-name>
This will prompt for a password and create a user in the cluster.
labcli --user -a -u=<user-name>
This will prompt for a password and create a cluster admin user in the cluster.
labcli --kube
The kube subcommand retrieves the saved kubeadmin credentials to give you break-glass access to the selected domain cluster.
Other Operations
labcli --git-secret -n=<kube namespace>
The git-secret subcommand will create a basic auth secret for your git service account and assign it to the pipeline service account in the designated namespace.
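For example (the namespace name is hypothetical):

```bash
# Create the git basic-auth secret and bind it to the pipeline service account
labcli --git-secret -n=my-app-cicd
```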
Convenience operations for Mac OS users:
labcli --console
The console subcommand launches the Safari web browser with the URL of the selected OpenShift cluster.
labcli --login
The login subcommand will issue oc login against the selected domain cluster.
labcli --dns
The dns subcommand will reset the Mac OS DNS client. This is sometimes necessary to clear the cache.
WIP (These commands work, but need docs)
labcli --post
labcli --post -d