# Kubernetes integration
Faasm runs on a K8s cluster. Faasm's K8s configuration can be found in the `k8s` directory, and the relevant parts of the deployment in the corresponding `faasmctl` script.

Faasm assumes a K8s cluster is set up with `kubectl` available on the command line.
If you don’t have a cluster set up, see the instructions at the bottom of this page.
## Deploying Faasm to K8s
To deploy Faasm to your K8s cluster, you must specify the number of workers:

```bash
faasmctl deploy.k8s --workers=4
```
As a rule of thumb, the number of workers should not exceed the number of nodes in the underlying node pool. This leaves enough space for the scheduler to place one Faasm worker on each K8s node, while still respecting resource constraints.
If Faasm workers end up colocated on the same node, it means there is not enough spare capacity on the remaining nodes, so you may need to reduce the number of workers.
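To check how the workers have been placed across nodes, you can list the pods with their node assignments (this assumes the deployment lives in the `faasm` namespace, as in the troubleshooting commands below):

```bash
# The -o wide output includes the NODE column, showing pod placement
kubectl -n faasm get pods -o wide
```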
If you want to scale the number of workers up or down, you can use standard `kubectl` commands.
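For example, assuming the worker deployment is named `faasm-worker` in the `faasm` namespace (an assumption based on the labels used in the troubleshooting commands below), scaling might look like:

```bash
# Hypothetical scale operation; the deployment name is an assumption
kubectl -n faasm scale deployment/faasm-worker --replicas=2

# Confirm the new replica count
kubectl -n faasm get pods
```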
## Faasm config file
Once everything has started up, `faasmctl` should also generate a config file, `faasm.ini`, at the root of this project. This contains the relevant endpoints for accessing the Faasm deployment.
It is very important that you save the path to this INI file in an environment variable:

```bash
export FAASM_INI_FILE=./faasm.ini
```
## Uploading and invoking functions
Once you have configured your `faasm.ini` file, you can use the Faasm, CPP, and Python CLIs as usual.
## Troubleshooting
First off, check that everything is running. List all pods in all namespaces and check for any whose status is not `Running`:

```bash
kubectl get pods --all-namespaces
```
If you find a container with something wrong, or just want more detail, you can look at the logs. E.g. for a Faasm worker:
```bash
# Find the faasm-worker-xxx pod
kubectl -n faasm get pods

# Tail the logs for a specific pod
kubectl -n faasm logs -f faasm-worker-xxx user-container

# Tail the logs for all containers in the deployment
# You only need to specify --max-log-requests if you have more than 5
kubectl -n faasm logs -f -c user-container -l run=faasm-worker --max-log-requests=<N_CONTAINERS>

# Get all logs from the given deployment (add a very large --tail)
kubectl -n faasm logs --tail=100000 -c user-container -l run=faasm-worker --max-log-requests=10 > /tmp/out.log
```
We also highly encourage using `k9s` to manage the cluster and troubleshoot the Faasm deployment.
## Isolation and privileges
Faasm uses namespaces and cgroups to achieve isolation, so containers running Faasm workers need privileges that they don't otherwise have. The current solution is to run the containers in privileged mode. This may not be available on certain clusters, in which case you'll need to set the following environment variables:

```bash
CGROUP_MODE=off
NETNS_MODE=off
```
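For reference, privileged mode in a K8s pod spec looks like the fragment below. This is an illustrative sketch only, not Faasm's actual manifest (which lives in the `k8s` directory); the container name and image are placeholders:

```yaml
# Illustrative fragment: granting a container privileged mode
containers:
  - name: worker            # placeholder name
    image: example/worker   # placeholder image
    securityContext:
      privileged: true
```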
## K8s cluster set-up
### Google Kubernetes Engine
To set up Faasm on GKE you can do the following:

- Set up an account and the Cloud SDK (Ubuntu quick start).
- Create a Kubernetes cluster without Istio enabled. Aim for >=4 nodes with more than one vCPU.
- Set up your local `kubectl` to connect to your cluster (click the "Connect" button in the web interface).
- Check things are working by running `kubectl get nodes`.
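As a rough sketch, the cluster creation and connection steps might look like the following; the cluster name and machine type are placeholders, not values from the Faasm docs:

```bash
# Create a 4-node cluster; name and machine type are placeholders
gcloud container clusters create faasm-cluster \
    --num-nodes=4 --machine-type=e2-standard-2

# Point the local kubectl at the new cluster
gcloud container clusters get-credentials faasm-cluster

# Sanity check
kubectl get nodes
```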
### Azure Kubernetes Service
To set up Faasm on AKS, you can do the following:

- Set up an account and install the `az` client.
- Create a resource group with sufficient quota.
- Create an AKS cluster under said resource group. Aim for >=4 nodes with more than one vCPU.
- Set up the local `kubectl` via the `az aks get-credentials` command.
- Continue with Faasm installation as described above.
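A minimal sketch of these steps is below; the resource group name, cluster name, and location are placeholders:

```bash
# Create a resource group (names and location are placeholders)
az group create --name faasm-rg --location eastus

# Create a 4-node AKS cluster under that resource group
az aks create --resource-group faasm-rg --name faasm-cluster --node-count 4

# Point the local kubectl at the new cluster
az aks get-credentials --resource-group faasm-rg --name faasm-cluster

# Sanity check
kubectl get nodes
```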
### MicroK8s
Install according to the official docs. Then enable the RBAC, DNS and load balancer plugins with:

```bash
microk8s.enable dns rbac metallb:192.168.1.240/24
```
Note that you may have to set up different permissions to run without `sudo` (although this is not necessarily required).

Make sure you have set up `kubectl` as per the docs.
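If you want to use a standalone `kubectl` rather than the bundled `microk8s kubectl`, one common approach from the MicroK8s docs is to export the cluster config. Be aware this overwrites any existing kubeconfig:

```bash
# Export the MicroK8s kubeconfig for use by a standalone kubectl
# WARNING: this overwrites an existing ~/.kube/config
mkdir -p ~/.kube
microk8s config > ~/.kube/config

# Sanity check
kubectl get nodes
```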
## Redis bare-metal set-up
There are a couple of tweaks required to handle running Redis, as detailed in the Redis admin docs.

First you can turn off transparent huge pages (add `transparent_hugepage=never` to `GRUB_CMDLINE_LINUX` in `/etc/default/grub` and run `sudo update-grub`).

Then, if testing under very high throughput, you can set the following in `/etc/sysctl.conf`:
```ini
# Connection-related
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 15
net.core.somaxconn = 65535

# Memory-related
vm.overcommit_memory = 1
```

Note that `net.ipv4.tcp_tw_recycle` was removed in Linux 4.12, so on modern kernels that line can be dropped.
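To apply the settings without a reboot, something like the following should work; note the transparent huge pages change made this way only lasts until the next boot:

```bash
# Reload sysctl settings from /etc/sysctl.conf (requires root)
sudo sysctl -p /etc/sysctl.conf

# Disable transparent huge pages immediately (lost on reboot)
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```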