This guide provides step-by-step instructions for installing the Cequence Sensor in your Kubernetes cluster. The Cequence Sensor for Kubernetes captures network traffic from a Kubernetes cluster and transmits the data to the Cequence Unified API Protection (UAP) platform for analysis. The Sensor captures both external and internal traffic from the cluster, so Cequence discovers both external and internal APIs.
- Cequence Sensor captures traffic to applications and sends the traffic to the Cequence UAP platform.
- The integration is passive and doesn't require application instrumentation.
- You can deploy Cequence Sensor for Kubernetes as a DaemonSet or a sidecar.
- When deployed as a DaemonSet, you can configure a namespace selector to limit Cequence Sensor's activity to specific namespaces.
- When deployed as a DaemonSet, you can configure a pod selector to limit Sensor's activity to specific pods within a namespace.
- Together, these selectors exclude unwanted namespaces and pods, avoiding unnecessary resource consumption and improving the sensor's efficiency and reliability.
Prerequisites
- Kubernetes cluster.
- Helm (version 3.x or later).
- kubectl command-line tool.
- Access credentials for Cequence's GitLab registry. Cequence provides these credentials. Request the credentials from your sales account team or log a Zendesk ticket.
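You can check the tooling prerequisites from the command line before starting. The snippet below is a minimal sketch: it assumes `helm version --short` prints a version string such as `v3.14.0+g...`, and it only verifies the Helm major version and that `kubectl` is on the PATH.

```shell
#!/bin/bash
# Minimal prerequisite check (sketch). Assumes `helm version --short`
# prints a string like "v3.14.0+g...".
check_helm_major() {
  local ver="$1"
  ver="${ver#v}"        # strip the leading "v"
  ver="${ver%%.*}"      # keep only the major version digits
  [ -n "$ver" ] && [ "$ver" -ge 3 ] 2>/dev/null
}

if check_helm_major "$(helm version --short 2>/dev/null)"; then
  echo "Helm 3.x or later: OK"
else
  echo "Helm 3.x or later: missing or too old"
fi

command -v kubectl >/dev/null && echo "kubectl: OK" || echo "kubectl: missing"
```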
Required permissions
Installing the Cequence Sensor in Kubernetes requires get, list, create, update, and delete permissions for the following:
- ServiceAccount
- ClusterRole
- ClusterRoleBinding
- Role
- RoleBinding
- DaemonSet
- ConfigMap
- Service
- Deployment
- MutatingWebhookConfiguration
- Secret
- Namespace
Run the following script to verify the permissions.
#!/bin/bash
# Set the user or ServiceAccount to check
USER_TO_CHECK="system:serviceaccount:kube-system:admin-user"

# Check whether USER_TO_CHECK can perform a verb on a resource.
# An optional third argument names the API group of the resource.
check_permission() {
  local resource=$1
  local verb=$2
  local api_group=$3
  if [ -z "$api_group" ]; then
    kubectl auth can-i "$verb" "$resource" --as="$USER_TO_CHECK" > /dev/null 2>&1
  else
    kubectl auth can-i "$verb" "$resource.$api_group" --as="$USER_TO_CHECK" > /dev/null 2>&1
  fi
  if [ $? -eq 0 ]; then
    echo "  Pass: Can $verb $resource"
  else
    echo "  Fail: Cannot $verb $resource"
  fi
}

# Check core API resources
for resource in serviceaccounts configmaps services secrets namespaces; do
  for verb in get list create update delete; do
    check_permission "$resource" "$verb"
  done
done

# Check RBAC resources
for resource in clusterroles clusterrolebindings roles rolebindings; do
  for verb in get list create update delete; do
    check_permission "$resource" "$verb" rbac.authorization.k8s.io
  done
done

# Check Apps API resources
for resource in daemonsets deployments; do
  for verb in get list create update delete; do
    check_permission "$resource" "$verb" apps
  done
done

# Check AdmissionRegistration API resources
for verb in get list create update delete; do
  check_permission mutatingwebhookconfigurations "$verb" admissionregistration.k8s.io
done
Installation Steps
- Run the following command to add the Cequence Helm chart repository.
helm repo add cequence https://cequence.gitlab.io/helm-charts
helm repo update
- Run the following command to create a dedicated namespace for the Cequence Sensor.
kubectl create ns cq-sensor
- Run the following command to create a secret to access the Cequence images. Replace <registry_username> and <registry_password> with the credentials provided by the Cequence team. When sidecar injection is enabled, also create the secret in each namespace where you plan to deploy the sidecar.
kubectl create secret docker-registry regcred \
  --docker-server="registry.gitlab.com" \
  --docker-username=<registry_username> \
  --docker-password=<registry_password> \
  -n cq-sensor
- Create a file named sensor-overrides.yaml with the following content. Adjust the values according to your environment.
global:
  logLevel: "INFO"
  reportMode: "bridge"
  clientId: "xxxxx"
  clientSecret: "xxxxx"
  uapSubdomain: "cqai.yourdomain.com"
  skipTlsverify: true
sensorDaemonset:
  resources:
    requests:
      memory: "64Mi"
      cpu: "50m"
    limits:
      memory: "512Mi"
      cpu: "400m"
  enabled: true
networkSensor:
  enabled: true
  debugInterval: 5m
  namespaceSelector:
    exclude: []
    include:
      - "^appnamespace1$"
      - "^appnamespace2$"
      - "^appnamespace3$"
  podSelector:
    exclude: []
    include: ["^pod1-test$"]
sensorSidecarInjector:
  enabled: false
- (Optional) Specify namespace and pod selectors. Details are discussed later in this article.
- Install the Cequence Sensor using Helm.
helm upgrade --install cequence-sensor cequence/sensor --version 5.2.0 \
  -f sensor-overrides.yaml \
  -n cq-sensor
- Label the namespace where you want to inject the sensor sidecar (required only when sensorSidecarInjector is enabled). Replace my-test with your application namespace.
kubectl label ns my-test cequence-sensor/enabled=true
Restart any application pods that were running before applying the label.
Namespace and pod selector syntax
The values you specify in namespaceSelector determine which namespaces the DaemonSet monitors or ignores. This gives you granular control over where the sensor operates, ensuring it only monitors designated namespaces and excludes all others.
namespaceSelector:
exclude: []
include:
- "^appnamespace1$"
- "^appnamespace2$"
- "^appnamespace3$"
Namespace selector options
- exclude: A list of namespaces to ignore. The DaemonSet ignores namespaces that match patterns in this list.
- include: A list of namespaces where the sensor is enabled. The sensor only monitors namespaces that match the defined patterns.
Namespace selector pattern matching
The entries in include and exclude support regular expressions. For example, ^appnamespace1$ matches exactly appnamespace1 but nothing else (due to the start ^ and end $ markers).
Any namespace name that doesn’t match the regular expression in include is automatically excluded.
Namespace selector examples
The following definition includes namespaces prefixed with app-.
include:
- "^app-.*$"
The following definition excludes namespaces prefixed with app-.
exclude:
- "^app-.*$"
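You can sanity-check anchored patterns locally before deploying. The sketch below uses `grep -E` as a stand-in matcher; the sensor's own regex engine may differ in details, but anchoring with ^ and $ behaves the same way in standard extended regular expressions.

```shell
#!/bin/bash
# Mimic the include matching with grep -E (sketch; stand-in for the
# sensor's actual regex engine).
matches() { printf '%s\n' "$1" | grep -Eq "$2"; }

matches "appnamespace1"  "^appnamespace1$" && echo "appnamespace1: included"
matches "appnamespace10" "^appnamespace1$" || echo "appnamespace10: excluded (anchors)"
matches "app-payments"   "^app-.*$"        && echo "app-payments: included by prefix"
matches "myapp-payments" "^app-.*$"        || echo "myapp-payments: excluded (no app- prefix)"
```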
The podSelector allows filtering at the pod level within the selected namespaces. It specifies pods to include or exclude from monitoring.
podSelector:
exclude: []
include: ["^pod1-test$"]
Pod selector options
- exclude: A list of pods to ignore. The sensor DaemonSet ignores pods that match patterns in this list.
- include: A list of pods where the sensor is enabled. The sensor only monitors pods that match the defined patterns.
Pod selector pattern matching
The inclusion and exclusion patterns support regular expressions. For example, ^pod1-test$ matches exactly pod1-test but nothing else, due to the starting ^ and ending $ markers.
Any pod name that doesn’t match the regular expression in include is automatically excluded.
Combining namespace and pod selectors
When you specify both selectors, Cequence Sensor only monitors pods that match both of the specified criteria, as in the following example:
namespaceSelector:
include: ["^appnamespace1$", "^appnamespace2$", "^appnamespace3$"]
podSelector:
include: ["^pod1-test$"]
In this configuration, Cequence Sensor monitors pod1-test in appnamespace1, appnamespace2, or appnamespace3, and nothing else.
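The AND semantics of the two selectors can be illustrated with a small stand-in function. This is a sketch, not the sensor's implementation; for brevity, the three namespace patterns from the example are condensed into one equivalent expression.

```shell
#!/bin/bash
# Sketch of combined selector logic: a pod is monitored only when its
# namespace matches namespaceSelector.include AND its name matches
# podSelector.include.
monitored() {
  local ns="$1" pod="$2"
  printf '%s\n' "$ns"  | grep -Eq '^appnamespace[123]$' || return 1
  printf '%s\n' "$pod" | grep -Eq '^pod1-test$'
}

monitored appnamespace1 pod1-test && echo "appnamespace1/pod1-test: monitored"
monitored appnamespace2 other-pod || echo "appnamespace2/other-pod: skipped (pod not included)"
monitored other-ns      pod1-test || echo "other-ns/pod1-test: skipped (namespace not included)"
```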
Verification
To verify the installation, check the status of the Cequence Sensor pods:
kubectl get pods -n cq-sensor
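Beyond listing the pods, you can script a pass/fail check. The helper below is a sketch: it reads NAME and STATUS pairs and fails if any pod is not Running, and the kubectl pipeline assumes the default `kubectl get pods` column layout (status in the third column).

```shell
#!/bin/bash
# Fail if any pod in the piped "NAME STATUS" listing is not Running (sketch).
all_running() { awk 'NF && $2 != "Running" {exit 1}'; }

if ! command -v kubectl >/dev/null; then
  echo "kubectl not found" >&2
elif kubectl get pods -n cq-sensor --no-headers | awk '{print $1, $3}' | all_running; then
  echo "all Cequence Sensor pods are Running"
else
  echo "one or more sensor pods are not Running" >&2
fi
```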
Support
For additional assistance or troubleshooting, please contact Cequence support.