Overview
Istio is an open source service mesh that layers transparently onto existing distributed applications.
Cequence Unified API Protection integrates directly with the Istio Gateway by redirecting application network traffic through the Cequence Defender for detection and mitigation of adverse traffic. Redirection is accomplished in Istio either by configuring an Istio Envoy Filter or establishing Cequence Defender as an alternate Virtual Service within Istio. Application traffic is routed in a 'hairpin' configuration, so that both requests and responses are routed through the Envoy proxy and the Cequence Defender.
The Cequence Defender container is usually deployed to its own pod, but it may also be co-located in the Istio Gateway pod.
See Istio Integration Overview for a general introduction to using Istio with Cequence Unified API Protection. You may also want to reference the Istio installation guides and the Envoy Proxy documentation.
Document Purpose
This document provides detailed, step-by-step deployment instructions for integrating Cequence Defender into an Istio mesh environment for threat detection and mitigation.
Deployment
Cequence Defender can be deployed either to its own Kubernetes pod ("Independent") or into a reconfigured Istio Gateway pod that also hosts the Cequence Defender container ("Shared Pod").
Deploying the Istio Envoy Proxy and Cequence Defender to separate pods allows independent allocation and management of resources, and independent reconfiguration and relaunch. However, this approach requires configuring communication between the two namespaces.
Alternatively, the Cequence Defender can be deployed directly into the Istio Gateway Pod. This allows for easier networking between the Envoy Proxy and Cequence Defender but creates a deployment dependency.
Both deployment methods are detailed below.
Traffic Flow
Both deployment styles result in the same basic traffic flow. Client requests to the Application are routed via the Envoy Proxy in the Istio Gateway Pod to Cequence Defender. Defender processes the request based on Bot Defense detection and mitigation policies and routes the request back to the Envoy Proxy with an injected header. The Envoy Proxy forwards this request to the Application. The Application response then flows back in the reverse direction, and again, is routed through Defender.
The Defender injection is handled via the Istio sidecar injection mechanism. The routing is defined as part of the VirtualService definition and achieved via header routing.
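The routing decision can be modeled simply: a request that lacks the Defender-injected marker header (cq-select in the EnvoyFilter examples later in this guide) is hairpinned through Defender first; a request carrying the header has already been processed by Defender and is forwarded on to the Application. A toy shell sketch of this inverted header match:

```shell
# Toy model of the inverted header match: route to Defender unless the
# cq-select marker header is present in the request headers.
route_for() {
  headers="$1"
  case "$headers" in
    *cq-select*) echo "application" ;;  # Defender already processed it
    *)           echo "defender"    ;;  # first pass: hairpin via Defender
  esac
}

route_for "host: bookinfo.example"              # -> defender
route_for "host: bookinfo.example cq-select: 1" # -> application
```

This is only an illustration of the match logic; the real decision is made by the Envoy route configuration shown later.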
Component Configuration
Component Versions
Integration has been validated with:
Kubernetes
This Kubernetes deployment is built around a simple kubeadm deployment using the following:
- Kubernetes AWS Cloud Provider - the interface between a Kubernetes cluster and AWS service APIs. (https://github.com/kubernetes/cloud-provider-aws)
- Calico/Canal - Kubernetes networking. (https://github.com/projectcalico/canal)
Terraform / Ansible may be used to build this infrastructure.
See: https://github.com/as679/supreme-octo-spork.
Istio Configuration and Deployment
Istio Control Plane
The Istio control plane configuration is based on the Istio minimal profile and uses auto-injection for the Gateways. The minimal profile installs only the control plane, allowing the data plane (gateways) to be deployed and managed separately.
Example control plane deployment:
In this example, a single command invokes curl to download istioctl and pipes the installer script to sh.
$ curl -sL https://istio.io/downloadIstioctl | sh -
Downloading istioctl-1.10.0 from https://github.com/istio/istio/releases/download/1.10.0/istioctl-1.10.0-linux-amd64.tar.gz ...
istioctl-1.10.0-linux-amd64.tar.gz download complete!
Add the istioctl to your path with: export PATH=$PATH:$HOME/.istioctl/bin
Begin the Istio pre-installation check by running: istioctl x precheck
Need more information? Visit https://istio.io/docs/reference/commands/istioctl/
$ export PATH=$PATH:$HOME/.istioctl/bin
$ istioctl install --set profile=minimal --set meshConfig.accessLogFile=/dev/stdout
This will install the Istio 1.10.0 minimal profile with ["Istio core" "Istiod"] components into the cluster. Proceed? (y/N) y
✔ Istio core installed
✔ Istiod installed
✔ Installation complete
Thank you for installing Istio 1.10.
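The same installation options can equivalently be kept in an IstioOperator file and applied with 'istioctl install -f' (a sketch using the standard IstioOperator schema):

```shell
# Equivalent of: istioctl install --set profile=minimal \
#   --set meshConfig.accessLogFile=/dev/stdout
cat <<'EOF' > control-plane.yml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  meshConfig:
    accessLogFile: /dev/stdout
EOF
```

Keeping the options in a file makes the control plane configuration reviewable and repeatable across clusters.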
Independent Istio and Cequence Defender Pod Deployment
Deploying the Istio Gateway and Cequence Defender containers to separate pods is the recommended approach.
Istio Data Plane Deployment (Independent Pods)
The Istio Data Plane is deployed as per the Istio documentation for installing gateways.
$ kubectl create namespace istio-ingress
namespace/istio-ingress created
$ kubectl apply -f https://raw.githubusercontent.com/as679/supreme-octo-spork/main/istio/istio-ingress-base.yml
service/istio-ingressgateway created
deployment.apps/istio-ingressgateway created
role.rbac.authorization.k8s.io/istio-ingressgateway-sds created
rolebinding.rbac.authorization.k8s.io/istio-ingressgateway-sds created
(Optional) Confirm Istio Deployment with Sample Application
It's helpful to test and confirm deployment with a sample application. Deploy the sample application as described in Sample Application Deployment and return here.
Verify connectivity by sending BookInfo a query and observing a successful response.
$ kubectl get svc -n istio-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.103.179.162 {id}.eu-west-1.elb.amazonaws.com 80:30820/TCP,443:31732/TCP 13m
$ curl -s -w "%{http_code}\n" -o /dev/null {id}.eu-west-1.elb.amazonaws.com/productpage
200
Service Discovery
When deployed to separate pods, Cequence Defender and the Istio Gateway communicate through the istio-ingressgateway Kubernetes Service.
From within the cluster, the gateway is reachable via its Kubernetes DNS service discovery name (service.namespace.svc.cluster.local), as shown in the example below.
$ kubectl get svc -n istio-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.103.179.162 {id}.{region}.elb.amazonaws.com 80:30820/TCP,443:31732/TCP 25m
$ kubectl run -i --tty --rm debug --image=busybox --restart=Never -- sh
If you don't see a command prompt, try pressing enter.
/ # nc istio-ingressgateway.istio-ingress.svc.cluster.local 80
GET /productpage HTTP/1.1
Host: {id}.eu-west-1.elb.amazonaws.com
HTTP/1.1 200 OK
content-type: text/html; charset=utf-8
content-length: 4183
server: istio-envoy
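Cluster-internal service addresses follow the standard Kubernetes DNS naming pattern, service.namespace.svc.cluster.local. A small illustrative helper makes the convention explicit:

```shell
# Build the cluster-local DNS name for a Kubernetes Service.
svc_fqdn() {
  service="$1"; namespace="$2"
  echo "${service}.${namespace}.svc.cluster.local"
}

svc_fqdn istio-ingressgateway istio-ingress
# -> istio-ingressgateway.istio-ingress.svc.cluster.local
```

This is the name format used throughout the Defender upstream configuration below.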
Cequence Defender Deployment
The first three steps below follow the standard Cequence Defender deployment. Step 4 configures the Defender upstream servers to point at the istio-ingressgateway. Step 7 configures the Envoy Filter to route traffic to Cequence Defender.
- Add the Cequence chart to the local Helm repository.
$ helm repo add cequence https://cequence.gitlab.io/helm-charts/
"cequence" has been added to your repositories
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "cequence" chart repository
...Successfully got an update from the "prometheus-community" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm search repo defender
NAME CHART VERSION APP VERSION DESCRIPTION
cequence/defender 2.3.0 2.3.0 A Helm chart for Kubernetes
- Create a Cequence namespace.
$ kubectl create namespace cequence
namespace/cequence created
- Create and configure Secret credentials.
$ kubectl create secret docker-registry cequence \
--docker-server=registry.gitlab.com \
--docker-username=<registry username> \
--docker-password=<registry password> \
--namespace istio-ingress
secret/cequence created
- Create and configure the Override values file. Set the upstream servers to the
istio-ingressgateway as in the example below.
imagePullSecrets:
- name: istio-regcred
config:
upstream:
config:
static:
http:
server: istio-ingressgateway.istio-ingress.svc.cluster.local
https:
server: istio-ingressgateway.istio-ingress.svc.cluster.local
- Install Cequence Defender to the Cequence namespace.
$ helm install defender cequence/defender --namespace cequence --values defender-values.yml
LAST DEPLOYED: Fri Jun 25 11:31:14 2021
NAMESPACE: cequence
STATUS: deployed
REVISION: 1
- Verify that the Istio Envoy has registered the Cequence Defender as a destination.
$ istioctl proxy-config cluster istio-ingressgateway-788854c955-fxs4h \
-n istio-ingress | grep defender
defender-prometheus.cequence.svc.cluster.local 9122 - outbound EDS
defender-prometheus.cequence.svc.cluster.local 9145 - outbound EDS
defender.cequence.svc.cluster.local 80 - outbound EDS
defender.cequence.svc.cluster.local
- Apply the EnvoyFilter by replacing the loopback route with the defender service.
Create a gateway.yml configuration file similar to the one below, and then apply it.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: defender-tweaks
namespace: istio-ingress
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: HTTP_ROUTE
match:
context: GATEWAY
patch:
operation: INSERT_FIRST
value:
name: defender-route
match:
prefix: "/"
headers:
- name: cq-select
invert_match: true
route:
cluster: outbound|80||defender.cequence.svc.cluster.local
$ kubectl apply -f gateway.yml
envoyfilter.networking.istio.io/defender-tweaks created
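The Override values file shown above can be written out with a shell heredoc, for example (contents mirror the example above; adjust the secret name and server addresses for your cluster):

```shell
# Override values for the independent-pod deployment. Secret and server
# names follow the examples in this guide; adjust them for your cluster.
cat <<'EOF' > defender-values.yml
imagePullSecrets:
  - name: istio-regcred
config:
  upstream:
    config:
      static:
        http:
          server: istio-ingressgateway.istio-ingress.svc.cluster.local
        https:
          server: istio-ingressgateway.istio-ingress.svc.cluster.local
EOF
```

The resulting file is the one passed to helm install with --values defender-values.yml.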
(Optional) Re-test using Sample Application
Deploy the sample application as described in Sample Application Deployment, and return here.
Verify connectivity by sending BookInfo a query and observing a successful response.
$ kubectl get svc -n istio-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway LoadBalancer 10.103.179.162 {id}.eu-west-1.elb.amazonaws.com 80:30820/TCP,443:31732/TCP 13m
$ curl -s -w "%{http_code}\n" -o /dev/null {id}.eu-west-1.elb.amazonaws.com/productpage
200
_____________________________________________
Deployment with Cequence Defender Co-located in the Istio Gateway Pod
Istio Data Plane Deployment (Shared Pod)
The Istio Gateway deployment configuration will be customized to include the Cequence Defender container. Because these two containers share the same pod they can communicate without using an external IP.
- Create an istio-ingress namespace, as below:
$ kubectl create namespace istio-ingress
namespace/istio-ingress created
- Create a Kubernetes secret docker-registry.
$ kubectl create secret docker-registry istio-regcred \
--docker-server=registry.gitlab.com \
--docker-username=<registry username> \
--docker-password=<registry password> \
--namespace istio-ingress
secret/istio-regcred created
- Create a Kubernetes configmap. (Note that configmap creation allows the container to start, but production deployment will require 'trust material'. Consult Cequence Customer Success for that content.)
$ kubectl create configmap defender-kafka-certs-cm-0 \
--namespace istio-ingress
configmap/defender-kafka-certs-cm-0 created
- Download this istio ingress configuration file: istio-ingress.yml and modify it for your environment.
The configuration elements to note are the upstream servers and ports. These are in relation to the pod itself. Both the istio-proxy and the defender containers share the same loopback interface. The Defender should reference the loopback address for the upstream servers and the internal istio-proxy ports for the http and https schemes.
- Apply the modified istio-ingress.yml:
$ kubectl apply -f istio-ingress.yml
service/ingressgateway created
deployment.apps/ingressgateway created
role.rbac.authorization.k8s.io/istio-ingressgateway-sds created
rolebinding.rbac.authorization.k8s.io/istio-ingressgateway-sds created
- Configure Cequence Defender as the first destination. There are two methods for this: (a) VirtualService configuration, and (b) using an Envoy Filter to patch the Istio Gateway.
(a) VirtualService: Download the Istio Service Entry configuration file: serviceentry.yml and modify it for your environment. Specify the Cequence Defender address as the destination. Consult the Istio Virtual Service documentation.
Notes:
(i) destination name 'localhost' is allowed and will be expanded automatically to:
'localhost.{namespace}.svc.cluster.local';
(ii) IP address '127.0.0.1' is disallowed and will fail validation.
(b) Envoy Filter: This method works by inserting a header routing rule into the underlying Envoy. This rule takes all requests that do not carry the Cequence Defender injected header and routes them via the Defender. See the Istio Envoy Filter documentation for more information.
Sample Envoy Filter configuration file 'envoyfilter.yml':
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
name: defender-tweaks
namespace: istio-ingress
spec:
workloadSelector:
labels:
istio: ingressgateway
configPatches:
- applyTo: HTTP_ROUTE
match:
context: GATEWAY
patch:
operation: INSERT_FIRST
value:
name: defender-route
match:
prefix: "/"
headers:
- name: cq-select
invert_match: true
route:
cluster: outbound|8080||localhost.default.svc.cluster.local
Applying the Envoy Filter configuration:
$ kubectl apply -f envoyfilter.yml
envoyfilter.networking.istio.io/defender-tweaks created
Verify
The inverted header match is the first entry in the virtual_hosts routes list in the proxy-config response below:
$ istioctl proxy-config all ingressgateway-6597cbf476-df9s9 -n istio-ingress -o json | \
grep -C12 cq-select
"virtual_hosts": [
{
"name": "*:80",
"domains": ["*"],
"routes": [
{
"match": {"prefix": "/", "headers": [{ "name": "cq-select","invert_match": true
}]
},
"route":
{"cluster": "outbound|8080||localhost.default.svc.cluster.local"
},
"name": "defender-route"
},
{ "match":
{ "path": "/productpage",
- Apply the modified serviceentry.yml:
$ kubectl apply -f serviceentry.yml
serviceentry.networking.istio.io/localhost created
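For the shared-pod layout, the Defender upstream settings described above point at the loopback interface and the istio-proxy's internal listener ports. A values fragment can be sketched as follows (the loopback address follows the note above, but the 8080/8443 port numbers are assumptions; substitute the actual internal istio-proxy ports for your deployment):

```shell
# Shared-pod upstream sketch: Defender and istio-proxy share the pod's
# loopback interface, so upstream servers reference 127.0.0.1.
# The 8080/8443 ports are ASSUMED values; verify against your istio-proxy.
cat <<'EOF' > defender-shared-pod-values.yml
config:
  upstream:
    config:
      static:
        http:
          server: 127.0.0.1:8080
        https:
          server: 127.0.0.1:8443
EOF
```

Compare this with the independent-pod values file, which instead references the istio-ingressgateway service DNS name.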
Sample Application - for Testing
The Istio sample BookInfo Application provides a good basis for confirming installation and deployment both with and without Cequence Defender integration. BookInfo is composed of four micro-services: productpage, details, reviews, and ratings.
- Deploy BookInfo:
$ kubectl apply \
-f https://raw.githubusercontent.com/istio/istio/release-1.10/samples/bookinfo/platform/kube/bookinfo.yaml
- Deploy BookInfo ingress:
$ kubectl apply \
-f https://raw.githubusercontent.com/istio/istio/release-1.10/samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
- (Optional) Reconfigure the virtual service.
Download and modify bookinfo-virtualservice.yml. This configuration forwards incoming client requests that do not have the CQ-Select header towards the Defender, and routes incoming Defender request traffic to the productpage destination service.
If you review the VirtualService prior to the reconfiguration, you will see that we are adding two main components:
a) the header matches against the original URI, and
b) the default destination of the Cequence Defender.
- Apply the new virtual service configuration:
$ kubectl apply -f bookinfo-virtualservice.yml
virtualservice.networking.istio.io/bookinfo configured
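The shape of the reconfigured VirtualService can be sketched as follows. This is a hypothetical sketch, not the downloadable bookinfo-virtualservice.yml itself: the CQ-Select presence match (regex ".*" matches any header value), the BookInfo productpage port 9080, and the defender service address are illustrative values to adapt:

```shell
# SKETCH of the reconfigured BookInfo VirtualService (illustrative values;
# the downloaded bookinfo-virtualservice.yml remains the reference).
cat <<'EOF' > bookinfo-virtualservice.yml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:                 # requests already marked by Defender
    - uri:
        exact: /productpage
      headers:
        CQ-Select:
          regex: ".*"      # header present, any value
    route:
    - destination:
        host: productpage
        port:
          number: 9080
  - route:                 # default: everything else goes to Defender first
    - destination:
        host: defender.cequence.svc.cluster.local
        port:
          number: 80
EOF
```

The two http entries correspond to components (a) and (b) described above: the header-plus-URI match, and the Defender default destination.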
Test Using BookInfo Application
Obtain the Bookinfo Application Istio gateway ingress address:
$ kubectl get service --namespace istio-ingress
Sample response:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
ingressgateway LoadBalancer 10.104.203.135 {example-id}.{aws-region}.elb.amazonaws.com 80:30848/TCP,443:30905/TCP
(where {example-id} and {aws-region} are specific to your organization)
Send queries
HTTP: Send a client request to the bookinfo productpage using http (port 80) ingress:
$ curl -s -w "%{http_code}\n" -o /dev/null \
http://{example-id}.{aws-zone}.elb.amazonaws.com/productpage
200
Test Using Secure Ingress Configuration
You can also test the secure ingress via port 443. The gateway uses SNI to identify the correct certificate, so the curl command differs. Use the Linux 'dig' (Domain Information Groper) command to obtain the secure ingress addresses:
$ dig {example-id}.{aws-zone}.elb.amazonaws.com +short
{x}.{z}.{y}.35
{x}.{z}.{y}.161
{x}.{z}.{y}.88
$ curl -s -w "%{http_code}\n" -o /dev/null https://bookinfo.example/productpage \
--resolve bookinfo.example:443:{x}.{z}.{y}.35 -k
200
(where {x}.{z}.{y} is the subnet for your AWS VPC)
Review Defender Logs
Get the name of the istio-ingress pod:
$ kubectl get pod --namespace istio-ingress
NAME READY STATUS RESTARTS AGE
ingressgateway-abcdefghij-rl67p 2/2 Running 0 23h
And using the result name, get the log:
$ kubectl logs ingressgateway-abcdefghij-rl67p --namespace istio-ingress \
--container defender | grep productpage
[ingressgateway-abcdefghij-rl67p][nginx-access] [04/Jun/2021:14:36:24 +0000]
127.0.0.1 36948 /productpage /productpage 200 5179 "curl/7.52.1" 1.260 "0.000"
"1.260" "1.260" "200" "346193" "1" "127.0.0.1:80"
"{example-id}.{aws-zone}.elb.amazonaws.com" 10.1.12.176 3066039861
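Individual fields can be pulled from the access-log line with standard shell tools, for example (using a shortened copy of the sample line above; the field position assumes the whitespace layout shown):

```shell
# Shortened sample Defender access-log line from the transcript above.
line='[ingressgateway-abcdefghij-rl67p][nginx-access] [04/Jun/2021:14:36:24 +0000] 127.0.0.1 36948 /productpage /productpage 200 5179 "curl/7.52.1"'

# With this layout, whitespace-split field 8 is the HTTP status code.
echo "$line" | awk '{print $8}'
# prints 200
```

Filtering by status code this way is a quick check that Defender is passing traffic through successfully.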
Advanced Topics
Configure a Secure Gateway
See Istio Secure Gateways documentation.
1. Create a Key and Self Signed Certificate
$ openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 -out certificate.pem
2. Store them in a Kubernetes secret:
$ kubectl create -n istio-ingress secret tls bookinfo-credential --key=key.pem --cert=certificate.pem
secret/bookinfo-credential created
3. Revise the gateway configuration to add HTTPS and tls:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: bookinfo-gateway
namespace: default
spec:
selector:
istio: ingressgateway
servers:
- hosts:
- bookinfo.example
port:
name: http
number: 80
protocol: HTTP
- hosts:
- bookinfo.example
port:
name: https
number: 443
protocol: HTTPS
tls:
credentialName: bookinfo-credential
mode: SIMPLE
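When scripting step 1, the certificate subject can be supplied non-interactively and then verified before creating the secret (a sketch; the CN should match your gateway host):

```shell
# Non-interactive variant of step 1: supply the subject on the command
# line so openssl does not prompt. CN matches the bookinfo.example host
# used in the Gateway configuration above.
openssl req -newkey rsa:2048 -nodes -keyout key.pem -x509 -days 365 \
  -out certificate.pem -subj "/CN=bookinfo.example"

# Confirm the subject before loading the pair into the Kubernetes secret.
openssl x509 -in certificate.pem -noout -subject
```

With SNI-based selection, the gateway serves this certificate only when the client requests the matching host name.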
True Client IP
The default implementation of a Service with the AWS Cloud Controller is a Classic ELB. The Classic ELB does not forward the original Client IP address, which causes both Istio and Cequence to see the private address of the ELB as the Client IP address. The AWS NLB corrects this: Envoy then correctly populates the X-Forwarded-For header with the client source IP address.
1. Change the external load balancer type
To convert the AWS Classic ELB associated with the Ingress Service to an NLB, add the correct annotation to the Service associated with the Ingress Gateway.
service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
2. Update the external traffic policy
Reconfigure the External Traffic Policy associated with the same service. See: Create an External Load Balancer
spec.externalTrafficPolicy = Local
3. Reconfigure the Istio Deployment
By default, enabling the Local externalTrafficPolicy on the ingress service causes traffic to be forwarded only to Pods on the node that received it, rather than to any Pod in the cluster.
To change the default behavior and allow all target nodes to service the traffic, replace the current istio-ingressgateway Deployment with a DaemonSet. See the Istio Ingress Gateway documentation for more information.
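Steps 1 and 2 above can be expressed together in the Service manifest (an illustrative sketch, not the full istio-ingressgateway Service definition):

```shell
# Sketch combining the NLB annotation (step 1) and the Local external
# traffic policy (step 2). Metadata names are illustrative; merge these
# fields into your actual ingress gateway Service.
cat <<'EOF' > ingress-service-patch.yml
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-ingress
  annotations:
    # Use an AWS NLB so the original client IP is preserved
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  # Keep traffic on the node that received it, preserving the source IP
  externalTrafficPolicy: Local
EOF
```

Applying both settings together avoids an intermediate state where the NLB is in place but the source IP is still rewritten.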
Debugging
See Debugging Envoy and Istiod to learn how to diagnose traffic management and configuration problems.
Attachments
- Istio Ingress
- Service Entry
- Bookinfo Virtual Service
Version History
Date | Version | Notes
Aug 19, 2022 | 2 | Updated with shared pod deployment.
Mar 29, 2022 | 1 | Initial version.