Deploying CrowdStrike Falcon on Kubernetes
Last year our security team mandated CrowdStrike Falcon coverage across all compute — including Kubernetes. My job was to get the sensor deployed and verified across dev, staging, and production GKE clusters without disrupting running workloads. Here's what that actually looked like.
What the Falcon Sensor Does on Kubernetes
The Falcon sensor operates at the node level, not the pod level. It hooks into the Linux kernel to monitor process execution, system calls, file operations, and network connections. On Kubernetes, this means it has visibility into everything running on a node — all containers, all pods — from a single sensor process.
The threat detection happens in the sensor itself and in the Falcon cloud backend. The sensor streams telemetry to CrowdStrike's cloud; detection logic runs there and alerts come back in near real-time in the Falcon console. From an operator perspective, your job is just ensuring the sensor is running on every node and stays there.
DaemonSet vs Sidecar
CrowdStrike supports two deployment patterns: DaemonSet and sidecar.
The DaemonSet approach runs one Falcon pod per node. This is the right approach for production. It gives you kernel-level visibility regardless of what's running on the node, and it means one sensor to manage per node instead of one per pod.
The sidecar approach injects a container into each pod. It's used in specific constrained environments (some managed Kubernetes services where you can't run privileged node-level containers), but it has significantly reduced visibility — it can only observe the containers in its pod. If your nodes support DaemonSet with host access, use DaemonSet.
The Falcon Operator
CrowdStrike provides a Kubernetes operator that manages the DaemonSet lifecycle, handles sensor image updates, and gives you a CRD-based interface for configuration. Install it via Helm:
```shell
helm repo add crowdstrike https://crowdstrike.github.io/falcon-helm
helm repo update
helm install falcon-operator crowdstrike/falcon-operator \
  --namespace falcon-operator \
  --create-namespace
```
The operator introduces the `FalconNodeSensor` CRD. You create one of these resources, and the operator takes care of deploying and maintaining the DaemonSet.
Configuring the FalconNodeSensor CRD
A minimal FalconNodeSensor manifest:
```yaml
apiVersion: falcon.crowdstrike.com/v1alpha1
kind: FalconNodeSensor
metadata:
  name: falcon-node-sensor
  namespace: falcon-system
spec:
  falcon:
    cid: "YOUR_CUSTOMER_ID_HERE"
  falcon_api:
    client_id: "YOUR_API_CLIENT_ID"
    client_secret: "YOUR_API_CLIENT_SECRET"
    cloud_region: "us-1"
  node:
    image_override: ""
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
```
The `cid` is your CrowdStrike Customer ID — find it in the Falcon console under Sensor Downloads. The `falcon_api` credentials are for the operator to pull the sensor image from CrowdStrike's registry. You'll create an API client in the Falcon console with the Falcon Images Download: Read scope.
In practice I stored `client_id` and `client_secret` as a Kubernetes Secret and referenced it via `secretRef` rather than putting them inline — the CRD supports this.
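A minimal sketch of what that Secret might look like — note that the key names below are our own convention, and the exact keys the CRD expects should be checked against the operator's documentation for your version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: falcon-api-creds
  namespace: falcon-system
type: Opaque
stringData:
  client-id: "YOUR_API_CLIENT_ID"
  client-secret: "YOUR_API_CLIENT_SECRET"
```

This keeps the credentials out of the FalconNodeSensor manifest itself, so the manifest can live in version control without secrets in it.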
GKE-Specific Considerations: COS vs Ubuntu, eBPF vs Kernel Module
This is the part that bit us. GKE node pools run Container-Optimized OS (COS) by default. COS is a locked-down OS — you cannot load arbitrary kernel modules into it. The Falcon sensor normally uses a kernel module for its deepest hooks.
On COS, the Falcon sensor falls back to eBPF mode. The eBPF sensor has somewhat reduced visibility compared to the kernel module version, but it still covers process execution, network connections, and most file operations. For our threat model it was sufficient.
On Ubuntu node pools, the full kernel module sensor runs without restriction.
If you're on COS and want full kernel module coverage, you'd need to switch your node pools to Ubuntu. We stayed on COS and documented the eBPF limitation for our security team.
The operator detects the OS type and deploys the appropriate sensor variant automatically — you don't configure this explicitly, but it's worth knowing which mode you're in. Check the Falcon console after deployment: each sensor reports its version and whether it's using eBPF or the kernel module.
Rolling Out Across Environments
We followed the standard rollout path: dev first, then staging, then production. For each cluster:
- Install the Falcon Operator
- Create a `FalconNodeSensor` resource
- Watch the DaemonSet come up
- Verify sensor count in the Falcon console matches node count
- Monitor for 48 hours before promoting to the next environment
The `maxUnavailable: 1` in the rolling update strategy means the operator will update sensors one node at a time during version updates. For a DaemonSet that's essentially just a restart of the pod on each node; it's fast and doesn't affect application workloads.
One thing I didn't expect: in dev we had some nodes that had been around long enough to accumulate state that caused the sensor pod to enter CrashLoopBackOff. The fix was draining and deleting those nodes (normal GKE node rotation) — the sensor came up cleanly on fresh nodes. This wasn't a problem in staging or prod where we had more recent nodes.
Verifying Deployment
Basic verification:
```shell
# Check all sensor pods are running
kubectl get pods -n falcon-system -o wide

# Check the DaemonSet desired vs ready
kubectl get daemonset -n falcon-system

# Count nodes vs running sensors
kubectl get nodes --no-headers | wc -l
kubectl get pods -n falcon-system --no-headers | grep Running | wc -l
```
These numbers should match. If some nodes aren't running the sensor, the DaemonSet will show a DESIRED vs READY mismatch.
The more important check is in the Falcon console itself. Under Sensor Management, you should see 100% coverage for each cluster. The console shows sensor version, OS type, and last check-in time for each sensor. We bookmarked this view and checked it as part of our daily ops routine.
Monitoring Sensor Health
We set up an alert in our monitoring stack: if Falcon console sensor coverage for any cluster drops below 100%, page the on-call.
For this we used CrowdStrike's API to export sensor counts and feed them into Prometheus:
```python
import requests

def get_sensor_coverage(api_base, token, cluster_name):
    """Return the number of healthy Falcon sensors in a cluster's device group."""
    resp = requests.get(
        f"{api_base}/devices/queries/devices/v1",
        headers={"Authorization": f"Bearer {token}"},
        params={"filter": f"groups:'{cluster_name}'+status:'normal'"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["meta"]["pagination"]["total"]
```
We compared the sensor count returned by the API against the node count from the GKE API. Any discrepancy fired an alert.
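The comparison itself reduces to a pure function, which is easy to unit-test independently of the API plumbing. This is our own illustration of the alert condition, not code from the original pipeline:

```python
def coverage_gap(node_count: int, sensor_count: int) -> int:
    """Number of nodes missing a Falcon sensor; positive means page the on-call.

    A sensor count above the node count usually means stale device records
    in the Falcon console (e.g. recently rotated nodes), not extra coverage,
    so it is clamped to zero rather than reported as negative.
    """
    return max(node_count - sensor_count, 0)
```

Firing the page only after the gap persists for two consecutive scrape intervals filters out transient mismatches during normal node rotation.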
Privilege Requirements
The Falcon DaemonSet needs privileges that will make your security team ask questions. It requires:
- `privileged: true` in the security context
- `hostPID: true` — access to the host process namespace
- `hostNetwork: true` — in some configurations
- Host path volume mounts for `/dev`, `/proc`, `/sys`
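Concretely, the security-sensitive portion of the generated pod spec looks roughly like this — an illustrative sketch of the shape, not the exact manifest the operator renders:

```yaml
# Illustrative only; the operator generates the real DaemonSet manifest
spec:
  hostPID: true
  hostNetwork: true          # only in some configurations
  containers:
    - name: falcon-node-sensor
      securityContext:
        privileged: true
      volumeMounts:
        - { name: dev, mountPath: /dev }
        - { name: proc, mountPath: /proc }
        - { name: sys, mountPath: /sys }
  volumes:
    - { name: dev, hostPath: { path: /dev } }
    - { name: proc, hostPath: { path: /proc } }
    - { name: sys, hostPath: { path: /sys } }
```

This is exactly the shape a cluster admission policy (e.g. one blocking privileged pods) will reject by default, so plan the policy exemption for the `falcon-system` namespace alongside the security sign-off.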
This is not optional — it's how the sensor achieves kernel-level visibility. When our security team reviewed the manifest, the reaction was "this looks like exactly what a container escape payload would request." True. The answer is: this is a security tool, it needs these privileges to do its job, and it's signed software from CrowdStrike. Document it, get explicit sign-off, and move on.
We added a comment block at the top of our FalconNodeSensor resource in Terraform linking to the security team's approval ticket.
End result: all three clusters at 100% sensor coverage, with no production incidents during rollout. The main lesson was to sort out COS/eBPF versus Ubuntu/kernel-module early — know which you're running before you start, so you can set expectations with the security team about coverage depth before sensors are deployed and they're looking at coverage reports.