References: https://falco.org/zh/docs/installation/ and https://jishuin.proginn.com/p/763bfbd3012c
Environment: Kubernetes 1.17.3, a three-node cluster simulated with Docker

Running Falco as a Kubernetes DaemonSet

  1. Clone the Falco repository and change into the manifests directory

    git clone https://github.com/falcosecurity/falco/
    # The current master branch no longer contains the integrations directory; switch to the add-context-to-rules-errors branch
    cd falco
    git checkout add-context-to-rules-errors
    cd integrations/k8s-using-daemonset
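
    As an optional sanity check (not part of the original steps), confirm the branch and that the manifests used below are present:

    git branch --show-current    # should print add-context-to-rules-errors (git 2.22+)
    ls k8s-with-rbac/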
    
  2. Create a Kubernetes service account and grant it the necessary RBAC permissions. Falco uses this service account to connect to the Kubernetes API server and fetch resource metadata.

    kubectl apply -f k8s-with-rbac/falco-account.yaml
    

    Running this on GKE reports the following deprecation warnings:
    serviceaccount/falco-account created
    Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
    clusterrole.rbac.authorization.k8s.io/falco-cluster-role created
    Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
    clusterrolebinding.rbac.authorization.k8s.io/falco-cluster-role-binding created
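
    The warnings are harmless on Kubernetes 1.17, but the v1beta1 RBAC API is removed in v1.22. To silence them, the two RBAC objects can be declared against rbac.authorization.k8s.io/v1 instead. A minimal sketch of the change (the object names match the output above, but the rule list here is only an illustrative subset, not the full set from falco-account.yaml):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: falco-cluster-role
    rules:
      # Illustrative subset -- keep the full rule list from the original manifest
      - apiGroups: [""]
        resources: ["pods", "nodes", "namespaces", "events"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: falco-cluster-role-binding
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: ClusterRole
      name: falco-cluster-role
    subjects:
      - kind: ServiceAccount
        name: falco-account
        namespace: default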

  3. Create a Kubernetes Service for the Falco pods

    kubectl apply -f k8s-with-rbac/falco-service.yaml
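
    The Service gives the Falco pods a stable in-cluster endpoint, for example so that Kubernetes audit events can be delivered to Falco's embedded webserver. A hedged sketch of what such a Service looks like (port 8765 is Falco's default webserver port; the name and labels below are assumptions, not copied from falco-service.yaml):

    apiVersion: v1
    kind: Service
    metadata:
      name: falco-service           # assumed name
      labels:
        app: falco-example
        role: security
    spec:
      type: ClusterIP
      selector:
        app: falco-example          # matches the DaemonSet pod labels used below
      ports:
        - protocol: TCP
          port: 8765                # Falco embedded webserver (k8s audit endpoint)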
    
  4. The DaemonSet deployment also relies on a Kubernetes ConfigMap to store the Falco configuration and make it available to the Falco pods:

    mkdir -p k8s-with-rbac/falco-config
    cp ../../falco.yaml k8s-with-rbac/falco-config/
    cp ../../rules/falco_rules.* k8s-with-rbac/falco-config/
    cp ../../rules/k8s_audit_rules.yaml k8s-with-rbac/falco-config/
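
    After these copies, k8s-with-rbac/falco-config/ should contain falco.yaml, k8s_audit_rules.yaml and the falco_rules.* files (falco_rules.yaml plus falco_rules.local.yaml):

    ls k8s-with-rbac/falco-config/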
    
  5. Add any custom rules for your environment to falco_rules.local.yaml; Falco reads them at startup (see the example rule after the command below). Then create the ConfigMap as follows:

    kubectl create configmap falco-config --from-file=k8s-with-rbac/falco-config
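
    For reference, a minimal custom rule that could be dropped into falco_rules.local.yaml before creating the ConfigMap (illustrative only; it reuses the spawned_process and container macros shipped in the stock falco_rules.yaml):

    - rule: Shell spawned in nginx container
      desc: Detect an interactive shell started inside a container running an nginx image
      condition: spawned_process and container and container.image.repository contains "nginx" and proc.name in (bash, sh)
      output: "Shell in nginx container (user=%user.name command=%proc.cmdline container=%container.name)"
      priority: WARNING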
    
  6. Once the ConfigMap is in place, the DaemonSet can be created:

    kubectl apply -f k8s-with-rbac/falco-daemonset-configmap.yaml
    

    falco-daemonset-configmap.yaml needs two changes: change the apiVersion on line 1 to apps/v1, and add the selector block on lines 9-11 (both changes are already applied in the manifest below).

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: falco-daemonset
      labels:
        app: falco-example
        role: security
    spec:
      selector:
        matchLabels:
          app: falco-example
      template:
        metadata:
          labels:
            app: falco-example
            role: security
        spec:
          serviceAccount: falco-account
          containers:
            - name: falco
              image: falcosecurity/falco:latest
              securityContext:
                privileged: true
    # Uncomment the 3 lines below to enable eBPF support for Falco.
    # This allows Falco to run on Google COS.
    # Leave blank for the default probe location, or set to the path
    # of a precompiled probe.
    #          env:
    #          - name: SYSDIG_BPF_PROBE
    #            value: ""
              args: [ "/usr/bin/falco", "--cri", "/host/run/containerd/containerd.sock", "-K", "/var/run/secrets/kubernetes.io/serviceaccount/token", "-k", "https://$(KUBERNETES_SERVICE_HOST)
    ", "-pk"]
              volumeMounts:
                - mountPath: /host/var/run/docker.sock
                  name: docker-socket
                - mountPath: /host/run/containerd/containerd.sock
                  name: containerd-socket
                - mountPath: /host/dev
                  name: dev-fs
                - mountPath: /host/proc
                  name: proc-fs
                  readOnly: true
                - mountPath: /host/boot
                  name: boot-fs
                  readOnly: true
                - mountPath: /host/lib/modules
                  name: lib-modules
                  readOnly: true
                - mountPath: /host/usr
                  name: usr-fs
                  readOnly: true
                - mountPath: /host/etc/
                  name: etc-fs
                  readOnly: true
                - mountPath: /etc/falco
                  name: falco-config
          volumes:
            - name: docker-socket
              hostPath:
                path: /var/run/docker.sock
            - name: containerd-socket
              hostPath:
                path: /run/containerd/containerd.sock
            - name: dev-fs
              hostPath:
                path: /dev
            - name: proc-fs
              hostPath:
                path: /proc
            - name: boot-fs
              hostPath:
                path: /boot
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: usr-fs
              hostPath:
                path: /usr
            - name: etc-fs
              hostPath:
                path: /etc
            - name: falco-config
              configMap:
                name: falco-config
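
    Before checking the logs, it is worth confirming that the DaemonSet has rolled out one pod per node (the names come from the manifest above):

    kubectl rollout status daemonset/falco-daemonset
    kubectl get pods -l app=falco-example -o wide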
    
    
  7. Verify that Falco started correctly:

    kubectl logs -l app=falco-example
    

Testing

  1. Create an nginx pod

    kubectl run --generator=run-pod/v1 nginx --image=nginx
    

    Alternatively, create the nginx pod from a manifest (saved here as nginx-pod.yml):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
    
    kubectl apply -f nginx-pod.yml
    

    Check that the pod is in the Running state:

    kubectl get pod nginx -o wide
    


  2. List the pods and note the name of the Falco DaemonSet pod

    kubectl get pods
    


  3. Open a terminal window and follow that Falco pod's logs (substitute your own pod name):

    kubectl logs -f falco-daemonset-lr5gz
    
  4. In another terminal window, run the following commands and watch the first window for Falco alert output.

    [root@k8s-node1 k8s-using-daemonset]# kubectl exec -it nginx -- bash
    or:  kubectl exec -it nginx -- /bin/bash
    or:  kubectl exec -it nginx -- /bin/sh
    root@nginx:/# cat /etc/shadow
    root@nginx:/# exit
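
    Both actions should match rules shipped in the stock falco_rules.yaml (the shell attach via something like "Terminal shell in container", the /etc/shadow read via one of the sensitive-file rules). A quick way to filter the Falco pod's log for them (same pod name as in step 3):

    kubectl logs falco-daemonset-lr5gz | grep -i -E "shell|sensitive"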
    

    (Screenshot: the commands above being executed)
    (Screenshot: the resulting Falco alert output)
