一: iLogtail

    iLogtail is an observability data collection agent developed by the Alibaba Cloud Log Service (SLS) team. It offers many production-grade features, such as a lightweight footprint, high performance, and automated configuration, and can be deployed on physical machines, virtual machines, Kubernetes, and other environments to collect telemetry data. On Alibaba Cloud, iLogtail handles observability collection for the hosts and containers of tens of thousands of customers; it is also the default collector for logs, metrics, traces, and other observability data across Alibaba Group's core product lines, including Taobao, Tmall, Alipay, Cainiao, and Amap. iLogtail now has tens of millions of installations and collects tens of petabytes of observability data per day, serving scenarios such as online monitoring, troubleshooting and root-cause analysis, operational analysis, and security analysis; its performance and stability have been proven in production.

     https://ilogtail.gitbook.io/ilogtail-docs/plugins/flusher/flusher-kafka_v2

https://github.com/alibaba/ilogtail/blob/main/k8s_templates/ilogtail-daemonset-file-to-kafka.yaml 

二: iLogtail vs. Filebeat performance comparison

     https://developer.aliyun.com/article/850614

三: Deploying iLogtail on Kubernetes as a DaemonSet

1)  kubectl apply -f  ilogtail-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ilogtail

2) kubectl apply -f  ilogtail-user-configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: ilogtail-user-cm
  namespace: ilogtail
data:
  ruoyi_log.yaml: |
    enable: true
    inputs:
      - Type: input_file
        FilePaths:
          - /data/logs/logs/*/info.log
        EnableContainerDiscovery: true  # dynamically discover containers in the Kubernetes cluster
        ContainerFilters:
          K8sNamespaceRegex: default  # only collect from pods in the default namespace
    flushers:
      - Type: flusher_kafka_v2
        Brokers:
          - 192.168.3.110:9092   # Kafka broker address
        Topic: test_%{tag.container.name}  # dynamic topic: one topic per container name
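The `Topic` field is a template: at flush time, iLogtail substitutes `%{tag.container.name}` with each discovered container's name, so every container gets its own Kafka topic. A minimal illustrative sketch of that substitution (the container name `ruoyi-gateway` is an example value, matching the consumer command in section 六; the substitution here is emulated with `sed`, not iLogtail's actual code):

```shell
# Illustration only: emulate flusher_kafka_v2's dynamic-topic expansion.
template='test_%{tag.container.name}'
container_name='ruoyi-gateway'   # example container name
topic=$(printf '%s' "$template" | sed "s/%{tag\.container\.name}/$container_name/")
echo "$topic"   # test_ruoyi-gateway
```

This is why the console consumer later in this post subscribes to `test_ruoyi-gateway`.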

3) kubectl apply -f ilogtail-daemonset.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ilogtail-ds
  namespace: ilogtail
  labels:
    k8s-app: logtail-ds
spec:
  selector:
    matchLabels:
      k8s-app: logtail-ds
  template:
    metadata:
      labels:
        k8s-app: logtail-ds
    spec:
      containers:
      - name: logtail
        env:
          - name: cpu_usage_limit
            value: "1"
          - name: mem_usage_limit
            value: "512"
        image: >-
          sls-opensource-registry.cn-shanghai.cr.aliyuncs.com/ilogtail-community-edition/ilogtail:latest
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: 1000m
            memory: 1Gi
          requests:
            cpu: 400m
            memory: 384Mi
        volumeMounts:
          - mountPath: /var/run
            name: run
          - mountPath: /logtail_host
            mountPropagation: HostToContainer
            name: root
            readOnly: true
          - mountPath: /usr/local/ilogtail/checkpoint
            name: checkpoint
          - mountPath: /usr/local/ilogtail/config/local  # user configs must be mounted at exactly this path
            name: user-config
            readOnly: true
      dnsPolicy: ClusterFirst
      hostNetwork: true
      volumes:
        - hostPath:
            path: /var/run
            type: Directory
          name: run
        - hostPath:
            path: /
            type: Directory
          name: root
        - hostPath:
            path: /lib/var/ilogtail-ilogtail-ds/checkpoint
            type: DirectoryOrCreate
          name: checkpoint
        - configMap:
            defaultMode: 420
            name: ilogtail-user-cm
          name: user-config
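Each data key in the `ilogtail-user-cm` ConfigMap is materialized as a file under the `/usr/local/ilogtail/config/local` mount, which is the directory the agent scans for pipeline configs. A hermetic sketch of the resulting layout (a temp directory stands in for the real mount path):

```shell
# Illustration: what the ConfigMap volume mount produces inside the pod.
# The ConfigMap data key (ruoyi_log.yaml) becomes a file of the same name.
mount_dir=$(mktemp -d)   # stands in for /usr/local/ilogtail/config/local
cat > "$mount_dir/ruoyi_log.yaml" <<'EOF'
enable: true
inputs:
  - Type: input_file
EOF
ls "$mount_dir"   # ruoyi_log.yaml
```

After `kubectl apply`, the same file appears inside every `logtail` container on each node.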

四: Installing Kafka

     Single-node Kafka 3.3.1 installation

 1) Start ZooKeeper

nohup bin/zookeeper-server-start.sh config/zookeeper.properties > /dev/null 2>&1 &

2) Start Kafka

Edit the configuration:

vim config/server.properties

broker.id=0
listeners=PLAINTEXT://192.168.3.110:9092
log.dirs=/tmp/kafka-logs
zookeeper.connect=127.0.0.1:2181



Start Kafka:

nohup bin/kafka-server-start.sh config/server.properties > /dev/null 2>&1 &
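Before applying the iLogtail DaemonSet, it is worth confirming that ZooKeeper (2181) and Kafka (9092) are actually accepting connections. A hedged helper sketch (relies on bash's `/dev/tcp` pseudo-device; the host/port values in the usage comment match this post's setup):

```shell
# Poll until a TCP port accepts connections, or give up after $tries attempts.
wait_for_port() {
  host=$1; port=$2; tries=${3:-30}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # the subshell opens (and, on exit, closes) a probe connection
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1)); sleep 1
  done
  return 1
}
# Usage: wait_for_port 192.168.3.110 9092 30 && echo "kafka is up"
```

Run it against port 2181 after starting ZooKeeper and port 9092 after starting Kafka.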

五: Installing Kafka Eagle

Version: kafka-eagle-bin-2.0.4

vim /etc/profile

export KE_HOME=/data/servers/efak
export PATH=$PATH:$KE_HOME/bin

source /etc/profile



vim conf/system-config.properties

######################################
# multi zookeeper & kafka cluster list
######################################
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.3.110:2181


######################################
# kafka offset storage
######################################
cluster1.kafka.eagle.offset.storage=kafka
#cluster2.kafka.eagle.offset.storage=zk


######################################
# kafka sqlite jdbc driver address
######################################
kafka.eagle.driver=org.sqlite.JDBC
kafka.eagle.url=jdbc:sqlite:/opt/kafka-eagle/db/ke.db
kafka.eagle.username=root
kafka.eagle.password=www.kafka-eagle.org

Start Kafka Eagle:

bin/ke.sh start

Default account: admin, password: 123456

六: Testing Kafka messages

bin/kafka-console-consumer.sh --bootstrap-server 192.168.3.110:9092 --topic test_ruoyi-gateway --from-beginning

 
