Kafka has SASL (Kerberos) enabled, with the following server.properties configuration:

sasl.enabled.mechanisms=GSSAPI
security.inter.broker.protocol=SASL_PLAINTEXT
ssl.mode.enable=false
allow.everyone.if.no.acl.found=true
sasl.port=19092

The broker-side jaas.conf contains:

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    debug=false
    keyTab="/opt/kafka/keytabs/kafka.keytab"
    useTicketCache=false
    storeKey=true
    principal="kafka/hadoop.test.com@TEST.COM"
    useKeyTab=true;
};

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/opt/kafka/keytabs/kafka.keytab"
    principal="kafka/hadoop.test.com@TEST.COM"
    storeKey=true
    debug=false
    useTicketCache=false;
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    storeKey=true
    principal="kafka/hadoop.test.com@TEST.COM"
    useTicketCache=false
    keyTab="/opt/kafka/keytabs/kafka.keytab"
    debug=false
    useKeyTab=true;
};

On a client machine, query the API versions of all brokers in the cluster:

kafka-broker-api-versions.sh --bootstrap-server 192.168.1.140:19092

This fails with the error:

Request METADATA failed on brokers List

This happens because the client has not enabled SASL. Edit client.properties:

sasl.mechanism=GSSAPI
security.protocol=SASL_PLAINTEXT 
sasl.kerberos.service.name=kafka
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/opt/kafkaclient/keytabs/kafka.keytab" \
    principal="kafka/hadoop.test.com@TEST.COM" \
    renewTGT=true \
    useTicketCache=true;

Run the command again:

kafka-broker-api-versions.sh --bootstrap-server  192.168.1.140:19092 --command-config client.properties

It now fails with errors such as:

No valid credentials provided
Server not found in Kerberos database
Identifier doesn't match expected value

Check the Kerberos KDC log, krb5kdc.log:

LOOKING_UP_SERVER: kafka@TEST.COM for kafka/test.com@TEST.COM,Server not found in Kerberos database

This shows the client requested the wrong service principal; the correct one is kafka/hadoop.test.com@TEST.COM.
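When debugging this class of failure, the useful datum in krb5kdc.log is the principal after "for": that is the service principal the client actually requested. A throwaway sketch of pulling it out of such a line (the sample line is hardcoded here; real log entries carry timestamps and may vary by KDC version):

```shell
# Sample krb5kdc.log line, hardcoded for illustration.
line='LOOKING_UP_SERVER: kafka@TEST.COM for kafka/test.com@TEST.COM,Server not found in Kerberos database'

# Extract the service principal the client requested (the token after " for ").
requested=$(printf '%s\n' "$line" | sed -n 's/.* for \([^,]*\),.*/\1/p')
echo "$requested"
```

Comparing this extracted principal against the one registered in the KDC (here kafka/hadoop.test.com@TEST.COM) pinpoints the mismatch immediately.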

Modify client.properties:

sasl.mechanism=GSSAPI
security.protocol=SASL_PLAINTEXT 
sasl.kerberos.service.name=kafka
kerberos.domain.name=hadoop.test.com
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/opt/kafkaclient/keytabs/kafka.keytab" \
    principal="kafka/hadoop.test.com@TEST.COM" \
    renewTGT=true \
    useTicketCache=true;

Run once more:

kafka-broker-api-versions.sh --bootstrap-server 192.168.1.140:19092 --command-config client.properties

This time the command returns the expected API version listing.

The hostname in the principal

Check krb5kdc.log again; if it reports:

LOOKING_UP_SERVER: kafka/hadoop.test.com@TEST.COM for kafka/w120pc05@TEST.COM

this shows that the client reverse-resolves the broker's IP through /etc/hosts to obtain a hostname, and uses that hostname to build the service principal.
The principal format is primary/hostname@REALM.
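The primary/hostname@REALM structure can be pulled apart with plain shell parameter expansion; this is a standalone illustration of the format, not part of any Kafka tooling:

```shell
principal="kafka/hadoop.test.com@TEST.COM"

# primary: everything before the first "/"
primary=${principal%%/*}

# hostname: between "/" and "@"
hostname=${principal#*/}
hostname=${hostname%@*}

# realm: everything after the last "@"
realm=${principal##*@}

echo "$primary $hostname $realm"
```

In this example, primary is the Kafka service name (matching sasl.kerberos.service.name=kafka), hostname is the broker host the client resolved, and realm is the Kerberos realm.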

Therefore, an alternative fix is to edit /etc/hosts:

192.168.1.140 hadoop.test.com
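To see why this works: the client takes the bootstrap IP, resolves it to a hostname via /etc/hosts, then combines that hostname with the service name and realm into the service principal. A rough simulation against a temporary hosts-style file (the real client uses the JVM's resolver, not awk; the temp file merely stands in for /etc/hosts):

```shell
# Hypothetical temp file standing in for /etc/hosts; never edit the real file in a demo.
hosts_file=$(mktemp)
printf '192.168.1.140 hadoop.test.com\n' > "$hosts_file"

ip=192.168.1.140
service=kafka     # from sasl.kerberos.service.name
realm=TEST.COM

# Resolve the IP to a hostname the way /etc/hosts would, then assemble the principal.
host=$(awk -v ip="$ip" '$1 == ip {print $2}' "$hosts_file")
principal="${service}/${host}@${realm}"
echo "$principal"

rm -f "$hosts_file"
```

With the hosts entry in place, the resolved hostname is hadoop.test.com, so the constructed principal matches the one the KDC knows and authentication succeeds.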