To understand Kerberos, beyond its official site, I searched for a long while; the best Chinese documentation, surprisingly, turned out to be on oracle-solaris. Big companies really are big companies.
Pick one machine to act as the KDC; for the test environment that is 192.168.11.63 - hadooptest-11-63.pconline.ctc
There are four client machines:
192.168.11.64 - hadooptest-11-64.pconline.ctc
192.168.11.65 - hadooptest-11-65.pconline.ctc
192.168.11.66 - hadooptest-11-66.pconline.ctc
192.168.11.67 - hadooptest-11-67.pconline.ctc
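First build and install krb5 1.10.3 from source on the machines; with no --prefix, it installs under /usr/local, which matches the paths used below: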
tar -zxvf krb5-1.10.3.tar.gz
cd krb5-1.10.3/src
./configure
make
make install
1.Edit the /etc/krb5.conf file on 11.63
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = LOCALDOMAIN
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = yes
[realms]
LOCALDOMAIN = {
kdc = hadooptest-11-63.pconline.ctc:88
admin_server = hadooptest-11-63.pconline.ctc:749
default_domain = localdomain
}
[domain_realm]
.localdomain = LOCALDOMAIN
localdomain = LOCALDOMAIN
[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}
2.Create the file /usr/local/var/krb5kdc/kdc.conf
[kdcdefaults]
v4_mode = nopreauth
kdc_ports = 750,88
kdc_tcp_ports = 88
[realms]
LOCALDOMAIN = {
acl_file = /usr/local/var/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /usr/local/var/krb5kdc/kadm5.keytab
kdc_ports = 750,88
max_life = 1d 0h 0m 0s
max_renewable_life = 7d 0h 0m 0s
supported_enctypes = des3-hmac-sha1:normal des-cbc-crc:normal des:normal des:v4 des:norealm des:onlyrealm
default_principal_flags = +preauth
}
3.Create the Kerberos database
# /usr/local/sbin/kdb5_util create -r LOCALDOMAIN -s
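For reference, kdb5_util then prompts for the database master password roughly as follows (the -s flag stashes the master key on disk so the KDC can start without prompting):

Initializing database '/usr/local/var/krb5kdc/principal' for realm 'LOCALDOMAIN',
master key name 'K/M@LOCALDOMAIN'
Enter KDC database master key:
Re-enter KDC database master key to verify: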
4.Create the kadm5.acl file in the /usr/local/var/krb5kdc/ directory, with the following contents:
*/admin@LOCALDOMAIN *
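In kadm5.acl the first field is a principal pattern and the second field the privileges granted; * means all privileges, so this line gives every */admin principal full control. A more restrictive entry (a hypothetical example) granting only add, inquire, and list rights would look like:

hadoop/admin@LOCALDOMAIN ail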
5.Set up the initial principal for the KDC. This requires running kadmin.local on the KDC itself (the command only works on the KDC; to administer Kerberos from another machine, run kadmin instead):
# /usr/local/sbin/kadmin.local
kadmin.local: addprinc admin/admin@LOCALDOMAIN
Enter password for principal "admin/admin@LOCALDOMAIN":
Re-enter password for principal "admin/admin@LOCALDOMAIN":
Principal "admin/admin@LOCALDOMAIN" created.
Generate the admin keytab file:
kadmin.local: ktadd -k /usr/local/var/krb5kdc/kadm5.keytab kadmin/admin kadmin/changepw
Entry for principal kadmin/admin with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
6.Start the KDC and kadmind
# /usr/local/sbin/krb5kdc
# /usr/local/sbin/kadmind
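To check that the KDC is answering, a quick sanity test is to request and list a ticket for admin/admin:

# kinit admin/admin@LOCALDOMAIN
Password for admin/admin@LOCALDOMAIN:
# klist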
7.So that every machine in the cluster has the Kerberos tools, install the Kerberos programs on each machine and hand out the /etc/krb5.conf file; no other configuration is needed.
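A minimal sketch for pushing the config out, assuming root SSH access to the clients:

for h in hadooptest-11-64 hadooptest-11-65 hadooptest-11-66 hadooptest-11-67; do
    scp /etc/krb5.conf root@$h.pconline.ctc:/etc/krb5.conf
done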
8.First, on the KDC, add a new Kerberos administrator, hadoop/admin:
# /usr/local/sbin/kadmin.local
kadmin.local: addprinc hadoop/admin@LOCALDOMAIN
Enter password for principal "hadoop/admin@LOCALDOMAIN":
Re-enter password for principal "hadoop/admin@LOCALDOMAIN":
Principal "hadoop/admin@LOCALDOMAIN" created.
9.Add the hadoop principals for each machine
/usr/local/bin/kadmin
addprinc -randkey host/hadooptest-11-64.pconline.ctc@LOCALDOMAIN
addprinc -randkey hadoop/hadooptest-11-64.pconline.ctc@LOCALDOMAIN
We ran into a problem here: our Hadoop core-site.xml used the short name hadooptest-11-64, which had to be changed to the FQDN hadooptest-11-64.pconline.ctc, otherwise Kerberos treats the two names as different principals.
ktadd -k /data/hadoop-1.0.3/conf/hadoop.keytab hadoop/hadooptest-11-64.pconline.ctc@LOCALDOMAIN host/hadooptest-11-64.pconline.ctc@LOCALDOMAIN
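The same three commands must be repeated for 11-65 through 11-67 with their own hostnames. A sketch that does this generically when run on each client (it assumes hostname -f returns the FQDN; each kadmin call prompts for the hadoop/admin password):

h=$(hostname -f)    # e.g. hadooptest-11-64.pconline.ctc
/usr/local/bin/kadmin -p hadoop/admin -q "addprinc -randkey hadoop/$h@LOCALDOMAIN"
/usr/local/bin/kadmin -p hadoop/admin -q "addprinc -randkey host/$h@LOCALDOMAIN"
/usr/local/bin/kadmin -p hadoop/admin -q "ktadd -k /data/hadoop-1.0.3/conf/hadoop.keytab hadoop/$h@LOCALDOMAIN host/$h@LOCALDOMAIN"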
To inspect the contents of the keytab file (optional, no need to run this):
/usr/local/bin/klist -e -k -t hadoop.keytab
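The output looks roughly like this (KVNO and timestamps are illustrative):

Keytab name: WRFILE:hadoop.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   3 07/20/12 10:00:00 hadoop/hadooptest-11-64.pconline.ctc@LOCALDOMAIN (DES cbc mode with CRC-32)
   3 07/20/12 10:00:00 host/hadooptest-11-64.pconline.ctc@LOCALDOMAIN (DES cbc mode with CRC-32)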
10.Edit the Hadoop configuration files. These settings are fixed and described in many places online; _HOST works like a macro, expanded in some places from the configuration and in others from the local machine's FQDN.
Configure core-site.xml:
<property>
<name>hadoop.security.authorization</name>
<value>true</value>
<description>Is service-level authorization enabled?</description>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>kerberos</value>
<description>Possible values are simple (no authentication), and kerberos</description>
</property>

Configure hdfs-site.xml:
<!-- kerberos NameNode config -->
<property>
<name>dfs.https.address</name>
<value>hadooptest-11-63.pconline.ctc:50470</value>
</property>
<property>
<name>dfs.https.port</name>
<value>50470</value>
</property>
<property>
<name>dfs.block.access.token.enable</name>
<value>true</value>
</property>
<property>
<name>dfs.namenode.keytab.file</name>
<value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
<name>dfs.namenode.kerberos.principal</name>
<value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>dfs.namenode.kerberos.https.principal</name>
<value>host/_HOST@LOCALDOMAIN</value>
</property>
<!-- kerberos SecondaryNameNode config -->
<property>
<name>dfs.secondary.http.address</name>
<value>hadooptest-11-64.pconline.ctc:50090</value>
</property>
<property>
<name>dfs.secondary.https.address</name>
<value>0.0.0.0:50495</value>
</property>
<property>
<name>dfs.secondary.https.port</name>
<value>50495</value>
</property>
<property>
<name>dfs.secondary.namenode.keytab.file</name>
<value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
<name>dfs.secondary.namenode.kerberos.principal</name>
<value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>dfs.secondary.namenode.kerberos.https.principal</name>
<value>host/_HOST@LOCALDOMAIN</value>
</property>
<!-- kerberos DataNode config -->
<property>
<name>dfs.datanode.data.dir.perm</name>
<value>700</value>
<description>Permissions for the directories on the local filesystem where the DFS data node stores its blocks. The permissions can either be octal or symbolic.</description>
</property>
<property>
<name>dfs.datanode.address</name>
<value>0.0.0.0:1004</value>
</property>
<property>
<name>dfs.datanode.http.address</name>
<value>0.0.0.0:1006</value>
</property>
<property>
<name>dfs.datanode.keytab.file</name>
<value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
<name>dfs.datanode.kerberos.principal</name>
<value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>dfs.datanode.kerberos.https.principal</name>
<value>host/_HOST@LOCALDOMAIN</value>
</property>
11.Start HDFS:
/data/hadoop-1.0.3/bin/hadoop-daemon.sh start namenode
/data/hadoop-1.0.3/bin/hadoop-daemons.sh --config /data/hadoop-1.0.3/conf/ --hosts masters start secondarynamenode
Add one line to hadoop-env.sh, export HADOOP_SECURE_DN_USER=hadoop, so the DataNode runs as the hadoop user.
Starting the DataNode ran into a bug:
https://issues.apache.org/jira/browse/HDFS-3402
You have to patch the bin/hadoop script and start the DataNode with sudo. Once up, the DataNode shows two processes: a SecureDataNodeStarter owned by root and one owned by hadoop.
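With HADOOP_SECURE_DN_USER set and bin/hadoop patched, the secure DataNode is launched as root, roughly:

sudo /data/hadoop-1.0.3/bin/hadoop-daemon.sh start datanode
ps -ef | grep SecureDataNodeStarter    # expect one root-owned and one hadoop-owned process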
12.Edit the mapred-site.xml file
<!-- kerberos config -->
<!-- JobTracker security configs -->
<property>
<name>mapreduce.jobtracker.kerberos.principal</name>
<value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>mapreduce.jobtracker.kerberos.https.principal</name>
<value>host/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>mapreduce.jobtracker.keytab.file</name>
<value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<!-- TaskTracker security configs -->
<property>
<name>mapreduce.tasktracker.kerberos.principal</name>
<value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>mapreduce.tasktracker.kerberos.https.principal</name>
<value>host/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>mapreduce.tasktracker.keytab.file</name>
<value>/data/hadoop-1.0.3/conf/hadoop.keytab</value> <!-- path to the MapReduce keytab -->
</property>
<!-- TaskController settings -->
<property>
<name>mapred.task.tracker.task-controller</name>
<value>org.apache.hadoop.mapred.DefaultTaskController</value>
</property>
<property>
<name>mapreduce.tasktracker.group</name>
<value>hadoop</value>
</property>
13.Start MapReduce
You may need to modify taskcontroller.cfg.
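taskcontroller.cfg only matters if you switch mapred.task.tracker.task-controller to org.apache.hadoop.mapred.LinuxTaskController; with the DefaultTaskController above it can stay as shipped. A typical file looks roughly like this (the paths are assumptions for this cluster):

mapred.local.dir=/data/hadoop-1.0.3/tmp/mapred/local
hadoop.log.dir=/data/hadoop-1.0.3/logs
mapreduce.tasktracker.group=hadoop
banned.users=mapred,hdfs,bin
min.user.id=500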
start-mapred.sh
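A quick smoke test once MapReduce is up (a sketch: kinit with the host's own keytab, then touch HDFS):

kinit -k -t /data/hadoop-1.0.3/conf/hadoop.keytab hadoop/$(hostname -f)@LOCALDOMAIN
/data/hadoop-1.0.3/bin/hadoop fs -ls /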
14.Configure HBase (hbase-site.xml)
<property>
<name>hbase.regionserver.kerberos.principal</name>
<value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>hbase.regionserver.keytab.file</name>
<value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
<name>hbase.master.kerberos.principal</name>
<value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
<name>hbase.master.keytab.file</name>
<value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
<name>hbase.security.authentication</name>
<value>kerberos</value>
</property>
<property>
<name>hbase.security.authorization</name>
<value>true</value>
</property>
<property>
<name>hbase.coprocessor.region.classes</name>
<value>org.apache.hadoop.hbase.security.token.TokenProvider,
org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
<name>hbase.coprocessor.master.classes</name>
<value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
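After restarting HBase with these settings, a sanity check sketch (here $HBASE_HOME stands for whatever your HBase install directory is) is to get a ticket and open the shell:

kinit -k -t /data/hadoop-1.0.3/conf/hadoop.keytab hadoop/$(hostname -f)@LOCALDOMAIN
$HBASE_HOME/bin/hbase shell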
In the end, though, the installation still came to grief because our Kerberos version was too new.
The Cloudera site we used as a reference is riddled with traps; get any of the versions slightly wrong and it all ends in tears.