To learn Kerberos, start with its [http://web.mit.edu/kerberos/krb5-1.10/ official site].[[BR]]
After a long search, the best introductory documentation I found was, surprisingly, [http://docs.oracle.com/cd/E26926_01/html/E25889/intro-1.html#scrolltoc on oracle-solaris]; big companies really do live up to their name.
Pick one machine to be the KDC; the test environment uses 192.168.11.63 - hadooptest-11-63.pconline.ctc[[BR]]
There are four client machines:
192.168.11.64 - hadooptest-11-64.pconline.ctc[[BR]]
192.168.11.65 - hadooptest-11-65.pconline.ctc[[BR]]
192.168.11.66 - hadooptest-11-66.pconline.ctc[[BR]]
192.168.11.67 - hadooptest-11-67.pconline.ctc[[BR]]
Build and install krb5 from source:
{{{
tar -zxvf krb5-1.10.3.tar.gz
cd krb5-1.10.3/src
./configure
make
make install
}}}
1. Edit the /etc/krb5.conf file on 11.63:[[BR]]
{{{
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = LOCALDOMAIN
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
forwardable = yes
[realms]
LOCALDOMAIN = {
kdc = hadooptest-11-63.pconline.ctc:88
admin_server = hadooptest-11-63.pconline.ctc:749
default_domain = localdomain
}
[domain_realm]
.localdomain = LOCALDOMAIN
localdomain = LOCALDOMAIN
[appdefaults]
pam = {
debug = false
ticket_lifetime = 36000
renew_lifetime = 36000
forwardable = true
krb4_convert = false
}
}}}
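The [domain_realm] section above maps hostnames to realms by suffix matching: an entry beginning with a dot matches every host under that domain, while a dotless entry matches one name exactly. A rough sketch of that lookup (the function and dict here are illustrative only, not any krb5 API):

```python
def realm_for_host(host, domain_realm):
    """Map a hostname to a realm the way [domain_realm] does:
    exact hostname first, then progressively shorter .suffix entries."""
    host = host.lower()
    if host in domain_realm:              # exact match, e.g. "localdomain"
        return domain_realm[host]
    parts = host.split(".")
    for i in range(1, len(parts)):        # try ".b.c", then ".c"
        suffix = "." + ".".join(parts[i:])
        if suffix in domain_realm:
            return domain_realm[suffix]
    return None

# The two entries from the krb5.conf above:
mapping = {".localdomain": "LOCALDOMAIN", "localdomain": "LOCALDOMAIN"}
```

This is why both the dotted and undotted forms appear in the config: one catches hosts inside the domain, the other the bare domain name itself.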
2. Create the file /usr/local/var/krb5kdc/kdc.conf:[[BR]]
{{{
[kdcdefaults]
v4_mode = nopreauth
kdc_ports = 750,88
kdc_tcp_ports = 88
[realms]
LOCALDOMAIN = {
acl_file = /usr/local/var/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /usr/local/var/krb5kdc/kadm5.keytab
kdc_ports = 750,88
max_life = 1d 0h 0m 0s
max_renewable_life = 7d 0h 0m 0s
supported_enctypes = des3-hmac-sha1:normal des-cbc-crc:normal des:normal des:v4 des:norealm des:onlyrealm
default_principal_flags = +preauth
}
}}}
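max_life and max_renewable_life above use krb5's duration syntax. As a sanity check on values like 1d 0h 0m 0s, here is a rough parser for this one form (krb5 itself also accepts h:m:s and bare-second notations; this helper is illustrative only):

```python
import re

def krb5_duration_to_seconds(s):
    """Convert a krb5-style duration like '7d 0h 0m 0s' into seconds.
    Handles only the unit-letter form used in the kdc.conf above."""
    units = {"d": 86400, "h": 3600, "m": 60, "s": 1}
    total = 0
    for value, unit in re.findall(r"(\d+)\s*([dhms])", s):
        total += int(value) * units[unit]
    return total
```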
3.新建Kerberos数据库[[BR]]
{{{
# /usr/local/sbin/kdb5_util create -r LOCALDOMAIN -s[[BR]]
}}}
4. In /usr/local/var/krb5kdc/, create a kadm5.acl file with the following content:
{{{
*/admin@LOCALDOMAIN *
}}}
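The line */admin@LOCALDOMAIN * grants every principal whose instance is admin (admin/admin, hadoop/admin, ...) all permissions. A simplified sketch of the matching (real kadmind wildcards match a single component, whereas fnmatch is looser; illustration only):

```python
from fnmatch import fnmatchcase

def acl_allows(principal, acl_lines):
    """Check a principal against kadm5.acl-style lines.
    Each line is '<principal-pattern> <permissions>'; returns the
    permission string of the first matching line, else None.
    Simplified model of kadmind's matching, for illustration."""
    for line in acl_lines:
        pattern, perms = line.split(None, 1)
        if fnmatchcase(principal, pattern):
            return perms.strip()
    return None

acl = ["*/admin@LOCALDOMAIN *"]
```

So hadoop/admin@LOCALDOMAIN gets full rights, while the per-host service principals created later do not.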
5. Set up the initial admin principal on the KDC by running kadmin.local (this command only works on the KDC itself; to administer Kerberos from another machine, run kadmin instead):
{{{
# /usr/local/sbin/kadmin.local
kadmin.local: addprinc admin/admin@LOCALDOMAIN
Enter password for principal "admin/admin@LOCALDOMAIN":
Re-enter password for principal "admin/admin@LOCALDOMAIN":
Principal "admin/admin@LOCALDOMAIN" created.
}}}
Generate the admin keytab file:[[BR]]
{{{
kadmin.local: ktadd -k /usr/local/var/krb5kdc/kadm5.keytab kadmin/admin kadmin/changepw
Entry for principal kadmin/admin with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
Entry for principal kadmin/changepw with kvno 3, encryption type DES-CBC-CRC added to keytab WRFILE:/usr/local/var/krb5kdc/kadm5.keytab.
}}}
6. Start the KDC and kadmind:[[BR]]
{{{
# /usr/local/sbin/krb5kdc
# /usr/local/sbin/kadmind
}}}
7. So that every machine in the cluster has the Kerberos tools, install the Kerberos programs on each machine and distribute the same /etc/krb5.conf; no other configuration is needed.
8. On the KDC, add a new Kerberos administrator hadoop/admin:[[BR]]
{{{
# /usr/local/sbin/kadmin.local
kadmin.local: addprinc hadoop/admin@LOCALDOMAIN
Enter password for principal "hadoop/admin@LOCALDOMAIN":
Re-enter password for principal "hadoop/admin@LOCALDOMAIN":
Principal "hadoop/admin@LOCALDOMAIN" created.
}}}
9. Add hadoop principals for each machine:
{{{
/usr/local/bin/kadmin
addprinc -randkey host/hadooptest-11-64.pconline.ctc@LOCALDOMAIN
addprinc -randkey hadoop/hadooptest-11-64.pconline.ctc@LOCALDOMAIN
ktadd -k /data/hadoop-1.0.3/conf/hadoop.keytab hadoop/hadooptest-11-64.pconline.ctc@LOCALDOMAIN host/hadooptest-11-64.pconline.ctc@LOCALDOMAIN
}}}
''One problem we hit here: our hadoop core-site.xml used the short name hadooptest-11-64; it has to be changed to the fully qualified name hadooptest-11-64.pconline.ctc, otherwise Kerberos treats the two as different principals.''
----
To inspect the contents of a keytab file (optional, for verification only):[[BR]]
{{{
/usr/local/bin/klist -e -k -t hadoop.keytab
}}}
10. Edit the configuration files. These settings are standard and widely documented. _HOST works like a macro: in some places it is read from the config, in others it expands to the local machine's fully qualified hostname.[[BR]]
{{{
<!-- core-site.xml -->
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
  <description>Is service-level authorization enabled?</description>
</property>
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
  <description>Possible values are simple (no authentication), and kerberos</description>
</property>

<!-- hdfs-site.xml -->
<property>
  <name>dfs.https.address</name>
  <value>hadooptest-11-63.pconline.ctc:50470</value>
</property>
<property>
  <name>dfs.https.port</name>
  <value>50470</value>
</property>
<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>host/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>dfs.secondary.http.address</name>
  <value>hadooptest-11-64.pconline.ctc:50090</value>
</property>
<property>
  <name>dfs.secondary.https.address</name>
  <value>0.0.0.0:50495</value>
</property>
<property>
  <name>dfs.secondary.https.port</name>
  <value>50495</value>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.https.principal</name>
  <value>host/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
  <description>Permissions for the directories on the local filesystem where
  the DFS data node stores its blocks. The permissions can either be octal
  or symbolic.</description>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1004</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1006</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>host/_HOST@LOCALDOMAIN</value>
</property>
}}}
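The _HOST substitution can be sketched as follows; this is an illustration of the behavior, not Hadoop's actual code, and the fqdn parameter stands in for whatever hostname Hadoop resolves at startup:

```python
def expand_principal(principal, fqdn):
    """Expand Hadoop's _HOST placeholder in a Kerberos principal.
    Hadoop resolves fqdn from the local host (or a config value);
    here it is passed in explicitly for illustration."""
    name, _, realm = principal.partition("@")
    components = [fqdn if c == "_HOST" else c for c in name.split("/")]
    return "/".join(components) + ("@" + realm if realm else "")

# The short name and the FQDN expand to different principals, so a
# keytab entry made for one will not authenticate the other:
p_short = expand_principal("hadoop/_HOST@LOCALDOMAIN", "hadooptest-11-64")
p_full  = expand_principal("hadoop/_HOST@LOCALDOMAIN", "hadooptest-11-64.pconline.ctc")
```

This is exactly the short-name vs. long-name trap from step 9.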
11. Start HDFS:
{{{
/data/hadoop-1.0.3/bin/hadoop-daemon.sh start namenode
/data/hadoop-1.0.3/bin/hadoop-daemons.sh --config /data/hadoop-1.0.3/conf/ --hosts masters start secondarynamenode
}}}
Add one line to hadoop-env.sh so the datanode runs as the hadoop user:
{{{
export HADOOP_SECURE_DN_USER=hadoop
}}}
We hit a bug when starting the datanode:[[BR]]
https://issues.apache.org/jira/browse/HDFS-3402[[BR]]
The bin/hadoop script has to be modified to launch the datanode with sudo. Once the datanode is up there are two processes: a SecureDataNodeStarter owned by root and one owned by hadoop.
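The root requirement comes from the ports: a secure datanode must bind privileged ports (below 1024), which is why dfs.datanode.address above is 0.0.0.0:1004. A trivial check, with an illustrative helper name:

```python
def needs_privileged_start(addr):
    """True if this listen address requires root to bind.
    A secure DataNode must serve data and HTTP on ports below 1024,
    which is why it is launched via root (SecureDataNodeStarter)."""
    _host, _, port = addr.rpartition(":")
    return int(port) < 1024
```

An insecure default like 0.0.0.0:50010 would not need root, but then clients could not trust that they are talking to the real datanode.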
12. Edit the mapred-site.xml file:
{{{
<property>
  <name>mapreduce.jobtracker.kerberos.principal</name>
  <value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>mapreduce.jobtracker.kerberos.https.principal</name>
  <value>host/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>mapreduce.jobtracker.keytab.file</name>
  <value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
  <name>mapreduce.tasktracker.kerberos.principal</name>
  <value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>mapreduce.tasktracker.kerberos.https.principal</name>
  <value>host/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>mapreduce.tasktracker.keytab.file</name>
  <value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
  <name>mapred.task.tracker.task-controller</name>
  <value>org.apache.hadoop.mapred.DefaultTaskController</value>
</property>
<property>
  <name>mapreduce.tasktracker.group</name>
  <value>hadoop</value>
</property>
}}}
13. Start MapReduce. taskcontroller.cfg may need adjusting first:
{{{
start-mapred.sh
}}}
----
14. Configure HBase:
{{{
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hadoop/_HOST@LOCALDOMAIN</value>
</property>
<property>
  <name>hbase.master.keytab.file</name>
  <value>/data/hadoop-1.0.3/conf/hadoop.keytab</value>
</property>
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.master.classes</name>
  <value>org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider,org.apache.hadoop.hbase.security.access.AccessController</value>
</property>
}}}
In the end, though, the install still came to grief because our Kerberos version was too new.[[BR]]
[https://ccp.cloudera.com/display/CDHDOC/Appendix+A+-+Troubleshooting The cloudera troubleshooting appendix] was the reference here; it is full of traps, and one wrong version combination sinks the whole setup.