Newer releases of Apache HBase (0.92 and later) support connecting to a ZooKeeper Quorum that supports SASL authentication, which is available in ZooKeeper 3.4.0 and later.
This section describes how to set up HBase to mutually authenticate with a ZooKeeper Quorum. ZooKeeper/HBase mutual authentication (HBASE-2418) is required as part of a complete secure HBase configuration (HBASE-3025). For simplicity of explication, this section ignores additional configuration required (secure HDFS and Coprocessor configuration). It is recommended to begin with an HBase-managed ZooKeeper configuration (as opposed to a standalone ZooKeeper quorum) for ease of learning.
You need to have a working Kerberos KDC set up. For each $HOST that will run a ZooKeeper server, you should have a principal zookeeper/$HOST. For each such host, add a service key (using the kadmin or kadmin.local tool's ktadd command) for zookeeper/$HOST, copy the resulting keytab file to $HOST, and make it readable only to the user that will run ZooKeeper on $HOST. Note the location of this file, which we will use below as $PATH_TO_ZOOKEEPER_KEYTAB.
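As a sketch, assuming kadmin.local access on the KDC, a quorum host named zk1.example.com in realm EXAMPLE.COM, and a local zookeeper service account (all hypothetical names), the principal and keytab could be created like this:

kadmin.local -q "addprinc -randkey zookeeper/zk1.example.com@EXAMPLE.COM"
kadmin.local -q "ktadd -k zookeeper.keytab zookeeper/zk1.example.com@EXAMPLE.COM"
# Copy the keytab to the quorum host and lock down its permissions.
scp zookeeper.keytab zk1.example.com:/etc/zookeeper/conf/zookeeper.keytab
ssh zk1.example.com 'chown zookeeper /etc/zookeeper/conf/zookeeper.keytab && chmod 400 /etc/zookeeper/conf/zookeeper.keytab'

Here /etc/zookeeper/conf/zookeeper.keytab would be the value of $PATH_TO_ZOOKEEPER_KEYTAB on that host.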
Similarly, for each $HOST that will run an HBase server (master or regionserver), you should have a principal hbase/$HOST. For each host, add a keytab file called hbase.keytab containing a service key for hbase/$HOST, copy this file to $HOST, and make it readable only to the user that will run an HBase service on $HOST. Note the location of this file, which we will use below as $PATH_TO_HBASE_KEYTAB.
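A hedged example along the same lines, assuming a regionserver host rs1.example.com in realm EXAMPLE.COM and an hbase service account (hypothetical names):

kadmin.local -q "addprinc -randkey hbase/rs1.example.com@EXAMPLE.COM"
kadmin.local -q "ktadd -k hbase.keytab hbase/rs1.example.com@EXAMPLE.COM"
# Copy the keytab to the server host and restrict it to the HBase service user.
scp hbase.keytab rs1.example.com:/etc/hbase/conf/hbase.keytab
ssh rs1.example.com 'chown hbase /etc/hbase/conf/hbase.keytab && chmod 400 /etc/hbase/conf/hbase.keytab'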
Each user who will be an HBase client should also be given a Kerberos principal. This principal should usually have a password assigned to it (as opposed to, as with the HBase servers, a keytab file) which only this user knows. The client principal's maxrenewlife should be set long enough that the ticket can be renewed for the duration of the user's HBase client processes. For example, if a user runs a long-running HBase client process that takes at most 3 days, we might create this user's principal within kadmin with: addprinc -maxrenewlife 3days. The ZooKeeper client and server libraries manage their own ticket refresh by running threads that wake up periodically to renew the ticket.
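For instance, for a hypothetical user principal alice@EXAMPLE.COM:

# On the KDC: create the principal with a 3-day renewable lifetime (a password will be prompted for).
kadmin.local -q "addprinc -maxrenewlife 3days alice@EXAMPLE.COM"
# As the user: obtain a renewable ticket before starting any HBase client process.
kinit -r 3d alice@EXAMPLE.COM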
On each host that will run an HBase client (e.g. hbase shell), add the following file to the HBase home directory's conf directory:
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=false
  useTicketCache=true;
};
We'll refer to this JAAS configuration file as $CLIENT_CONF below.
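Because this configuration sets useTicketCache=true and useKeyTab=false, the client reads credentials from the user's Kerberos ticket cache, so the user must kinit before starting a client. A minimal sketch, using the hypothetical principal from above:

kinit alice@EXAMPLE.COM
klist            # confirm a valid TGT is present
bin/hbase shell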
On each node that will run a ZooKeeper server, a master, or a regionserver, create a JAAS configuration file in the conf directory of the node's HBASE_HOME directory that looks like the following:
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/$HOST";
};
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="$PATH_TO_HBASE_KEYTAB"
  principal="hbase/$HOST";
};
where the $PATH_TO_HBASE_KEYTAB and $PATH_TO_ZOOKEEPER_KEYTAB files are what you created above, and $HOST is the hostname for that node.
The Server section will be used by the ZooKeeper quorum server, while the Client section will be used by the HBase master and regionservers. The path to this file should be substituted for the text $HBASE_SERVER_CONF in the hbase-env.sh listing below, and the path to the client JAAS file created earlier should be substituted for the text $CLIENT_CONF.
Modify your hbase-env.sh to include the following:
export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
export HBASE_MANAGES_ZK=true
export HBASE_ZOOKEEPER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
where $HBASE_SERVER_CONF and $CLIENT_CONF are the full paths to the JAAS configuration files created above.
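For example, if the client JAAS file were saved as /etc/hbase/conf/jaas-client.conf and the server JAAS file as /etc/hbase/conf/jaas-server.conf (hypothetical paths), the first and third lines above would become:

export HBASE_OPTS="-Djava.security.auth.login.config=/etc/hbase/conf/jaas-client.conf"
export HBASE_ZOOKEEPER_OPTS="-Djava.security.auth.login.config=/etc/hbase/conf/jaas-server.conf"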
Modify your hbase-site.xml on each node that will run a ZooKeeper server, master, or regionserver to contain:
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>$ZK_NODES</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.authProvider.1</name>
    <value>org.apache.zookeeper.server.auth.SASLAuthenticationProvider</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.kerberos.removeHostFromPrincipal</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.kerberos.removeRealmFromPrincipal</name>
    <value>true</value>
  </property>
</configuration>
where $ZK_NODES is the comma-separated list of hostnames of the ZooKeeper Quorum hosts.
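For instance, with a hypothetical three-node quorum, the quorum property would read:

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
</property>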
Start your HBase cluster by running one or more of the following commands on the appropriate hosts:
bin/hbase zookeeper start
bin/hbase master start
bin/hbase regionserver start
To use an external ZooKeeper quorum instead of an HBase-managed one, add a JAAS configuration file that looks like:
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  useTicketCache=false
  keyTab="$PATH_TO_HBASE_KEYTAB"
  principal="hbase/$HOST";
};
where the $PATH_TO_HBASE_KEYTAB is the keytab created above for HBase services to run on this host, and $HOST is the hostname for that node. Put this in the HBase home's configuration directory. We'll refer to this file's full pathname as $HBASE_SERVER_CONF below.
Modify your hbase-env.sh to include the following:
export HBASE_OPTS="-Djava.security.auth.login.config=$CLIENT_CONF"
export HBASE_MANAGES_ZK=false
export HBASE_MASTER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
export HBASE_REGIONSERVER_OPTS="-Djava.security.auth.login.config=$HBASE_SERVER_CONF"
Modify your hbase-site.xml on each node that will run a master or regionserver to contain:
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>$ZK_NODES</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
where $ZK_NODES is the comma-separated list of hostnames of the ZooKeeper Quorum hosts.
Add a zoo.cfg for each ZooKeeper Quorum host containing:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
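These lines are additions to the usual zoo.cfg settings, not a replacement for them. A minimal combined file might look like the following (the dataDir and server entries are hypothetical):

tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=10
syncLimit=5
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true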
Also on each of these hosts, create a JAAS configuration file containing:
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="$PATH_TO_ZOOKEEPER_KEYTAB"
  storeKey=true
  useTicketCache=false
  principal="zookeeper/$HOST";
};
where $HOST is the hostname of each Quorum host. We will refer to the full pathname of this file as $ZK_SERVER_CONF below.
Start your ZooKeeper servers on each ZooKeeper Quorum host with:
SERVER_JVMFLAGS="-Djava.security.auth.login.config=$ZK_SERVER_CONF" bin/zkServer.sh start
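Rather than passing SERVER_JVMFLAGS on every invocation, the flag can be persisted: zkServer.sh sources conf/java.env if that file exists, so a hedged alternative (the JAAS path is hypothetical) is:

# conf/java.env on each ZooKeeper Quorum host
export SERVER_JVMFLAGS="-Djava.security.auth.login.config=/etc/zookeeper/conf/jaas-server.conf"

With this file in place, a plain bin/zkServer.sh start picks up the login configuration.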
Start your HBase cluster by running one or more of the following commands on the appropriate nodes:
bin/hbase master start
bin/hbase regionserver start
If the configuration above is successful, you should see something similar to the following in your ZooKeeper server logs:
11/12/05 22:43:39 INFO zookeeper.Login: successfully logged in.
11/12/05 22:43:39 INFO server.NIOServerCnxnFactory: binding to port 0.0.0.0/0.0.0.0:2181
11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh thread started.
11/12/05 22:43:39 INFO zookeeper.Login: TGT valid starting at: Mon Dec 05 22:43:39 UTC 2011
11/12/05 22:43:39 INFO zookeeper.Login: TGT expires: Tue Dec 06 22:43:39 UTC 2011
11/12/05 22:43:39 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:36:42 UTC 2011
..
11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler: Successfully authenticated client: authenticationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN; authorizationID=hbase/ip-10-166-175-249.us-west-1.compute.internal@HADOOP.LOCALDOMAIN.
11/12/05 22:43:59 INFO auth.SaslServerCallbackHandler: Setting authorizedID: hbase
11/12/05 22:43:59 INFO server.ZooKeeperServer: adding SASL authorization for authorizationID: hbase
On the ZooKeeper client side (HBase master or regionserver), you should see something similar to the following:
11/12/05 22:43:59 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=ip-10-166-175-249.us-west-1.compute.internal:2181 sessionTimeout=180000 watcher=master:60000
11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Opening socket connection to server /10.166.175.249:2181
11/12/05 22:43:59 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 14851@ip-10-166-175-249
11/12/05 22:43:59 INFO zookeeper.Login: successfully logged in.
11/12/05 22:43:59 INFO client.ZooKeeperSaslClient: Client will use GSSAPI as SASL mechanism.
11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh thread started.
11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Socket connection established to ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, initiating session
11/12/05 22:43:59 INFO zookeeper.Login: TGT valid starting at: Mon Dec 05 22:43:59 UTC 2011
11/12/05 22:43:59 INFO zookeeper.Login: TGT expires: Tue Dec 06 22:43:59 UTC 2011
11/12/05 22:43:59 INFO zookeeper.Login: TGT refresh sleeping until: Tue Dec 06 18:30:37 UTC 2011
11/12/05 22:43:59 INFO zookeeper.ClientCnxn: Session establishment complete on server ip-10-166-175-249.us-west-1.compute.internal/10.166.175.249:2181, sessionid = 0x134106594320000, negotiated timeout = 180000
This has been tested on the current standard Amazon Linux AMI. First set up the KDC and principals as described above. Next check out the code and run a sanity check:
git clone git://git.apache.org/hbase.git
cd hbase
mvn clean test -Dtest=TestZooKeeperACL
Then configure HBase as described above, manually edit target/cached_classpath.txt (see below), and start the daemons:
bin/hbase zookeeper &
bin/hbase master &
bin/hbase regionserver &
You must override the standard hadoop-core jar file in the target/cached_classpath.txt file with the version containing the HADOOP-7070 fix. You can use the following script to do this:
echo `find ~/.m2 -name "*hadoop-core*7070*SNAPSHOT.jar"` ':' `cat target/cached_classpath.txt` | sed 's/ //g' > target/tmp.txt
mv target/tmp.txt target/cached_classpath.txt
A Hadoop release that includes the HADOOP-7070 fix would avoid the need for this separate patched Hadoop jar.