Newer releases of Apache HBase (>= 0.92) support optional SASL authentication of clients. See also Matteo Bertozzi's article on Understanding User Authentication and Authorization in Apache HBase.
This section describes how to set up Apache HBase and clients for connection to secure HBase resources.
To run HBase RPC with strong authentication, you must set hbase.security.authentication to kerberos. In this case, you must also set hadoop.security.authentication to kerberos. Otherwise, you would be using strong authentication for HBase but not for the underlying HDFS, which would cancel out any benefit.
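As a minimal sketch (the realm and surrounding file layout are assumptions; note that both properties take the value kerberos, not a boolean), the two settings live in different files:

```xml
<!-- hbase-site.xml -->
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>

<!-- core-site.xml (Hadoop) -->
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>
```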
You need to have a working Kerberos KDC.
An HBase cluster configured for secure client access is expected to be running on top of a secured HDFS cluster, and HBase must be able to authenticate to HDFS services: HBase needs Kerberos credentials to interact with the Kerberos-enabled HDFS daemons. Authenticating a service should be done using a keytab file. The procedure for creating keytabs for the HBase service is the same as for creating keytabs for Hadoop; those steps are omitted here. Copy the resulting keytab files to wherever the HBase Master and RegionServer processes are deployed and make them readable only to the user account under which the HBase daemons will run.
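The permission step can be sketched as follows. The paths here are stand-ins: a temporary directory plays the role of the real configuration directory (e.g. /etc/hbase/conf), and an empty file plays the role of the copied keytab.

```shell
# Sketch: make a deployed keytab readable only by the service account.
HBASE_CONF_DIR=$(mktemp -d)                  # stand-in for /etc/hbase/conf
touch "$HBASE_CONF_DIR/keytab.krb5"          # stand-in for the copied keytab
chmod 400 "$HBASE_CONF_DIR/keytab.krb5"      # owner read-only, no group/other access
stat -c '%a' "$HBASE_CONF_DIR/keytab.krb5"   # prints 400
# In a real deployment, also chown the file to the service user, e.g.:
#   chown hbase:hbase /etc/hbase/conf/keytab.krb5
```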
A Kerberos principal has three parts, with the form username/fully.qualified.domain.name@YOUR-REALM.COM. We recommend using hbase as the username portion.
The following is an example of the configuration properties for Kerberos operation that must be added to the hbase-site.xml file on every server machine in the cluster. These properties are required for even the most basic interactions with a secure Hadoop configuration, independent of HBase security.
<property>
  <name>hbase.regionserver.kerberos.principal</name>
  <value>hbase/_HOST@YOUR-REALM.COM</value>
</property>
<property>
  <name>hbase.regionserver.keytab.file</name>
  <value>/etc/hbase/conf/keytab.krb5</value>
</property>
<property>
  <name>hbase.master.kerberos.principal</name>
  <value>hbase/_HOST@YOUR-REALM.COM</value>
</property>
<property>
  <name>hbase.master.keytab.file</name>
  <value>/etc/hbase/conf/keytab.krb5</value>
</property>
Each HBase client user should also be given a Kerberos principal. This principal should have a password assigned to it (as opposed to a keytab file). The client principal's maxrenewlife should be set so that it can be renewed enough times for the HBase client process to complete. For example, if a user runs a long-running HBase client process that takes at most 3 days, we might create this user's principal within kadmin with:

addprinc -maxrenewlife 3days
Long-running daemons with indefinite lifetimes that require client access to HBase can instead be configured to log in from a keytab. For each host running such daemons, create a keytab with kadmin or kadmin.local. The procedure for creating keytabs for the HBase service is the same as for creating keytabs for Hadoop; those steps are omitted here. Copy the resulting keytab files to where the client daemon will execute and make them readable only to the user account under which the daemon will run.
First, refer to Section 8.1.1, “Prerequisites” and ensure that your underlying HDFS configuration is secure.
Add the following to the hbase-site.xml file on every server machine in the cluster:
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hbase.coprocessor.region.classes</name>
  <value>org.apache.hadoop.hbase.security.token.TokenProvider</value>
</property>
A full shutdown and restart of HBase service is required when deploying these configuration changes.
First, refer to Section 8.1.1, “Prerequisites” and ensure that your underlying HDFS configuration is secure.
Add the following to the hbase-site.xml file on every client:
<property>
  <name>hbase.security.authentication</name>
  <value>kerberos</value>
</property>
The client environment must be logged in to Kerberos from the KDC or a keytab via the kinit command before communication with the HBase cluster is possible.

Be advised that if the hbase.security.authentication settings in the client-side and server-side site files do not match, the client will not be able to communicate with the cluster.
Once HBase is configured for secure RPC, it is possible to optionally configure encrypted communication. To do so, add the following to the hbase-site.xml file on every client:
<property>
  <name>hbase.rpc.protection</name>
  <value>privacy</value>
</property>
This configuration property can also be set on a per-connection basis. Set it in the Configuration supplied to HTable:
Configuration conf = HBaseConfiguration.create();
conf.set("hbase.rpc.protection", "privacy");
HTable table = new HTable(conf, tablename);
Expect a ~10% performance penalty for encrypted communication.
Add the following to the hbase-site.xml file for every Thrift gateway:
<property>
  <name>hbase.thrift.keytab.file</name>
  <value>/etc/hbase/conf/hbase.keytab</value>
</property>
<property>
  <name>hbase.thrift.kerberos.principal</name>
  <value>$USER/_HOST@HADOOP.LOCALDOMAIN</value>
  <!-- TODO: This may need to be HTTP/_HOST@<REALM> and _HOST may not work.
       You may have to put the concrete full hostname. -->
</property>
Substitute the appropriate credential and keytab for $USER and $KEYTAB respectively.
In order to use the Thrift API principal to interact with HBase, it is also necessary to add the hbase.thrift.kerberos.principal to the _acl_ table. For example, to give the Thrift API principal, thrift_server, administrative access, a command such as this one will suffice:

grant 'thrift_server', 'RWCA'
For more information about ACLs, please see the Access Control section.
The Thrift gateway will authenticate with HBase using the supplied credential. No authentication will be performed by the Thrift gateway itself. All client access via the Thrift gateway will use the Thrift gateway's credential and have its privilege.
Section 8.1.4, “Client-side Configuration for Secure Operation - Thrift Gateway” describes how to authenticate a Thrift client to HBase using a fixed user. As an alternative, you can configure the Thrift gateway to authenticate to HBase on the client's behalf, and to access HBase using a proxy user. This was implemented in HBASE-11349 for Thrift 1, and HBASE-11474 for Thrift 2.
If you use framed transport, you cannot yet take advantage of this feature, because SASL does not work with Thrift framed transport at this time.
To enable it, do the following.
Be sure Thrift is running in secure mode, by following the procedure described in Section 8.1.4, “Client-side Configuration for Secure Operation - Thrift Gateway”.
Be sure that HBase is configured to allow proxy users, as described in Section 8.1.7, “REST Gateway Impersonation Configuration”.
In hbase-site.xml for each cluster node running a Thrift gateway, set the property hbase.thrift.security.qop to one of the following three values:

- auth-conf - authentication, integrity, and confidentiality checking
- auth-int - authentication and integrity checking
- auth - authentication checking only
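For example, to require the strongest protection level, authentication plus integrity plus confidentiality, the property would be set as follows (the value shown is one of the three listed above):

```xml
<property>
  <name>hbase.thrift.security.qop</name>
  <value>auth-conf</value>
</property>
```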
Restart the Thrift gateway processes for the changes to take effect. If a node is running Thrift, the output of the jps command will list a ThriftServer process. To stop Thrift on a node, run the command bin/hbase-daemon.sh stop thrift. To start Thrift on a node, run the command bin/hbase-daemon.sh start thrift.
Add the following to the hbase-site.xml file for every REST gateway:
<property>
  <name>hbase.rest.keytab.file</name>
  <value>$KEYTAB</value>
</property>
<property>
  <name>hbase.rest.kerberos.principal</name>
  <value>$USER/_HOST@HADOOP.LOCALDOMAIN</value>
</property>
Substitute the appropriate credential and keytab for $USER and $KEYTAB respectively.
The REST gateway will authenticate with HBase using the supplied credential. No authentication will be performed by the REST gateway itself. All client access via the REST gateway will use the REST gateway's credential and have its privilege.
In order to use the REST API principal to interact with HBase, it is also necessary to add the hbase.rest.kerberos.principal to the _acl_ table. For example, to give the REST API principal, rest_server, administrative access, a command such as this one will suffice:

grant 'rest_server', 'RWCA'
For more information about ACLs, please see the Access Control section.
It should be possible for clients to authenticate with the HBase cluster through the REST gateway in a pass-through manner via SPNEGO HTTP authentication. This is future work.
By default, the REST gateway does not support impersonation. It accesses HBase on behalf of clients as the user configured in the previous section: to the HBase server, all requests come from the REST gateway user, and the actual end users are unknown. You can turn on impersonation support. With impersonation, the REST gateway user is a proxy user: the HBase server knows the actual user behind each request and can apply the proper authorization.
To turn on REST gateway impersonation, you need to configure the HBase servers (Masters and RegionServers) to allow proxy users, and configure the REST gateway to enable impersonation.
To allow proxy users, add the following to the hbase-site.xml file for every HBase server:
<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>hadoop.proxyuser.$USER.groups</name>
  <value>$GROUPS</value>
</property>
<property>
  <name>hadoop.proxyuser.$USER.hosts</name>
  <value>$HOSTS</value>
</property>

Substitute the REST gateway proxy user for $USER, the allowed group list for $GROUPS, and the allowed host list for $HOSTS.
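As an illustration with made-up names (rest_gw as the proxy user, restricted to a single group and a single gateway host; adjust for your environment), the substituted proxy-user properties might read:

```xml
<property>
  <name>hadoop.proxyuser.rest_gw.groups</name>
  <value>web-users</value>
</property>
<property>
  <name>hadoop.proxyuser.rest_gw.hosts</name>
  <value>gateway1.example.com</value>
</property>
```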
To enable REST gateway impersonation, add the following to the hbase-site.xml file for every REST gateway.
<property>
  <name>hbase.rest.authentication.type</name>
  <value>kerberos</value>
</property>
<property>
  <name>hbase.rest.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@HADOOP.LOCALDOMAIN</value>
</property>
<property>
  <name>hbase.rest.authentication.kerberos.keytab</name>
  <value>$KEYTAB</value>
</property>
Substitute the keytab for the HTTP principal for $KEYTAB.