N.4. Notes

N.4.1. Constraints

In part, the current wire format -- i.e., all requests and responses preceded by a length -- has been dictated by the current server's non-async architecture.

N.4.2. One fat pb request or header+param

For now, we went with a pb header followed by a pb param to make a request, and a pb header followed by a pb response for the response. Doing header+param rather than a single protobuf Message with both header and param content:

  1. Is closer to what we currently have

  2. Avoids the extra copying a single fat pb would require: the already-serialized param would have to be copied into the body of the fat request pb (and the same when making the result)

  3. We can decide whether or not to accept the request before we read the param; for example, the request might be low priority. (As the server is currently implemented, we read header+param in one go, so realizing this benefit is a TODO.)

The advantages are minor. If, later, a fat request turns out to have a clear advantage, we can roll out a v2 of the format.
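
For illustration only (this is not HBase's actual RPC client code), the framing described above amounts to an overall length prefix (the constraint noted in N.4.1) followed by a length-delimited pb header and a length-delimited pb param:

  // Illustrative sketch only; the real client uses HBase's generated RPC messages.
  import java.io.ByteArrayOutputStream;
  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.io.OutputStream;

  import com.google.protobuf.Message;

  final class RequestFramingSketch {
    // Writes a 4-byte total length, then a delimited header, then a delimited param.
    static void writeRequest(OutputStream out, Message header, Message param)
        throws IOException {
      ByteArrayOutputStream body = new ByteArrayOutputStream();
      header.writeDelimitedTo(body); // varint length + serialized header
      param.writeDelimitedTo(body);  // varint length + serialized param
      DataOutputStream dos = new DataOutputStream(out);
      dos.writeInt(body.size());     // overall length prefix
      body.writeTo(dos);
      dos.flush();
    }
  }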

N.4.3. RPC Configurations

N.4.3.1. CellBlock Codecs

To enable a codec other than the default KeyValueCodec, set hbase.client.rpc.codec to the name of the Codec class to use. The codec must implement HBase's Codec interface. After connection setup, all passed cellblocks will be sent with this codec. The server will return cellblocks using this same codec as long as the codec is on the server's CLASSPATH (else you will get UnsupportedCellCodecException).
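
For example, a minimal sketch of selecting an alternate codec on the client side, assuming org.apache.hadoop.hbase.codec.KeyValueCodecWithTags is on both the client's and the server's CLASSPATH:

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;

  public class CodecConfigExample {
    public static void main(String[] args) throws IOException {
      Configuration conf = HBaseConfiguration.create();
      // Name of a class implementing HBase's Codec interface.
      conf.set("hbase.client.rpc.codec",
          "org.apache.hadoop.hbase.codec.KeyValueCodecWithTags");
      // Cellblocks on connections built from this configuration use the codec.
      try (Connection connection = ConnectionFactory.createConnection(conf)) {
        // ... use the connection as usual ...
      }
    }
  }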

To change the default codec, set hbase.client.default.rpc.codec.

To disable cellblocks completely and to go pure protobuf, set the default to the empty String and do not specify a codec in your Configuration. So, set hbase.client.default.rpc.codec to the empty string and do not set hbase.client.rpc.codec. This will cause the client to connect to the server with no codec specified. If a server sees no codec, it will return all responses in pure protobuf. Running pure protobuf all the time will be slower than running with cellblocks.
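
A minimal sketch of the pure-protobuf setup, reusing the skeleton from the codec example above; only the configuration lines change:

  Configuration conf = HBaseConfiguration.create();
  // Empty default codec, and hbase.client.rpc.codec deliberately left unset:
  // the client connects with no codec, so the server answers in pure protobuf.
  conf.set("hbase.client.default.rpc.codec", "");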

N.4.3.2. Compression

Uses Hadoop's compression codecs. To enable compression of passed cellblocks, set hbase.client.rpc.compressor to the name of the compressor to use. The compressor must implement Hadoop's CompressionCodec interface. After connection setup, all passed cellblocks will be sent compressed. The server will return cellblocks compressed using this same compressor as long as the compressor is on its CLASSPATH (else you will get UnsupportedCompressionCodecException).
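
For example, a minimal sketch assuming Hadoop's org.apache.hadoop.io.compress.GzipCodec is available on both the client's and the server's CLASSPATH; again only the configuration lines differ from the codec example above:

  Configuration conf = HBaseConfiguration.create();
  // Name of a class implementing Hadoop's CompressionCodec interface.
  conf.set("hbase.client.rpc.compressor",
      "org.apache.hadoop.io.compress.GzipCodec");
  // All cellblocks passed after connection setup will be sent compressed.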
