org.apache.hadoop.hbase.client.HTable.batch(List<? extends Row>)
If any exception is thrown by one of the actions, there is no way to
retrieve the partially executed results. Use HTable.batch(List<? extends Row>, Object[]) instead.
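A minimal sketch of the suggested replacement, assuming HBase 0.96+; the table and column names here are made up. batch(List, Object[]) fills the caller-supplied results array even when some actions fail, so the partially executed results survive the exception:

```java
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException;
import org.apache.hadoop.hbase.client.Row;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchWithResults {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "demo_table"); // hypothetical table
    try {
      List<Row> actions = new ArrayList<Row>();
      Put put = new Put(Bytes.toBytes("row1"));
      put.add(Bytes.toBytes("cf"), Bytes.toBytes("q"), Bytes.toBytes("v"));
      actions.add(put);

      Object[] results = new Object[actions.size()];
      try {
        table.batch(actions, results);
      } catch (RetriesExhaustedWithDetailsException e) {
        // results[i] holds a Result on success, a Throwable on failure,
        // or null if the action was never processed.
        for (int i = 0; i < results.length; i++) {
          System.out.println("action " + i + " -> " + results[i]);
        }
      }
    } finally {
      table.close();
    }
  }
}
```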
|
org.apache.hadoop.hbase.client.HTableInterface.batch(List<? extends Row>)
|
org.apache.hadoop.hbase.client.HTable.batchCallback(List<? extends Row>, Batch.Callback<R>)
|
org.apache.hadoop.hbase.client.HTableInterface.batchCallback(List<? extends Row>, Batch.Callback<R>)
|
org.apache.hadoop.hbase.client.HConnection.clearRegionCache(byte[]) |
org.apache.hadoop.hbase.client.HConnectionManager.deleteAllConnections()
Kept for backward compatibility, but the behavior is broken; see HBASE-8983.
|
org.apache.hadoop.hbase.client.HConnectionManager.deleteAllConnections(boolean) |
org.apache.hadoop.hbase.client.HConnectionManager.deleteConnection(Configuration) |
org.apache.hadoop.hbase.client.HConnectionManager.deleteStaleConnection(HConnection) |
org.apache.hadoop.hbase.filter.Filter.filterRow(List<KeyValue>) |
org.apache.hadoop.hbase.filter.FilterList.filterRow(List<KeyValue>) |
org.apache.hadoop.hbase.client.Query.getACLStrategy()
No effect
|
org.apache.hadoop.hbase.client.Mutation.getACLStrategy()
No effect
|
org.apache.hadoop.hbase.client.HConnection.getAdmin(ServerName, boolean)
You can pass the master flag, but nothing special is done with it.
|
org.apache.hadoop.hbase.client.Result.getColumn(byte[], byte[])
|
org.apache.hadoop.hbase.client.Result.getColumnLatest(byte[], byte[])
|
org.apache.hadoop.hbase.client.Result.getColumnLatest(byte[], int, int, byte[], int, int)
|
org.apache.hadoop.hbase.client.HTable.getConnection()
This method will be changed from public to package protected.
|
org.apache.hadoop.hbase.client.HConnectionManager.getConnection(Configuration) |
org.apache.hadoop.hbase.client.HConnection.getCurrentNrHRS()
This method will be changed from public to package protected.
|
org.apache.hadoop.hbase.HColumnDescriptor.getDataBlockEncodingOnDisk() |
org.apache.hadoop.hbase.Cell.getFamily()
|
org.apache.hadoop.hbase.client.Mutation.getFamilyMap()
|
org.apache.hadoop.hbase.client.HConnection.getHTableDescriptor(byte[]) |
org.apache.hadoop.hbase.client.HConnection.getHTableDescriptors(List<String>) |
org.apache.hadoop.hbase.client.HConnection.getKeepAliveMasterService()
Since 0.96.0
|
org.apache.hadoop.hbase.filter.Filter.getNextKeyHint(KeyValue) |
org.apache.hadoop.hbase.filter.FilterList.getNextKeyHint(KeyValue) |
org.apache.hadoop.hbase.HTableDescriptor.getOwnerString() |
org.apache.hadoop.hbase.Cell.getQualifier()
|
org.apache.hadoop.hbase.client.HConnection.getRegionLocation(byte[], byte[], boolean) |
org.apache.hadoop.hbase.Cell.getRow()
|
org.apache.hadoop.hbase.client.HTableInterface.getRowOrBefore(byte[], byte[])
As of version 0.92 this method is deprecated without replacement.
getRowOrBefore is used internally to find entries in hbase:meta and, to be
efficient, makes assumptions about the table that hold for hbase:meta but
not in general.
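Since no replacement is named, here is one hedged workaround, assuming HBase 0.98+ where reversed scans (HBASE-4811) are available: open a reversed scan starting at the target row and take the first result, which is the closest row at or before it.

```java
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;

public final class RowOrBefore {
  // Returns the entry for 'row', or the closest preceding row, or null if none.
  static Result rowOrBefore(HTable table, byte[] row, byte[] family) throws Exception {
    Scan scan = new Scan(row);  // start at the requested row...
    scan.setReversed(true);     // ...and walk backwards through the table
    scan.addFamily(family);
    scan.setCaching(1);         // we only need the first result
    ResultScanner scanner = table.getScanner(scan);
    try {
      return scanner.next();
    } finally {
      scanner.close();
    }
  }
}
```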
|
org.apache.hadoop.hbase.client.HTable.getScannerCaching()
|
org.apache.hadoop.hbase.ClusterStatus.getServerInfo()
|
org.apache.hadoop.hbase.io.ImmutableBytesWritable.getSize()
|
org.apache.hadoop.hbase.HTableDescriptor.getTableDir(Path, byte[]) |
org.apache.hadoop.hbase.HRegionInfo.getTableName()
Since 0.96.0; use getTable() instead.
|
org.apache.hadoop.hbase.client.ClientScanner.getTableName()
|
org.apache.hadoop.hbase.HRegionInfo.getTableName(byte[])
Since 0.96.0; use getTable(byte[]) instead.
|
org.apache.hadoop.hbase.client.HBaseAdmin.getTableNames() |
org.apache.hadoop.hbase.client.HConnection.getTableNames() |
org.apache.hadoop.hbase.client.HBaseAdmin.getTableNames(Pattern) |
org.apache.hadoop.hbase.client.HBaseAdmin.getTableNames(String) |
org.apache.hadoop.hbase.Cell.getTagsLength()
|
org.apache.hadoop.hbase.Cell.getTagsLengthUnsigned()
From the next major version this will be renamed to getTagsLength(), which will return an int.
|
org.apache.hadoop.hbase.Cell.getValue()
|
org.apache.hadoop.hbase.HRegionInfo.getVersion()
HRegionInfo is no longer a VersionedWritable.
|
org.apache.hadoop.hbase.client.HTable.getWriteBuffer()
Since 0.96. This is an internal buffer that should be neither read nor written.
|
org.apache.hadoop.hbase.client.Mutation.getWriteToWAL()
|
org.apache.hadoop.hbase.client.HTable.incrementColumnValue(byte[], byte[], byte[], long, boolean)
|
org.apache.hadoop.hbase.client.HTableInterface.incrementColumnValue(byte[], byte[], byte[], long, boolean)
|
org.apache.hadoop.hbase.HTableDescriptor.isDeferredLogFlush() |
org.apache.hadoop.hbase.client.HConnection.isTableAvailable(byte[]) |
org.apache.hadoop.hbase.client.HConnection.isTableAvailable(byte[], byte[][]) |
org.apache.hadoop.hbase.client.HConnection.isTableDisabled(byte[]) |
org.apache.hadoop.hbase.client.HConnection.isTableEnabled(byte[]) |
org.apache.hadoop.hbase.client.HTable.isTableEnabled(byte[])
|
org.apache.hadoop.hbase.client.HTable.isTableEnabled(Configuration, byte[])
|
org.apache.hadoop.hbase.client.HTable.isTableEnabled(Configuration, String)
|
org.apache.hadoop.hbase.client.HTable.isTableEnabled(Configuration, TableName)
|
org.apache.hadoop.hbase.client.HTable.isTableEnabled(String)
|
org.apache.hadoop.hbase.client.HTable.isTableEnabled(TableName)
|
org.apache.hadoop.hbase.client.Result.list()
|
org.apache.hadoop.hbase.client.HConnection.locateRegion(byte[], byte[]) |
org.apache.hadoop.hbase.client.HConnection.locateRegions(byte[]) |
org.apache.hadoop.hbase.client.HConnection.locateRegions(byte[], boolean, boolean) |
org.apache.hadoop.hbase.client.HConnection.processBatch(List<? extends Row>, byte[], ExecutorService, Object[]) |
org.apache.hadoop.hbase.client.HConnection.processBatch(List<? extends Row>, TableName, ExecutorService, Object[])
|
org.apache.hadoop.hbase.client.HConnection.processBatchCallback(List<? extends Row>, byte[], ExecutorService, Object[], Batch.Callback<R>) |
org.apache.hadoop.hbase.client.HConnection.processBatchCallback(List<? extends Row>, TableName, ExecutorService, Object[], Batch.Callback<R>)
|
org.apache.hadoop.hbase.client.HTableMultiplexer.put(byte[], List<Put>) |
org.apache.hadoop.hbase.client.HTableMultiplexer.put(byte[], Put) |
org.apache.hadoop.hbase.client.HTableMultiplexer.put(byte[], Put, int) |
org.apache.hadoop.hbase.client.Result.raw()
|
org.apache.hadoop.hbase.HColumnDescriptor.readFields(DataInput)
|
org.apache.hadoop.hbase.HTableDescriptor.readFields(DataInput)
|
org.apache.hadoop.hbase.HRegionInfo.readFields(DataInput)
Use protobuf deserialization instead.
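A minimal sketch of the protobuf round trip, assuming HBase 0.96+ where HRegionInfo exposes a toByteArray()/parseFrom() pair (HColumnDescriptor and HTableDescriptor offer the same pair for the two readFields entries above); the table name is made up:

```java
import org.apache.hadoop.hbase.HRegionInfo;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.exceptions.DeserializationException;

public class RegionInfoPb {
  public static void main(String[] args) throws DeserializationException {
    HRegionInfo hri = new HRegionInfo(TableName.valueOf("demo_table")); // hypothetical table
    byte[] bytes = hri.toByteArray();                // replaces write(DataOutput)
    HRegionInfo copy = HRegionInfo.parseFrom(bytes); // replaces readFields(DataInput)
    System.out.println(copy.getRegionNameAsString());
  }
}
```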
|
org.apache.hadoop.hbase.client.HConnection.relocateRegion(byte[], byte[]) |
org.apache.hadoop.hbase.client.Query.setACLStrategy(boolean)
No effect
|
org.apache.hadoop.hbase.client.Mutation.setACLStrategy(boolean)
No effect
|
org.apache.hadoop.hbase.client.HTable.setAutoFlush(boolean) |
org.apache.hadoop.hbase.client.HTableInterface.setAutoFlush(boolean)
In 0.96. When called as setAutoFlush(false), this method also sets
clearBufferOnFail to true, which is unexpected but kept for historical reasons.
Replace it with setAutoFlush(false, false) if that is exactly what you want, or
with HTableInterface.setAutoFlushTo(boolean) for all other cases.
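A minimal sketch of the two replacement calls described above; table creation is elided, and any open HTableInterface works:

```java
import org.apache.hadoop.hbase.client.HTableInterface;

public final class AutoFlushMigration {
  static void disableAutoFlush(HTableInterface table) {
    // Explicit two-argument form: autoFlush = false, clearBufferOnFail = false.
    table.setAutoFlush(false, false);
    // Or, for all other cases, the plain toggle:
    // table.setAutoFlushTo(false);
  }
}
```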
|
org.apache.hadoop.hbase.HTableDescriptor.setDeferredLogFlush(boolean) |
org.apache.hadoop.hbase.HColumnDescriptor.setEncodeOnDisk(boolean) |
org.apache.hadoop.hbase.client.Mutation.setFamilyMap(NavigableMap<byte[], List<Cell>>)
|
org.apache.hadoop.hbase.HTableDescriptor.setName(byte[]) |
org.apache.hadoop.hbase.HTableDescriptor.setName(TableName) |
org.apache.hadoop.hbase.HTableDescriptor.setOwner(User) |
org.apache.hadoop.hbase.HTableDescriptor.setOwnerString(String) |
org.apache.hadoop.hbase.client.HTable.setScannerCaching(int)
|
org.apache.hadoop.hbase.client.Mutation.setWriteToWAL(boolean)
|
org.apache.hadoop.hbase.filter.Filter.transform(KeyValue) |
org.apache.hadoop.hbase.filter.FilterList.transform(KeyValue) |
org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.tryAtomicRegionLoad(HConnection, byte[], byte[], Collection<LoadQueueItem>)
|
org.apache.hadoop.hbase.client.HConnection.updateCachedLocations(byte[], byte[], Object, HRegionLocation) |
org.apache.hadoop.hbase.HColumnDescriptor.write(DataOutput)
|
org.apache.hadoop.hbase.HTableDescriptor.write(DataOutput)
Writables are going away.
Use MessageLite.toByteArray() instead.
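A minimal sketch of the protobuf-backed serialization, assuming HBase 0.96+ where HTableDescriptor carries its own toByteArray()/parseFrom() pair over the underlying protobuf message; the table name is made up:

```java
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;

public class DescriptorPb {
  public static void main(String[] args) throws Exception {
    HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("demo_table")); // hypothetical
    byte[] pbBytes = htd.toByteArray(); // replaces write(DataOutput)
    HTableDescriptor copy = HTableDescriptor.parseFrom(pbBytes);
    System.out.println(copy.getTableName());
  }
}
```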
|
org.apache.hadoop.hbase.HRegionInfo.write(DataOutput)
|