@InterfaceAudience.Private public class DefaultCompactor extends Compactor
Nested classes/interfaces inherited from class Compactor: Compactor.CellSink, Compactor.FileDetails

Fields inherited from class Compactor: compactionCompression, conf, progress, store

| Constructor and Description |
|---|
| DefaultCompactor(org.apache.hadoop.conf.Configuration conf, Store store) |
| Modifier and Type | Method and Description |
|---|---|
| List<org.apache.hadoop.fs.Path> | compact(CompactionRequest request) Do a minor/major compaction on an explicit set of storefiles from a Store (see the sketch after this table). |
| List<org.apache.hadoop.fs.Path> | compactForTesting(Collection<StoreFile> filesToCompact, boolean isMajor) Compact a list of files for testing. |
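As a rough illustration of the summary above, the sketch below wires the pieces together. It assumes a Store and a CompactionRequest are already supplied by region server internals (user code does not normally construct these), and the package locations shown are assumptions based on the 0.98-era HBase source layout; only the constructor and compact(CompactionRequest) calls are taken from this page.

```java
import java.io.IOException;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.Store;
import org.apache.hadoop.hbase.regionserver.compactions.CompactionRequest;
import org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor;

public class CompactionSketch {
  // 'store' and 'request' are assumed to be provided by the caller;
  // how they are obtained is outside the scope of this page.
  static List<Path> runCompaction(Configuration conf, Store store,
      CompactionRequest request) throws IOException {
    DefaultCompactor compactor = new DefaultCompactor(conf, store);
    // Performs the minor/major compaction described in the request and
    // returns the paths of the newly written storefiles.
    return compactor.compact(request);
  }
}
```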
Methods inherited from class Compactor: createFileScanners, createScanner, createScanner, getFileDetails, getProgress, getSmallestReadPoint, performCompaction, postCreateCoprocScanner, preCreateCoprocScanner

public DefaultCompactor(org.apache.hadoop.conf.Configuration conf, Store store)
public List<org.apache.hadoop.fs.Path> compact(CompactionRequest request) throws IOException

Do a minor/major compaction on an explicit set of storefiles from a Store.

Throws: IOException

public List<org.apache.hadoop.fs.Path> compactForTesting(Collection<StoreFile> filesToCompact, boolean isMajor) throws IOException

Compact a list of files for testing. Creates a CompactionRequest to pass to compact(CompactionRequest).

Parameters:
filesToCompact - the files to compact. These are used as the compactionSelection for the generated CompactionRequest.
isMajor - true to major compact (prune all deletes, max versions, etc.)

Throws: IOException
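A minimal sketch of how compactForTesting might be invoked from test code, assuming the DefaultCompactor and the candidate StoreFile collection are already in hand (how they are obtained is not covered by this page); only the compactForTesting signature itself is taken from the documentation above.

```java
import java.io.IOException;
import java.util.Collection;
import java.util.List;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.regionserver.StoreFile;
import org.apache.hadoop.hbase.regionserver.compactions.DefaultCompactor;

public class CompactForTestingSketch {
  // Compacts the given files as a major compaction; isMajor = true prunes
  // deletes and excess versions, as described in the parameter notes above.
  static List<Path> majorCompactAll(DefaultCompactor compactor,
      Collection<StoreFile> filesToCompact) throws IOException {
    return compactor.compactForTesting(filesToCompact, true);
  }
}
```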