public abstract class HadoopShimsSecure extends Object implements HadoopShims
| Modifier and Type | Class and Description |
|---|---|
| static class | HadoopShimsSecure.CombineFileInputFormatShim<K,V> |
| static class | HadoopShimsSecure.CombineFileRecordReader<K,V> |
| static class | HadoopShimsSecure.InputSplitShim |

**Nested classes/interfaces inherited from interface HadoopShims:** HadoopShims.ByteBufferPoolShim, HadoopShims.DirectCompressionType, HadoopShims.DirectDecompressorShim, HadoopShims.HCatHadoopShims, HadoopShims.HdfsEncryptionShim, HadoopShims.HdfsFileStatus, HadoopShims.JobTrackerState, HadoopShims.KerberosNameShim, HadoopShims.MiniDFSShim, HadoopShims.MiniMrShim, HadoopShims.NoopHdfsEncryptionShim, HadoopShims.StoragePolicyShim, HadoopShims.StoragePolicyValue, HadoopShims.WebHCatJTShim, HadoopShims.ZeroCopyReaderShim

| Constructor and Description |
|---|
| HadoopShimsSecure() |
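Callers normally work against the HadoopShims interface rather than this abstract class directly. A minimal sketch of that pattern, assuming the class lives in the usual Hive shims package (org.apache.hadoop.hive.shims) and that the concrete implementation is obtained through Hive's ShimLoader, which is not documented on this page:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.shims.HadoopShims;   // assumed package
import org.apache.hadoop.hive.shims.ShimLoader;    // assumed loader, not part of this page

public class ShimUsageSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // ShimLoader picks the HadoopShimsSecure subclass matching the Hadoop version on the classpath.
        HadoopShims shims = ShimLoader.getHadoopShims();
        // Version-dependent lookups go through the shim instead of reading raw configuration keys.
        System.out.println("Job launcher RPC address: " + shims.getJobLauncherRpcAddress(conf));
        System.out.println("Local mode: " + shims.isLocalMode(conf));
    }
}
```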
| Modifier and Type | Method and Description |
|---|---|
| abstract void | addDelegationTokens(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.security.Credentials cred, String uname). Get the delegation token and add it to the Credentials. |
| void | checkFileAccess(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus stat, org.apache.hadoop.fs.permission.FsAction action). Check if the configured UGI has access to the path for the given file system action. |
| abstract org.apache.hadoop.fs.FileSystem | createProxyFileSystem(org.apache.hadoop.fs.FileSystem fs, URI uri). Create a proxy file system that can serve a given scheme/authority using some other file system. |
| abstract long | getDefaultBlockSize(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path). Get the default block size for the path. |
| abstract short | getDefaultReplication(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path). Get the default replication for a path. |
| abstract String | getJobLauncherHttpAddress(org.apache.hadoop.conf.Configuration conf). All references to the jobtracker/resource manager HTTP address in the configuration should be done through this shim. |
| abstract String | getJobLauncherRpcAddress(org.apache.hadoop.conf.Configuration conf). All retrieval of the jobtracker/resource manager RPC address from the configuration should be done through this shim. |
| abstract HadoopShims.JobTrackerState | getJobTrackerState(org.apache.hadoop.mapred.ClusterStatus clusterStatus). Convert the ClusterStatus to its Thrift equivalent: JobTrackerState. |
| abstract org.apache.hadoop.fs.FileSystem | getNonCachedFileSystem(URI uri, org.apache.hadoop.conf.Configuration conf) |
| abstract boolean | isLocalMode(org.apache.hadoop.conf.Configuration conf). Check whether MR is configured to run in local mode. |
| abstract boolean | moveToAppropriateTrash(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf). Move the directory/file to trash. |
| abstract org.apache.hadoop.mapreduce.JobContext | newJobContext(org.apache.hadoop.mapreduce.Job job) |
| abstract org.apache.hadoop.mapreduce.TaskAttemptContext | newTaskAttemptContext(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.util.Progressable progressable) |
| protected void | run(org.apache.hadoop.fs.FsShell shell, String[] command) |
| abstract void | setJobLauncherRpcAddress(org.apache.hadoop.conf.Configuration conf, String val). All updates to the jobtracker/resource manager RPC address in the configuration should be done through this shim. |
**Methods inherited from class java.lang.Object:** clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

**Methods inherited from interface HadoopShims:** createHdfsEncryptionShim, getCombineFileInputFormat, getConfiguration, getCurrentTrashPath, getDirectDecompressor, getFullFileStatus, getHadoopConfNames, getHCatShim, getJobConf, getKerberosNameShim, getLocations, getLocationsWithOffset, getLongComparator, getMergedCredentials, getMiniDfs, getMiniMrCluster, getMiniSparkCluster, getMiniTezCluster, getPassword, getPathWithoutSchemeAndAuthority, getStoragePolicyShim, getTaskAttemptLogUrl, getWebHCatShim, getZeroCopyReader, hasStickyBit, hflush, isDirectory, listLocatedStatus, mergeCredentials, newTaskAttemptID, readByteBuffer, refreshDefaultQueue, runDistCp, setFullFileStatus, setTotalOrderPartitionFile, startPauseMonitor, supportStickyBit, supportTrashFeature

**Method Detail**

**getJobTrackerState**

public abstract HadoopShims.JobTrackerState getJobTrackerState(org.apache.hadoop.mapred.ClusterStatus clusterStatus) throws Exception

Description copied from interface: HadoopShims. Convert the ClusterStatus to its Thrift equivalent: JobTrackerState.

Specified by: getJobTrackerState in interface HadoopShims

Throws: Exception - if no equivalent JobTrackerState exists
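A sketch of converting a live ClusterStatus into the shim's JobTrackerState. The status is assumed to be fetched through the classic org.apache.hadoop.mapred.JobClient API, which is not part of this page, and a HadoopShims instance is assumed to be available already:

```java
import org.apache.hadoop.hive.shims.HadoopShims;   // assumed package
import org.apache.hadoop.mapred.ClusterStatus;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class JobTrackerStateSketch {
    // Declares throws Exception because getJobTrackerState does when no equivalent state exists.
    static HadoopShims.JobTrackerState currentState(HadoopShims shims, JobConf jobConf) throws Exception {
        JobClient client = new JobClient(jobConf);
        try {
            ClusterStatus status = client.getClusterStatus();
            return shims.getJobTrackerState(status);
        } finally {
            client.close();
        }
    }
}
```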
**newTaskAttemptContext**

public abstract org.apache.hadoop.mapreduce.TaskAttemptContext newTaskAttemptContext(org.apache.hadoop.conf.Configuration conf, org.apache.hadoop.util.Progressable progressable)

Specified by: newTaskAttemptContext in interface HadoopShims

**newJobContext**

public abstract org.apache.hadoop.mapreduce.JobContext newJobContext(org.apache.hadoop.mapreduce.Job job)

Specified by: newJobContext in interface HadoopShims
**isLocalMode**

public abstract boolean isLocalMode(org.apache.hadoop.conf.Configuration conf)

Description copied from interface: HadoopShims. Check whether MR is configured to run in local mode.

Specified by: isLocalMode in interface HadoopShims

**setJobLauncherRpcAddress**

public abstract void setJobLauncherRpcAddress(org.apache.hadoop.conf.Configuration conf, String val)

Description copied from interface: HadoopShims. All updates to the jobtracker/resource manager RPC address in the configuration should be done through this shim.

Specified by: setJobLauncherRpcAddress in interface HadoopShims

**getJobLauncherHttpAddress**

public abstract String getJobLauncherHttpAddress(org.apache.hadoop.conf.Configuration conf)

Description copied from interface: HadoopShims. All references to the jobtracker/resource manager HTTP address in the configuration should be done through this shim.

Specified by: getJobLauncherHttpAddress in interface HadoopShims

**getJobLauncherRpcAddress**

public abstract String getJobLauncherRpcAddress(org.apache.hadoop.conf.Configuration conf)

Description copied from interface: HadoopShims. All retrieval of the jobtracker/resource manager RPC address from the configuration should be done through this shim.

Specified by: getJobLauncherRpcAddress in interface HadoopShims
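A sketch showing how the address getters, the setter, and the local-mode check might be used together. It assumes a HadoopShims instance is already available (for example via Hive's ShimLoader, not shown here); the address value passed to the setter is hypothetical:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hive.shims.HadoopShims;   // assumed package

public class JobLauncherAddressSketch {
    static void describeJobLauncher(HadoopShims shims, Configuration conf) {
        if (shims.isLocalMode(conf)) {
            System.out.println("MR runs in local mode; no external job launcher.");
            return;
        }
        // Reads go through the shim so the right key is used for the Hadoop version in play
        // (jobtracker vs. resource manager).
        System.out.println("RPC address:  " + shims.getJobLauncherRpcAddress(conf));
        System.out.println("HTTP address: " + shims.getJobLauncherHttpAddress(conf));
    }

    static void pointAtLocalLauncher(HadoopShims shims, Configuration conf) {
        // Updates also go through the shim rather than setting a raw configuration key.
        shims.setJobLauncherRpcAddress(conf, "localhost:8032");  // hypothetical address
    }
}
```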
**getDefaultReplication**

public abstract short getDefaultReplication(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)

Description copied from interface: HadoopShims. Get the default replication for a path.

Specified by: getDefaultReplication in interface HadoopShims

**getDefaultBlockSize**

public abstract long getDefaultBlockSize(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path)

Description copied from interface: HadoopShims. Get the default block size for the path.

Specified by: getDefaultBlockSize in interface HadoopShims
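A sketch using the two path-aware defaults together, assuming a HadoopShims instance and a reachable FileSystem for the given path:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.shims.HadoopShims;   // assumed package

public class FsDefaultsSketch {
    static void printDefaults(HadoopShims shims, Configuration conf, Path path) throws IOException {
        FileSystem fs = path.getFileSystem(conf);
        // Both calls take the path, so per-path defaults (where the file system supports them) are respected.
        long blockSize = shims.getDefaultBlockSize(fs, path);
        short replication = shims.getDefaultReplication(fs, path);
        System.out.println(path + ": blockSize=" + blockSize + " bytes, replication=" + replication);
    }
}
```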
**moveToAppropriateTrash**

public abstract boolean moveToAppropriateTrash(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.Path path, org.apache.hadoop.conf.Configuration conf) throws IOException

Description copied from interface: HadoopShims. Move the directory/file to trash.

Specified by: moveToAppropriateTrash in interface HadoopShims

Throws: IOException
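A sketch of trashing a path through the shim, assuming a HadoopShims instance. The page does not document the boolean return value, so treating false as "trash did not take the path" is an assumption made for illustration:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hive.shims.HadoopShims;   // assumed package

public class TrashSketch {
    static void dropPath(HadoopShims shims, Configuration conf, Path path) throws IOException {
        FileSystem fs = path.getFileSystem(conf);
        // Assumption: a false return means the path was not moved to trash, so fall back to a plain delete.
        if (!shims.moveToAppropriateTrash(fs, path, conf)) {
            fs.delete(path, true);  // recursive delete as a fallback
        }
    }
}
```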
**createProxyFileSystem**

public abstract org.apache.hadoop.fs.FileSystem createProxyFileSystem(org.apache.hadoop.fs.FileSystem fs, URI uri)

Description copied from interface: HadoopShims. Create a proxy file system that can serve a given scheme/authority using some other file system.

Specified by: createProxyFileSystem in interface HadoopShims

**getNonCachedFileSystem**

public abstract org.apache.hadoop.fs.FileSystem getNonCachedFileSystem(URI uri, org.apache.hadoop.conf.Configuration conf) throws IOException

Specified by: getNonCachedFileSystem in interface HadoopShims

Throws: IOException
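A sketch of the two FileSystem helpers, assuming a HadoopShims instance; the pfile://local URI is a made-up example of a scheme/authority to proxy, and the ownership note on the non-cached instance is an assumption:

```java
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hive.shims.HadoopShims;   // assumed package

public class FileSystemShimSketch {
    static FileSystem proxyOverLocal(HadoopShims shims, Configuration conf) throws IOException {
        // Serve the hypothetical pfile://local scheme/authority using the local file system.
        FileSystem local = FileSystem.getLocal(conf);
        return shims.createProxyFileSystem(local, URI.create("pfile://local"));
    }

    static FileSystem freshFs(HadoopShims shims, Configuration conf, URI uri) throws IOException {
        // Bypasses FileSystem's internal cache; assumption: the caller owns and should close this instance.
        return shims.getNonCachedFileSystem(uri, conf);
    }
}
```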
**run**

protected void run(org.apache.hadoop.fs.FsShell shell, String[] command) throws Exception

Throws: Exception
**checkFileAccess**

public void checkFileAccess(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.fs.FileStatus stat, org.apache.hadoop.fs.permission.FsAction action) throws IOException, AccessControlException, Exception

Description copied from interface: HadoopShims. Check if the configured UGI has access to the path for the given file system action.

Specified by: checkFileAccess in interface HadoopShims

Throws: IOException, AccessControlException, Exception
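A sketch of an access probe built on checkFileAccess, assuming a HadoopShims instance. The page does not say how a denial is reported; treating an AccessControlException (the org.apache.hadoop.security variant is assumed here) as "access denied" is an interpretation made for illustration:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsAction;
import org.apache.hadoop.hive.shims.HadoopShims;           // assumed package
import org.apache.hadoop.security.AccessControlException;  // assumed exception variant

public class FileAccessSketch {
    // Returns true if the configured UGI may perform the action on the path.
    static boolean canAccess(HadoopShims shims, Configuration conf, Path path, FsAction action) throws Exception {
        FileSystem fs = path.getFileSystem(conf);
        FileStatus stat = fs.getFileStatus(path);
        try {
            shims.checkFileAccess(fs, stat, action);
            return true;
        } catch (AccessControlException denied) {
            // Assumption: denial surfaces as AccessControlException; other exceptions mean the check failed.
            return false;
        }
    }
}
```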
**addDelegationTokens**

public abstract void addDelegationTokens(org.apache.hadoop.fs.FileSystem fs, org.apache.hadoop.security.Credentials cred, String uname) throws IOException

Description copied from interface: HadoopShims. Get the delegation token and add it to the Credentials.

Specified by: addDelegationTokens in interface HadoopShims

Parameters: fs - FileSystem object to HDFS; cred - Credentials object to add the token to; uname - user name

Throws: IOException - if an error occurred on adding the token
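A sketch of collecting a delegation token before doing work on behalf of a user, assuming a HadoopShims instance. Attaching the resulting Credentials to a job is outside the scope of this page and is not shown:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hive.shims.HadoopShims;   // assumed package
import org.apache.hadoop.security.Credentials;

public class DelegationTokenSketch {
    static Credentials tokensFor(HadoopShims shims, Configuration conf, String userName) throws IOException {
        FileSystem fs = FileSystem.get(conf);   // the HDFS instance to request the token from
        Credentials cred = new Credentials();
        // Fetches a delegation token for userName and adds it to cred.
        shims.addDelegationTokens(fs, cred, userName);
        return cred;
    }
}
```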
Copyright © 2017 The Apache Software Foundation. All rights reserved.