Package | Description |
---|---|
org.apache.hadoop.hdfs | A distributed implementation of FileSystem. |
org.apache.hadoop.hdfs.protocol | |
org.apache.hadoop.hdfs.server.datanode | |
org.apache.hadoop.hdfs.server.namenode | |
org.apache.hadoop.hdfs.server.protocol | |
Modifier and Type | Method and Description |
---|---|
DatanodeInfo[] | DFSClient.datanodeReport(FSConstants.DatanodeReportType type) |
DatanodeInfo | DFSClient.DFSInputStream.getCurrentDatanode() Returns the datanode from which the stream is currently reading. |
DatanodeInfo | DFSClient.DFSDataInputStream.getCurrentDatanode() Returns the datanode from which the stream is currently reading. |
DatanodeInfo[] | DistributedFileSystem.getDataNodeStats() Return statistics for each datanode. |
DatanodeInfo[] | ChecksumDistributedFileSystem.getDataNodeStats() Return statistics for each datanode. |
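For context, a minimal sketch of how the DistributedFileSystem.getDataNodeStats() entry above can be used from client code. It assumes fs.default.name in the Configuration already points at an HDFS namenode; the printed fields are illustrative, not the full DatanodeInfo contents.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;

public class DatanodeStatsExample {
  public static void main(String[] args) throws Exception {
    // Assumes fs.default.name is configured to an HDFS URI (e.g. hdfs://namenode:9000)
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    if (fs instanceof DistributedFileSystem) {
      DistributedFileSystem dfs = (DistributedFileSystem) fs;
      // getDataNodeStats() returns one DatanodeInfo per datanode known to the namenode
      for (DatanodeInfo dn : dfs.getDataNodeStats()) {
        System.out.println(dn.getName()
            + " capacity=" + dn.getCapacity()
            + " remaining=" + dn.getRemaining());
      }
    }
  }
}
```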
Modifier and Type | Method and Description |
---|---|
int | DFSUtil.StaleComparator.compare(DatanodeInfo a, DatanodeInfo b) |
Modifier and Type | Method and Description |
---|---|
DatanodeInfo[] | ClientProtocol.getDatanodeReport(FSConstants.DatanodeReportType type) Get a report on the system's current datanodes. |
DatanodeInfo[] | LocatedBlock.getLocations() |
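A minimal sketch of how LocatedBlock.getLocations() and ClientProtocol.getDatanodeReport() from the table above fit together on the client side. It assumes a ClientProtocol RPC proxy named `namenode` has already been obtained (normally DFSClient does this internally); obtaining the proxy is not shown.

```java
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.FSConstants;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

public class BlockLocationLister {
  // 'namenode' is an assumed, already-connected ClientProtocol proxy
  static void printLocations(ClientProtocol namenode, String src) throws IOException {
    // One DatanodeInfo per replica of each block of the file
    LocatedBlocks blocks = namenode.getBlockLocations(src, 0, Long.MAX_VALUE);
    for (LocatedBlock blk : blocks.getLocatedBlocks()) {
      for (DatanodeInfo dn : blk.getLocations()) {
        System.out.println(blk.getBlock() + " -> " + dn.getName());
      }
    }

    // The same proxy also exposes the cluster-wide datanode report
    DatanodeInfo[] live = namenode.getDatanodeReport(FSConstants.DatanodeReportType.LIVE);
    System.out.println("live datanodes: " + live.length);
  }
}
```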
Modifier and Type | Method and Description |
---|---|
LocatedBlock | ClientProtocol.addBlock(String src, String clientName, DatanodeInfo[] excludedNodes) A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock(). |
LocatedBlock | ClientDatanodeProtocol.recoverBlock(Block block, boolean keepLength, DatanodeInfo[] targets) Start generation-stamp recovery for the specified block. |
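A sketch of how the DatanodeInfo[] excludedNodes parameter of ClientProtocol.addBlock() above is typically used: datanodes that failed earlier in the write pipeline are passed back so the namenode avoids them when allocating the next block. This assumes `namenode` is a ClientProtocol proxy and that `src` is already open for writing under the lease of `clientName` (normally all of this is managed by DFSClient).

```java
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class AddBlockSketch {
  // Sketch only: 'namenode', 'src' and 'clientName' are assumed to be set up elsewhere.
  static LocatedBlock allocateBlock(ClientProtocol namenode, String src,
      String clientName, DatanodeInfo[] deadNodes) throws IOException {
    // Exclude the datanodes that failed earlier so they are not chosen as new targets
    LocatedBlock newBlock = namenode.addBlock(src, clientName, deadNodes);
    for (DatanodeInfo target : newBlock.getLocations()) {
      System.out.println("pipeline target: " + target.getName());
    }
    return newBlock;
  }
}
```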
Constructor and Description |
---|
DatanodeInfo(DatanodeInfo from) |
LocatedBlock(Block b, DatanodeInfo[] locs) |
LocatedBlock(Block b, DatanodeInfo[] locs, long startOffset) |
LocatedBlock(Block b, DatanodeInfo[] locs, long startOffset, boolean corrupt) |
UnregisteredDatanodeException(DatanodeID nodeID, DatanodeInfo storedNode) |
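A small sketch combining two of the constructors listed above, the DatanodeInfo copy constructor and LocatedBlock(Block, DatanodeInfo[], long, boolean), in the way a test fixture might; the block id, length, and generation stamp are arbitrary sample values.

```java
import org.apache.hadoop.hdfs.protocol.Block;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class LocatedBlockFixture {
  // Builds a LocatedBlock describing one block replicated on two datanodes.
  static LocatedBlock sampleLocatedBlock(DatanodeInfo first, DatanodeInfo second) {
    // Sample values: block id 1, 64 MB length, generation stamp 1001
    Block block = new Block(1L, 64 * 1024 * 1024, 1001L);
    DatanodeInfo[] locs = new DatanodeInfo[] {
        new DatanodeInfo(first),   // copy constructor from the table above
        new DatanodeInfo(second)
    };
    // startOffset = 0 within the file, corrupt = false
    return new LocatedBlock(block, locs, 0L, false);
  }
}
```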
Modifier and Type | Method and Description |
---|---|
static InterDatanodeProtocol | DataNode.createInterDataNodeProtocolProxy(DatanodeInfo info, Configuration conf, int socketTimeout, boolean connectToDnViaHostname) |
LocatedBlock | DataNode.recoverBlock(Block block, boolean keepLength, DatanodeInfo[] targets) Start generation-stamp recovery for the specified block. |
Daemon | DataNode.recoverBlocks(Block[] blocks, DatanodeInfo[][] targets) |
Modifier and Type | Class and Description |
---|---|
class | DatanodeDescriptor DatanodeDescriptor tracks stats on a given DataNode, such as available storage capacity, last update time, etc., and maintains a set of blocks stored on the datanode. |
Modifier and Type | Method and Description |
---|---|
static DatanodeInfo | JspHelper.bestNode(LocatedBlock blk) |
DatanodeInfo | FSNamesystem.chooseDatanode(String srcPath, String address, long blocksize) Choose a datanode near to the given address. |
DatanodeInfo[] | FSNamesystem.datanodeReport(FSConstants.DatanodeReportType type) |
DatanodeInfo | FSNamesystem.getDataNodeInfo(String name) |
DatanodeInfo[] | NameNode.getDatanodeReport(FSConstants.DatanodeReportType type) |
Modifier and Type | Method and Description |
---|---|
LocatedBlock | NameNode.addBlock(String src, String clientName, DatanodeInfo[] excludedNodes) |
void | BlockPlacementPolicy.adjustSetsWithChosenReplica(Map<String,List<DatanodeDescriptor>> rackMap, List<DatanodeDescriptor> moreThanOne, List<DatanodeDescriptor> exactlyOne, DatanodeInfo cur) Adjust rackMap, moreThanOne, and exactlyOne after removing the replica on cur. |
BlocksWithLocations | NameNode.getBlocks(DatanodeInfo datanode, long size) Return a list of blocks and their locations on datanode whose total size is size. |
protected String | BlockPlacementPolicyWithNodeGroup.getRack(DatanodeInfo cur) |
protected String | BlockPlacementPolicy.getRack(DatanodeInfo datanode) Get the rack string for a datanode. |
void | FSNamesystem.markBlockAsCorrupt(Block blk, DatanodeInfo dn) Mark the block belonging to datanode dn as corrupt. |
Modifier and Type | Method and Description |
---|---|
DatanodeInfo[][] | BlockCommand.getTargets() |
Modifier and Type | Method and Description |
---|---|
BlocksWithLocations | NamenodeProtocol.getBlocks(DatanodeInfo datanode, long size) Get a list of blocks belonging to datanode whose total size is equal to size. |
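A minimal sketch of how NamenodeProtocol.getBlocks() from the table above is invoked, in the style of the Balancer, which pulls candidate blocks residing on a given datanode. It assumes `namenode` is an already-obtained NamenodeProtocol proxy; the 2 GB size is an arbitrary example.

```java
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.server.protocol.BlocksWithLocations;
import org.apache.hadoop.hdfs.server.protocol.NamenodeProtocol;

public class BalancerFetchSketch {
  // Sketch only: 'namenode' is an assumed NamenodeProtocol proxy; 'source' is the
  // datanode whose blocks are being considered for relocation.
  static BlocksWithLocations fetchCandidates(NamenodeProtocol namenode,
      DatanodeInfo source) throws IOException {
    // Ask the namenode for roughly 2 GB worth of blocks stored on 'source'
    long bytesWanted = 2L * 1024 * 1024 * 1024;
    return namenode.getBlocks(source, bytesWanted);
  }
}
```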
Copyright © 2009 The Apache Software Foundation