| Package | Description |
|---|---|
| `org.apache.hadoop.examples` | Hadoop example code. |
| `org.apache.hadoop.mapreduce.lib.reduce` | |
| `org.apache.hadoop.typedbytes` | Typed bytes are sequences of bytes in which the first byte is a type code. |
| `org.apache.hadoop.util` | Common utilities. |
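In the typed bytes format, every value is preceded by a one-byte type code that tells the reader how to decode the bytes that follow. A minimal sketch of encoding an `int` this way (the type code value `3` for int follows the Hadoop streaming typed bytes specification; the class and helper names here are illustrative, not Hadoop's own):

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class TypedBytesSketch {
    // Type code for a 32-bit integer in the typed bytes format.
    static final byte INT_TYPE_CODE = 3;

    // Serialize an int as typed bytes: one type-code byte, then four big-endian bytes.
    static byte[] writeTypedInt(int value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bytes);
        out.writeByte(INT_TYPE_CODE);
        out.writeInt(value); // DataOutputStream writes big-endian
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] encoded = writeTypedInt(42);
        // Five bytes total: the type code, then the value 0x0000002A.
        System.out.println(encoded.length + " " + encoded[0] + " " + encoded[4]); // prints 5 3 42
    }
}
```

Because the type code comes first, a reader can dispatch on it before consuming the payload, which is what makes heterogeneous streams of typed bytes self-describing.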
| Modifier and Type | Method and Description |
|---|---|
| `RecordReader<IntWritable,IntWritable>` | `SleepJob.SleepInputFormat.getRecordReader(InputSplit ignored, JobConf conf, Reporter reporter)` |
| Modifier and Type | Method and Description |
|---|---|
| `int` | `SleepJob.getPartition(IntWritable k, NullWritable v, int numPartitions)` |
| `int` | `SecondarySort.FirstPartitioner.getPartition(SecondarySort.IntPair key, IntWritable value, int numPartitions)` |
| `void` | `SleepJob.map(IntWritable key, IntWritable value, OutputCollector<IntWritable,NullWritable> output, Reporter reporter)` |
| `void` | `SleepJob.reduce(IntWritable key, Iterator<NullWritable> values, OutputCollector<NullWritable,NullWritable> output, Reporter reporter)` |
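The `getPartition` methods above route each key to one of `numPartitions` reduce tasks; the contract is simply that the result falls in `[0, numPartitions)` and is deterministic for a given key. A minimal hash-based sketch of that contract with plain `int` keys in place of the Writable types (this mirrors the common hash-partitioner idiom, not necessarily `SleepJob`'s own logic):

```java
public class PartitionSketch {
    // Map a key to a partition in [0, numPartitions); masking with
    // Integer.MAX_VALUE keeps the result non-negative for negative hash codes.
    static int getPartition(int key, int numPartitions) {
        return (Integer.hashCode(key) & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        // Every key lands in a valid partition, including negative keys.
        for (int key = -5; key <= 5; key++) {
            int p = getPartition(key, 4);
            assert 0 <= p && p < 4;
        }
        System.out.println(getPartition(10, 4)); // Integer.hashCode(10) == 10, prints 2
    }
}
```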
| Modifier and Type | Method and Description |
|---|---|
| `void` | `SleepJob.map(IntWritable key, IntWritable value, OutputCollector<IntWritable,NullWritable> output, Reporter reporter)` |
| `void` | `SecondarySort.Reduce.reduce(SecondarySort.IntPair key, Iterable<IntWritable> values, Reducer.Context context)` |
| `void` | `WordCount.IntSumReducer.reduce(Text key, Iterable<IntWritable> values, Reducer.Context context)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `IntSumReducer.reduce(Key key, Iterable<IntWritable> values, Reducer.Context context)` |
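`IntSumReducer.reduce` emits, for each key, the sum of that key's `IntWritable` values — the classic WordCount reduce step. A plain-Java sketch of the summation, with `Iterable<Integer>` standing in for `Iterable<IntWritable>` so it runs without Hadoop on the classpath:

```java
import java.util.Arrays;

public class IntSumSketch {
    // Sum every value for one key, as IntSumReducer.reduce does with IntWritables.
    static int reduce(Iterable<Integer> values) {
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        return sum;
    }

    public static void main(String[] args) {
        // For the key "the" with partial counts [1, 1, 2], the reducer emits ("the", 4).
        System.out.println(reduce(Arrays.asList(1, 1, 2))); // prints 4
    }
}
```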
| Modifier and Type | Method and Description |
|---|---|
| `IntWritable` | `TypedBytesWritableInput.readInt()` |
| `IntWritable` | `TypedBytesWritableInput.readInt(IntWritable iw)` |
| Modifier and Type | Method and Description |
|---|---|
| `IntWritable` | `TypedBytesWritableInput.readInt(IntWritable iw)` |
| `void` | `TypedBytesWritableOutput.writeInt(IntWritable iw)` |
| Constructor and Description |
|---|
| `MergeSort(Comparator<IntWritable> comparator)` |
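The `MergeSort` constructor takes a `Comparator<IntWritable>`, so the sort order is entirely pluggable. A minimal comparator-driven top-down merge sort over a generic element type (an illustrative sketch of the pattern, not Hadoop's implementation):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class MergeSortSketch<T> {
    private final Comparator<T> comparator;

    MergeSortSketch(Comparator<T> comparator) {
        this.comparator = comparator;
    }

    // Standard top-down merge sort, ordered entirely by the supplied comparator.
    List<T> sort(List<T> items) {
        if (items.size() <= 1) {
            return new ArrayList<>(items);
        }
        int mid = items.size() / 2;
        List<T> left = sort(items.subList(0, mid));
        List<T> right = sort(items.subList(mid, items.size()));
        List<T> merged = new ArrayList<>(items.size());
        int i = 0, j = 0;
        while (i < left.size() && j < right.size()) {
            // Take from the left on ties to keep the sort stable.
            if (comparator.compare(left.get(i), right.get(j)) <= 0) {
                merged.add(left.get(i++));
            } else {
                merged.add(right.get(j++));
            }
        }
        merged.addAll(left.subList(i, left.size()));
        merged.addAll(right.subList(j, right.size()));
        return merged;
    }

    public static void main(String[] args) {
        MergeSortSketch<Integer> sorter = new MergeSortSketch<>(Comparator.naturalOrder());
        System.out.println(sorter.sort(List.of(3, 1, 2))); // prints [1, 2, 3]
    }
}
```

Injecting the comparator rather than requiring `Comparable` elements is what lets a caller sort the same `IntWritable` data ascending, descending, or by any custom order without touching the sort itself.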
Copyright © 2009 The Apache Software Foundation