Package | Description |
---|---|
org.apache.hadoop.contrib.index.example | |
org.apache.hadoop.contrib.index.mapred | |
org.apache.hadoop.contrib.utils.join | |
org.apache.hadoop.examples | Hadoop example code. |
org.apache.hadoop.examples.dancing | This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. |
org.apache.hadoop.examples.terasort | This package consists of 3 map/reduce applications for Hadoop to compete in the annual terabyte sort competition. |
org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. |
org.apache.hadoop.mapred.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
org.apache.hadoop.mapred.lib.aggregate | Classes for performing various counting and aggregations. |
org.apache.hadoop.mapred.lib.db | This package contains a library to read records from a database as an input to a mapreduce job, and write the output records to the database. |
org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer. |
org.apache.hadoop.tools | |
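
Every method and constructor listed in the tables below takes a Reporter, the old-API handle a task uses for custom counters, status text, and liveness signals. As a point of reference, here is a minimal word-count style mapper/reducer sketch on the org.apache.hadoop.mapred API described above; the counter group and name ("WordCount", "WORDS") are illustrative and not taken from any class listed here.

```java
import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;

// Minimal word-count style mapper/reducer on the old mapred API.
// The Reporter argument is used for counters, status text, and progress.
public class WordCountSketch {

  public static class Map extends MapReduceBase
      implements Mapper<LongWritable, Text, Text, LongWritable> {
    private static final LongWritable ONE = new LongWritable(1);
    private final Text word = new Text();

    public void map(LongWritable key, Text value,
                    OutputCollector<Text, LongWritable> output,
                    Reporter reporter) throws IOException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        output.collect(word, ONE);
        reporter.incrCounter("WordCount", "WORDS", 1);  // custom counter (illustrative group/name)
      }
      reporter.setStatus("processed offset " + key.get());  // shown in the task UI
    }
  }

  public static class Reduce extends MapReduceBase
      implements Reducer<Text, LongWritable, Text, LongWritable> {
    public void reduce(Text key, Iterator<LongWritable> values,
                       OutputCollector<Text, LongWritable> output,
                       Reporter reporter) throws IOException {
      long sum = 0;
      while (values.hasNext()) {
        sum += values.next().get();
        reporter.progress();  // signal liveness during long-running reduces
      }
      output.collect(key, new LongWritable(sum));
    }
  }
}
```
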

Modifier and Type | Method and Description |
---|---|
RecordReader<DocumentID,LineDocTextAndOp> | LineDocInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
void | IdentityLocalAnalysis.map(DocumentID key, DocumentAndOp value, OutputCollector<DocumentID,DocumentAndOp> output, Reporter reporter) |
void | LineDocLocalAnalysis.map(DocumentID key, LineDocTextAndOp value, OutputCollector<DocumentID,DocumentAndOp> output, Reporter reporter) |

Modifier and Type | Method and Description |
---|---|
void | IndexUpdateMapper.map(K key, V value, OutputCollector<Shard,IntermediateForm> output, Reporter reporter): Map a key-value pair to a shard-and-intermediate form pair. |
void | IndexUpdateCombiner.reduce(Shard key, Iterator<IntermediateForm> values, OutputCollector<Shard,IntermediateForm> output, Reporter reporter) |
void | IndexUpdateReducer.reduce(Shard key, Iterator<IntermediateForm> values, OutputCollector<Shard,Text> output, Reporter reporter) |

Modifier and Type | Field and Description |
---|---|
protected Reporter | DataJoinReducerBase.reporter |
protected Reporter | DataJoinMapperBase.reporter |

Modifier and Type | Method and Description |
---|---|
protected void | DataJoinReducerBase.collect(Object key, TaggedMapOutput aRecord, OutputCollector output, Reporter reporter): The subclass can override this method to perform additional filtering and/or other processing logic before a value is collected. |
void | DataJoinReducerBase.map(Object arg0, Object arg1, OutputCollector arg2, Reporter arg3) |
void | DataJoinMapperBase.map(Object key, Object value, OutputCollector output, Reporter reporter) |
void | DataJoinReducerBase.reduce(Object key, Iterator values, OutputCollector output, Reporter reporter) |
void | DataJoinMapperBase.reduce(Object arg0, Iterator arg1, OutputCollector arg2, Reporter arg3) |

Modifier and Type | Method and Description |
---|---|
void | DistributedPentomino.PentMap.map(WritableComparable key, Text value, OutputCollector<Text,Text> output, Reporter reporter): Break the prefix string into moves (a sequence of integer row ids that will be selected for each column in order). |

Modifier and Type | Method and Description |
---|---|
RecordReader<Text,Text> | TeraInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
void | TeraGen.SortGenMapper.map(LongWritable row, NullWritable ignored, OutputCollector<Text,Text> output, Reporter reporter) |

Modifier and Type | Class and Description |
---|---|
protected class | Task.TaskReporter |

Modifier and Type | Field and Description |
---|---|
static Reporter | Reporter.NULL: A constant of Reporter type that does nothing. |
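
Reporter.NULL is handy when the getRecordReader and close methods listed below are called outside a running task, for example from a small local driver or a test, where no task tracker is listening. A minimal sketch, assuming a text input path is passed on the command line:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.InputSplit;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.TextInputFormat;

// Read the splits of a text input outside of a MapReduce job.
public class ReadSplitsLocally {
  public static void main(String[] args) throws Exception {
    JobConf job = new JobConf();
    FileInputFormat.setInputPaths(job, new Path(args[0]));

    TextInputFormat format = new TextInputFormat();
    format.configure(job);

    // Outside of a real task there is nothing to report to,
    // so Reporter.NULL stands in for the Reporter parameter.
    for (InputSplit split : format.getSplits(job, 1)) {
      RecordReader<LongWritable, Text> reader =
          format.getRecordReader(split, job, Reporter.NULL);
      LongWritable key = reader.createKey();
      Text value = reader.createValue();
      while (reader.next(key, value)) {
        System.out.println(key + "\t" + value);
      }
      reader.close();
    }
  }
}
```
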

Modifier and Type | Method and Description |
---|---|
void | TextOutputFormat.LineRecordWriter.close(Reporter reporter) |
void | RecordWriter.close(Reporter reporter): Close this RecordWriter to future operations. |
RecordReader<LongWritable,Text> | TextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordReader<K,V> | SequenceFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<K,V> | SequenceFileInputFilter.getRecordReader(InputSplit split, JobConf job, Reporter reporter): Create a record reader for the given split. |
RecordReader<Text,Text> | SequenceFileAsTextInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
RecordReader<BytesWritable,BytesWritable> | SequenceFileAsBinaryInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
abstract RecordReader<K,V> | MultiFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter): Deprecated. |
RecordReader<Text,Text> | KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordReader<K,V> | InputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter): Get the RecordReader for the given InputSplit. |
abstract RecordReader<K,V> | FileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
void | Task.initialize(JobConf job, JobID id, Reporter reporter, boolean useNewApi) |
void | Mapper.map(K1 key, V1 value, OutputCollector<K2,V2> output, Reporter reporter): Maps a single input key/value pair into an intermediate key/value pair. |
void | Reducer.reduce(K2 key, Iterator<V2> values, OutputCollector<K3,V3> output, Reporter reporter): Reduces values for a given key. |
void | MapRunner.run(RecordReader<K1,V1> input, OutputCollector<K2,V2> output, Reporter reporter) |
void | MapRunnable.run(RecordReader<K1,V1> input, OutputCollector<K2,V2> output, Reporter reporter): Start mapping input <key, value> pairs. |
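
MapRunnable.run (with MapRunner as its default implementation) is where the framework hands the Reporter to user code for an entire map task. A bare-bones sketch of a custom MapRunnable that simply forwards each record, and the same Reporter, to the configured Mapper:

```java
import java.io.IOException;

import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapRunnable;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.RecordReader;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.util.ReflectionUtils;

// A minimal MapRunnable: read each record and hand it, together with
// the framework-supplied Reporter, to the configured Mapper.
public class SimpleMapRunner<K1, V1, K2, V2> implements MapRunnable<K1, V1, K2, V2> {
  private Mapper<K1, V1, K2, V2> mapper;

  @SuppressWarnings("unchecked")
  public void configure(JobConf job) {
    mapper = (Mapper<K1, V1, K2, V2>)
        ReflectionUtils.newInstance(job.getMapperClass(), job);
  }

  public void run(RecordReader<K1, V1> input, OutputCollector<K2, V2> output,
                  Reporter reporter) throws IOException {
    try {
      K1 key = input.createKey();
      V1 value = input.createValue();
      while (input.next(key, value)) {
        mapper.map(key, value, output, reporter);  // the same Reporter reaches user code
      }
    } finally {
      mapper.close();
    }
  }
}
```
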

Constructor and Description |
---|
Task.CombineValuesIterator(RawKeyValueIterator in, RawComparator<KEY> comparator, Class<KEY> keyClass, Class<VALUE> valClass, Configuration conf, Reporter reporter, Counters.Counter combineInputCounter) |

Modifier and Type | Method and Description |
---|---|
ComposableRecordReader<K,TupleWritable> | CompositeInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter): Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |
ComposableRecordReader<K,V> | ComposableInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
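
CompositeInputFormat performs the "join prior to the map" described for org.apache.hadoop.mapred.join; the join itself is declared in the driver, while getRecordReader above receives the usual Reporter at task time. A driver-side sketch with placeholder paths, assuming two sorted, identically partitioned SequenceFile inputs:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

// Driver-side setup for a map-side inner join of two sorted,
// identically partitioned SequenceFile datasets. Paths are placeholders.
public class MapSideJoinSetup {
  public static void configure(JobConf job) {
    Path left = new Path("/data/left");
    Path right = new Path("/data/right");

    job.setInputFormat(CompositeInputFormat.class);
    // The join expression tells CompositeInputFormat how to compose its children.
    job.set("mapred.join.expr",
        CompositeInputFormat.compose("inner", SequenceFileInputFormat.class, left, right));
    // The mapper then receives one key and a TupleWritable holding the joined values.
  }
}
```
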

Modifier and Type | Field and Description |
---|---|
protected Reporter | CombineFileRecordReader.reporter |

Modifier and Type | Method and Description |
---|---|
OutputCollector | MultipleOutputs.getCollector(String namedOutput, Reporter reporter): Gets the output collector for a named output. |
OutputCollector | MultipleOutputs.getCollector(String namedOutput, String multiName, Reporter reporter): Gets the output collector for a multi named output. |
RecordReader<LongWritable,Text> | NLineInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordReader<K,V> | DelegatingInputFormat.getRecordReader(InputSplit split, JobConf conf, Reporter reporter) |
abstract RecordReader<K,V> | CombineFileInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter): This is not implemented yet. |
void | DelegatingMapper.map(K1 key, V1 value, OutputCollector<K2,V2> outputCollector, Reporter reporter) |
void | TokenCountMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter) |
void | RegexMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter) |
void | IdentityMapper.map(K key, V val, OutputCollector<K,V> output, Reporter reporter): The identity function. |
void | FieldSelectionMapReduce.map(K key, V val, OutputCollector<Text,Text> output, Reporter reporter): The identity function. |
void | InverseMapper.map(K key, V value, OutputCollector<V,K> output, Reporter reporter): The inverse function. |
void | ChainMapper.map(Object key, Object value, OutputCollector output, Reporter reporter): Chains the map(...) methods of the Mappers in the chain. |
void | LongSumReducer.reduce(K key, Iterator<LongWritable> values, OutputCollector<K,LongWritable> output, Reporter reporter) |
void | IdentityReducer.reduce(K key, Iterator<V> values, OutputCollector<K,V> output, Reporter reporter): Writes all keys and values directly to output. |
void | ChainReducer.reduce(Object key, Iterator values, OutputCollector output, Reporter reporter): Chains the reduce(...) method of the Reducer with the map(...) methods of the Mappers in the chain. |
void | FieldSelectionMapReduce.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter) |
void | MultithreadedMapRunner.run(RecordReader<K1,V1> input, OutputCollector<K2,V2> output, Reporter reporter) |
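
MultipleOutputs.getCollector, listed above, requires the live Reporter of the current task, so additional named outputs are typically written from inside map or reduce. A hedged sketch of a reducer routing empty values to a named output called "errors"; the name is an illustration and must be registered in the driver with MultipleOutputs.addNamedOutput before the job is submitted.

```java
import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reducer;
import org.apache.hadoop.mapred.Reporter;
import org.apache.hadoop.mapred.lib.MultipleOutputs;

// Reducer that sends suspect records to a named output called "errors"
// (an illustrative name registered in the driver via addNamedOutput).
public class SplitOutputReducer extends MapReduceBase
    implements Reducer<Text, Text, Text, Text> {

  private MultipleOutputs mos;

  public void configure(JobConf job) {
    mos = new MultipleOutputs(job);
  }

  @SuppressWarnings("unchecked")
  public void reduce(Text key, Iterator<Text> values,
                     OutputCollector<Text, Text> output,
                     Reporter reporter) throws IOException {
    while (values.hasNext()) {
      Text value = values.next();
      if (value.getLength() == 0) {
        // getCollector needs the Reporter of the current task.
        mos.getCollector("errors", reporter).collect(key, value);
      } else {
        output.collect(key, value);
      }
    }
  }

  public void close() throws IOException {
    mos.close();  // flush and close all named outputs
  }
}
```
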

Constructor and Description |
---|
CombineFileRecordReader(JobConf job, CombineFileSplit split, Reporter reporter, Class<RecordReader<K,V>> rrClass): A generic RecordReader that can hand out different RecordReaders for each chunk in the CombineFileSplit. |

Modifier and Type | Method and Description |
---|---|
void | ValueAggregatorReducer.map(K1 arg0, V1 arg1, OutputCollector<Text,Text> arg2, Reporter arg3): Do nothing. |
void | ValueAggregatorMapper.map(K1 key, V1 value, OutputCollector<Text,Text> output, Reporter reporter): The map function. |
void | ValueAggregatorCombiner.map(K1 arg0, V1 arg1, OutputCollector<Text,Text> arg2, Reporter arg3): Do nothing. |
void | ValueAggregatorReducer.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter) |
void | ValueAggregatorMapper.reduce(Text arg0, Iterator<Text> arg1, OutputCollector<Text,Text> arg2, Reporter arg3): Do nothing. |
void | ValueAggregatorCombiner.reduce(Text key, Iterator<Text> values, OutputCollector<Text,Text> output, Reporter reporter): Combines values for a given key. |

Modifier and Type | Method and Description |
---|---|
void | DBOutputFormat.DBRecordWriter.close(Reporter reporter): Close this RecordWriter to future operations. |
RecordReader<LongWritable,T> | DBInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter): Get the RecordReader for the given InputSplit. |
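
The db entries above show only where the Reporter appears in DBInputFormat and DBOutputFormat; the classes become usable once the driver wires up the JDBC connection, the input query, and the output table. A sketch under assumed names (MyRecord, the driver class, URL, table, and column names are all placeholders):

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.db.DBConfiguration;
import org.apache.hadoop.mapred.lib.db.DBInputFormat;
import org.apache.hadoop.mapred.lib.db.DBOutputFormat;
import org.apache.hadoop.mapred.lib.db.DBWritable;

public class DbJobSetup {

  // A record mapped to one database row; implements both Writable (for the
  // shuffle) and DBWritable (for JDBC reads/writes). Field layout is illustrative.
  public static class MyRecord implements Writable, DBWritable {
    long id;
    String name;

    public void readFields(ResultSet rs) throws SQLException {
      id = rs.getLong(1);
      name = rs.getString(2);
    }
    public void write(PreparedStatement ps) throws SQLException {
      ps.setLong(1, id);
      ps.setString(2, name);
    }
    public void readFields(DataInput in) throws IOException {
      id = in.readLong();
      name = in.readUTF();
    }
    public void write(DataOutput out) throws IOException {
      out.writeLong(id);
      out.writeUTF(name);
    }
  }

  public static void configure(JobConf job) {
    job.setInputFormat(DBInputFormat.class);
    DBConfiguration.configureDB(job,
        "com.mysql.jdbc.Driver",          // JDBC driver class (placeholder)
        "jdbc:mysql://localhost/mydb");   // connection URL (placeholder)

    // Read rows of the "employees" table (placeholder names) into MyRecord.
    DBInputFormat.setInput(job, MyRecord.class, "employees",
        null /* conditions */, "id" /* orderBy */, "id", "name");

    // Write results back to the "summary" table (placeholder).
    DBOutputFormat.setOutput(job, "summary", "id", "name");
  }
}
```
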

Modifier and Type | Method and Description |
---|---|
RecordReader<Text,Text> | StreamInputFormat.getRecordReader(InputSplit genericSplit, JobConf job, Reporter reporter) |
RecordReader | AutoInputFormat.getRecordReader(InputSplit split, JobConf job, Reporter reporter) |
void | PipeMapper.map(Object key, Object value, OutputCollector output, Reporter reporter) |
void | PipeReducer.reduce(Object key, Iterator values, OutputCollector output, Reporter reporter) |
void | PipeMapRunner.run(RecordReader<K1,V1> input, OutputCollector<K2,V2> output, Reporter reporter) |

Constructor and Description |
---|
StreamBaseRecordReader(FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, FileSystem fs) |
StreamXmlRecordReader(FSDataInputStream in, FileSplit split, Reporter reporter, JobConf job, FileSystem fs) |

Modifier and Type | Method and Description |
---|---|
void | Logalyzer.LogRegexMapper.map(K key, Text value, OutputCollector<Text,LongWritable> output, Reporter reporter) |
Copyright © 2009 The Apache Software Foundation