Package | Description |
---|---|
org.apache.hadoop.examples | Hadoop example code. |
org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner (a minimal driver sketch follows this table). |
org.apache.hadoop.mapred.pipes | Hadoop Pipes allows C++ code to use Hadoop DFS and map/reduce. |
org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. shell utilities) as the mapper and/or the reducer. |
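The org.apache.hadoop.mapred package above is the classic MapReduce API that produces the RunningJob handles listed on this page. Below is a minimal, illustrative driver sketch showing how a JobConf is built and a RunningJob is obtained via JobClient.runJob; the class name, argument layout, and the use of the bundled IdentityMapper/IdentityReducer are placeholder assumptions, not part of the API summaries above.

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

// Hypothetical driver: copies input records through identity map/reduce tasks.
public class IdentityJobDriver {
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(IdentityJobDriver.class);
    conf.setJobName("identity-copy");

    // TextInputFormat (the default) produces LongWritable offsets and Text lines.
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);
    conf.setMapperClass(IdentityMapper.class);
    conf.setReducerClass(IdentityReducer.class);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // runJob blocks, polling for progress, and returns a RunningJob handle.
    RunningJob job = JobClient.runJob(conf);
    System.out.println("Job " + job.getID() + " succeeded: " + job.isSuccessful());
  }
}
```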
Modifier and Type | Method and Description |
---|---|
RunningJob | Sort.getResult(): Get the last job that was run using this instance (usage sketch follows this table). |
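Sort is the example program from org.apache.hadoop.examples. The sketch below shows one way to retrieve its RunningJob through getResult(); the driver class name and the assumption that the sort is launched via ToolRunner with the example's usual input/output arguments are illustrative, not taken from this page.

```java
import org.apache.hadoop.examples.Sort;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.util.ToolRunner;

// Hypothetical wrapper around the Sort example program.
public class SortResultDemo {
  public static void main(String[] args) throws Exception {
    Sort sorter = new Sort();

    // Run the example sort job; args are the usual input/output directories.
    int exitCode = ToolRunner.run(sorter, args);

    // getResult() returns the RunningJob handle of the job that was just run.
    RunningJob result = sorter.getResult();
    if (result != null) {
      System.out.println("Sort job " + result.getID()
          + " complete: " + result.isComplete());
    }
    System.exit(exitCode);
  }
}
```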
Modifier and Type | Method and Description |
---|---|
RunningJob | JobClient.getJob(JobID jobid): Get a RunningJob object to track an ongoing job (see the submit-and-track sketch after this table). |
RunningJob | JobClient.getJob(String jobid): Deprecated. Applications should rather use JobClient.getJob(JobID). |
static RunningJob | JobClient.runJob(JobConf job): Utility that submits a job, then polls for progress until the job is complete. |
RunningJob | JobClient.submitJob(JobConf job): Submit a job to the MR system. |
RunningJob | JobClient.submitJob(String jobFile): Submit a job to the MR system. |
RunningJob | JobClient.submitJobInternal(JobConf job): Internal method for submitting jobs to the system. |
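A sketch of the non-blocking path through the methods above: submit with JobClient.submitJob(JobConf), record the JobID, and later re-acquire the handle with JobClient.getJob(JobID). The class and method names (SubmitAndTrack, submitThenTrack) and the five-second polling interval are illustrative assumptions.

```java
import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobID;
import org.apache.hadoop.mapred.RunningJob;

public class SubmitAndTrack {
  // Submits a configured job without blocking, then looks the job up again
  // by its JobID, as a separate monitoring step might.
  public static void submitThenTrack(JobConf conf)
      throws IOException, InterruptedException {
    JobClient client = new JobClient(conf);

    // submitJob returns immediately with a RunningJob handle.
    RunningJob submitted = client.submitJob(conf);
    JobID id = submitted.getID();

    // Later (possibly from another JobClient), re-acquire the handle by id.
    RunningJob tracked = client.getJob(id);
    if (tracked == null) {
      throw new IOException("Job " + id + " not found");
    }
    while (!tracked.isComplete()) {
      System.out.printf("map %.0f%% reduce %.0f%%%n",
          tracked.mapProgress() * 100, tracked.reduceProgress() * 100);
      Thread.sleep(5000);
    }
    System.out.println("Successful: " + tracked.isSuccessful());
  }
}
```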
Modifier and Type | Method and Description |
---|---|
boolean | JobClient.monitorAndPrintJob(JobConf conf, RunningJob job): Monitor a job and print status in real time as progress is made and tasks fail (usage sketch follows this table). |
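monitorAndPrintJob pairs naturally with a handle returned by submitJob. The helper below is an illustrative sketch; the MonitorDemo class and submitAndMonitor method are hypothetical names.

```java
import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;

public class MonitorDemo {
  // Submit asynchronously, then let JobClient print progress and task
  // failures until the job finishes; returns true on success.
  public static boolean submitAndMonitor(JobConf conf)
      throws IOException, InterruptedException {
    JobClient client = new JobClient(conf);
    RunningJob job = client.submitJob(conf);
    return client.monitorAndPrintJob(conf, job);
  }
}
```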
Modifier and Type | Method and Description |
---|---|
static RunningJob | Submitter.jobSubmit(JobConf conf): Submit a job to the Map-Reduce framework. |
static RunningJob | Submitter.runJob(JobConf conf): Submit a job to the map/reduce cluster (see the Pipes sketch after this table). |
static RunningJob | Submitter.submitJob(JobConf conf): Deprecated. |
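A hedged sketch of a Pipes driver built around Submitter: the class name, argument layout, and the choice of Java record reader/writer are assumptions for illustration; only the Submitter, JobConf, and file-format calls themselves come from the API summarized above.

```java
import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.pipes.Submitter;

// Hypothetical driver for a C++ Pipes executable.
public class PipesDriver {
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf(PipesDriver.class);
    conf.setJobName("pipes-example");

    // Use the Java record reader/writer around the C++ map/reduce executable.
    Submitter.setIsJavaRecordReader(conf, true);
    Submitter.setIsJavaRecordWriter(conf, true);
    // Path to the C++ binary (placeholder argument).
    Submitter.setExecutable(conf, args[2]);

    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // runJob submits and waits; jobSubmit(conf) would return without blocking.
    RunningJob job = Submitter.runJob(conf);
    System.out.println("Pipes job succeeded: " + job.isSuccessful());
  }
}
```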
Modifier and Type | Field and Description |
---|---|
protected RunningJob | StreamJob.running_ |