public class NewHadoopRDD<K,V> extends RDD<scala.Tuple2<K,V>> implements Logging
:: DeveloperApi ::
An RDD that provides core functionality for reading data stored in Hadoop (e.g., files in HDFS, sources in HBase, or S3), using the new MapReduce API (org.apache.hadoop.mapreduce).
Note: Instantiating this class directly is not recommended; please use org.apache.spark.SparkContext.newAPIHadoopRDD().
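As a rough illustration of the recommended route, the sketch below builds the RDD through SparkContext.newAPIHadoopRDD() with the new-API TextInputFormat. The object name, application name, master, and input path are placeholders introduced for the example, not values taken from this page.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.{SparkConf, SparkContext}

object NewApiHadoopRddExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("new-api-hadoop-rdd").setMaster("local[*]"))

    // Hadoop Configuration for the new MapReduce API; the input directory is a placeholder.
    val hadoopConf = new Configuration()
    hadoopConf.set("mapreduce.input.fileinputformat.inputdir", "hdfs:///tmp/input")

    // Recommended entry point: returns an RDD[(LongWritable, Text)] backed by a NewHadoopRDD.
    val rdd = sc.newAPIHadoopRDD(
      hadoopConf,
      classOf[TextInputFormat],
      classOf[LongWritable],
      classOf[Text])

    // Hadoop record readers reuse Writable objects, so convert before collecting.
    rdd.map { case (_, line) => line.toString }.take(5).foreach(println)
    sc.stop()
  }
}
```

JavaSparkContext exposes an equivalent newAPIHadoopRDD call for Java applications.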
| Constructor and Description |
| --- |
| NewHadoopRDD(SparkContext sc, Class<? extends org.apache.hadoop.mapreduce.InputFormat<K,V>> inputFormatClass, Class<K> keyClass, Class<V> valueClass, org.apache.hadoop.conf.Configuration conf) |
| Modifier and Type | Method and Description |
| --- | --- |
| InterruptibleIterator<scala.Tuple2<K,V>> | compute(Partition theSplit, TaskContext context) :: DeveloperApi :: Implemented by subclasses to compute a given partition. |
| org.apache.hadoop.conf.Configuration | getConf() |
| Partition[] | getPartitions() Implemented by subclasses to return the set of partitions in this RDD. |
| scala.collection.Seq<String> | getPreferredLocations(Partition split) Optionally overridden by subclasses to specify placement preferences. |
Methods inherited from class RDD:
aggregate, cache, cartesian, checkpoint, checkpointData, coalesce, collect, collect, context, count, countApprox, countApproxDistinct, countByValue, countByValueApprox, creationSiteInfo, dependencies, distinct, distinct, filter, filterWith, first, flatMap, flatMapWith, fold, foreach, foreachPartition, foreachWith, getCheckpointFile, getStorageLevel, glom, groupBy, groupBy, groupBy, id, intersection, intersection, intersection, isCheckpointed, iterator, keyBy, map, mapPartitions, mapPartitionsWithContext, mapPartitionsWithIndex, mapPartitionsWithSplit, mapWith, max, min, name, partitioner, partitions, persist, persist, pipe, pipe, pipe, preferredLocations, randomSplit, reduce, repartition, sample, saveAsObjectFile, saveAsTextFile, saveAsTextFile, setName, sparkContext, subtract, subtract, subtract, take, takeOrdered, takeSample, toArray, toDebugString, toJavaRDD, toLocalIterator, top, toString, union, unpersist, zip, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipPartitions, zipWithIndex, zipWithUniqueId

Methods inherited from interface Logging:
initialized, initializeIfNecessary, initializeLogging, initLock, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logTrace, logTrace, logWarning, logWarning
public NewHadoopRDD(SparkContext sc, Class<? extends org.apache.hadoop.mapreduce.InputFormat<K,V>> inputFormatClass, Class<K> keyClass, Class<V> valueClass, org.apache.hadoop.conf.Configuration conf)
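For completeness, a minimal sketch of calling this constructor directly, even though the note above discourages it; it reuses the hypothetical sc and hadoopConf from the earlier example and states the type parameters explicitly.

```scala
import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat
import org.apache.spark.rdd.NewHadoopRDD

// Direct construction mirrors the arguments of sc.newAPIHadoopRDD.
val direct = new NewHadoopRDD[LongWritable, Text](
  sc,                          // SparkContext from the sketch above
  classOf[TextInputFormat],    // InputFormat from org.apache.hadoop.mapreduce
  classOf[LongWritable],       // key class
  classOf[Text],               // value class
  hadoopConf)                  // Hadoop Configuration carrying the input path
```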
public Partition[] getPartitions()
Implemented by subclasses to return the set of partitions in this RDD.
Specified by: getPartitions in class RDD<scala.Tuple2<K,V>>
public InterruptibleIterator<scala.Tuple2<K,V>> compute(Partition theSplit, TaskContext context)
:: DeveloperApi ::
Implemented by subclasses to compute a given partition.
Specified by: compute in class RDD<scala.Tuple2<K,V>>
public scala.collection.Seq<String> getPreferredLocations(Partition split)
Optionally overridden by subclasses to specify placement preferences.
Overrides: getPreferredLocations in class RDD<scala.Tuple2<K,V>>
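Placement preferences are normally read through the public RDD.preferredLocations wrapper, which delegates to getPreferredLocations for a non-checkpointed RDD. A small driver-side sketch, continuing the direct example above:

```scala
// Computing partitions contacts the InputFormat, so the configured input path must exist.
direct.partitions.foreach { split =>
  val hosts = direct.preferredLocations(split) // delegates to getPreferredLocations
  println(s"partition ${split.index}: ${hosts.mkString(", ")}")
}
```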
public org.apache.hadoop.conf.Configuration getConf()
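getConf returns the Hadoop Configuration the RDD was constructed with. Continuing the same sketch, a setting can be read back from it; the key below is the standard new-API input-directory property, used here only as an example.

```scala
// Read a property back out of the Configuration carried by the RDD.
val configuredInput = direct.getConf.get("mapreduce.input.fileinputformat.inputdir")
println(s"configured input dir: $configuredInput")
```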