SparkContext.newAPIHadoopFile

SparkContext.newAPIHadoopFile(path, inputFormatClass, keyClass, valueClass, keyConverter=None, valueConverter=None, conf=None, batchSize=0)
Read a ‘new API’ Hadoop InputFormat with arbitrary key and value class from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI. The mechanism is the same as for SparkContext.sequenceFile().

A Hadoop configuration can be passed in as a Python dict. This will be converted into a Configuration in Java.
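For example, a dict entry such as textinputformat.record.delimiter (a standard Hadoop key that makes TextInputFormat treat blank lines as record separators) can be passed along with the read. The sketch below assumes an existing SparkContext sc; the HDFS path is only a placeholder:

>>> conf = {"textinputformat.record.delimiter": "\n\n"}
>>> records = sc.newAPIHadoopFile(
...     "hdfs:///path/to/records.txt",  # placeholder path
...     "org.apache.hadoop.mapreduce.lib.input.TextInputFormat",
...     "org.apache.hadoop.io.LongWritable",
...     "org.apache.hadoop.io.Text",
...     conf=conf)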
New in version 1.1.0.
Parameters

path : str
    path to Hadoop file
inputFormatClass : str
    fully qualified classname of Hadoop InputFormat (e.g. "org.apache.hadoop.mapreduce.lib.input.TextInputFormat")
keyClass : str
    fully qualified classname of key Writable class (e.g. "org.apache.hadoop.io.Text")
valueClass : str
    fully qualified classname of value Writable class (e.g. "org.apache.hadoop.io.LongWritable")
keyConverter : str, optional
    fully qualified name of a function returning key WritableConverter (None by default)
valueConverter : str, optional
    fully qualified name of a function returning value WritableConverter (None by default)
conf : dict, optional
    Hadoop configuration, passed in as a dict (None by default)
batchSize : int, optional
    The number of Python objects represented as a single Java object (default 0, choose batchSize automatically)
Returns

RDD
    RDD of tuples of key and corresponding value
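Because the result is an ordinary pair RDD, the usual RDD operations can be applied to the loaded data. A minimal sketch, assuming loaded was obtained from a call such as the one in the Examples section below:

>>> keys = loaded.keys()                   # keys arrive as plain Python objects
>>> value_lengths = loaded.mapValues(len)  # values can be transformed like any Python value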
See also
RDD.saveAsSequenceFile()
RDD.saveAsNewAPIHadoopFile()
RDD.saveAsHadoopFile()
SparkContext.sequenceFile()
SparkContext.hadoopFile()
Examples
>>> import os
>>> import tempfile
Set the related classes
>>> output_format_class = "org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat"
>>> input_format_class = "org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat"
>>> key_class = "org.apache.hadoop.io.IntWritable"
>>> value_class = "org.apache.hadoop.io.Text"
>>> with tempfile.TemporaryDirectory() as d:
...     path = os.path.join(d, "new_hadoop_file")
...
...     # Write a temporary Hadoop file
...     rdd = sc.parallelize([(1, ""), (1, "a"), (3, "x")])
...     rdd.saveAsNewAPIHadoopFile(path, output_format_class, key_class, value_class)
...
...     loaded = sc.newAPIHadoopFile(path, input_format_class, key_class, value_class)
...     collected = sorted(loaded.collect())
>>> collected
[(1, ''), (1, 'a'), (3, 'x')]