public class SequenceFileRDDFunctions<K,V> extends Object implements Logging, scala.Serializable
Extra functions available on RDDs of (key, value) pairs to create a Hadoop SequenceFile, through an implicit conversion. Users should import `org.apache.spark.SparkContext._` at the top of their program to use these functions.
| Constructor and Description |
|---|
| `SequenceFileRDDFunctions(RDD<scala.Tuple2<K,V>> self, scala.Function1<K,org.apache.hadoop.io.Writable> evidence$1, scala.reflect.ClassTag<K> evidence$2, scala.Function1<V,org.apache.hadoop.io.Writable> evidence$3, scala.reflect.ClassTag<V> evidence$4)` |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `saveAsSequenceFile(String path, scala.Option<Class<? extends org.apache.hadoop.io.compress.CompressionCodec>> codec)` Output the RDD as a Hadoop SequenceFile using the Writable types we infer from the RDD's key and value types. |
Methods inherited from class java.lang.Object: equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface org.apache.spark.Logging: initialized, initializeIfNecessary, initializeLogging, initLock, isTraceEnabled, log_, log, logDebug, logDebug, logError, logError, logInfo, logInfo, logName, logTrace, logTrace, logWarning, logWarning
`public void saveAsSequenceFile(String path, scala.Option<Class<? extends org.apache.hadoop.io.compress.CompressionCodec>> codec)`

Output the RDD as a Hadoop SequenceFile using the Writable types we infer from the RDD's key and value types.

Parameters:

`path` - can be on any Hadoop-supported file system.
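As a sketch of how this class is typically reached from a pair RDD via the implicit conversion described above (the application name, output paths, and choice of `GzipCodec` here are illustrative, not part of this API's contract):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.SparkContext._ // brings the implicit conversion into scope
import org.apache.hadoop.io.compress.GzipCodec

object SequenceFileExample {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("seqfile-example").setMaster("local[*]"))

    // An RDD of (key, value) pairs; Int and String keys/values are mapped
    // to the corresponding Writable types (IntWritable, Text) on save.
    val pairs = sc.parallelize(Seq(1 -> "one", 2 -> "two", 3 -> "three"))

    // Uncompressed output; the path may be local, HDFS, or any other
    // Hadoop-supported file system.
    pairs.saveAsSequenceFile("/tmp/seq-plain")

    // Compressed output: the codec class is passed wrapped in Some(...).
    pairs.saveAsSequenceFile("/tmp/seq-gzip", Some(classOf[GzipCodec]))

    sc.stop()
  }
}
```

In Scala the `codec` parameter defaults to `None`, so the single-argument call writes uncompressed output; the `evidence$` parameters in the constructor are implicit conversions and class tags supplied automatically by the compiler.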