(Since version 2.0.0) Use SparkSession.builder instead
Convert a BaseRelation created for external data sources into a DataFrame.
1.3.0
Caches the specified table in-memory.
1.3.0
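For example, a minimal sketch assuming a SQLContext named sqlContext and a temporary table already registered as "people" (both names are illustrative):

// Cache the table in memory, check its cache status, then release it.
sqlContext.cacheTable("people")
assert(sqlContext.isCached("people"))
sqlContext.uncacheTable("people")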
Removes all cached tables from the in-memory cache.
1.3.0
Applies a schema to a List of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
1.6.0
Applies a schema to an RDD of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
1.3.0
Applies a schema to an RDD of Java Beans.
WARNING: Since there is no guaranteed ordering for fields in a Java Bean, SELECT * queries will return the columns in an undefined order.
1.3.0
:: DeveloperApi :: Creates a DataFrame from a java.util.List containing Rows using the given schema.
:: DeveloperApi :: Creates a DataFrame from a JavaRDD containing Rows using the given schema.
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema.
:: DeveloperApi :: Creates a DataFrame from an RDD containing Rows using the given schema. It is important to make sure that the structure of every Row of the provided RDD matches the provided schema. Otherwise, a runtime exception will be thrown. Example:
import org.apache.spark.sql._
import org.apache.spark.sql.types._

val sqlContext = new org.apache.spark.sql.SQLContext(sc)

val schema = StructType(
  StructField("name", StringType, false) ::
  StructField("age", IntegerType, true) :: Nil)

val people = sc.textFile("examples/src/main/resources/people.txt").map(
  _.split(",")).map(p => Row(p(0), p(1).trim.toInt))

val dataFrame = sqlContext.createDataFrame(people, schema)
dataFrame.printSchema
// root
//  |-- name: string (nullable = false)
//  |-- age: integer (nullable = true)

dataFrame.createOrReplaceTempView("people")
sqlContext.sql("select name from people").collect.foreach(println)
1.3.0
:: Experimental :: Creates a DataFrame from a local Seq of Product.
1.3.0
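A minimal sketch of this overload; the Person case class and the values are illustrative, and sqlContext is assumed to exist:

// Any local Seq of Products (case classes, tuples) can be converted directly.
case class Person(name: String, age: Int)
val df = sqlContext.createDataFrame(Seq(Person("Alice", 29), Person("Bob", 31)))
df.printSchema()
// root
//  |-- name: string (nullable = true)
//  |-- age: integer (nullable = false)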
:: Experimental :: Creates a DataFrame from an RDD of Product (e.g. case classes, tuples).
1.3.0
:: Experimental :: Creates a Dataset from a java.util.List of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
List<String> data = Arrays.asList("hello", "world");
Dataset<String> ds = spark.createDataset(data, Encoders.STRING());
2.0.0
:: Experimental :: Creates a Dataset from an RDD of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
2.0.0
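A minimal sketch of the RDD variant, assuming a SparkSession named spark and a SparkContext named sc (mirroring the other examples on this page):

import spark.implicits._

// The String encoder is supplied by the implicits import above.
val rdd = sc.parallelize(Seq("hello", "world"))
val ds = spark.createDataset(rdd)
ds.show()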
:: Experimental :: Creates a Dataset from a local Seq of data of a given type. This method requires an encoder (to convert a JVM object of type T to and from the internal Spark SQL representation) that is generally created automatically through implicits from a SparkSession, or can be created explicitly by calling static methods on Encoders.
import spark.implicits._
case class Person(name: String, age: Long)

val data = Seq(Person("Michael", 29), Person("Andy", 30), Person("Justin", 19))
val ds = spark.createDataset(data)

ds.show()
// +-------+---+
// |   name|age|
// +-------+---+
// |Michael| 29|
// |   Andy| 30|
// | Justin| 19|
// +-------+---+
2.0.0
(Scala-specific) Creates an external table from the given path based on a data source, a schema and a set of options, then returns the corresponding DataFrame.
1.3.0
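A minimal sketch; the table name, schema and path are illustrative:

import org.apache.spark.sql.types._

val schema = StructType(
  StructField("name", StringType, true) ::
  StructField("age", IntegerType, true) :: Nil)

// Registers "people_ext" in the catalog, backed by Parquet files at the given path.
val people = sqlContext.createExternalTable(
  "people_ext", "parquet", schema, Map("path" -> "/path/to/people"))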
Creates an external table from the given path based on a data source, a schema and a set of options, then returns the corresponding DataFrame.
1.3.0
(Scala-specific) Creates an external table from the given path based on a data source and a set of options, then returns the corresponding DataFrame.
1.3.0
Creates an external table from the given path based on a data source and a set of options, then returns the corresponding DataFrame.
1.3.0
Creates an external table from the given path based on a data source and returns the corresponding DataFrame.
1.3.0
Creates an external table from the given path and returns the corresponding DataFrame. It will use the default data source configured by spark.sql.sources.default.
1.3.0
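For example, a sketch assuming the default data source has been left at its Parquet default and using an illustrative path:

// Registers "events" in the catalog, reading the files at /data/events with the default source.
val events = sqlContext.createExternalTable("events", "/data/events")
events.printSchema()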
Drops the temporary table with the given table name in the catalog. If the table has been cached/persisted before, it's also unpersisted.
the name of the table to be unregistered.
1.3.0
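A minimal sketch, assuming an existing DataFrame df:

df.registerTempTable("people")      // register a temporary table (1.x API)
sqlContext.dropTempTable("people")  // remove it from the catalog, unpersisting it if cached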
Returns a DataFrame with no rows or columns.
1.3.0
:: Experimental :: A collection of methods that are considered experimental, but can be used to hook into the query planner for advanced functionality.
1.3.0
Return all the configuration properties that have been set (i.e. not the default). This creates a new copy of the config properties in the form of a Map.
1.0.0
Return the value of Spark SQL configuration property for the given key. If the key is not set yet, return defaultValue.
1.0.0
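For example (spark.sql.shuffle.partitions is a real Spark SQL property; the fallback value here is only illustrative):

// Returns the configured value, or "200" if the key has not been set.
val shufflePartitions = sqlContext.getConf("spark.sql.shuffle.partitions", "200")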
Return the value of Spark SQL configuration property for the given key.
1.0.0
:: Experimental :: (Scala-specific) Implicit methods available in Scala for converting common Scala objects into DataFrames.
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
1.3.0
Returns true if the table is currently cached in-memory.
1.3.0
An interface to register custom org.apache.spark.sql.util.QueryExecutionListeners that listen for execution metrics.
Returns a new SQLContext as a separate session, with its own SQL configurations, temporary tables and registered functions, but sharing the same SparkContext, cached data and other things.
1.6.0
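A minimal sketch showing that configuration is per-session while the SparkContext is shared:

val session1 = sqlContext
val session2 = sqlContext.newSession()

// Setting a property in one session does not affect the other...
session1.setConf("spark.sql.shuffle.partitions", "10")
// ...but both sessions are backed by the same SparkContext.
assert(session1.sparkContext eq session2.sparkContext)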
:: Experimental :: Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value, with the number of partitions specified.
1.4.0
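For example:

// Five even numbers from 0 (inclusive) to 10 (exclusive), in 2 partitions.
val df = sqlContext.range(0, 10, 2, 2)
df.show()
// +---+
// | id|
// +---+
// |  0|
// |  2|
// |  4|
// |  6|
// |  8|
// +---+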
:: Experimental :: Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with a step value.
2.0.0
:: Experimental :: Creates a DataFrame with a single LongType column named id, containing elements in a range from start to end (exclusive) with step value 1.
1.4.0
:: Experimental :: Creates a DataFrame with a single LongType column named id, containing elements in a range from 0 to end (exclusive) with step value 1.
1.4.1
Returns a DataFrameReader that can be used to read non-streaming data in as a DataFrame.
sqlContext.read.parquet("/path/to/file.parquet")
sqlContext.read.schema(schema).json("/path/to/file.json")
1.4.0
:: Experimental :: Returns a DataStreamReader that can be used to read streaming data in as a DataFrame.
sparkSession.readStream.parquet("/path/to/directory/of/parquet/files")
sparkSession.readStream.schema(schema).json("/path/to/directory/of/json/files")
2.0.0
Set the given Spark SQL configuration property.
1.0.0
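For example (the property shown is a real Spark SQL setting):

// Use fewer shuffle partitions for a small local job.
sqlContext.setConf("spark.sql.shuffle.partitions", "8")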
Set Spark SQL configuration properties.
1.0.0
Executes a SQL query using Spark, returning the result as a DataFrame. The dialect that is used for SQL parsing can be configured with 'spark.sql.dialect'.
1.3.0
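A minimal sketch, assuming a temporary table "people" with name and age columns has already been registered:

val adults = sqlContext.sql("SELECT name, age FROM people WHERE age >= 18")
adults.show()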
Returns a StreamingQueryManager that allows managing all the StreamingQueries active on this context.
2.0.0
Returns the specified table as a DataFrame.
1.3.0
Returns the names of tables in the given database as an array.
1.3.0
Returns the names of tables in the current database as an array.
1.3.0
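A minimal sketch covering both overloads (the database name is illustrative):

// Table names in the current database, and in an explicitly named database.
val current: Array[String] = sqlContext.tableNames()
val inDb: Array[String] = sqlContext.tableNames("mydb")
current.foreach(println)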
Returns a DataFrame containing names of existing tables in the given database. The returned DataFrame has two columns, tableName and isTemporary (a Boolean indicating if a table is a temporary one or not).
1.3.0
Returns a DataFrame containing names of existing tables in the current database. The returned DataFrame has two columns, tableName and isTemporary (a Boolean indicating if a table is a temporary one or not).
1.3.0
A collection of methods for registering user-defined functions (UDF). Note that the user-defined functions must be deterministic. Due to optimization, duplicate invocations may be eliminated or the function may even be invoked more times than it is present in the query.
The following example registers a Scala closure as a UDF:
sqlContext.udf.register("myUDF", (arg1: Int, arg2: String) => arg2 + arg1)
The following example registers a UDF in Java:
sqlContext.udf().register("myUDF", new UDF2<Integer, String, String>() {
  @Override
  public String call(Integer arg1, String arg2) {
    return arg2 + arg1;
  }
}, DataTypes.StringType);
Or, to use Java 8 lambda syntax:
sqlContext.udf().register("myUDF", (Integer arg1, String arg2) -> arg2 + arg1, DataTypes.StringType);
1.3.0
Removes the specified table from the in-memory cache.
1.3.0
(Since version 1.3.0) Use createDataFrame instead.
Construct a DataFrame representing the database table accessible via JDBC URL url named table. The theParts parameter gives a list of expressions suitable for inclusion in WHERE clauses; each one defines one partition of the DataFrame.
(Since version 1.4.0) Use read.jdbc() instead.
Construct a DataFrame representing the database table accessible via JDBC URL url named table. Partitions of the table will be retrieved in parallel based on the parameters passed to this function.
the name of a column of integral type that will be used for partitioning.
the minimum value of columnName used to decide partition stride.
the maximum value of columnName used to decide partition stride.
the number of partitions. The range minValue to maxValue will be split evenly into this many partitions.
(Since version 1.4.0) Use read.jdbc() instead.
Construct a DataFrame representing the database table accessible via JDBC URL url named table.
(Since version 1.4.0) Use read.jdbc() instead.
(Since version 1.4.0) Use read.json() instead.
Loads a JSON file (one object per line) and applies the given schema, returning the result as a DataFrame.
(Since version 1.4.0) Use read.json() instead.
Loads a JSON file (one object per line), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
(Since version 1.4.0) Use read.json() instead.
Loads a JavaRDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.
(Since version 1.4.0) Use read.json() instead.
Loads an RDD[String] storing JSON objects (one object per record) inferring the schema, returning the result as a DataFrame.
(Since version 1.4.0) Use read.json() instead.
Loads a JavaRDD<String> storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.
(Since version 1.4.0) Use read.json() instead.
Loads an RDD[String] storing JSON objects (one object per record) and applies the given schema, returning the result as a DataFrame.
(Since version 1.4.0) Use read.json() instead.
Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
(Since version 1.4.0) Use read.json() instead.
Loads an RDD[String] storing JSON objects (one object per record), returning the result as a DataFrame. It goes through the entire dataset once to determine the schema.
(Since version 1.4.0) Use read.json() instead.
(Scala-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame, using the given schema as the schema of the DataFrame.
(Since version 1.4.0) Use read.format(source).schema(schema).options(options).load() instead.
(Java-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame, using the given schema as the schema of the DataFrame.
(Since version 1.4.0) Use read.format(source).schema(schema).options(options).load() instead.
(Scala-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame.
(Since version 1.4.0) Use read.format(source).options(options).load() instead.
(Java-specific) Returns the dataset specified by the given data source and a set of options as a DataFrame.
(Since version 1.4.0) Use read.format(source).options(options).load() instead.
Returns the dataset stored at path as a DataFrame, using the given data source.
(Since version 1.4.0) Use read.format(source).load(path) instead.
Returns the dataset stored at path as a DataFrame, using the default data source configured by spark.sql.sources.default.
(Since version 1.4.0) Use read.load(path) instead.
Loads a Parquet file, returning the result as a DataFrame.
The entry point for working with structured data (rows and columns) in Spark 1.x.
As of Spark 2.0, this is replaced by SparkSession. However, we are keeping the class here for backward compatibility.
1.0.0