An alias for getOrDefault().
Clears the user-supplied value for the input param.
Creates a copy of this instance with the same UID and some extra params.
Subclasses should implement this method and set the return type properly.
See defaultCopy().
Copies param values from this instance to another instance for params shared by them.
This handles default Params and explicitly set Params separately. Default Params are copied from and to defaultParamMap, and explicitly set Params are copied from and to paramMap.
Warning: This implicitly assumes that this Params instance and the target instance share the same set of default Params.
the target instance, which should work with the same set of default Params as this source instance
extra params to be copied to the target's paramMap
the target instance with param values copied
Default implementation of copy with extra params. It tries to create a new instance with the same UID. Then it copies the embedded and extra parameters over and returns the new instance.
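For illustration, a minimal sketch of this behavior, runnable in spark-shell; the QuantileDiscretizer instance and the extra param value are illustrative:

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer
import org.apache.spark.ml.param.ParamMap

val qd = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)

// The copy keeps the same UID; the extra params are applied on top of
// the explicitly set ones.
val copied = qd.copy(ParamMap(qd.numBuckets -> 5))
assert(copied.uid == qd.uid)
assert(copied.getNumBuckets == 5)
```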
Explains a param.
input param, must belong to this instance.
a string that contains the input param name, doc, and optionally its default value and the user-supplied value
Explains all params of this instance. See explainParam().
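For example, a short sketch (the instance is illustrative):

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

val qd = new QuantileDiscretizer().setNumBuckets(3)

// One line per param: name, doc, and its default and/or user-supplied value.
println(qd.explainParams())
// The single-param variant explains just one param:
println(qd.explainParam(qd.numBuckets))
```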
extractParamMap with no extra values.
Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values less than user-supplied values less than extra.
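A small sketch of this ordering, using illustrative values:

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer
import org.apache.spark.ml.param.ParamMap

val qd = new QuantileDiscretizer()   // numBuckets has a default value of 2
qd.setNumBuckets(3)                  // user-supplied value overrides the default

// Extra values override both the default and the user-supplied value.
val merged = qd.extractParamMap(ParamMap(qd.numBuckets -> 7))
assert(merged(qd.numBuckets) == 7)

// With no extra values, the user-supplied value wins over the default.
assert(qd.extractParamMap()(qd.numBuckets) == 3)
```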
Fits a model to the input data.
Fits multiple models to the input data with multiple sets of parameters. The default implementation uses a for loop on each parameter map. Subclasses could override this to optimize multi-model training.
input dataset
An array of parameter maps. These values override any specified in this Estimator's embedded ParamMap.
fitted models, matching the input parameter maps
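As a sketch, one model is fitted per parameter map, in order; this assumes a SparkSession named spark, and the data and bucket counts are illustrative:

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer
import org.apache.spark.ml.param.ParamMap

val df = spark.range(0, 100).selectExpr("cast(id as double) as hour")

val qd = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")

val paramMaps = Array(
  ParamMap(qd.numBuckets -> 3),
  ParamMap(qd.numBuckets -> 5))

// Returns one fitted model per ParamMap, matching the input order.
val models = qd.fit(df, paramMaps)
assert(models.length == 2)
```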
Fits a single model to the input data with the provided parameter map.
input dataset
Parameter map. These values override any specified in this Estimator's embedded ParamMap.
fitted model
Fits a single model to the input data with optional parameters.
input dataset
the first param pair, overrides embedded params
other param pairs. These values override any specified in this Estimator's embedded ParamMap.
fitted model
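A sketch of the param-pair overload, again assuming a SparkSession named spark; the override values are illustrative:

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

val df = spark.range(0, 100).selectExpr("cast(id as double) as hour")

val qd = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)

// The param pairs override the embedded value (numBuckets = 3) for this
// call only; qd itself is left unchanged.
val model = qd.fit(df, qd.numBuckets -> 5, qd.relativeError -> 0.01)
```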
Optionally returns the user-supplied value of a param.
Gets the default value of a parameter.
Gets the value of a param in the embedded param map or its default value. Throws an exception if neither is set.
Gets a param by its name.
Param for how to handle invalid entries. Options are 'skip' (filter out rows with invalid values), 'error' (throw an error), or 'keep' (keep invalid values in a special additional bucket). Note that in the multiple columns case, the invalid handling is applied to all columns: with 'error', an error is thrown if invalid values are found in any column, and with 'skip', a row is skipped if it contains invalid values in any column. Default: "error"
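For example, a minimal sketch of opting into the extra NaN bucket (instance and column names illustrative):

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

// 'keep' places NaN values into their own extra bucket instead of failing.
val qd = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)
  .setHandleInvalid("keep")
```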
Tests whether the input param has a default value set.
Tests whether this instance contains a param with a given name.
Param for input column name.
Param for input column names.
Checks whether a param is explicitly set or has a default value.
Checks whether a param is explicitly set.
Number of buckets (quantiles, or categories) into which data points are grouped. Must be greater than or equal to 2.
See also handleInvalid, which can optionally create an additional bucket for NaN values.
default: 2
Array of number of buckets (quantiles, or categories) into which data points are grouped. Each value must be greater than or equal to 2.
See also handleInvalid, which can optionally create an additional bucket for NaN values.
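A sketch of the multi-column form with per-column bucket counts (column names and counts illustrative):

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

// Each entry of numBucketsArray must be >= 2 and pairs up positionally
// with inputCols/outputCols.
val qd = new QuantileDiscretizer()
  .setInputCols(Array("hour", "minute"))
  .setOutputCols(Array("hourBucket", "minuteBucket"))
  .setNumBucketsArray(Array(3, 10))
```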
Param for output column name.
Param for output column names.
Returns all params sorted by their names. The default implementation uses Java reflection to list all public methods that have no arguments and return Param.
Note: Developers should not use this method in constructors, because we cannot guarantee that this variable gets initialized before other params.
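For example, a short sketch of listing the discovered params (instance illustrative):

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

val qd = new QuantileDiscretizer()
// params is discovered via reflection and sorted by name.
qd.params.foreach(p => println(s"${p.name}: ${p.doc}"))
```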
Relative error (see documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for a description). Must be in the range [0, 1].
Note that in the multiple columns case, the relative error is applied to all columns.
default: 0.001
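A small sketch of tightening the approximation (values illustrative):

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

// A smaller relative error yields more precise quantiles at higher cost;
// 0.0 requests exact quantiles, which can be expensive on large datasets.
val qd = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)
  .setRelativeError(0.01)
```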
Saves this ML instance to the input path, a shortcut of write.save(path).
Sets a parameter in the embedded param map.
Sets a parameter (by name) in the embedded param map.
Sets a parameter in the embedded param map.
Sets default values for a list of params.
Note: Java developers should use the single-parameter setDefault.
Annotating this with varargs can cause compilation failures due to a Scala compiler bug.
See SPARK-9268.
a list of param pairs that specify params and their default values to set respectively. Make sure that the params are initialized before this method gets called.
Sets a default value for a param.
param to set the default value. Make sure that this param is initialized before this method gets called.
the default value
:: DeveloperApi ::
Check transform validity and derive the output schema from the input schema.
We check validity for interactions between parameters during transformSchema and raise an exception if any parameter value is invalid. Parameter value checks which do not depend on other parameters are handled by Param.validate().
A typical implementation should first verify schema changes and parameter validity, including complex parameter interaction checks.
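As a hypothetical sketch of such an implementation (the DoublingTransformer class and its params are invented for illustration and are not part of Spark):

```scala
import org.apache.spark.ml.Transformer
import org.apache.spark.ml.param.{Param, ParamMap}
import org.apache.spark.ml.util.Identifiable
import org.apache.spark.sql.{DataFrame, Dataset}
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{DoubleType, StructField, StructType}

// Hypothetical transformer that doubles a numeric column.
class DoublingTransformer(override val uid: String) extends Transformer {
  def this() = this(Identifiable.randomUID("doubling"))

  val inputCol = new Param[String](this, "inputCol", "input column name")
  val outputCol = new Param[String](this, "outputCol", "output column name")
  setDefault(outputCol -> "doubled")

  def setInputCol(value: String): this.type = set(inputCol, value)
  def setOutputCol(value: String): this.type = set(outputCol, value)

  // Validate parameter interactions against the input schema, raising an
  // exception on invalid combinations, then derive the output schema.
  override def transformSchema(schema: StructType): StructType = {
    val inputType = schema($(inputCol)).dataType
    require(inputType == DoubleType,
      s"Column ${$(inputCol)} must be DoubleType but was $inputType.")
    require(!schema.fieldNames.contains($(outputCol)),
      s"Output column ${$(outputCol)} already exists.")
    schema.add(StructField($(outputCol), DoubleType, nullable = false))
  }

  override def transform(dataset: Dataset[_]): DataFrame = {
    transformSchema(dataset.schema)  // validate before doing any work
    dataset.withColumn($(outputCol), col($(inputCol)) * 2)
  }

  override def copy(extra: ParamMap): DoublingTransformer = defaultCopy(extra)
}
```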
:: DeveloperApi ::
Derives the output schema from the input schema and parameters, optionally with logging.
This should be optimistic. If it is unclear whether the schema will be valid, then it should be assumed valid until proven otherwise.
An immutable unique ID for the object and its derivatives.
Returns an MLWriter instance for this ML instance.
A list of (hyper-)parameter keys this algorithm can take. Users can set and get the parameter values through setters and getters, respectively.
QuantileDiscretizer takes a column with continuous features and outputs a column with binned categorical features. The number of bins can be set using the numBuckets parameter. It is possible that the number of buckets used will be smaller than this value, for example, if there are too few distinct values of the input to create enough distinct quantiles. Since 2.3.0, QuantileDiscretizer can map multiple columns at once by setting the inputCols parameter. If both of the inputCol and inputCols parameters are set, an Exception will be thrown. To specify the number of buckets for each column, the numBucketsArray parameter can be set, or if the number of buckets should be the same across columns, numBuckets can be set as a convenience.

NaN handling: null and NaN values will be ignored from the column during QuantileDiscretizer fitting. This will produce a Bucketizer model for making predictions. During the transformation, Bucketizer will raise an error when it finds NaN values in the dataset, but the user can also choose to either keep or remove NaN values within the dataset by setting handleInvalid. If the user chooses to keep NaN values, they will be handled specially and placed into their own bucket, for example, if 4 buckets are used, then non-NaN data will be put into buckets[0-3], but NaNs will be counted in a special bucket[4].

Algorithm: The bin ranges are chosen using an approximate algorithm (see the documentation for org.apache.spark.sql.DataFrameStatFunctions.approxQuantile for a detailed description). The precision of the approximation can be controlled with the relativeError parameter. The lower and upper bin bounds will be -Infinity and +Infinity, covering all real values.
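For illustration, a minimal end-to-end sketch, assuming a SparkSession named spark is in scope; the data and column names follow the common hour-binning example:

```scala
import org.apache.spark.ml.feature.QuantileDiscretizer

val data = Array((0, 18.0), (1, 19.0), (2, 8.0), (3, 5.0), (4, 2.2))
val df = spark.createDataFrame(data).toDF("id", "hour")

val discretizer = new QuantileDiscretizer()
  .setInputCol("hour")
  .setOutputCol("result")
  .setNumBuckets(3)

// fit() computes quantile-based splits and returns a Bucketizer model;
// transform() then assigns each hour to a bucket.
val result = discretizer.fit(df).transform(df)
result.show()
```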