Creates a ChiSquared feature selector.
Chi Squared selector model.
Outputs the Hadamard product (i.e., the element-wise product) of each input vector with a provided "weight" vector. In other words, it scales each column of the dataset by a scalar multiplier.
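These descriptions match the RDD-based spark.mllib feature API. Assuming the transformer in question is org.apache.spark.mllib.feature.ElementwiseProduct, a minimal sketch of column-wise scaling might look like this (the weight values are illustrative only):

```scala
import org.apache.spark.mllib.feature.ElementwiseProduct
import org.apache.spark.mllib.linalg.Vectors

// Hypothetical "weight" vector: one scalar multiplier per column.
val weights = Vectors.dense(0.0, 1.0, 2.0)
val transformer = new ElementwiseProduct(weights)

// Element-wise (Hadamard) product with the weight vector.
val scaled = transformer.transform(Vectors.dense(4.0, 5.0, 6.0))
// scaled == [0.0, 5.0, 12.0]
```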
Maps a sequence of terms to their term frequencies using the hashing trick.
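Assuming this refers to spark.mllib's HashingTF, the sketch below shows the hashing trick on a single tokenized document; the feature dimension of 1024 is an arbitrary choice for illustration:

```scala
import org.apache.spark.mllib.feature.HashingTF

// Map terms to indices in a fixed-size feature space via hashing.
val hashingTF = new HashingTF(numFeatures = 1 << 10)
val doc = Seq("the", "hashing", "trick", "the")
val tf = hashingTF.transform(doc)
// Sparse vector of term frequencies; the bucket for "the" holds 2.0.
```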
Inverse document frequency (IDF).
The standard formulation is used: idf = log((m + 1) / (d(t) + 1)), where m is the total number of documents and d(t) is the number of documents that contain term t.
This implementation supports filtering out terms which do not appear in a minimum number of documents (controlled by the variable minDocFreq). For terms that appear in fewer than minDocFreq documents, the IDF is set to 0, resulting in TF-IDF values of 0.
Represents an IDF model that can transform term frequency vectors.
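A minimal sketch of fitting an IDF model on hashed term frequencies and applying it, assuming the spark.mllib IDF/IDFModel classes and an existing SparkContext named sc; the toy corpus and minDocFreq value are illustrative:

```scala
import org.apache.spark.mllib.feature.{HashingTF, IDF}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Assumes an existing SparkContext `sc`.
val docs: RDD[Seq[String]] = sc.parallelize(Seq(
  Seq("a", "a", "b"),
  Seq("a", "c"),
  Seq("b", "c", "c")
))
val tf: RDD[Vector] = new HashingTF(1 << 8).transform(docs).cache()

// Terms appearing in fewer than 2 documents get an IDF of 0.
val idfModel = new IDF(minDocFreq = 2).fit(tf)
val tfidf: RDD[Vector] = idfModel.transform(tf)
```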
Normalizes samples individually to unit Lp norm.
For any 1 <= p < Double.PositiveInfinity, normalizes samples using sum(abs(vector)^p)^(1/p) as the norm. For p = Double.PositiveInfinity, max(abs(vector)) is used as the norm for normalization.
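A short sketch of the norm choices described above, assuming spark.mllib's Normalizer; the vector values are illustrative:

```scala
import org.apache.spark.mllib.feature.Normalizer
import org.apache.spark.mllib.linalg.Vectors

val v = Vectors.dense(3.0, -4.0)
val l1   = new Normalizer(p = 1.0).transform(v)                      // L1 norm = 7.0
val l2   = new Normalizer().transform(v)                             // default p = 2, norm = 5.0
val lInf = new Normalizer(p = Double.PositiveInfinity).transform(v)  // max(abs) = 4.0
```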
A feature transformer that projects vectors to a low-dimensional space using PCA.
Model fitted by PCA that can project vectors to a low-dimensional space.
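A minimal sketch of fitting and applying PCA, assuming spark.mllib's PCA/PCAModel and an existing SparkContext named sc; the data and k = 2 are illustrative:

```scala
import org.apache.spark.mllib.feature.PCA
import org.apache.spark.mllib.linalg.Vectors

// Assumes an existing SparkContext `sc`.
val data = sc.parallelize(Seq(
  Vectors.dense(1.0, 0.0, 7.0),
  Vectors.dense(2.0, 0.0, 5.0),
  Vectors.dense(4.0, 0.0, 1.0)
))

// Learn a 2-dimensional basis, then project every vector onto it.
val model = new PCA(k = 2).fit(data)
val projected = model.transform(data)
```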
Standardizes features by removing the mean and scaling to unit std using column summary statistics on the samples in the training set.
The "unit std" is computed using the corrected sample standard deviation (https://en.wikipedia.org/wiki/Standard_deviation#Corrected_sample_standard_deviation), which is computed as the square root of the unbiased sample variance.
Represents a StandardScaler model that can transform vectors.
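A minimal sketch of fitting the scaler and transforming the training data, assuming spark.mllib's StandardScaler/StandardScalerModel and an existing SparkContext named sc; the data is illustrative:

```scala
import org.apache.spark.mllib.feature.StandardScaler
import org.apache.spark.mllib.linalg.Vectors

// Assumes an existing SparkContext `sc`.
val data = sc.parallelize(Seq(
  Vectors.dense(1.0, 10.0),
  Vectors.dense(2.0, 20.0),
  Vectors.dense(3.0, 30.0)
))

// withMean removes the column mean; withStd divides by the corrected sample std.
val model = new StandardScaler(withMean = true, withStd = true).fit(data)
val scaled = model.transform(data)
```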
:: DeveloperApi :: Trait for transformation of a vector.
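A sketch of implementing the trait, under the assumption that it declares a single abstract transform(Vector): Vector method and derives its RDD-based overloads from it; ClippingTransformer is a hypothetical example, not part of the library:

```scala
import org.apache.spark.mllib.feature.VectorTransformer
import org.apache.spark.mllib.linalg.{Vector, Vectors}

// Hypothetical transformer: clips every component into [-limit, limit].
class ClippingTransformer(limit: Double) extends VectorTransformer {
  override def transform(vector: Vector): Vector =
    Vectors.dense(vector.toArray.map(x => math.max(-limit, math.min(limit, x))))
}
```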
Word2Vec creates vector representations of words in a text corpus. The algorithm first constructs a vocabulary from the corpus and then learns vector representations of the words in the vocabulary. These vector representations can be used as features in natural language processing and machine learning algorithms.
We used the skip-gram model in our implementation, with the hierarchical softmax method to train the model. The variable names in the implementation match the original C implementation.
For the original C implementation, see https://code.google.com/p/word2vec/. For research papers, see Efficient Estimation of Word Representations in Vector Space and Distributed Representations of Words and Phrases and their Compositionality.
Word2Vec model.
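A minimal sketch of training a model and querying it, assuming spark.mllib's Word2Vec/Word2VecModel and an existing SparkContext named sc; the corpus and parameter values are illustrative:

```scala
import org.apache.spark.mllib.feature.Word2Vec

// Assumes an existing SparkContext `sc`; each document is a sequence of tokens.
val corpus = sc.parallelize(Seq(
  "spark is a fast cluster computing engine".split(" ").toSeq,
  "word2vec learns vector representations of words".split(" ").toSeq
))

val model = new Word2Vec().setVectorSize(50).setMinCount(1).fit(corpus)
val vec = model.transform("spark")             // learned vector for "spark"
val synonyms = model.findSynonyms("spark", 3)  // 3 closest words with cosine similarity
```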
Creates a ChiSquared feature selector. The selector supports different selection methods: numTopFeatures, percentile, fpr, fdr, fwe.
- numTopFeatures chooses a fixed number of top features according to a chi-squared test.
- percentile is similar but chooses a fraction of all features instead of a fixed number.
- fpr chooses all features whose p-values are below a threshold, thus controlling the false positive rate of selection.
- fdr uses the Benjamini-Hochberg procedure (https://en.wikipedia.org/wiki/False_discovery_rate#Benjamini.E2.80.93Hochberg_procedure) to choose all features whose false discovery rate is below a threshold.
- fwe chooses all features whose p-values are below a threshold. The threshold is scaled by 1/numFeatures, thus controlling the family-wise error rate of selection.
By default, the selection method is numTopFeatures, with the default number of top features set to 50.
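A minimal sketch of the default numTopFeatures method, assuming spark.mllib's ChiSqSelector/ChiSqSelectorModel and an existing SparkContext named sc; the labeled data and numTopFeatures = 2 are illustrative:

```scala
import org.apache.spark.mllib.feature.ChiSqSelector
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Assumes an existing SparkContext `sc`; features should be categorical-valued.
val data = sc.parallelize(Seq(
  LabeledPoint(0.0, Vectors.dense(8.0, 7.0, 0.0, 1.0)),
  LabeledPoint(1.0, Vectors.dense(0.0, 9.0, 6.0, 1.0)),
  LabeledPoint(1.0, Vectors.dense(0.0, 9.0, 8.0, 2.0))
))

// Keep the 2 features most associated with the label by the chi-squared test.
val model = new ChiSqSelector(numTopFeatures = 2).fit(data)
val reduced = data.map(lp => lp.copy(features = model.transform(lp.features)))
```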