Input Features type
Checks the input length
input features
true if the input size is as expected, false otherwise
Check if the stage is serializable
Failure if not serializable
This method is used to make a copy of the instance with new parameters in several methods in Spark internals. The default implementation will find the constructor and make a copy for any class, as long as all constructor params are vals (this is why type tags are written as implicit vals in base classes).
Note that the convention in Spark is to have the uid be a constructor argument, so that copies will share a uid with the original (developers should follow this convention).
new parameters to add to the instance
a new instance with the same uid
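The constructor-params-as-vals convention above can be sketched with a hypothetical minimal stage (illustrative only, not the actual base class): because the uid is a constructor val, a copy made through the constructor shares the uid of the original.

```scala
// Hypothetical stage illustrating the copy convention: all constructor
// params are vals, and uid is one of them, so a constructor-based copy
// yields a new instance that shares the original's uid.
final class MyStage(val uid: String, val threshold: Double) {
  // Copy with a new parameter value: same uid, different threshold.
  def copy(newThreshold: Double): MyStage = new MyStage(uid, newThreshold)
}

val original = new MyStage(uid = "myStage_000", threshold = 0.5)
val copied   = original.copy(newThreshold = 0.9)
// copied shares the uid of the original, as the convention requires
```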
the estimator to wrap
Gets names of parameters that control input columns for Spark stage
Gets an input feature. Note: this method IS NOT safe to use outside the driver; please use the getTransientFeature method instead.
array of features
NoSuchElementException if the features are not set
RuntimeException in case one of the features is null
Gets the input features. Note: this method IS NOT safe to use outside the driver; please use the getTransientFeatures method instead.
array of features
NoSuchElementException if the features are not set
RuntimeException in case one of the features is null
Method to access the local version of stage being wrapped
Option of the MLeap runtime version of the Spark stage after reloading as local
Output features that will be created by this stage
feature of type OutputFeatures
Gets names of parameters that control output columns for Spark stage
Name of output feature (i.e. column created by this stage)
Method to access the spark stage being wrapped
Option of the Spark ML stage
Gets a save path for wrapped spark stage
Gets an input feature at index i
input index
maybe an input feature
Gets the input Features
Function to convert InputFeatures to an Array of FeatureLike
an Array of FeatureLike
name of spark parameter that sets the second input column
Function to be called on getMetadata
Function to be called on setInput
Short unique name of the operation this stage performs
operation name
Function to convert OutputFeatures to an Array of FeatureLike
an Array of FeatureLike
Should output feature be a response? Yes, if any of the input features are.
true if the output feature should be a response
name of spark parameter that sets the first output column
Set binary toggle to control the output vector values. If true, all nonzero counts (after the minTF filter is applied) are set to 1. This is useful for discrete probabilistic models that model binary events rather than integer counts. Default: false
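The binary toggle semantics described above can be sketched in plain Scala (an illustration of the behavior, not Spark's implementation): with binary enabled, every surviving nonzero count collapses to 1.

```scala
// Illustrative sketch of the binary toggle: nonzero counts become 1
// when binary is true; counts pass through unchanged otherwise.
def applyBinary(counts: Map[String, Int], binary: Boolean): Map[String, Int] =
  if (binary) counts.map { case (term, c) => term -> (if (c > 0) 1 else 0) }
  else counts

val counts = Map("spark" -> 3, "ml" -> 1, "scala" -> 0)
val binarized = applyBinary(counts, binary = true)
// all nonzero counts are now 1, suiting binary-event models
```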
Input features that will be used by the stage
feature of type InputFeatures
Sets input features
feature like type
array of input features
this stage
Set minimum number of different documents a term must appear in to be included in the vocabulary. If this is an integer greater than or equal to 1, this specifies the number of documents the term must appear in; if this is a double in [0,1), then this specifies the fraction of documents. Default: 1.0
Set minimum number of times a term must appear in a document. Filter to ignore rare words in a document. For each document, terms with frequency/count less than the given threshold are ignored. If this is an integer greater than or equal to 1, then this specifies a count (of times the term must appear in the document); if this is a double in [0,1), then this specifies a fraction (out of the document's token count). Default: 1.0
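Both minDF and minTF share the same threshold interpretation described above: a value greater than or equal to 1 is an absolute count, while a value in [0, 1) is a fraction of the relevant total. A small sketch of that rule (an assumption drawn from the description, not Spark's code):

```scala
// Interpret a minDF/minTF threshold: >= 1 means an absolute count,
// [0, 1) means a fraction of the given total (documents or tokens).
def effectiveThreshold(threshold: Double, total: Long): Double =
  if (threshold >= 1.0) threshold else threshold * total

// minTF = 2.0 in a 10-token document: the term must appear at least twice
val absolute = effectiveThreshold(2.0, total = 10L)
// minDF = 0.5 over a 20-document corpus: the term must appear in >= 10 docs
val fractional = effectiveThreshold(0.5, total = 20L)
```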
Sets a save path for wrapped spark stage
Set max size of the vocabulary. CountVectorizer will build a vocabulary that only considers the top vocabSize terms ordered by term frequency across the corpus. Default: 1 << 18
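The vocabulary truncation described above can be sketched as a top-N selection by corpus-wide term frequency (illustrative only, not Spark's implementation):

```scala
// Keep only the top `vocabSize` terms, ordered by corpus-wide frequency.
def buildVocabulary(corpusCounts: Map[String, Long], vocabSize: Int): Seq[String] =
  corpusCounts.toSeq.sortBy { case (_, count) => -count }.take(vocabSize).map(_._1)

val corpus = Map("the" -> 100L, "spark" -> 40L, "vector" -> 7L, "rare" -> 1L)
val vocab = buildVocabulary(corpus, vocabSize = 2)
// only the two most frequent terms survive
```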
Stage unique name consisting of the stage operation name and uid
stage name
This function translates the input and output features into Spark schema checks and changes that will occur on the underlying data frame.
schema of the input data frame
a new schema with the output features added
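A schema-level sketch of that translation (using a plain Map in place of Spark's StructType, purely for illustration): verify the input features exist in the schema, then append the output feature's column.

```scala
// Hypothetical schema transform: the schema is modeled as a map from
// column name to type name instead of Spark's StructType.
def transformSchema(
  schema: Map[String, String],
  inputFeatures: Seq[String],
  outputFeature: (String, String)
): Map[String, String] = {
  // Check that every input feature is present before adding the output column.
  val missing = inputFeatures.filterNot(schema.contains)
  require(missing.isEmpty, s"Input features missing from schema: $missing")
  schema + outputFeature
}

val in  = Map("text" -> "String")
val out = transformSchema(in, Seq("text"), "text_vectorized" -> "Vector")
// the output schema contains both the original and the new column
```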
type tag for input
type tag for output
type tag for output value
stage uid
Wrapper around Spark ML CountVectorizer for use with OP pipelines