QuadraticDiscriminantAnalysis#
- class skfda.ml.classification.QuadraticDiscriminantAnalysis(cov_estimator, *, regularizer=0)[source]#
Functional quadratic discriminant analysis.
It is based on the assumption that the data come from a Gaussian process whose mean and covariance parameters depend on the class label. This means that curves with one label come from a different Gaussian process than curves with another label.
The training phase of the classifier approximates the two main parameters of a Gaussian process for each class. The covariance is estimated by fitting the covariance estimator passed on creation of the QuadraticDiscriminantAnalysis object. The result of training is two arrays, one of means and one of covariances, both of length n_classes.
The prediction phase uses a quadratic discriminant classifier to decide which of the fitted Gaussian processes best matches each input curve.
Warning
This classifier is experimental, as it does not come from a peer-reviewed publication.
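The quadratic discriminant rule described above can be sketched on plain vectors: each class gets a fitted mean and covariance, and a new sample is assigned to the class with the highest Gaussian log-likelihood plus log prior. The following is a minimal NumPy sketch of the idea, not the library's implementation; discretized toy "curves" stand in for functional data, and all names are ours.

```python
import numpy as np

def qda_predict(X, means, covs, priors):
    """Assign each row of X to the class with the highest Gaussian
    log-likelihood plus log prior (the quadratic discriminant rule)."""
    scores = []
    for mu, cov, prior in zip(means, covs, priors):
        diff = X - mu
        # log N(x; mu, cov) up to the constant -d/2 * log(2*pi):
        # -(1/2) * (log det(cov) + Mahalanobis distance)
        _, logdet = np.linalg.slogdet(cov)
        maha = np.einsum("ij,jk,ik->i", diff, np.linalg.inv(cov), diff)
        scores.append(-0.5 * (logdet + maha) + np.log(prior))
    return np.argmax(np.stack(scores, axis=1), axis=1)

# Two well-separated toy classes of "curves" sampled at 3 points.
rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(50, 3))
X1 = rng.normal(3.0, 1.0, size=(50, 3))

# Per-class mean and covariance, as the training phase would estimate.
means = [X0.mean(axis=0), X1.mean(axis=0)]
covs = [np.cov(X0, rowvar=False), np.cov(X1, rowvar=False)]

pred = qda_predict(np.vstack([X0, X1]), means, covs, [0.5, 0.5])
```

Since the two classes are three standard deviations apart in every dimension, nearly all samples end up assigned to their own class.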
- Parameters:
cov_estimator (CovarianceEstimator[FDataGrid]) – Covariance estimator to be fitted with the training data, used to estimate the per-class covariances.
regularizer (float) – Parameter that regularizes the covariance matrices in order to avoid singular matrices. It is multiplied by the identity matrix and then added to the covariance matrix.
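The effect of the regularizer can be illustrated in a few lines: a multiple of the identity is added to each estimated covariance so the matrix stays invertible. A minimal sketch (the variable names are ours, not the library's):

```python
import numpy as np

# A singular covariance matrix (rank 1), which cannot be inverted.
cov = np.array([[1.0, 1.0],
                [1.0, 1.0]])

regularizer = 0.05
# Add regularizer * I so the matrix becomes positive definite.
cov_reg = cov + regularizer * np.eye(cov.shape[0])

print(np.linalg.matrix_rank(cov))      # 1: singular
print(np.linalg.matrix_rank(cov_reg))  # 2: invertible
```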
Examples
Firstly, we will import and split the Berkeley Growth Study dataset
>>> from skfda.datasets import fetch_growth
>>> from sklearn.model_selection import train_test_split
>>> X, y = fetch_growth(return_X_y=True, as_frame=True)
>>> X = X.iloc[:, 0].values
>>> y = y.values.codes
>>> X_train, X_test, y_train, y_test = train_test_split(
...     X,
...     y,
...     test_size=0.3,
...     stratify=y,
...     random_state=0,
... )
Then we need to choose and import a kernel so it can be fitted with the data in the training phase. We will use a Gaussian kernel. The variance and length-scale parameters will be optimized during the training phase, so the initial values do not matter much. We will use 1 for the length scale and 6 for the variance.
>>> from skfda.exploratory.stats.covariance import (
...     ParametricGaussianCovariance,
... )
>>> from skfda.ml.classification import QuadraticDiscriminantAnalysis
>>> from skfda.misc.covariances import Gaussian
>>> rbf = Gaussian(variance=6, length_scale=1)
We will fit the QuadraticDiscriminantAnalysis classifier with the training data, using a low value, such as 0.05, for the regularizer parameter.
>>> qda = QuadraticDiscriminantAnalysis(
...     ParametricGaussianCovariance(rbf),
...     regularizer=0.05,
... )
>>> qda = qda.fit(X_train, y_train)
We can predict the class of new samples.
>>> list(qda.predict(X_test))
[0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, 1, 1]
Finally, we calculate the mean accuracy for the test data.
>>> round(qda.score(X_test, y_test), 2)
0.96
Methods
fit(X, y): Fit the model using X as training data and y as target values.
get_metadata_routing(): Get metadata routing of this object.
get_params([deep]): Get parameters for this estimator.
predict(X): Predict the class labels for the provided data.
score(X, y[, sample_weight]): Return the mean accuracy on the given test data and labels.
set_params(**params): Set the parameters of this estimator.
set_score_request(*[, sample_weight]): Request metadata passed to the score method.
- fit(X, y)[source]#
Fit the model using X as training data and y as target values.
- Parameters:
X (FDataGrid) – FDataGrid with the training data.
y (Target) – Target values of shape (n_samples,).
- Returns:
self
- Return type:
QuadraticDiscriminantAnalysis[Target]
- get_metadata_routing()#
Get metadata routing of this object.
Please check User Guide on how the routing mechanism works.
- Returns:
routing – A MetadataRequest encapsulating routing information.
- Return type:
MetadataRequest
- get_params(deep=True)#
Get parameters for this estimator.
- Parameters:
deep (bool, default=True) – If True, will return the parameters for this estimator and contained subobjects that are estimators.
- Returns:
params – Parameter names mapped to their values.
- Return type:
dict
- predict(X)[source]#
Predict the class labels for the provided data.
- Parameters:
X (FDataGrid) – FDataGrid with the test samples.
- Returns:
Array of shape (n_samples,) with class labels for each data sample.
- Return type:
Target
- score(X, y, sample_weight=None)[source]#
Return the mean accuracy on the given test data and labels.
In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.
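Subset accuracy can be illustrated with a small example (plain scikit-learn, not specific to this class): a multi-label sample counts as correct only if every one of its labels matches.

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Three multi-label samples: two match exactly, one differs in a single label.
y_true = np.array([[1, 0], [1, 1], [0, 1]])
y_pred = np.array([[1, 0], [1, 0], [0, 1]])

# accuracy_score on 2D label arrays computes subset accuracy:
# only rows where ALL labels match count as correct.
acc = accuracy_score(y_true, y_pred)  # 2 of 3 rows match exactly
```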
- Parameters:
X (array-like of shape (n_samples, n_features)) – Test samples.
y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.
sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.
- Returns:
score – Mean accuracy of self.predict(X) w.r.t. y.
- Return type:
float
- set_params(**params)#
Set the parameters of this estimator.
The method works on simple estimators as well as on nested objects (such as Pipeline). The latter have parameters of the form <component>__<parameter> so that it's possible to update each component of a nested object.
- Parameters:
**params (dict) – Estimator parameters.
- Returns:
self – Estimator instance.
- Return type:
estimator instance
- set_score_request(*, sample_weight='$UNCHANGED$')#
Request metadata passed to the score method.
Note that this method is only relevant if enable_metadata_routing=True (see sklearn.set_config()). Please see the User Guide on how the routing mechanism works.
The options for each parameter are:
- True: metadata is requested, and passed to score if provided. The request is ignored if metadata is not provided.
- False: metadata is not requested and the meta-estimator will not pass it to score.
- None: metadata is not requested, and the meta-estimator will raise an error if the user provides it.
- str: metadata should be passed to the meta-estimator with this given alias instead of the original name.
The default (sklearn.utils.metadata_routing.UNCHANGED) retains the existing request. This allows you to change the request for some parameters and not others.
New in version 1.3.
Note
This method is only relevant if this estimator is used as a sub-estimator of a meta-estimator, e.g. used inside a Pipeline. Otherwise it has no effect.
- Parameters:
sample_weight (str, True, False, or None, default=sklearn.utils.metadata_routing.UNCHANGED) – Metadata routing for the sample_weight parameter in score.
- Returns:
self – The updated object.
- Return type:
QuadraticDiscriminantAnalysis