.. DO NOT EDIT. .. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY. .. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE: .. "auto_examples/plot_phonemes_classification.py" .. LINE NUMBERS ARE GIVEN BELOW. .. only:: html .. note:: :class: sphx-glr-download-link-note :ref:`Go to the end ` to download the full example code or to run this example in your browser via Binder .. rst-class:: sphx-glr-example-title .. _sphx_glr_auto_examples_plot_phonemes_classification.py: Voice signals: smoothing, registration, and classification ========================================================== Shows the use of functional preprocessing tools such as smoothing and registration, and functional classification methods. .. GENERATED FROM PYTHON SOURCE LINES 9-27 .. code-block:: Python # License: MIT # sphinx_gallery_thumbnail_number = 3 import matplotlib.pyplot as plt import numpy as np from sklearn.metrics import accuracy_score from sklearn.model_selection import GridSearchCV, train_test_split from sklearn.pipeline import Pipeline from skfda.datasets import fetch_phoneme from skfda.misc.hat_matrix import NadarayaWatsonHatMatrix from skfda.misc.kernels import normal from skfda.misc.metrics import MahalanobisDistance from skfda.ml.classification import KNeighborsClassifier from skfda.preprocessing.registration import FisherRaoElasticRegistration from skfda.preprocessing.smoothing import KernelSmoother .. GENERATED FROM PYTHON SOURCE LINES 28-36 This example uses the Phoneme dataset\ :footcite:`hastie++_1995_penalized` containing the frequency curves of some common phonemes as pronounced by different people. We illustrate with this data the preprocessing and classification techniques available in scikit-fda. This is one of the examples presented in the ICTAI conference\ :footcite:p:`ramos-carreno++_2022_scikitfda`. .. GENERATED FROM PYTHON SOURCE LINES 38-43 We will first load the (binary) Phoneme dataset and plot the first 20 functions. 
We restrict the data to the first 150 variables, as done in :footcite:t:`ferraty+vieu_2006_computational`, because most of the useful information is in the lower frequencies. .. GENERATED FROM PYTHON SOURCE LINES 43-63 .. code-block:: Python X, y = fetch_phoneme(return_X_y=True) X = X[(y == 0) | (y == 1)] y = y[(y == 0) | (y == 1)] n_points = 150 new_points = X.grid_points[0][:n_points] new_data = X.data_matrix[:, :n_points] X = X.copy( grid_points=new_points, data_matrix=new_data, domain_range=(np.min(new_points), np.max(new_points)), ) n_plot = 20 X[:n_plot].plot(group=y) plt.show() .. image-sg:: /auto_examples/images/sphx_glr_plot_phonemes_classification_001.png :alt: Phoneme :srcset: /auto_examples/images/sphx_glr_plot_phonemes_classification_001.png :class: sphx-glr-single-img .. GENERATED FROM PYTHON SOURCE LINES 64-68 As we just saw, the curves are very noisy. We can leverage the continuity of the trajectories by smoothing, using a Nadaraya-Watson estimator. We then plot the data again, as well as the class means. .. GENERATED FROM PYTHON SOURCE LINES 68-85 .. code-block:: Python smoother = KernelSmoother( NadarayaWatsonHatMatrix( bandwidth=0.1, kernel=normal, ), ) X_smooth = smoother.fit_transform(X) fig = X_smooth[:n_plot].plot(group=y) X_smooth_aa = X_smooth[:n_plot][y[:n_plot] == 0] X_smooth_ao = X_smooth[:n_plot][y[:n_plot] == 1] X_smooth_aa.mean().plot(fig=fig, color="blue", linewidth=3) X_smooth_ao.mean().plot(fig=fig, color="red", linewidth=3) plt.show() .. image-sg:: /auto_examples/images/sphx_glr_plot_phonemes_classification_002.png :alt: Phoneme :srcset: /auto_examples/images/sphx_glr_plot_phonemes_classification_002.png :class: sphx-glr-single-img .. GENERATED FROM PYTHON SOURCE LINES 86-103 The smoothed curves are easier to interpret. Now, it is possible to appreciate the characteristic landmarks of each class, such as maxima or minima. However, not all these landmarks appear at the same frequency for each trajectory. 
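Conceptually, the Nadaraya-Watson estimator used above replaces each noisy observation with a kernel-weighted average of the observations around it. A minimal plain-Python sketch of the idea (hypothetical helper names, not skfda's API; skfda's ``KernelSmoother`` does this in vectorized form over the whole grid):

```python
import math


def gaussian_kernel(u):
    # Standard normal density, playing the role of the `normal` kernel
    # used in the example above.
    return math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)


def nadaraya_watson(grid, values, t, bandwidth):
    # Kernel-weighted average of the observed values, with weights given
    # by a kernel centred at the evaluation point t.
    weights = [gaussian_kernel((t - g) / bandwidth) for g in grid]
    return sum(w * v for w, v in zip(weights, values)) / sum(weights)


# Noisy samples of an increasing trend on [0, 1]: the noise alternates
# in sign, mimicking the high-frequency wiggles of the phoneme curves.
grid = [i / 10 for i in range(11)]
noisy = [g + (0.08 if i % 2 else -0.08) for i, g in enumerate(grid)]

# Smoothed curve: each point is a local weighted average, so the
# alternating noise largely cancels out.
smooth = [nadaraya_watson(grid, noisy, t, bandwidth=0.1) for t in grid]
```

A larger bandwidth averages over more neighbours and flattens the curve (in the limit, every smoothed value tends to the overall mean), which is why the bandwidth is a natural hyperparameter to tune later in the grid search.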
One way to solve this is by registering (aligning) the data. We use Fisher-Rao elastic registration, a state-of-the-art method, to illustrate the effect of registration. Although this registration method achieves very good results, it attempts to align all the curves to a common template. Thus, in order to clearly view the specific landmarks of each class, we have to register the data per class. This also means that we cannot use this method for a classification task when the landmarks of each class are very different, as it is not able to do per-class registration with unlabeled data. As Fisher-Rao elastic registration is very slow, we only register the plotted curves as an approximation. .. GENERATED FROM PYTHON SOURCE LINES 103-117 .. code-block:: Python reg = FisherRaoElasticRegistration( penalty=0.01, ) X_reg_aa = reg.fit_transform(X_smooth[:n_plot][y[:n_plot] == 0]) fig = X_reg_aa.plot(color="C0") X_reg_ao = reg.fit_transform(X_smooth[:n_plot][y[:n_plot] == 1]) X_reg_ao.plot(fig=fig, color="C1") X_reg_aa.mean().plot(fig=fig, color="blue", linewidth=3) X_reg_ao.mean().plot(fig=fig, color="red", linewidth=3) plt.show() .. image-sg:: /auto_examples/images/sphx_glr_plot_phonemes_classification_003.png :alt: Phoneme :srcset: /auto_examples/images/sphx_glr_plot_phonemes_classification_003.png :class: sphx-glr-single-img .. GENERATED FROM PYTHON SOURCE LINES 118-122 We now split the smoothed data into train and test datasets. Note that there is no data leakage because no parameters are fitted in the smoothing step, but normally you would want to do all preprocessing in a pipeline to guarantee that. .. GENERATED FROM PYTHON SOURCE LINES 122-131 .. code-block:: Python X_train, X_test, y_train, y_test = train_test_split( X_smooth, y, test_size=0.25, random_state=0, stratify=y, ) .. GENERATED FROM PYTHON SOURCE LINES 132-134 We use a k-NN classifier with a functional analog to the Mahalanobis distance and a fixed number of neighbors. ..
GENERATED FROM PYTHON SOURCE LINES 134-149 .. code-block:: Python n_neighbors = int(np.sqrt(X_smooth.n_samples)) n_neighbors += n_neighbors % 2 - 1 # Round to an odd integer classifier = KNeighborsClassifier( n_neighbors=n_neighbors, metric=MahalanobisDistance( alpha=0.001, ), ) classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) score = accuracy_score(y_test, y_pred) print(score) .. rst-class:: sphx-glr-script-out .. code-block:: none 0.8046511627906977 .. GENERATED FROM PYTHON SOURCE LINES 150-151 If we want to optimize hyperparameters, we can use scikit-learn tools. .. GENERATED FROM PYTHON SOURCE LINES 151-173 .. code-block:: Python pipeline = Pipeline([ ("smoother", smoother), ("classifier", classifier), ]) grid_search = GridSearchCV( pipeline, param_grid={ "smoother__kernel_estimator__bandwidth": [1, 1e-1, 1e-2, 1e-3], "classifier__n_neighbors": range(3, n_neighbors, 2), "classifier__metric__alpha": [1, 1e-1, 1e-2, 1e-3, 1e-4], }, ) # The grid search is too slow for an example. Uncomment it if you want, but it # will take a while. # grid_search.fit(X_train, y_train) # y_pred = grid_search.predict(X_test) # score = accuracy_score(y_test, y_pred) # print(score) .. GENERATED FROM PYTHON SOURCE LINES 174-178 References ---------- .. footbibliography:: .. rst-class:: sphx-glr-timing **Total running time of the script:** (0 minutes 5.633 seconds) .. _sphx_glr_download_auto_examples_plot_phonemes_classification.py: .. only:: html .. container:: sphx-glr-footer sphx-glr-footer-example .. container:: binder-badge .. image:: images/binder_badge_logo.svg :target: https://mybinder.org/v2/gh/GAA-UAM/scikit-fda/develop?filepath=examples/plot_phonemes_classification.py :alt: Launch binder :width: 150 px .. container:: sphx-glr-download sphx-glr-download-jupyter :download:`Download Jupyter notebook: plot_phonemes_classification.ipynb ` ..
container:: sphx-glr-download sphx-glr-download-python :download:`Download Python source code: plot_phonemes_classification.py ` .. only:: html .. rst-class:: sphx-glr-signature `Gallery generated by Sphinx-Gallery `_