Pack logtalk -- logtalk-3.100.1/docs/handbook/_sources/libraries/kernel_pca_projection.rst.txt
.. _library_kernel_pca_projection:

kernel_pca_projection
=====================

Kernel Principal Component Analysis reducer for continuous datasets. The
library implements the dimension_reducer_protocol defined in the
dimension_reduction_protocols library and learns a nonlinear
projection by centering the training data, optionally standardizing
continuous attributes, building a centered kernel Gram matrix, and
extracting deterministic principal directions in sample space using
portable power iteration with deflation.
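The pipeline described above (center, optionally standardize, build a
centered kernel Gram matrix, extract components by power iteration with
deflation) can be sketched in plain Python. This is an illustrative
translation under names of our own choosing, not the library's
implementation or API:

```python
def centered_gram(rows, kernel):
    """Build the kernel Gram matrix over the training rows and
    double-center it; also return the per-row means and total mean,
    which are needed later to center out-of-sample kernel vectors."""
    n = len(rows)
    gram = [[kernel(rows[i], rows[j]) for j in range(n)] for i in range(n)]
    row_means = [sum(row) / n for row in gram]
    total_mean = sum(row_means) / n
    centered = [[gram[i][j] - row_means[i] - row_means[j] + total_mean
                 for j in range(n)] for i in range(n)]
    return centered, row_means, total_mean

def power_iteration(matrix, iterations=1000, tolerance=1.0e-8):
    """Power iteration for the dominant eigenpair of a symmetric
    positive semi-definite matrix; a fixed starting vector keeps the
    result deterministic."""
    n = len(matrix)
    norm0 = sum((i + 1) ** 2 for i in range(n)) ** 0.5
    vector = [(i + 1) / norm0 for i in range(n)]   # fixed, deterministic start
    eigenvalue = 0.0
    for _ in range(iterations):
        product = [sum(matrix[i][j] * vector[j] for j in range(n))
                   for i in range(n)]
        norm = sum(c * c for c in product) ** 0.5
        if norm < tolerance:         # matrix (nearly) annihilates the vector
            return 0.0, vector
        vector = [c / norm for c in product]
        if abs(norm - eigenvalue) < tolerance:
            return norm, vector      # eigenvalue estimate has converged
        eigenvalue = norm
    return eigenvalue, vector

def deflate(matrix, eigenvalue, vector):
    """Remove an extracted component: K' = K - lambda * v * v^T."""
    n = len(matrix)
    return [[matrix[i][j] - eigenvalue * vector[i] * vector[j]
             for j in range(n)] for i in range(n)]

# Toy run with a linear kernel on the corners of a 1 x 2 rectangle: the
# centered Gram matrix has eigenvalues 4, 1, 0, 0, so ev1 ~ 4.0 and
# ev2 ~ 1.0.
rows = [[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [1.0, 2.0]]
linear = lambda x, y: sum(a * b for a, b in zip(x, y))
gram, row_means, total_mean = centered_gram(rows, linear)
ev1, v1 = power_iteration(gram)
ev2, v2 = power_iteration(deflate(gram, ev1, v1))
```

Note that extracting components in sample (dual) space means each
component is a vector with one coefficient per training row, which is
why the learned reducer must retain the training rows and the centering
statistics.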
For the API documentation, open the `../../apis/library_index.html#kernel_pca_projection <../../apis/library_index.html#kernel_pca_projection>`__ link in a web browser.
To load this library, load the loader.lgt file:
::

   | ?- logtalk_load(kernel_pca_projection(loader)).
To test this library's predicates, load the tester.lgt file:
::

   | ?- logtalk_load(kernel_pca_projection(tester)).
The implementation builds on the linear_algebra library. Transformed
instances are represented as lists of component_N-Value pairs computed
using centered kernel evaluations against the training rows.

The learn/3 predicate accepts the following options:
n_components(Components):
   Number of principal components to extract. Requests larger than
   SampleCount - 1 raise a domain_error(component_count, Requested-Maximum)
   error. The default is 2.

standardize(Boolean):
   Whether to standardize continuous attributes before computing
   kernels: true (default) or false.

on_shortfall(Action):
   What to do when fewer than the requested number of components can be
   extracted: truncate (default), which returns a reducer with fewer
   components and records a
   shortfall(truncated(Requested, Learned, ResidualEigenvalue, Tolerance))
   diagnostic, or error, which raises a
   domain_error(component_count, Requested-Learned) error.

kernel(Kernel):
   Kernel function: linear (default), polynomial(Degree, Gamma, Coef0)
   with positive Degree, positive Gamma, and non-negative Coef0, or
   rbf(Gamma) with positive Gamma.

max_iterations(Iterations):
   Maximum number of power iteration steps per extracted component. The
   default is 1000.

tolerance(Tolerance):
   Convergence tolerance for the power iteration. The default is 1.0e-8.
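The kernel options above only name the kernels and constrain their
parameters. Assuming the conventional definitions of these three kernels
(an assumption; the exact formulas are not spelled out here), they can
be written as the following Python sketch, with names of our own
choosing:

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def linear_kernel(x, y):
    # linear: plain inner product <x, y>
    return dot(x, y)

def polynomial_kernel(degree, gamma, coef0):
    # polynomial(Degree, Gamma, Coef0): (Gamma * <x, y> + Coef0) ** Degree
    return lambda x, y: (gamma * dot(x, y) + coef0) ** degree

def rbf_kernel(gamma):
    # rbf(Gamma): exp(-Gamma * ||x - y||^2)
    return lambda x, y: math.exp(-gamma * sum((a - b) ** 2
                                              for a, b in zip(x, y)))
```

Under these definitions the parameter constraints make sense: a
non-positive Gamma would flatten or invert the similarity measure, and a
negative Coef0 could make the polynomial kernel indefinite.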
The following examples use the sample datasets shipped with the
dimension_reduction_protocols library:
::

   | ?- logtalk_load(dimension_reduction_protocols('test_datasets/correlated_plane')).

Learning a reducer
~~~~~~~~~~~~~~~~~~
::

   | ?- kernel_pca_projection::learn(correlated_plane, DimensionReducer).

   | ?- kernel_pca_projection::learn(correlated_plane, DimensionReducer, [n_components(1), kernel(rbf(0.25))]).

Transforming new instances
~~~~~~~~~~~~~~~~~~~~~~~~~~
::

   | ?- kernel_pca_projection::learn(correlated_plane, DimensionReducer, [n_components(2), kernel(rbf(0.25))]),
        kernel_pca_projection::transform(DimensionReducer, [x-2.0, y-4.0, z-6.0], ReducedInstance).
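Out-of-sample transformation relies on the centering statistics that the
reducer stores alongside the training rows (the per-row kernel means,
the total kernel mean, and the dual component vectors, described with
the reducer term below). The mechanism can be sketched in Python; all
names and the toy statistics are hypothetical, for illustration only:

```python
def kpca_transform(kernel, training_rows, row_means, total_mean,
                   components, new_row):
    """Project a new row: build its kernel vector against the training
    rows, double-center that vector using the stored training
    statistics, then take inner products with the dual components."""
    n = len(training_rows)
    k = [kernel(new_row, row) for row in training_rows]
    k_mean = sum(k) / n
    centered = [k[i] - row_means[i] - k_mean + total_mean for i in range(n)]
    return [sum(a * c for a, c in zip(alpha, centered))
            for alpha in components]

# Toy call with a linear kernel and made-up (hypothetical) statistics:
linear = lambda x, y: sum(a * b for a, b in zip(x, y))
reduced = kpca_transform(linear,
                         [[0.0, 0.0], [2.0, 0.0]],  # training rows
                         [0.5, 1.5],                # hypothetical RowMeans
                         1.0,                       # hypothetical TotalMean
                         [[1.0, 0.0], [0.0, 1.0]],  # hypothetical Components
                         [1.0, 0.0])
```

In the library itself, transform/3 returns the projection as a list of
component_N-Value pairs rather than a bare list of numbers.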

Exporting and reusing the reducer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
::

   | ?- kernel_pca_projection::learn(correlated_plane, DimensionReducer, [n_components(1)]),
        kernel_pca_projection::export_to_file(correlated_plane, DimensionReducer, reducer, 'kernel_pca_reducer.pl').

   | ?- logtalk_load('kernel_pca_reducer.pl'),
        reducer(Reducer),
        kernel_pca_projection::transform(Reducer, [x-1.0, y-2.0, z-3.0], ReducedInstance).
The learned dimension reducer is represented by a compound term with the functor chosen by the implementation and arity 7. For example:
::

   kernel_pca_reducer(Encoders, TrainingRows, RowMeans, TotalMean, Components, ExplainedVariances, Diagnostics)
Where:

Encoders:
   List of continuous attribute encoders storing attribute name, mean,
   and scale.

TrainingRows:
   Encoded training rows used when evaluating kernels for new instances.

RowMeans:
   Per-training-row kernel means used for centering out-of-sample kernel
   vectors.

TotalMean:
   Global kernel mean used for centering both the training Gram matrix
   and new kernel vectors.

Components:
   List of normalized dual projection vectors in descending variance
   order.

ExplainedVariances:
   List of kernel Gram matrix eigenvalues matching the extracted
   components.

Diagnostics:
   Learned metadata including the effective training options, kernel
   preprocessing, sample count, explained variances, and optional
   truncate-mode shortfall details.
When exported using export_to_clauses/4 or export_to_file/4, this reducer term is serialized directly as the single argument of the generated predicate clause so that the exported model can be loaded and reused as-is.