.. _library_bayesian_ridge_regression:

bayesian_ridge_regression
=========================
Bayesian ridge regression regressor supporting continuous and
mixed-feature datasets. The library implements the
``regressor_protocol`` defined in the ``regression_protocols`` library
and learns a Bayesian linear model using evidence maximization for the
global weight and noise precisions, together with Gamma hyperpriors
over both precision terms.
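To make the scheme concrete, here is a minimal Python sketch of evidence maximization for Bayesian ridge regression with Gamma hyperpriors, assuming the standard MacKay-style precision updates (as used, for example, by scikit-learn's ``BayesianRidge``); the library's actual iteration, option names, and defaults may differ:

```python
import numpy as np

def evidence_maximization(X, y, shape=1e-6, rate=1e-6, max_iter=300, tol=1e-6):
    """Bayesian ridge via evidence maximization with Gamma hyperpriors.

    lam is the global weight precision and beta the noise precision;
    both receive a Gamma(shape, rate) hyperprior (the same shape/rate
    values are reused for both here purely for brevity).
    """
    n, d = X.shape
    lam, beta = 1.0, 1.0 / max(np.var(y), 1e-12)  # initial precisions
    m = np.zeros(d)
    for _ in range(max_iter):
        # Posterior over weights: A = lam*I + beta*X'X, mean m = beta*A^-1 X'y
        A = lam * np.eye(d) + beta * X.T @ X
        m = beta * np.linalg.solve(A, X.T @ y)
        # Effective degrees of freedom from the eigenspectrum of beta*X'X
        eig = np.linalg.eigvalsh(beta * (X.T @ X))
        gamma = float(np.sum(eig / (lam + eig)))
        sse = float(np.sum((y - X @ m) ** 2))
        # Hyperprior-regularized precision updates
        lam_new = (gamma + 2.0 * shape) / (float(m @ m) + 2.0 * rate)
        beta_new = (n - gamma + 2.0 * shape) / (sse + 2.0 * rate)
        delta = abs(lam_new - lam) + abs(beta_new - beta)
        lam, beta = lam_new, beta_new
        if delta < tol:
            break
    return m, lam, beta
```

On noise-free data the noise precision grows until it is capped only by the hyperprior rate term, which is one reason the library clamps precision estimates (see the usage notes below).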
API documentation
-----------------

Open the `bayesian_ridge_regression <../../apis/library_index.html#bayesian_ridge_regression>`__
link in a web browser.
Loading
-------

To load this library, load the ``loader.lgt`` file:

::

   | ?- logtalk_load(bayesian_ridge_regression(loader)).
Testing
-------

To test this library's predicates, load the ``tester.lgt`` file:

::

   | ?- logtalk_load(bayesian_ridge_regression(tester)).
To run the performance benchmark suite, load the
``tester_performance.lgt`` file:

::

   | ?- logtalk_load(bayesian_ridge_regression(tester_performance)).
Usage
-----

Precision estimates are clamped to the interval given by the
``precision_bounds(Min, Max)`` option to avoid degenerate zero or
infinite precision estimates. Posterior solves use Cholesky
factorization of positive-definite precision matrices, and diagnostics
report any diagonal jitter applied when factorization retries are
needed. The evidence-maximization loop computes the effective degrees
of freedom from a one-time eigenspectrum of the centered Gram
surrogate, while still switching to a sample-space solve when the
active encoded feature count exceeds the number of training rows.

The learned regressor is represented by default as:

::

   bayesian_ridge_regressor(Encoders, Bias, Weights, ActiveFlags, PosteriorCovariance, NoiseVariance, Diagnostics)
The exported predicate clauses therefore use the shape:

::

   Functor(Encoders, Bias, Weights, ActiveFlags, PosteriorCovariance, NoiseVariance, Diagnostics)

The ``diagnostics/2`` predicate returns a list of metadata terms with
the form:
::

   [
       model(bayesian_ridge_regression),
       target(Target),
       training_example_count(TrainingExampleCount),
       options(Options),
       solver(cholesky_factorization),
       stabilization_attempts(StabilizationAttempts),
       stabilization_jitter(StabilizationJitter),
       precision_bounds(MinimumPrecision, MaximumPrecision),
       weight_precision_hyperprior(gamma(LambdaShape, LambdaRate)),
       noise_precision_hyperprior(gamma(AlphaShape, AlphaRate)),
       weight_precision(Alpha),
       noise_precision(Beta),
       noise_variance(NoiseVariance),
       log_evidence(LogEvidence),
       scores(Scores),
       active_feature_count(ActiveFeatureCount),
       weight_prior(isotropic_zero_mean_gaussian),
       intercept_treatment(non_probabilistic),
       bias_variance(BiasVariance),
       weight_variances(WeightVariances),
       convergence_metric(coefficient_l1),
       convergence(Convergence),
       iterations(Iterations),
       final_delta(FinalDelta),
       encoded_feature_count(FeatureCount)
   ]
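The ``solver(cholesky_factorization)``, ``stabilization_attempts/1``, and ``stabilization_jitter/1`` entries describe the stabilized posterior solve. A minimal Python sketch of that strategy follows, assuming a growing-jitter retry schedule (the library's exact schedule and attempt limit are assumptions here):

```python
import numpy as np

def stabilized_cholesky_solve(A, b, max_attempts=6, base_jitter=1e-10):
    """Solve A x = b for symmetric positive-definite A via Cholesky,
    retrying with a growing diagonal jitter when factorization fails.
    Returns the solution plus attempt/jitter diagnostics."""
    n = A.shape[0]
    jitter = 0.0
    for attempt in range(max_attempts):
        try:
            L = np.linalg.cholesky(A + jitter * np.eye(n))
        except np.linalg.LinAlgError:
            # Not positive definite at this jitter level: grow it and retry
            jitter = base_jitter if jitter == 0.0 else jitter * 10.0
            continue
        # Two triangular solves: L z = b, then L' x = z
        x = np.linalg.solve(L.T, np.linalg.solve(L, b))
        return x, attempt, jitter
    raise np.linalg.LinAlgError("matrix remained indefinite after jitter retries")
```

A well-conditioned matrix solves on the first attempt with zero jitter, so both stabilization diagnostics stay at their neutral values in the common case.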
The ``scores/1`` diagnostic is analogous to scikit-learn's ``scores_``
attribute: it stores the log marginal likelihood at the initial
hyperparameters followed by its value after each evidence-maximization
update. The final element is identical to ``log_evidence/1``.
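The quantity tracked by ``scores/1`` and ``log_evidence/1`` is the standard Bayesian ridge log marginal likelihood (Bishop, *PRML*, eq. 3.86). A sketch of its computation, with function and variable names that are illustrative rather than the library's:

```python
import numpy as np

def log_evidence(X, y, lam, beta):
    """Log marginal likelihood log p(y | lam, beta) for Bayesian ridge:
    weights ~ N(0, lam^-1 I), observation noise ~ N(0, beta^-1 I)."""
    n, d = X.shape
    A = lam * np.eye(d) + beta * X.T @ X          # posterior precision
    m = beta * np.linalg.solve(A, X.T @ y)        # posterior mean
    # Regularized sum of squares at the posterior mean
    E = 0.5 * beta * np.sum((y - X @ m) ** 2) + 0.5 * lam * float(m @ m)
    _, logdet_A = np.linalg.slogdet(A)
    return (0.5 * d * np.log(lam) + 0.5 * n * np.log(beta)
            - E - 0.5 * logdet_A - 0.5 * n * np.log(2.0 * np.pi))
```

Equivalently, the marginal is the Gaussian ``y ~ N(0, beta^-1 I + lam^-1 X X')``, which gives an independent way to check the value.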
The ``bias_variance/1`` diagnostic is always ``0.0`` because the
intercept is treated as a deterministic centering adjustment rather
than as a probabilistic parameter.
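That centering treatment can be illustrated as follows: fit the weights on mean-centered inputs and targets, then recover the intercept deterministically from the means. This is a sketch of the general technique, not the library's actual code:

```python
import numpy as np

def fit_with_centered_bias(X, y, lam, beta):
    """Ridge weights on centered data; the bias is recovered from the
    sample means, so it carries no posterior variance of its own."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    A = lam * np.eye(X.shape[1]) + beta * Xc.T @ Xc
    w = beta * np.linalg.solve(A, Xc.T @ yc)
    bias = y_mean - float(x_mean @ w)   # deterministic: bias_variance is 0.0
    return bias, w
```

Because the bias is a function of the data means and the weights, only the weights contribute entries to ``weight_variances/1``.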
Use the ``regression_protocols`` library ``diagnostic/2`` and
``regressor_options/2`` helper predicates when you only need a single
metadata term or the effective options.
The ``learn/3`` predicate accepts the following option defaults:

-  ``300``
-  ``1.0e-6``
-  ``1.0``
-  ``auto`` to derive it from the target variance; the default is
   ``auto``
-  ``1.0e-6``
-  ``1.0e-6``
-  ``1.0e-6``
-  ``1.0e-6``
-  valid values are ``true`` and ``false``; the default is ``true``
-  ``precision_bounds(1.0e-12, 1.0e12)``