Kernel ridge
This example demonstrates how to solve kernel ridge regression with
himalaya's KernelRidge estimator, which is compatible with scikit-learn's API.
Create a random dataset
import numpy as np
n_samples, n_features, n_targets = 10, 20, 4
X = np.random.randn(n_samples, n_features)
Y = np.random.randn(n_samples, n_targets)
Scikit-learn API
Himalaya implements a KernelRidge estimator analogous to the scikit-learn
estimator of the same name, with similar parameters and methods.
import sklearn.kernel_ridge
import himalaya.kernel_ridge
# Fit a scikit-learn model
model_skl = sklearn.kernel_ridge.KernelRidge(kernel="linear", alpha=0.1)
model_skl.fit(X, Y)
# Fit a himalaya model
model_him = himalaya.kernel_ridge.KernelRidge(kernel="linear", alpha=0.1)
model_him.fit(X, Y)
Y_pred_skl = model_skl.predict(X)
Y_pred_him = model_him.predict(X)
# The predictions are virtually identical.
print(np.max(np.abs(Y_pred_skl - Y_pred_him)))
2.5979218776228663e-14
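For a linear kernel, the kernel matrix is K = X @ X.T and the dual coefficients
have the closed form (K + alpha * I)^-1 @ Y, so the same predictions can be
reproduced directly with NumPy. The following is a minimal sketch, assuming the
data and the alpha=0.1 value used above:

# Closed-form kernel ridge solution for a linear kernel (sketch).
K = X @ X.T                                                # linear kernel matrix
dual_coef = np.linalg.solve(K + 0.1 * np.eye(n_samples), Y)  # (K + alpha*I)^-1 Y
Y_pred_closed_form = K @ dual_coef

# The closed-form predictions match both estimators up to numerical precision.
print(np.max(np.abs(Y_pred_closed_form - Y_pred_him)))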
Small API difference
Because himalaya focuses on fitting multiple targets, its score method
returns a score for each target separately, whereas scikit-learn returns the
average score over targets.
print(model_skl.score(X, Y))
print(model_him.score(X, Y))
print(model_him.score(X, Y).mean())
0.9976892308577026
[0.99878788 0.99718964 0.9972637 0.99751571]
0.9976892308577028
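To make the correspondence explicit, averaging himalaya's per-target scores
recovers scikit-learn's averaged score. A minimal sketch, using the fitted
models above:

# Averaging himalaya's per-target scores gives scikit-learn's single score.
print(np.allclose(model_skl.score(X, Y), model_him.score(X, Y).mean()))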