Scikit-learn's Kernel PCA: how to implement an anisotropic Gaussian kernel, or any other custom kernel, in KPCA?

Problem Description

I'm currently using Scikit-learn's KPCA to perform dimensionality reduction on my dataset. Scikit-learn provides the isotropic Gaussian kernel (RBF kernel), which has only a single gamma value. But now I want to implement an anisotropic Gaussian kernel, which has one gamma value per feature dimension.

I'm aware that Kernel PCA has an option for a precomputed kernel, but I couldn't find any code example of it being used for dimensionality reduction.

Does anyone know how to implement a custom kernel in sklearn's KPCA?

Recommended Answer

I've found the solution to this problem.

First of all, you have to define your own kernel function that returns the Gram matrix between the samples:

def customkernel(X1, X2, **params):
    """Return the Gram matrix k[i, j] = k(X1[i], X2[j])."""
    k = yourkernelfunction(X1, X2, **params)  # placeholder for your own kernel
    return k
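For the anisotropic Gaussian kernel the question asks about, a minimal sketch of such a kernel function could look like the following (the function name `anisotropic_rbf` and the per-dimension `gammas` parameter are illustrative choices, not part of any library API):

```python
import numpy as np

def anisotropic_rbf(X1, X2, gammas):
    """Anisotropic Gaussian kernel with one gamma per feature dimension.

    k(x, y) = exp(-sum_d gammas[d] * (x[d] - y[d])**2)

    Returns the (n1, n2) Gram matrix between the rows of X1 and X2.
    """
    X1 = np.asarray(X1, dtype=float)
    X2 = np.asarray(X2, dtype=float)
    gammas = np.asarray(gammas, dtype=float)
    # Pairwise differences for every sample pair, shape (n1, n2, m).
    diff = X1[:, None, :] - X2[None, :, :]
    # Sum the squared differences, weighted by the per-dimension gammas.
    sq_dist = np.einsum('ijk,k->ij', diff ** 2, gammas)
    return np.exp(-sq_dist)
```

With equal gammas in every dimension this reduces to the usual isotropic RBF kernel, so it is a strict generalization of what `kernel='rbf'` gives you.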

If we want to fit a dataset x of size n x m with our KernelPCA model and transform it into n x n_princomp, what we need is:

from sklearn.decomposition import KernelPCA

KPCA = KernelPCA(n_components=n_princomp, kernel='precomputed')
gram_mat = customkernel(x, x)
transformed_x = KPCA.fit_transform(gram_mat)

Next, if we want to transform another dataset X of size N x m into N x n_princomp, what we have to do is compute a new Gram matrix with X as X1 and x as X2:

new_gram_mat = customkernel(X, x)
transformed_X = KPCA.transform(new_gram_mat)
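Putting the pieces together, a self-contained end-to-end sketch might look like this. The data, the `anisotropic_rbf` helper, and the gamma values are all illustrative assumptions; only `KernelPCA(kernel='precomputed')` is the actual sklearn API:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

def anisotropic_rbf(X1, X2, gammas):
    """Gram matrix of an anisotropic Gaussian kernel (one gamma per feature)."""
    diff = X1[:, None, :] - X2[None, :, :]
    return np.exp(-np.einsum('ijk,k->ij', diff ** 2, gammas))

rng = np.random.default_rng(0)
x = rng.normal(size=(50, 3))        # training data: n = 50 samples, m = 3 features
X = rng.normal(size=(10, 3))        # new data: N = 10 samples, same m

gammas = np.array([0.5, 1.0, 2.0])  # one gamma per feature (illustrative values)
n_princomp = 2

# Fit on the n x n Gram matrix of the training data.
kpca = KernelPCA(n_components=n_princomp, kernel='precomputed')
transformed_x = kpca.fit_transform(anisotropic_rbf(x, x, gammas))

# Transform new data using the N x n Gram matrix against the training samples.
transformed_X = kpca.transform(anisotropic_rbf(X, x, gammas))

print(transformed_x.shape)  # (50, 2)
print(transformed_X.shape)  # (10, 2)
```

The key detail is the shape of the second Gram matrix: its rows are the new samples and its columns are the original training samples, which is what `transform` expects when the kernel is precomputed.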

