This post looks at why you get different values in Python's sklearn when using a Pipeline and when not using one; it should be a useful reference for anyone running into the same problem.

Problem description

I am using recursive feature elimination with cross-validation (RFECV) together with GridSearchCV and a RandomForest classifier, both with and without a pipeline, as follows.

My code with a pipeline is as follows.

from sklearn.model_selection import train_test_split, StratifiedKFold, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFECV
from sklearn.pipeline import Pipeline

X = df[my_features_all]
y = df['gold_standard']

#get development and testing sets
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

#cross validation setting
k_fold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
#this is the classifier used for feature selection
clf_featr_sele = RandomForestClassifier(random_state = 42, class_weight="balanced")
rfecv = RFECV(estimator=clf_featr_sele, step=1, cv=k_fold, scoring='roc_auc')

param_grid = {'n_estimators': [200, 500],
    'max_features': ['auto', 'sqrt', 'log2'],
    'max_depth' : [3,4,5]
    }

#you can have different classifier for your final classifier
clf = RandomForestClassifier(random_state = 42, class_weight="balanced")
CV_rfc = GridSearchCV(estimator=clf, param_grid=param_grid, cv= k_fold, scoring = 'roc_auc', verbose=10, n_jobs = 5)

pipeline = Pipeline([('feature_sele', rfecv), ('clf_cv', CV_rfc)])

pipeline.fit(x_train, y_train)

The results (with the pipeline) are:

Optimal features: 29
Best hyperparameters: {'max_depth': 3, 'max_features': 'auto', 'n_estimators': 500}
Best score: 0.714763
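
For reference, the reported numbers can be read off the fitted pipeline like this (a minimal sketch; the original question does not show these print statements):

print('Optimal features:', pipeline.named_steps['feature_sele'].n_features_)
print('Best hyperparameters:', pipeline.named_steps['clf_cv'].best_params_)
print('Best score:', pipeline.named_steps['clf_cv'].best_score_)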

My code without a pipeline is as follows.

X = df[my_features_all]
y = df['gold_standard']

#get development and testing sets
x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=0)

#cross validation setting
k_fold = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

clf = RandomForestClassifier(random_state = 42, class_weight="balanced")

rfecv = RFECV(estimator=clf, step=1, cv=k_fold, scoring='roc_auc')

param_grid = {'estimator__n_estimators': [200, 500],
    'estimator__max_features': ['auto', 'sqrt', 'log2'],
    'estimator__max_depth' : [3,4,5]
    }

CV_rfc = GridSearchCV(estimator=rfecv, param_grid=param_grid, cv= k_fold, scoring = 'roc_auc', verbose=10, n_jobs = 5)
CV_rfc.fit(x_train, y_train)

The results (without the pipeline) are:

Optimal features: 4
Best hyperparameters: {'max_depth': 3, 'max_features': 'auto', 'n_estimators': 500}
Best score: 0.756835
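
In this version the fitted GridSearchCV wraps RFECV, so the analogous values live in different attributes (again a sketch; note that best_params_ actually carries the estimator__ prefix used in the grid):

print('Optimal features:', CV_rfc.best_estimator_.n_features_)
print('Best hyperparameters:', CV_rfc.best_params_)
print('Best score:', CV_rfc.best_score_)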

Even though the concept of both approaches is similar, I get different results and different selected features (as shown in the results sections above). However, I get the same hyperparameter values.

I am just wondering why this difference happens. Which approach (with or without a pipeline) is the more suitable one for this task?

I am happy to provide more details if needed.

Recommended answer

In the pipeline case, feature selection (RFECV) is carried out once with the base model (RandomForestClassifier(random_state=42, class_weight="balanced")), and GridSearchCV is then applied to the final estimator on the already-reduced feature set.
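
Roughly, fitting the pipeline is equivalent to the following (a minimal sketch reusing the names from the question; the pipeline performs the transform step internally):

#what pipeline.fit(x_train, y_train) effectively does
rfecv.fit(x_train, y_train)              #feature selection once, with the fixed base model
x_train_sel = rfecv.transform(x_train)   #keep only the selected columns
CV_rfc.fit(x_train_sel, y_train)         #grid search runs on the reduced feature set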

In the case without a pipeline, RFECV itself is the estimator being searched, so feature selection is rerun for each combination of hyperparameters with the correspondingly configured estimator. Hence it is more time consuming, and the selected feature subset depends on each candidate's hyperparameters, which is why the two approaches report different features and scores.
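
A minimal sketch of what GridSearchCV effectively does per grid point (cross-validation splits omitted; the two parameter dicts are only illustrative):

from sklearn.base import clone

#each candidate re-fits RFECV with its own hyperparameters,
#so the selected feature subset can change from candidate to candidate
for params in [{'estimator__max_depth': 3, 'estimator__n_estimators': 200},
               {'estimator__max_depth': 5, 'estimator__n_estimators': 500}]:
    candidate = clone(rfecv).set_params(**params)
    candidate.fit(x_train, y_train)
    print(params, '->', candidate.n_features_, 'features selected')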
