How to calculate the OOB error of a random forest

Question

I am comparing some models to find the best one. Now I want to get the OOB error of a random forest model, to compare it with the cross-validation errors of some other models. Can I make that comparison? If so, how can I get the OOB error in R?

Answer

To get the OOB error of a random forest model in R you can:

library(randomForest)

set.seed(1)
model <- randomForest(Species ~ ., data = iris)

The OOB error is stored in:

model$err.rate[,1]

where the i-th element is the (OOB) error rate over all trees up to the i-th.
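If you want a single summary number rather than the whole curve, one option (an extraction sketch, not part of the original answer) is to take the error rate after the last tree:

```r
library(randomForest)

set.seed(1)
model <- randomForest(Species ~ ., data = iris)

# err.rate[, 1] is the cumulative OOB error after each tree;
# the last entry is the OOB error of the full forest
final_oob <- model$err.rate[nrow(model$err.rate), 1]
final_oob
```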

One can plot it and check that it matches the OOB curve from the plot method defined for rf models:

par(mfrow = c(2,1))
plot(model$err.rate[,1], type = "l")
plot(model)

OOB error is useful for picking the hyperparameters mtry and ntree, and it should correlate with k-fold CV, but one should not use it to compare a random forest to different types of models tested by k-fold CV. OOB error is great since it is almost free, as opposed to k-fold CV, which takes k runs.
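As a sketch of how OOB error can drive hyperparameter selection (the mtry grid of 1 to 4 here is my own illustrative choice, not from the answer):

```r
library(randomForest)

set.seed(1)
# fit one forest per candidate mtry and record its final OOB error
oob_by_mtry <- sapply(1:4, function(m) {
  fit <- randomForest(Species ~ ., data = iris, mtry = m)
  fit$err.rate[nrow(fit$err.rate), 1]
})
oob_by_mtry             # OOB error for mtry = 1..4
which.min(oob_by_mtry)  # candidate with the lowest OOB error
```

Since each forest's OOB error comes for free with the fit, this costs no extra resampling, unlike wrapping the same grid search in k-fold CV.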

An easy way to run k-fold CV in R is:

Define the folds (replace the 5 with k, a positive integer > 1, to run k-fold CV):

folds <- sample(1:5, size = nrow(iris), replace = T) #5 fold CV

This approach will not give equally sized folds (especially for smaller data sets), which is usually not a big deal:

table(folds)
#output
 1  2  3  4  5
30 28 28 33 31

To fix this:

folds <- sample(rep(1:5, length.out = nrow(iris)), size = nrow(iris), replace = F)

table(folds)
#output
 1  2  3  4  5
30 30 30 30 30

Run through the folds, training the model on four of the five folds and predicting on the fifth. Here I just return a list of data frames containing the predictions and real values; one can customize the call to return any statistic desired.

CV_rf <- lapply(1:5, function(x){ #5 corresponds to the number of folds defined earlier
  model <- randomForest(Species ~ ., data = iris[folds != x,])
  preds <- predict(model,  iris[folds == x,], type="response")
  return(data.frame(preds, real = iris$Species[folds == x]))
  })

You can use the same code to get the performance of a ridge model.
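For instance, a hedged sketch of the same loop with a ridge model via glmnet (glmnet, the multinomial family, and the fixed lambda are my substitutions; the answer only says the loop can be reused):

```r
library(glmnet)

set.seed(1)
folds <- sample(rep(1:5, length.out = nrow(iris)))

CV_ridge <- lapply(1:5, function(x) {
  train <- iris[folds != x, ]
  test  <- iris[folds == x, ]
  # alpha = 0 gives the ridge penalty; Species is a factor,
  # so a multinomial family is needed. The lambda here is arbitrary;
  # in practice one would tune it, e.g. with cv.glmnet.
  model <- glmnet(as.matrix(train[, 1:4]), train$Species,
                  family = "multinomial", alpha = 0, lambda = 0.01)
  preds <- predict(model, as.matrix(test[, 1:4]), type = "class")
  data.frame(preds = factor(preds[, 1], levels = levels(test$Species)),
             real  = test$Species)
})
```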

Convert the list of data frames to a single data frame:

CV_rf <- do.call(rbind, CV_rf)

Check the accuracy:

caret::confusionMatrix(CV_rf$preds, CV_rf$real)
#part of output:
Overall Statistics

               Accuracy : 0.9533
                 95% CI : (0.9062, 0.981)
    No Information Rate : 0.3333
    P-Value [Acc > NIR] : < 2.2e-16

So here the accuracy is 0.9533,

while the OOB error after the 500th tree (500 trees are fit by default in randomForest) was:

model$err.rate[500,1]
#OOB
0.04666667

They are exactly the same, defeating my point completely, but try running 10-fold or 3-fold CV, for instance, and you will see they are not the same.

Another approach is to use the caret or mlr libraries. I don't use mlr, but caret is really good for tasks like this. Here is something to get you started with caret and rf. Additionally, caret has excellent documentation; I can recommend it even if you do not plan to use the package.
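A minimal caret starting point might look like this (the 5-fold setup and mtry grid are illustrative assumptions, not prescribed by the answer):

```r
library(caret)

set.seed(1)
# let caret handle the fold bookkeeping: 5-fold CV, tuning mtry over a grid
ctrl <- trainControl(method = "cv", number = 5)
fit  <- train(Species ~ ., data = iris, method = "rf",
              trControl = ctrl, tuneGrid = data.frame(mtry = 1:4))
fit$results  # cross-validated accuracy for each mtry
```

caret refits the winning model on the full data at the end, so `fit` is ready to use for prediction afterwards.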
