scala.MatchError when using ParamGridBuilder in Spark 1.6.1 and 2.0

val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.1, 0.01))
  .addGrid(lr.fitIntercept)
  .addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0))
  .build()


The error is

org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 57.0 failed 1 times, most recent failure: Lost task 0.0 in stage 57.0 (TID 257, localhost):
scala.MatchError: [280000,1.0,[2400.0,9373.0,3.0,1.0,1.0,0.0,0.0,0.0]] (of class org.apache.spark.sql.catalyst.expressions.GenericRowWithSchema)


Full code

The question is: how should I use ParamGridBuilder in this case?

Best answer

The problem here is the input schema, not ParamGridBuilder. The "price" column is loaded as an integer, while LinearRegression expects its label to be a double. You can fix it by explicitly casting the column to the required type:

val houses = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load(...)
  .withColumn("price", $"price".cast("double"))
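With the label cast in place, the original grid works unchanged. Below is a sketch of how the pieces might fit together with CrossValidator; the column names "price" and "features" and the evaluator setup are assumptions, since the full code is not shown in the question. (Note that addGrid(lr.fitIntercept) with no values grids over both true and false for a boolean param.)

    // Sketch only: assumes a DataFrame `houses` with a double "price" label
    // column and an assembled "features" vector column.
    import org.apache.spark.ml.evaluation.RegressionEvaluator
    import org.apache.spark.ml.regression.LinearRegression
    import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

    val lr = new LinearRegression()
      .setLabelCol("price")
      .setFeaturesCol("features")

    val paramGrid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.1, 0.01))
      .addGrid(lr.fitIntercept)          // grids over true and false
      .addGrid(lr.elasticNetParam, Array(0.0, 0.5, 1.0))
      .build()

    val cv = new CrossValidator()
      .setEstimator(lr)
      .setEvaluator(new RegressionEvaluator().setLabelCol("price"))
      .setEstimatorParamMaps(paramGrid)
      .setNumFolds(3)

    // With "price" cast to double, fitting no longer hits scala.MatchError.
    val model = cv.fit(houses)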
