Optimal parameter ranges for grid search?

Problem description · Votes: 1 · Answers: 1

I want to run a simple grid search implementation with MLlib, but I am unsure how to choose "good" parameter ranges. Obviously, I don't want to waste resources on parameter combinations that are unlikely to improve the model. Any suggestions based on your experience?

Set the parameter ranges (with the imports the snippet needs):

import org.apache.spark.mllib.classification.LogisticRegressionWithLBFGS
import org.apache.spark.mllib.optimization.{Gradient, LogisticGradient, LeastSquaresGradient, HingeGradient}
import org.apache.spark.mllib.optimization.{Updater, SimpleUpdater, L1Updater, SquaredL2Updater}

val intercept   : List[Boolean]  = List(false)
val classes     : List[Int]      = List(2)
val validate    : List[Boolean]  = List(true)
val tolerance   : List[Double]   = List(0.0000001 , 0.000001 , 0.00001 , 0.0001 , 0.001 , 0.01 , 0.1 , 1.0)
val gradient    : List[Gradient] = List(new LogisticGradient() , new LeastSquaresGradient() , new HingeGradient())
val corrections : List[Int]      = List(5 , 10 , 15)
val iters       : List[Int]      = List(1 , 10 , 100 , 1000 , 10000)
val regparam    : List[Double]   = List(0.0 , 0.0001 , 0.001 , 0.01 , 0.1 , 1.0 , 10.0 , 100.0)
val updater     : List[Updater]  = List(new SimpleUpdater() , new L1Updater() , new SquaredL2Updater())

Perform the grid search:

val combinations = for (a <- intercept;
                        b <- classes;
                        c <- validate;
                        d <- tolerance;
                        e <- gradient;
                        f <- corrections;
                        g <- iters;
                        h <- regparam;
                        i <- updater) yield (a,b,c,d,e,f,g,h,i)
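Even with intercept, classes and validate pinned to a single value, this grid multiplies out to 8 × 3 × 3 × 5 × 8 × 3 = 8,640 combinations, so it is worth checking the size before launching anything:

// Sanity-check how many models the grid implies before training any of them.
println(s"grid size = ${combinations.size}")   // 8640 with the lists above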

for( ( interceptS , classesS , validateS , toleranceS , gradientS , correctionsS , itersS , regParamS , updaterS ) <- combinations.take(3) ) {

      val lr : LogisticRegressionWithLBFGS = new LogisticRegressionWithLBFGS().
            setIntercept(addIntercept=interceptS).
            setNumClasses(numClasses=classesS).
            setValidateData(validateData=validateS)

      lr.
            optimizer.
            setConvergenceTol(tolerance=toleranceS).
            setGradient(gradient=gradientS).
            setNumCorrections(corrections=correctionsS).
            setNumIterations(iters=itersS).
            setRegParam(regParam=regParamS).
            setUpdater(updater=updaterS)

}
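The loop above only configures the models; each candidate still has to be trained and scored so the combinations can be compared. A minimal sketch of that step, assuming training and test are pre-split RDD[LabeledPoint]s (these names are placeholders, not from the original post):

import org.apache.spark.mllib.evaluation.BinaryClassificationMetrics
import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.rdd.RDD

// Train one configured model and return its area under the ROC curve on held-out data.
def evaluate(lr: LogisticRegressionWithLBFGS,
             training: RDD[LabeledPoint],
             test: RDD[LabeledPoint]): Double = {
  val model = lr.run(training)
  model.clearThreshold()                 // emit raw scores instead of hard 0/1 labels
  val scoreAndLabels = test.map(p => (model.predict(p.features), p.label))
  new BinaryClassificationMetrics(scoreAndLabels).areaUnderROC()
}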
machine-learning apache-spark-mllib grid-search
1 Answer

0 votes

Try a randomized search instead of an exhaustive grid search (scikit-learn calls this RandomizedSearchCV), with ranges that span the orders of magnitude of the hyperparameters involved.
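MLlib has no direct equivalent of RandomizedSearchCV, but the same idea can be sketched in Scala: sample a fixed budget of configurations, drawing tolerance and regParam log-uniformly across their orders of magnitude instead of enumerating every grid point. The sample size and ranges below are illustrative assumptions, not recommendations from the answer:

import scala.util.Random

val rng = new Random(42)

// Draw log-uniformly between 10^lo and 10^hi, covering several orders of magnitude.
def logUniform(lo: Double, hi: Double): Double =
  math.pow(10, lo + (hi - lo) * rng.nextDouble())

// Sample a fixed budget of configurations instead of the full 8,640-point grid,
// reusing the corrections, iters and updater lists defined in the question.
val sampled = (1 to 50).map { _ =>
  ( logUniform(-7, 0),                      // tolerance in [1e-7, 1]
    logUniform(-4, 2),                      // regParam in [1e-4, 100]
    corrections(rng.nextInt(corrections.size)),
    iters(rng.nextInt(iters.size)),
    updater(rng.nextInt(updater.size)) )
}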
