Scikit-learn: MLPClassifier wrongly lowers the learning rate after an early, accidentally low loss


I'm getting used to working with scikit-learn and the MLPClassifier. I'm still fairly new to it, though, and I don't dare file a bug report because I think that would be premature. I also can't imagine I'm the first one to run into this problem.

Question: should I file a bug report, or am I missing some scikit-learn feature?

System information

System:
    python: 3.7.4 (tags/v3.7.4:e09359112e, Jul  8 2019, 20:34:20) [MSC v.1916 64 bit (AMD64)]
executable: c:\program files\python37\pythonw.exe
   machine: Windows-10-10.0.18362-SP0

Python deps:
       pip: 19.3.1
setuptools: 40.8.0
   sklearn: 0.21.3
     numpy: 1.17.0
     scipy: 1.3.0
    Cython: None
    pandas: 0.25.0

Description

Below I copied part of the stdout text file. As you can see, early in the run, at iteration 401, the loss is accidentally very low (about 0.13). After n_iter_no_change=50 epochs without improvement, the learning rate is divided by 5 (sklearn's behaviour for learning_rate='adaptive'). The next 50 iterations go from 0.36 down to 0.19. Apparently, because these values never drop below the value from iteration 401, the learning rate is divided by 5 again! Yet the loss is steadily decreasing and does not need a lower learning rate at all. The same happens for the next 50 iterations, and so on. The loss stabilizes around 0.15, but the learning rate has become so small that the differences between iterations will never bring it back down to 0.13, especially since the learning rate keeps being divided by 5 until tol is reached. I interrupted the run manually.

Code

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report, confusion_matrix

# df_stack holds the stacked feature columns, labels the targets
df_train, df_test, labels_train, labels_test = train_test_split(df_stack, labels, test_size=0.2)

print('MULTILAYER PERCEPTRON CLASSIFIER')
l = int(len(df_stack.columns)*1.5)  # hidden layer width: 1.5x the number of features
# note: the log below was produced with n_iter_no_change=50 rather than 20
mlpc = MLPClassifier(hidden_layer_sizes=(l,l,l,l,l),activation='tanh',solver='sgd',alpha=0.01,batch_size=1024,
                     learning_rate_init=0.1,learning_rate='adaptive',max_iter=20000,tol=0.00000001,n_iter_no_change=20,
                     warm_start=True,verbose=True)
mlpc.fit(df_train,labels_train)
mlpc_prediction = mlpc.predict(df_test)
print(classification_report(labels_test,mlpc_prediction))
print(confusion_matrix(labels_test,mlpc_prediction))

I thought it had to have something to do with warm_start, but I tried both True and False and saw the same behaviour.

I increased the batch size to 512. That reduces how often it happens, but it still happens.

Expectation

When the loss is decreasing at a steady rate, I do not expect sklearn to suddenly divide the learning rate by 5 after n_iter_no_change epochs. It was doing great! Why lower the learning rate?

My feeling is that, once the learning rate has been lowered, the check of whether the loss decreased by at least tol should restart from the first loss computed after the change. Apparently that is not what happens. Instead, the new losses are compared against the accidentally low loss obtained earlier, the algorithm decides the loss is not decreasing (fast enough), and it divides the learning rate again, which makes the problem worse. This quickly drives the learning rate and the per-iteration loss differences towards zero, and the fit wrongly claims to have converged.
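For reference, this is roughly the logic I believe is responsible: a simplified sketch based on my reading of sklearn 0.21.x (multilayer_perceptron.py and _stochastic_optimizers.py); names and structure are simplified, this is not the literal implementation.

def update_no_improvement_count(loss_curve, best_loss, no_improvement_count, tol):
    # an epoch only counts as an improvement if it beats the best loss ever seen by at least tol
    if loss_curve[-1] > best_loss - tol:
        no_improvement_count += 1      # compared against best_loss, not against the loss
    else:                              # observed just after the last learning-rate change
        no_improvement_count = 0
    if loss_curve[-1] < best_loss:
        best_loss = loss_curve[-1]     # best_loss only ever goes down and is never reset
    return best_loss, no_improvement_count

def maybe_lower_learning_rate(learning_rate, no_improvement_count, n_iter_no_change):
    # with learning_rate='adaptive', the rate is divided by 5 once the counter overflows
    # (and training stops altogether once the rate drops below roughly 1e-6)
    if no_improvement_count > n_iter_no_change:
        learning_rate /= 5.0
        no_improvement_count = 0       # the counter is reset, but best_loss is kept
    return learning_rate, no_improvement_count

If this sketch is accurate, the key point is that best_loss is never reset when the learning rate changes, so after an accidental dip every later epoch is measured against that dip.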

Iteration 331, loss = 0.36277055
Iteration 332, loss = 0.24242866
Iteration 333, loss = 0.22366280
Iteration 334, loss = 0.21882291
Iteration 335, loss = 0.22645550
Iteration 336, loss = 0.20247741
Iteration 337, loss = 0.20114807
Iteration 338, loss = 0.20465412
Iteration 339, loss = 0.35536242
Iteration 340, loss = 0.30490947
Iteration 341, loss = 0.23094651
Iteration 342, loss = 0.21910954
Iteration 343, loss = 0.21982883
Iteration 344, loss = 0.23542610
Iteration 345, loss = 0.19875187
Iteration 346, loss = 0.18705431
Iteration 347, loss = 0.19246917
Iteration 348, loss = 0.21514954
Iteration 349, loss = 0.18808634
Iteration 350, loss = 0.18320747
Iteration 351, loss = 0.20807128
Iteration 352, loss = 0.18040341
Iteration 353, loss = 0.17532935
Iteration 354, loss = 0.21520632
Iteration 355, loss = 0.18205413
Iteration 356, loss = 0.17632098
Iteration 357, loss = 0.29760320
Iteration 358, loss = 0.19014626
Iteration 359, loss = 0.17066800
Iteration 360, loss = 0.16846425
Iteration 361, loss = 0.33112298
Iteration 362, loss = 0.31750713
Iteration 363, loss = 0.19228498
Iteration 364, loss = 0.17457898
Iteration 365, loss = 0.23505472
Iteration 366, loss = 0.16258889
Iteration 367, loss = 0.20449453
Iteration 368, loss = 0.16618323
Iteration 369, loss = 0.25122422
Iteration 370, loss = 0.16605087
Iteration 371, loss = 0.15851253
Iteration 372, loss = 0.15725671
Iteration 373, loss = 0.15626153
Iteration 374, loss = 0.16184306
Iteration 375, loss = 0.27864597
Iteration 376, loss = 0.17242408
Iteration 377, loss = 0.25311388
Iteration 378, loss = 0.16054483
Iteration 379, loss = 0.16586523
Iteration 380, loss = 0.15612684
Iteration 381, loss = 0.15271917
Iteration 382, loss = 0.14338205
Iteration 383, loss = 0.14576599
Iteration 384, loss = 0.14856326
Iteration 385, loss = 0.14906040
Iteration 386, loss = 0.15078332
Iteration 387, loss = 0.16312012
Iteration 388, loss = 0.17348876
Iteration 389, loss = 0.14614085
Iteration 390, loss = 0.24013805
Iteration 391, loss = 0.15991470
Iteration 392, loss = 0.14121736
Iteration 393, loss = 0.13829140
Iteration 394, loss = 0.13611360
Iteration 395, loss = 0.13957503
Iteration 396, loss = 0.34616708
Iteration 397, loss = 0.17156533
Iteration 398, loss = 0.16191005
Iteration 399, loss = 0.14970412
Iteration 400, loss = 0.14138515
Iteration 401, loss = 0.13482772
Iteration 402, loss = 0.14011477
Iteration 403, loss = 0.15109059
Iteration 404, loss = 0.25684779
Iteration 405, loss = 0.17334369
Iteration 406, loss = 0.13719433
Iteration 407, loss = 0.47797963
Iteration 408, loss = 0.36297558
Iteration 409, loss = 0.29520540
Iteration 410, loss = 0.23906590
Iteration 411, loss = 0.22037938
Iteration 412, loss = 0.19925232
Iteration 413, loss = 0.18805823
Iteration 414, loss = 0.25676207
Iteration 415, loss = 0.17797250
Iteration 416, loss = 0.17708130
Iteration 417, loss = 0.16782866
Iteration 418, loss = 0.16829063
Iteration 419, loss = 0.16531369
Iteration 420, loss = 0.16296342
Iteration 421, loss = 0.15146044
Iteration 422, loss = 0.15707462
Iteration 423, loss = 0.23143112
Iteration 424, loss = 0.15225075
Iteration 425, loss = 0.15847755
Iteration 426, loss = 0.43851424
Iteration 427, loss = 0.24816866
Iteration 428, loss = 0.20171329
Iteration 429, loss = 0.17147653
Iteration 430, loss = 0.15560864
Iteration 431, loss = 0.14891353
Iteration 432, loss = 0.14883808
Iteration 433, loss = 0.17718146
Iteration 434, loss = 0.14910944
Iteration 435, loss = 0.14170514
Iteration 436, loss = 0.14725053
Iteration 437, loss = 0.25596943
Iteration 438, loss = 0.15055692
Iteration 439, loss = 0.77150330
Iteration 440, loss = 0.56516046
Iteration 441, loss = 0.52989079
Iteration 442, loss = 0.50608732
Iteration 443, loss = 0.48526388
Iteration 444, loss = 0.47806246
Iteration 445, loss = 0.42537222
Iteration 446, loss = 0.41309122
Iteration 447, loss = 0.57193972
Iteration 448, loss = 0.49888177
Iteration 449, loss = 0.46276178
Iteration 450, loss = 0.43022112
Iteration 451, loss = 0.40696508
Iteration 452, loss = 0.39255343
Training loss did not improve more than tol=0.000000 for 50 consecutive epochs. Setting learning rate to 0.004000
Iteration 453, loss = 0.36314323
Iteration 454, loss = 0.34541510
Iteration 455, loss = 0.33897814
Iteration 456, loss = 0.33539953
Iteration 457, loss = 0.32847849
Iteration 458, loss = 0.32762007
Iteration 459, loss = 0.31942027
Iteration 460, loss = 0.31508242
Iteration 461, loss = 0.31028737
Iteration 462, loss = 0.30865291
Iteration 463, loss = 0.31098832
Iteration 464, loss = 0.30240283
Iteration 465, loss = 0.29712414
Iteration 466, loss = 0.29234763
Iteration 467, loss = 0.28786321
Iteration 468, loss = 0.28467107
Iteration 469, loss = 0.27932500
Iteration 470, loss = 0.27823215
Iteration 471, loss = 0.27465911
Iteration 472, loss = 0.26966135
Iteration 473, loss = 0.27443592
Iteration 474, loss = 0.26456298
Iteration 475, loss = 0.25912139
Iteration 476, loss = 0.25674386
Iteration 477, loss = 0.25327641
Iteration 478, loss = 0.24961365
Iteration 479, loss = 0.24648199
Iteration 480, loss = 0.24457187
Iteration 481, loss = 0.24117319
Iteration 482, loss = 0.23953095
Iteration 483, loss = 0.23615554
Iteration 484, loss = 0.23171504
Iteration 485, loss = 0.23032014
Iteration 486, loss = 0.23480823
Iteration 487, loss = 0.22569761
Iteration 488, loss = 0.22518993
Iteration 489, loss = 0.21999668
Iteration 490, loss = 0.21766831
Iteration 491, loss = 0.21681451
Iteration 492, loss = 0.21406469
Iteration 493, loss = 0.21204087
Iteration 494, loss = 0.21655806
Iteration 495, loss = 0.20624473
Iteration 496, loss = 0.20381265
Iteration 497, loss = 0.20019352
Iteration 498, loss = 0.19979185
Iteration 499, loss = 0.19635400
Iteration 500, loss = 0.19456250
Iteration 501, loss = 0.19451135
Iteration 502, loss = 0.19353174
Iteration 503, loss = 0.19693322
Training loss did not improve more than tol=0.000000 for 50 consecutive epochs. Setting learning rate to 0.000800
Iteration 504, loss = 0.18334647
Iteration 505, loss = 0.17993619
Iteration 506, loss = 0.17834410
Iteration 507, loss = 0.17733023
Iteration 508, loss = 0.17661048
Iteration 509, loss = 0.17605634
Iteration 510, loss = 0.17543265
Iteration 511, loss = 0.17519766
Iteration 512, loss = 0.17508450
Iteration 513, loss = 0.17384544
Iteration 514, loss = 0.17354200
Iteration 515, loss = 0.17292735
Iteration 516, loss = 0.17314152
Iteration 517, loss = 0.17250877
Iteration 518, loss = 0.17179139
Iteration 519, loss = 0.17357771
Iteration 520, loss = 0.17130067
Iteration 521, loss = 0.17054958
Iteration 522, loss = 0.17130531
Iteration 523, loss = 0.17016965
Iteration 524, loss = 0.16935377
Iteration 525, loss = 0.16923233
Iteration 526, loss = 0.16846157
Iteration 527, loss = 0.16791882
Iteration 528, loss = 0.16763083
Iteration 529, loss = 0.16659641
Iteration 530, loss = 0.16696834
Iteration 531, loss = 0.16627937
Iteration 532, loss = 0.16591833
Iteration 533, loss = 0.16541474
Iteration 534, loss = 0.16490934
Iteration 535, loss = 0.16486146
Iteration 536, loss = 0.16930541
Iteration 537, loss = 0.16466114
Iteration 538, loss = 0.16404731
Iteration 539, loss = 0.16431962
Iteration 540, loss = 0.16332053
Iteration 541, loss = 0.16560544
Iteration 542, loss = 0.16465136
Iteration 543, loss = 0.16252311
Iteration 544, loss = 0.16218103
Iteration 545, loss = 0.16226194
Iteration 546, loss = 0.16065180
Iteration 547, loss = 0.16483378
Iteration 548, loss = 0.16199347
Iteration 549, loss = 0.15987497
Iteration 550, loss = 0.15914196
Iteration 551, loss = 0.15876606
Iteration 552, loss = 0.15996318
Iteration 553, loss = 0.15846688
Iteration 554, loss = 0.15792289
Training loss did not improve more than tol=0.000000 for 50 consecutive epochs. Setting learning rate to 0.000160
Iteration 555, loss = 0.15817669
Iteration 556, loss = 0.15622588
Iteration 557, loss = 0.15582773
Iteration 558, loss = 0.15657326
Iteration 559, loss = 0.15611789
Iteration 560, loss = 0.15576958
Iteration 561, loss = 0.15548868
Iteration 562, loss = 0.15531042
Iteration 563, loss = 0.15519508
Iteration 564, loss = 0.15510623
Iteration 565, loss = 0.15495463
Iteration 566, loss = 0.15487093
Iteration 567, loss = 0.15483675
Iteration 568, loss = 0.15473874
Iteration 569, loss = 0.15463415
Iteration 570, loss = 0.15456982
Iteration 571, loss = 0.15445918
Iteration 572, loss = 0.15440402
Iteration 573, loss = 0.15430028
Iteration 574, loss = 0.15425548
Iteration 575, loss = 0.15414637
Iteration 576, loss = 0.15409595
Iteration 577, loss = 0.15404775
Iteration 578, loss = 0.15405658
Iteration 579, loss = 0.15426012
Iteration 580, loss = 0.15396917
Iteration 581, loss = 0.15376704
Iteration 582, loss = 0.15376100
Iteration 583, loss = 0.15361772
Iteration 584, loss = 0.15345492
Iteration 585, loss = 0.15339733
Iteration 586, loss = 0.15335340
Iteration 587, loss = 0.15333112
Iteration 588, loss = 0.15322458
Iteration 589, loss = 0.15338764
Iteration 590, loss = 0.15312139
Iteration 591, loss = 0.15297541
Iteration 592, loss = 0.15292804
Iteration 593, loss = 0.15286791
Iteration 594, loss = 0.15279897
Iteration 595, loss = 0.15274573
Iteration 596, loss = 0.15263508
Iteration 597, loss = 0.15258324
Iteration 598, loss = 0.15296264
Iteration 599, loss = 0.15259672
Iteration 600, loss = 0.15243085
Iteration 601, loss = 0.15233993
Iteration 602, loss = 0.15248246
Iteration 603, loss = 0.15225322
Iteration 604, loss = 0.15211536
Iteration 605, loss = 0.15204409
Training loss did not improve more than tol=0.000000 for 50 consecutive epochs. Setting learning rate to 0.000032
Iteration 606, loss = 0.15168847
Iteration 607, loss = 0.15162253
Iteration 608, loss = 0.15159155
Iteration 609, loss = 0.15159935
Iteration 610, loss = 0.15156232
Iteration 611, loss = 0.15154664
Iteration 612, loss = 0.15152027
Iteration 613, loss = 0.15149465
Iteration 614, loss = 0.15148186
Iteration 615, loss = 0.15146001
Iteration 616, loss = 0.15144674
Iteration 617, loss = 0.15143749
Iteration 618, loss = 0.15141525
Iteration 619, loss = 0.15139387
Iteration 620, loss = 0.15138700
Iteration 621, loss = 0.15136696
Iteration 622, loss = 0.15138551
Iteration 623, loss = 0.15135914
Iteration 624, loss = 0.15132832
Iteration 625, loss = 0.15132639
Iteration 626, loss = 0.15131053
Iteration 627, loss = 0.15129199
Iteration 628, loss = 0.15127809
Iteration 629, loss = 0.15125036
Iteration 630, loss = 0.15124654
Iteration 631, loss = 0.15122546
Iteration 632, loss = 0.15121544
Iteration 633, loss = 0.15119239
Iteration 634, loss = 0.15117764
Iteration 635, loss = 0.15116809
Iteration 636, loss = 0.15114747
Iteration 637, loss = 0.15113152
Iteration 638, loss = 0.15111655
Iteration 639, loss = 0.15111040
Iteration 640, loss = 0.15110277
Iteration 641, loss = 0.15109914
Iteration 642, loss = 0.15106607
Iteration 643, loss = 0.15105276
Iteration 644, loss = 0.15104198
Iteration 645, loss = 0.15101848
Iteration 646, loss = 0.15101173
Iteration 647, loss = 0.15099859
Iteration 648, loss = 0.15098257
Iteration 649, loss = 0.15096238
Iteration 650, loss = 0.15095006
Iteration 651, loss = 0.15093467
Iteration 652, loss = 0.15091879
Iteration 653, loss = 0.15090930
Iteration 654, loss = 0.15088202
Iteration 655, loss = 0.15087644
Iteration 656, loss = 0.15086687
Training loss did not improve more than tol=0.000000 for 50 consecutive epochs. Setting learning rate to 0.000006
Iteration 657, loss = 0.15078010
Iteration 658, loss = 0.15077711
Iteration 659, loss = 0.15077672
Iteration 660, loss = 0.15077272
Iteration 661, loss = 0.15076686
Iteration 662, loss = 0.15076496
Iteration 663, loss = 0.15076060
Iteration 664, loss = 0.15075905
Iteration 665, loss = 0.15075893
c:\program files\python37\lib\site-packages\sklearn\neural_network\multilayer_perceptron.py:568: UserWarning: Training interrupted by user.
  warnings.warn("Training interrupted by user.")
              precision    recall  f1-score   support

         0.0       0.75      0.75      0.75       951
         1.0       0.82      0.81      0.82      1290

    accuracy                           0.79      2241
   macro avg       0.78      0.78      0.78      2241
weighted avg       0.79      0.79      0.79      2241

[[ 715  236]
 [ 240 1050]]
TEST - MLPC
              precision    recall  f1-score   support

         0.0       0.46      0.50      0.48       416
         1.0       0.71      0.68      0.69       747

    accuracy                           0.61      1163
   macro avg       0.58      0.59      0.59      1163
weighted avg       0.62      0.61      0.62      1163

[[207 209]
 [241 506]]
scikit-learn

1 Answer

The behaviour you describe does not surprise me; in fact, it seems to be exactly what you want when training a neural network. When the loss has reached a low point and then plateaus or even increases instead of decreasing further, you can take that as a sign that the learning rate is too high, i.e. that the large step size makes the optimization merely "bounce around" some local minimum.

I plotted your training loss (with iterations counted from the first logged one), and it indeed jumps around a lot and even "explodes" at some point. This can happen with optimizers that adjust the learning rate per layer, or it can simply be an indication that your learning rate is too high.

[Plot: training loss per iteration, reconstructed from the log above]

I produced the plot with this code:

s = """log string..."""
plt.plot(np.array(re.findall(r"^Iteration \d+, loss = (?P<loss>[\d\.]+)$", s, flags=re.MULTILINE)).astype(float))
plt.title("Training loss")
plt.xlabel("Iteration")
plt.ylabel("Loss")
# Could use a log scale here
#plt.yscale("log")
plt.show()
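As an illustration of an optimizer with adaptive step sizes, MLPClassifier also ships an 'adam' solver, which adapts a step size per parameter instead of using one global rate. A minimal sketch, reusing l from the question's code and otherwise keeping the question's settings (the learning_rate_init value shown is just sklearn's default for adam):

from sklearn.neural_network import MLPClassifier

mlpc = MLPClassifier(hidden_layer_sizes=(l, l, l, l, l), activation='tanh',
                     solver='adam',             # adapts a step size per parameter
                     learning_rate_init=0.001,  # sklearn's default for adam
                     alpha=0.01, batch_size=1024,
                     max_iter=20000, tol=1e-8, n_iter_no_change=20,
                     verbose=True)
# learning_rate='adaptive' only applies to solver='sgd', so it is dropped here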

Besides that, early stopping is usually a good strategy to keep the model from overfitting. You can do it by hand, for instance by saving the model every n-th step and picking one manually, or you can use scikit-learn's built-in early-stopping strategy.
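The built-in variant is the early_stopping flag, which holds out part of the training data as a validation set and stops once the validation score stops improving. A minimal sketch with the question's architecture (l as before); validation_fraction is shown at its default value:

from sklearn.neural_network import MLPClassifier

mlpc = MLPClassifier(hidden_layer_sizes=(l, l, l, l, l), activation='tanh', solver='sgd',
                     learning_rate_init=0.1, learning_rate='adaptive',
                     early_stopping=True,      # hold out part of the training data
                     validation_fraction=0.1,  # 10% by default
                     n_iter_no_change=20,      # stop after this many epochs without
                     verbose=True)             # improvement on the validation score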
