Why is the MLP classifier algorithm giving me extreme values?


I am trying to use an MLP classifier to analyze the past 666 results of drawing a single ball from each of 36 different urns (each urn holds 10 balls numbered 0 to 9). But when I ask the model for the expected probability of each ball in the next drawing, it returns extreme values. Here is the code:

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import StandardScaler

# Assuming your data is stored in the 'results' list
# Each sublist represents the past results for an urn
results = [
    [3, 2, 7, ..., 6, 9],  # Urn 1
    [5, 2, ..., 0],       # Urn 2
    # ... (other urns)
    [4, 2, 7, ..., 1, 1]  # Urn 36
]

# Encode the past results as one-hot vectors
# (`sparse_output` replaces the old `sparse` argument in scikit-learn >= 1.2)
encoder = OneHotEncoder(categories=[range(10)] * 666, sparse_output=False)
X = encoder.fit_transform(results)  # one row of 666 * 10 binary features per urn

# Use the last ball drawn in each urn as the label
labels = np.array([urn[-1] for urn in results])

# Normalize the features
scaler = StandardScaler()
X_normalized = scaler.fit_transform(X)

# Initialize the MLP classifier with increased complexity
mlp = MLPClassifier(hidden_layer_sizes=(100, 100, 100), max_iter=1000, random_state=42)

# Train the model
mlp.fit(X_normalized, labels)

# Predict probabilities for each ball in the next drawing for all urns
for urn_idx, urn_results in enumerate(results):
    features = scaler.transform(encoder.transform([urn_results]))
    next_drawing_probs = mlp.predict_proba(features)[0]  # shape (10,)
    print(f"Urn {urn_idx + 1} probabilities:")
    for ball, prob in enumerate(next_drawing_probs):
        print(f"  Ball {ball}: {prob:.4f}")


And here is the output for the last 2 urns (the output for the other 34 urns looks the same):

Urn 35 probabilities:
  Ball 0: 0.0000
  Ball 1: 0.0000
  Ball 2: 0.0000
  Ball 3: 0.0000
  Ball 4: 0.0001
  Ball 5: 0.9999
  Ball 6: 0.0000
  Ball 7: 0.0000
  Ball 8: 0.0000
  Ball 9: 0.0000
Urn 36 probabilities:
  Ball 0: 0.0000
  Ball 1: 0.0000
  Ball 2: 0.0000
  Ball 3: 0.0000
  Ball 4: 0.0000
  Ball 5: 0.0000
  Ball 6: 0.0000
  Ball 7: 0.0000
  Ball 8: 0.0000
  Ball 9: 0.9999

Process finished with exit code 0

I tried increasing the layer sizes (up to 1000), but the results were the same. I expected each ball's probability to be close to 0.1 (as low as 0.075, as high as 0.125).

python random mlp probability-theory
1 Answer

There are two reasons why your algorithm is so confident in its predictions.

One-hot encoded training data

Any machine learning algorithm is incentivized to reproduce the true target values of its labelled training data. Because you are using one-hot encoding, the targets the network is trained against are vectors of 0s and 1s (corresponding to which ball was drawn). Your MLP is trying to reproduce vectors of that shape, which is why you get such confident outputs.
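This effect is easy to reproduce on purely synthetic data (the setup below is illustrative, not the asker's actual data): an over-parameterized MLP trained against hard 0/1 targets pushes its softmax outputs toward 0 and 1 on the points it has memorized.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 50))      # 36 synthetic "urn histories"
y = rng.integers(0, 10, size=36)   # random labels 0-9: no real signal

# Same capacity as in the question: three hidden layers of 100 nodes
mlp = MLPClassifier(hidden_layer_sizes=(100, 100, 100),
                    max_iter=2000, random_state=0)
mlp.fit(X, y)

# On the training points the softmax outputs chase the hard 0/1 targets,
# so the top class probability ends up close to 1 for every sample
probs = mlp.predict_proba(X)
print(probs.max(axis=1).mean())
```

Note that the labels here are pure noise; the extreme confidence comes entirely from fitting hard targets with a large network, not from any pattern in the data.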

Overfitting the data

Your network has 3 hidden layers with 100 nodes each, which is probably far too complex for the problem you are trying to solve. Keep in mind that this project used only 1 hidden layer with 256 nodes, yet tackled a much harder problem than yours!
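The overfitting can be made visible with a quick synthetic experiment (again an illustrative setup, not the asker's data): a large MLP trained on independent uniform draws fits its training set well, but on fresh draws it does no better than the 1-in-10 chance level, which is exactly what the asker's ~0.1 expectation reflects.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(42)
# Independent uniform draws: features carry no information about the label
X_train = rng.normal(size=(500, 20))
y_train = rng.integers(0, 10, size=500)
X_test = rng.normal(size=(500, 20))
y_test = rng.integers(0, 10, size=500)

big = MLPClassifier(hidden_layer_sizes=(100, 100, 100),
                    max_iter=2000, random_state=0)
big.fit(X_train, y_train)

print("train accuracy:", big.score(X_train, y_train))  # well above chance: memorization
print("test accuracy:", big.score(X_test, y_test))     # near 0.1: chance level
```

The gap between the two scores is the overfitting; no amount of extra layers or iterations can raise the test score above chance when the draws are genuinely independent.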
