Logistic regression from scratch: cost not decreasing

Problem description

I am trying to implement logistic regression from scratch, but the cost is not decreasing. The cost history array is

J_all = [0.6931471785599453, 0.7013523852395079, 1.0799382321159159, 1.4184962890456663, 1.2090967630312366, 1.3564457452734269, 1.2571265595127734, 1.2870719263130037, 1.306411844446772, 1.229356753355045, 1.3446092043800832, 1.1813483789340946, 1.372813359239384, 1.1431300497707213, 1.391357334040078, 1.1143844952172193, 1.4016419750913938,...].

As you can see, the cost function is not decreasing.

Below is my code. Please tell me what I am doing wrong.

import numpy as np

def sigmoid(Z):
    # Standard logistic function (assumed; its definition was not shown in the post)
    return 1.0 / (1.0 + np.exp(-Z))

def logistic_regression(X, Y, iterations, learning_rate):
    X = X.to_numpy()
    Y = Y.to_numpy()

    m, n = X.shape  # m samples, n features

    W = np.zeros(n)
    B = 0.0

    J_all = []

    for i in range(iterations):
        # Forward pass: linear combination, then sigmoid
        Z = np.dot(X, W) + B
        F = sigmoid(Z)

        # Cross-entropy cost; epsilon avoids taking log(0)
        epsilon = 1e-9
        E = np.sum(Y * np.log(F + epsilon) + (1 - Y) * np.log(1 - F + epsilon))
        J = -(1 / m) * E

        # Gradients of the cost with respect to W and B
        DW = np.dot(X.T, (F - Y))
        DB = F - Y

        W = W - (learning_rate / m) * DW
        B = B - (learning_rate / m) * np.sum(DB)

        J_all.append(J)

    return J_all, W, B
Tags: regression, classification, logistic-regression
1 Answer

Actually, I found the solution myself. The code I posted is correct; the fix was simply to lower the learning rate from 0.005 to 0.001, which did the trick.
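To illustrate the effect, here is a minimal, self-contained sketch of the same gradient-descent loop. The data is synthetic and invented purely for illustration, NumPy arrays are passed directly (so the `to_numpy()` calls are dropped), and `sigmoid` is defined inline since it was not shown in the post. With a small learning rate such as 0.001, the cost shrinks steadily rather than bouncing around:

```python
import numpy as np

def sigmoid(Z):
    # Standard logistic function
    return 1.0 / (1.0 + np.exp(-Z))

def logistic_regression(X, Y, iterations, learning_rate):
    m, n = X.shape
    W, B = np.zeros(n), 0.0
    J_all = []
    for _ in range(iterations):
        # Forward pass
        F = sigmoid(np.dot(X, W) + B)
        # Cross-entropy cost; epsilon avoids log(0)
        epsilon = 1e-9
        J = -np.sum(Y * np.log(F + epsilon) + (1 - Y) * np.log(1 - F + epsilon)) / m
        # Gradient step on W and B
        W -= (learning_rate / m) * np.dot(X.T, F - Y)
        B -= (learning_rate / m) * np.sum(F - Y)
        J_all.append(J)
    return J_all, W, B

# Synthetic, linearly separable data (invented for illustration)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
Y = (X[:, 0] + X[:, 1] > 0).astype(float)

J_all, W, B = logistic_regression(X, Y, iterations=100, learning_rate=0.001)
print(J_all[0], J_all[-1])
```

With a step this small, every iteration lowers the cost, so `J_all` is monotonically decreasing; a rate that is too large for the data overshoots the minimum and produces the kind of oscillating cost history shown in the question.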
