AssertionError: wrong values for d['w'] | Deep Learning Specialization

Question · votes: 0 · answers: 3

I am working through the first course of the Deep Learning Specialization, where the first programming assignment is to build a logistic regression model from scratch. Since this is my first time building a model from scratch, and it took me a while to digest the underlying math, I ran into many errors. Among them is one I cannot fix and do not understand at all: an assertion error saying that dw (the derivative of the cost with respect to the weights) has the wrong values.

Code:

import numpy as np 

def sigmoid(x):
    return 1 / 1 + np.exp(x)

def propagate(w, b, X, Y):
    m = X.shape[1] 
    A = sigmoid(np.dot(w.T,X) + b)
    cost = np.sum(np.abs(Y * np.log(A) + (1-Y)*(np.log(1-A))) / m)
    dw = np.dot(X,(A-Y).T) / m
    db = np.sum(A - Y) /m
    cost = np.squeeze(np.array(cost))
    grads = {"dw": dw,"db": db}
    return grads, cost

def optimize(w, b, X, Y, num_iterations=100, learning_rate=0.009, print_cost=False):
    w = copy.deepcopy(w)
    b = copy.deepcopy(b)
    costs = []
    for i in range(num_iterations):      
        grads, cost = propagate(w, b ,X, Y)
        dw = grads["dw"]
        db = grads["db"]
        w = w - learning_rate * grads["dw"]
        b = b - learning_rate * grads["db"]
        if i % 100 == 0:
            costs.append(cost)
            if print_cost:
                print ("Cost after iteration %i: %f" %(i, cost))
    params = {"w": w,
              "b": b}
    grads = {"dw": dw,"db": db}
    return params, grads, costs

def predict(w, b, X):
    m = X.shape[1]
    Y_prediction = np.zeros((1, m))
    w = w.reshape(x[0], 1)
    A = sigmoid(np.dot(w.T, X) + b)   
    for i in range(A.shape[1]):
        if A[0, i] > 0.5:
            Y_prediction[0,i] = 1.0
        else:
            Y_prediction[0,i] = 0.0
    return Y_prediction

def model(X_train, Y_train, X_test, Y_test, num_iterations=2000, learning_rate=0.5, print_cost=False):
    w = np.zeros(shape=(X_train.shape[0],1))
    b = np.zeros(shape=(1,1))
    params, gards, costs = optimize(w, b, X_train, Y_train)
    b = params["b"]
    w = params["w"]
    Y_prediction_train = predict(w, b, X_train)
    Y_prediction_test = predict(w, b, X_test)
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test, 
         "Y_prediction_train" : Y_prediction_train, 
         "w" : w, 
         "b" : b,
         "learning_rate" : learning_rate,
         "num_iterations": num_iterations}
    return d

model_test(model)

The model_test function is not defined anywhere in the course material; I assume it is built into the exercise. But here is the problem:

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input-36-7f17a31b22cb> in <module>
----> 1 model_test(model)

~/work/release/W2A2/public_tests.py in model_test(target)
    117     assert type(d['w']) == np.ndarray, f"Wrong type for d['w']. {type(d['w'])} != np.ndarray"
    118     assert d['w'].shape == (X.shape[0], 1), f"Wrong shape for d['w']. {d['w'].shape} != {(X.shape[0], 1)}"
--> 119     assert np.allclose(d['w'], expected_output['w']), f"Wrong values for d['w']. {d['w']} != {expected_output['w']}"
    120 
    121     assert np.allclose(d['b'], expected_output['b']), f"Wrong values for d['b']. {d['b']} != {expected_output['b']}"

AssertionError: Wrong values for d['w']. [[ 0.28154433]
 [-0.11519574]
 [ 0.13142694]
 [ 0.20526551]] != [[ 0.00194946]
 [-0.0005046 ]
 [ 0.00083111]
 [ 0.00143207]]

At this point I am completely lost and have no idea what to do next.

python numpy neural-network logistic-regression
3 Answers

8 votes

The problem comes from this line:

params, gards, costs = optimize(w, b, X_train, Y_train)

You still need to pass the arguments through to the optimize function. Omitting the trailing arguments makes the model fall back on optimize's default values, which are not the ones specified when calling model. So the line above should be:

params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost=print_cost)
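To see why this produces wrong values rather than an outright error, here is a minimal sketch of the pitfall using toy stand-in functions (not the course code): when an outer function fails to forward its arguments, the inner function silently uses its own defaults.

```python
def optimize(num_iterations=100, learning_rate=0.009):
    # Report what the inner function actually received.
    return (num_iterations, learning_rate)

def model_buggy(num_iterations=2000, learning_rate=0.5):
    return optimize()  # arguments dropped: defaults (100, 0.009) win

def model_fixed(num_iterations=2000, learning_rate=0.5):
    return optimize(num_iterations, learning_rate)  # forwarded correctly

print(model_buggy())   # (100, 0.009)
print(model_fixed())   # (2000, 0.5)
```

In the assignment this means the model trains for 100 iterations at learning rate 0.009 instead of 2000 at 0.5, so the weights it returns are valid-looking but numerically different from the expected output, which is exactly what the np.allclose assertion catches.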

0 votes

With respect to Arthur L's answer, I agree the problem does come from the line

params, gards, costs = optimize(w, b, X_train, Y_train)

but I believe you simply misspelled your grads variable. I believe it would work without specifying the other arguments, though it certainly does not hurt to include them.

