Understanding np.where() in the context of logistic regression


I am currently working through the Deep Learning Specialization taught by Andrew Ng on Coursera. In the first assignment, I have to define a prediction function, and I want to know whether my alternative solution is as effective as the actual solution.

Please let me know if my understanding of the np.where() function is correct, as I comment on it in the code below under "ALTERNATIVE SOLUTION COMMENTS". Likewise, I would appreciate a check of my understanding under "ACTUAL SOLUTION COMMENTS".

The alternative solution using np.where() also works when I increase the current number of examples/inputs (m = 3) to 4 (then 5, and so on).

Let me know your thoughts, and whether both solutions are equally good! Thanks.

import numpy as np

def sigmoid(z):
    # Standard logistic function, as defined earlier in the assignment
    return 1 / (1 + np.exp(-z))

def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)

    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''

    m = X.shape[1]
    Y_prediction = np.zeros((1,m))    # Initialize Y_prediction as an array of zeros 
    w = w.reshape(X.shape[0], 1)

    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    ### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T, X) + b)   # Note: The shape of A will always be a (1,m) row vector
    ### END CODE HERE ###

    for i in range(A.shape[1]):       # for i in range(# of examples in A = # of examples in our set)

        # Convert probabilities A[0,i] to actual predictions p[0,i]
        ### START CODE HERE ### (≈ 4 lines of code)
        Y_prediction[0, i] = 1 if A[0, i] > 0.5 else 0

        ''' 
        ACTUAL SOLUTION COMMENTS: 

        The above reads as:

        Change/update the i-th value of Y_prediction to 1 if the corresponding i-th value in A is > 0.5. 
        Otherwise, change/update the i-th value of Y_prediction to 0. 

        '''


        ''' 
        ALTERNATIVE SOLUTION COMMENTS:

        To condense this code, you could delete the for loop and Y_prediction var from the top, 
        and then use the following one line: 

        return np.where(A > 0.5, np.ones((1,m)), np.zeros((1,m))) 

        This reads as: 
        Given the condition > 0.5, return np.ones((1,m)) if True, 
        or return np.zeros((1,m)) if False. 

        Another way to understand this is as follows:
        Tell me where in the array A the entries satisfy the condition A > 0.5;
        at those positions, give me np.ones((1,m)); otherwise, give me
        np.zeros((1,m)).

        '''
        ### END CODE HERE ###

    assert(Y_prediction.shape == (1, m))

    return Y_prediction

w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print(sigmoid(np.dot(w.T, X) + b))
print ("predictions = " + str(predict(w, b, X)))   # Output gives 1,1,0 as expected
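As a quick sanity check of the question's claim, the loop-based solution and the np.where() one-liner can be run side by side on the same inputs (a minimal sketch, assuming NumPy and the standard logistic sigmoid; the weights and data are the ones from the question):

```python
import numpy as np

def sigmoid(z):
    # Standard logistic function
    return 1 / (1 + np.exp(-z))

w = np.array([[0.1124579], [0.23106775]])
b = -0.3
X = np.array([[1., -1.1, -3.2], [1.2, 2., 0.1]])

# Probabilities, shape (1, m)
A = sigmoid(np.dot(w.T, X) + b)

# Loop version (mirrors the actual solution)
loop_pred = np.zeros((1, X.shape[1]))
for i in range(A.shape[1]):
    loop_pred[0, i] = 1 if A[0, i] > 0.5 else 0

# Vectorized version (the alternative solution)
where_pred = np.where(A > 0.5, np.ones((1, X.shape[1])), np.zeros((1, X.shape[1])))

assert np.array_equal(loop_pred, where_pred)  # both give [[1., 1., 0.]]
```

Both produce the same (1, m) array of 0/1 predictions, matching the 1, 1, 0 output noted above.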
Tags: python, numpy, machine-learning, logistic-regression
1 Answer

Your alternative approach seems fine. As a complement, I'll add that you don't even need np.ones and np.zeros: you can directly specify the integers 1 and 0. When using np.where, it works as long as x and y (the values substituted according to the condition) are broadcastable against that condition. Here's a simple example:

y_pred = np.random.rand(1,6).round(2)
# array([[0.53, 0.54, 0.68, 0.34, 0.53, 0.46]])
np.where(y_pred > 0.5, np.ones((1,6)), np.zeros((1,6)))
# array([[1., 1., 1., 0., 1., 0.]])

And using integers:

np.where(y_pred > 0.5, 1, 0)
# array([[1, 1, 1, 0, 1, 0]])

As for your comments on how the function works: yes, it works just as you describe. Using numpy this way makes the code more efficient and, in this case, also easy to understand.
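The broadcasting point above can be sketched concretely: np.where accepts scalars, broadcastable arrays, or full-size arrays for x and y, and all three choices give equivalent results (a sketch; the y_pred values here are arbitrary illustrative numbers):

```python
import numpy as np

y_pred = np.array([[0.53, 0.54, 0.68, 0.34, 0.53, 0.46]])
cond = y_pred > 0.5  # boolean array of shape (1, 6)

# Scalars broadcast against the (1, 6) condition
a = np.where(cond, 1, 0)

# A (1, 1) array broadcasts too, mixed with a scalar
b = np.where(cond, np.ones((1, 1)), 0.0)

# Full-size arrays, as in the original question
c = np.where(cond, np.ones((1, 6)), np.zeros((1, 6)))

assert np.array_equal(a, c.astype(int))  # [[1, 1, 1, 0, 1, 0]]
assert np.array_equal(b, c)              # [[1., 1., 1., 0., 1., 0.]]
```

The scalar form is the most concise; the full-array form only costs two extra temporary allocations.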
