I am trying to run the following code in Python with NumPy:
    def log_loss(X, y, w, b=0):
        '''
        Input:
        X: data matrix of shape nxd
        y: n-dimensional vector of labels (+1 or -1)
        w: d-dimensional vector
        b: scalar (optional, default is 0)
        Output:
        scalar
        '''
        assert np.sum(np.abs(y)) == len(y)  # check if all labels in y are either +1 or -1
        wt = w.T
        n, d = X.shape
        y_pred = np.zeros(n)
        # I want to somehow not use this for loop here
        for i in range(n):
            y_pred[i] = np.log(sigmoid(y[i] * (wt @ X[i] + b)))
        return np.negative(np.sum(y_pred))
#########################################
    def sigmoid(z):
        '''
        Calculates the sigmoid of z.
        Input:
        z: scalar or array of dimension n
        Output:
        scalar or array of dimension n
        '''
        sig = 1 / (1 + np.exp(-z))
        return sig
My question is: how can I do this more efficiently, without the tight loop, or with a better solution altogether? I feel my version misses the point of using NumPy. Please advise.
Since your labels are +1/-1 rather than 0/1, the per-sample probability is sigmoid(y * (Xw + b)), so the whole loop collapses into one matrix-vector product:

    def log_loss(X, y, w, b=0):
        '''
        Input:
        X: data matrix of shape nxd
        y: n-dimensional vector of labels (+1 or -1)
        w: d-dimensional vector
        b: scalar (optional, default is 0)
        Output:
        scalar
        '''
        assert np.sum(np.abs(y)) == len(y)  # check if all labels in y are either +1 or -1
        linear_pred = X.dot(w) + b               # shape (n,): one margin per sample
        prob_pred = sigmoid(y * linear_pred)     # fold the +1/-1 label into the margin
        log_loss = np.negative(np.sum(np.log(prob_pred)))
        return log_loss
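As a quick sanity check (the data here is synthetic, generated only for the comparison), the vectorized loss can be verified against the original loop:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def log_loss_loop(X, y, w, b=0):
    # the original looped version, kept as a reference
    n, d = X.shape
    y_pred = np.zeros(n)
    for i in range(n):
        y_pred[i] = np.log(sigmoid(y[i] * (w @ X[i] + b)))
    return -np.sum(y_pred)

def log_loss_vec(X, y, w, b=0):
    # one matrix-vector product replaces n separate dot products
    return -np.sum(np.log(sigmoid(y * (X @ w + b))))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.choice([-1.0, 1.0], size=100)
w = rng.normal(size=5)
assert np.isclose(log_loss_loop(X, y, w, b=0.3), log_loss_vec(X, y, w, b=0.3))
```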
Given how your x and w are shaped, I assume np.dot(x, w) already yields the n-dimensional vector of per-sample margins, so the loop body vectorizes directly:

    y_pred = np.log(sigmoid(y * (np.dot(x, w) + b)))
    return np.negative(np.sum(y_pred))
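One further refinement (a suggestion beyond the answers above, using NumPy's logaddexp): since log(sigmoid(t)) = -log(1 + exp(-t)), the loss can be computed without forming sigmoid explicitly, which avoids overflow in exp when the margins y * (Xw + b) are large in magnitude:

```python
import numpy as np

def log_loss_stable(X, y, w, b=0):
    # log(sigmoid(t)) == -log(1 + exp(-t)) == -np.logaddexp(0, -t),
    # so the negative log-likelihood is sum(logaddexp(0, -margins))
    margins = y * (X @ w + b)
    return np.sum(np.logaddexp(0.0, -margins))
```

This agrees with the naive formula for moderate margins but stays finite where `np.exp` would overflow.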