How do I add L1-norm regularization in Python?

Problem description · Votes: 0 · Answers: 2

I am trying to write logistic regression from scratch. In the code below, I thought my cost derivative was my regularization, but my task is to add L1-norm regularization. How do I add this in Python? Should it go where I define the cost derivative? Any help in the right direction is appreciated.

import numpy as np

def Sigmoid(z):
    # logistic (sigmoid) function
    return 1/(1 + np.exp(-z))

def Hypothesis(theta, X):   
    return Sigmoid(X @ theta)

def Cost_Function(X,Y,theta,m):
    hi = Hypothesis(theta, X)
    _y = Y.reshape(-1, 1)
    J = 1/float(m) * np.sum(-_y * np.log(hi) - (1-_y) * np.log(1-hi))
    return J

def Cost_Function_Derivative(X,Y,theta,m,alpha):
    hi = Hypothesis(theta,X)
    _y = Y.reshape(-1, 1)
    J = alpha/float(m) * X.T @ (hi - _y)
    return J

def Gradient_Descent(X,Y,theta,m,alpha):
    new_theta = theta - Cost_Function_Derivative(X,Y,theta,m,alpha)
    return new_theta

def Accuracy(theta):
    correct = 0
    length = len(X_test)
    prediction = (Hypothesis(theta, X_test) > 0.5) 
    _y = Y_test.reshape(-1, 1)
    correct = prediction == _y
    my_accuracy = (np.sum(correct) / length)*100
    print ('LR Accuracy: ', my_accuracy, "%")

def Logistic_Regression(X,Y,alpha,theta,num_iters):
    m = len(Y)
    for x in range(num_iters):
        new_theta = Gradient_Descent(X,Y,theta,m,alpha)
        theta = new_theta
        if x % 100 == 0:
            # optional progress logging:
            # print('theta: ', theta)
            # print('cost: ', Cost_Function(X,Y,theta,m))
            pass
    Accuracy(theta)
ep = .012 
initial_theta = np.random.rand(X_train.shape[1],1) * 2 * ep - ep
alpha = 0.5
iterations = 10000
Logistic_Regression(X_train,Y_train,alpha,initial_theta,iterations)
python machine-learning logistic-regression regularized
2 Answers
2 votes

Regularization adds a term to the cost function so that there is a compromise between minimizing the cost and minimizing the model parameters, which reduces overfitting. You control how much of a compromise you want via a scalar e that multiplies the regularization term.

So just add the L1 norm of theta to the original cost function:

J = J + e * np.sum(abs(theta))

Since this term is added to the cost function, it has to be taken into account when computing the gradient of the cost.

This is simple, because the derivative of a sum is the sum of the derivatives, so we only need to work out the derivative of the term sum(abs(theta)). Since abs is piecewise linear, its derivative is constant: it is 1 if theta >= 0 and -1 if theta < 0 (there is a mathematical indeterminacy at exactly 0, but we don't care about it here).

So in the function Cost_Function_Derivative we add:

J = J + alpha * e * (theta >= 0).astype(float)
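For reference, here is a minimal sketch of the full derivative with the L1 term folded in (assuming the Hypothesis function from the question; the function name and the argument e are just illustrative). The subgradient is written with np.sign(theta), which reproduces the 1 / -1 rule described above; note that (theta >= 0).astype(float) evaluates to 0 rather than -1 for negative entries:

import numpy as np

def Cost_Function_Derivative_L1(X, Y, theta, m, alpha, e):
    # gradient of the cross-entropy part, scaled by the learning rate alpha
    hi = Hypothesis(theta, X)
    _y = Y.reshape(-1, 1)
    grad = alpha/float(m) * X.T @ (hi - _y)
    # subgradient of e * sum(abs(theta)): +1 where theta > 0, -1 where theta < 0
    # (np.sign gives 0 at exactly 0, which is also a valid choice at the kink)
    grad = grad + alpha * e * np.sign(theta)
    return grad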

0 votes

When checked, the marked answer, or the code itself, behaves strangely:

import numpy as np
import pandas as pd
from scipy.special import expit

e = 0.2  # strength of the L1 regularization term

def Sigmoid(z):
    # note: expit(z) = 1/(1 + exp(-z)), so expit(-z) here is 1/(1 + exp(z)),
    # i.e. the sign of z is flipped compared to the Sigmoid in the question
    return expit(-z)

def Hypothesis(theta, X):
    return Sigmoid(X @ theta)

def Cost_Function(X,Y,theta,m):
    hi = Hypothesis(theta, X)
    _y = Y.reshape(-1, 1)
    J = 1/float(m) * np.sum(-_y * np.log(hi) - (1-_y) * np.log(1-hi))
    J = J + e * np.sum(abs(theta))
    return J

def Cost_Function_Derivative(X,Y,theta,m,alpha):
    hi = Hypothesis(theta,X)
    _y = Y.reshape(-1, 1)
    J = alpha/float(m) * X.T @ (hi - _y)
    J = J + alpha * e * (theta >= 0).astype(float)
    return J

def Gradient_Descent(X,Y,theta,m,alpha):
    new_theta = theta - Cost_Function_Derivative(X,Y,theta,m,alpha)
    return new_theta

def Accuracy(theta):
    correct = 0
    length = len(X_test)
    prediction = (Hypothesis(theta, X_test) > 0.5)
    _y = y_test.reshape(-1, 1)
    correct = prediction == _y
    my_accuracy = (np.sum(correct) / length)*100
    print ('hand-made LR Accuracy: ', my_accuracy, "%")

def Logistic_Regression(X,Y,alpha,theta,num_iters):
    m = len(Y)
    for x in range(num_iters):
        new_theta = Gradient_Descent(X,Y,theta,m,alpha)
        theta = new_theta
        if x % 100 == 0:
            # optional progress logging:
            # print('theta: ', theta)
            # print('cost: ', Cost_Function(X,Y,theta,m))
            pass
    Accuracy(theta)


ep = .012

########## sklearn
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score

X, y =  make_blobs(1000, n_features=2, centers=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42
)
# fit(X_train)
sc = StandardScaler()
sc.fit(X)
X = sc.transform(X)

from sklearn.linear_model import LogisticRegression
model_lr = LogisticRegression( C=ep, penalty="l1", tol=0.01, solver="saga", random_state=10)
model_lr.fit(X_train, y_train)
# predict(X_test)
y_pred_lr = model_lr.predict(X_test)
print("sklearn Accuracy Score: ", accuracy_score(y_pred_lr, y_test)*100)

########### hand-made

initial_theta = np.random.rand(X_train.shape[1],1) * 2 * ep - ep
alpha = 0.5
iterations = 10000
Logistic_Regression(X_train,y_train,alpha,initial_theta,iterations)

# sklearn Accuracy Score:  95.45454545454545
# hand-made LR Accuracy:  44.24242424242424 %
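A side check that may help when comparing the two models: in scikit-learn's LogisticRegression, C is the inverse of the regularization strength, so a small value such as C = ep = 0.012 applies a fairly strong L1 penalty. A minimal sketch, reusing the model_lr fitted above, to see how many coefficients the L1 penalty has driven to exactly zero:

# Inspect the coefficients of the sklearn model fitted above; with a strong
# L1 penalty (small C), some of them may be exactly zero.
print("sklearn coefficients:", model_lr.coef_)
print("non-zero coefficients:", np.count_nonzero(model_lr.coef_))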