Linear regression with Python - gradient descent error

Problem description | Votes: 1 | Answers: 1

I have been trying to implement my own linear regression from scratch in Python, but for the last few days I have been stuck on a problem.

This is the code I am using:

Import modules

import pandas as pd
import numpy as np
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt

Initialize parameters

def initialize_parameters(n):
    w = np.zeros(n,)
    b = 0.0
    return w,b

Predictor / hypothesis

def predictor(x, w, b):
    return np.dot(x,w) + b

Cost function

def calculate_cost(X, y, theta, b):
    m = len(y)
    predictions = np.dot(X, theta)
    error = predictions - y
    cost = (1/2*m) * np.sum(np.power(error,2))
    return cost

Gradient descent

def gradient_descent(X, W, b, y, learning_rate = 0.0001, epochs = 25):

    m = len(y)

    final_cost = 0

    for _ in range(epochs):
        predictions = predictor(X, W, b)
        error = predictions - y
        derivate = np.dot(error, X)

        print(derivate)

        W = W - (learning_rate/m) * derivate
        b = b - (learning_rate/m) * error.sum()

Test run:

# Load dataset
boston = load_boston()
data = pd.DataFrame(boston.data)
data.columns = boston.feature_names
data['PRICE'] = boston.target


# Split dataset
X = data.drop(columns=['PRICE']).values
Y = data['PRICE'].values
w, b = initialize_parameters(X.shape[1])
gradient_descent(X, w, b, Y)

During the test run, I can see the values of the derivative growing out of control:

[1.41239553e+06 3.20162679e+06 3.84829686e+06 2.17737688e+04
1.81667467e+05 1.99565485e+06 2.27660208e+07 1.15045731e+06
3.50107975e+06 1.40396525e+08 5.96494458e+06 1.14447329e+08
4.25947931e+06]

[-4.33362969e+07 -9.66008831e+07 -1.16941872e+08 -6.62733008e+05
-5.50761913e+06 -6.04452389e+07 -6.90425672e+08 -3.46792848e+07
-1.06967561e+08 -4.26847914e+09 -1.80579130e+08 -3.45024565e+09
-1.29016170e+08]

...


[-2.01209195e+34 -4.47742185e+34 -5.42629282e+34 -3.07294644e+32
-2.55503032e+33 -2.80363423e+34 -3.20314565e+35 -1.60824109e+34
-4.96433806e+34 -1.98052568e+36 -8.37673498e+34 -1.60024763e+36
-5.98654489e+34]

[6.09700758e+35 1.35674093e+36 1.64426623e+36 9.31159124e+33
 7.74221040e+34 8.49552585e+35 9.70611871e+36 4.87326542e+35
 1.50428547e+36 6.00135600e+37 2.53830431e+36 4.84904376e+37
 1.81403288e+36]

[-1.84750510e+37 -4.11117381e+37 -4.98242821e+37 -2.82158290e+35
 -2.34603173e+36 -2.57430013e+37 -2.94113196e+38 -1.47668879e+37
 -4.55826082e+37 -1.81852092e+39 -7.69152754e+37 -1.46934918e+39
 -5.49685229e+37]

[5.59827926e+38 1.24576106e+39 1.50976712e+39 8.54991361e+36
 7.10890636e+37 7.80060146e+38 8.91216919e+39 4.47463782e+38
 1.38123662e+39 5.51045187e+40 2.33067389e+39 4.45239747e+40
 1.66564705e+39]

[-1.69638128e+40 -3.77488445e+40 -4.57487122e+40 -2.59078061e+38
 -2.15412899e+39 -2.36372529e+40 -2.70055070e+41 -1.35589732e+40
 -4.18540025e+40 -1.66976797e+42 -7.06236930e+40 -1.34915808e+42
 -5.04721600e+40]

Then, because of these huge values, the gradient descent run breaks down before completing all of the iterations.

At some point the values of the derivative overflow and become NaN. As expected, when I then try to predict a test case, I get 0.0 as the output:

sample_house = [[2.29690000e-01, 0.00000000e+00, 1.05900000e+01, 0.00000000e+00, 4.89000000e-01,
            6.32600000e+00, 5.25000000e+01, 4.35490000e+00,     4.00000000e+00, 2.77000000e+02,
            1.86000000e+01, 3.94870000e+02, 1.09700000e+01]]

test_predict = predictor(sample_house, w, b)
test_predict

------------------------------------------------

out : array([0.])

Thanks!

python pandas numpy linear-regression supervised-learning
1 Answer

0 votes

Your cost function is wrong; it should be:

cost = 1/(2*m) * np.sum(np.power(error,2))

Also, try initializing the weights to random values between 0 and 1 and scaling the inputs to the 0-1 range.
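
As an illustrative sketch only (not the answerer's exact code), the corrected cost, random initialization, and a 0-1 scaling step (done here with sklearn's MinMaxScaler, which is my choice) could be combined like this. Note also that the posted gradient_descent never returns W and b, which is why the later predictor call still uses the initial zero weights and prints array([0.]); the sketch returns them explicitly. X, Y and sample_house are assumed to be defined as in the question.

import numpy as np
from sklearn.preprocessing import MinMaxScaler

def initialize_parameters(n):
    w = np.random.rand(n)   # random weights in [0, 1) instead of zeros
    b = 0.0
    return w, b

def calculate_cost(X, y, theta, b):
    m = len(y)
    error = np.dot(X, theta) + b - y
    return 1/(2*m) * np.sum(np.power(error, 2))   # corrected: 1/(2*m), not (1/2)*m

def gradient_descent(X, W, b, y, learning_rate=0.0001, epochs=25):
    m = len(y)
    for _ in range(epochs):
        error = np.dot(X, W) + b - y
        derivative = np.dot(error, X)
        W = W - (learning_rate/m) * derivative
        b = b - (learning_rate/m) * error.sum()
    return W, b   # return the updated parameters so the caller can use them

# Scale the inputs to the 0-1 range before training
scaler = MinMaxScaler()
X_scaled = scaler.fit_transform(X)

w, b = initialize_parameters(X_scaled.shape[1])
w, b = gradient_descent(X_scaled, w, b, Y)
test_predict = np.dot(scaler.transform(sample_house), w) + b

With the inputs scaled and the 1/(2*m) factor in place, the derivative values should stay bounded and the weights should no longer overflow to NaN.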
