My neural network only ever predicts one thing


I'm trying to implement a neural network from scratch in Python. I've tried a lot of things, but I can't find the bug in my implementation. Whenever I call the predict function, it outputs 0 for everything.

I also tested every function you'll see in the code below with random arrays of the same shapes as x and y, and they all seem to work fine. I had also cleaned the data beforehand.

import os
os.chdir(r'path where my data is stored')  # change to the directory containing the data set

Create the data frame and assign the values to the input and target vectors

import pandas as pd
import numpy as np
df = pd.read_csv('clean_data.csv')
X = df[['radius_mean', 'texture_mean', 'perimeter_mean',
   'area_mean', 'smoothness_mean', 'compactness_mean', 'concavity_mean',
   'concave points_mean', 'symmetry_mean', 'fractal_dimension_mean',
   'radius_se', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se',
   'compactness_se', 'concavity_se', 'concave points_se', 'symmetry_se',
   'fractal_dimension_se', 'radius_worst', 'texture_worst',
   'perimeter_worst', 'area_worst', 'smoothness_worst',
   'compactness_worst', 'concavity_worst', 'concave points_worst',
   'symmetry_worst', 'fractal_dimension_worst']].values
Y = df['diagnosis'].values 
Y = Y.reshape(569,1)

Split the data into training and test sets (x and y are the training set, xt and yt are the test set)

from sklearn.model_selection import train_test_split
x, xt, y, yt = train_test_split(X, Y, test_size = 0.2, random_state = 40)
x, xt, y, yt = x.T, xt.T, y.T, yt.T
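
After these transposes every array is laid out as (features, samples), which is the convention all of the functions below assume. A quick sanity check (the shapes assume the 569-row cleaned CSV loaded above):

print(x.shape, y.shape)    # (30, 455) (1, 455)
print(xt.shape, yt.shape)  # (30, 114) (1, 114)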

Initialize the parameters

def iniparams(layer_dims):
    params = {}
    for l in range(1, len(layer_dims)):
        params['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1])*0.01
        params['b' + str(l)] = np.zeros((layer_dims[l], 1))
    return params
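
A quick illustration of the shapes this produces for the architecture used further down (a sketch, not from the original post):

params = iniparams([30, 8, 5, 4, 4, 3, 1])
print(params['W1'].shape)  # (8, 30): weights of the first hidden layer
print(params['b6'].shape)  # (1, 1): bias of the output layer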

Write helper function #1

def sigmoid(Z):
    return 1/(1 + np.exp(-Z)), Z

#2

def relu(Z):
    return np.maximum(0, Z), Z
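
Both helpers return an (activation, Z) tuple; the second element is kept as the activation cache for backpropagation. For example:

A, Z = relu(np.array([-1.0, 2.0]))
# A is array([0., 2.]); Z is the unmodified input array([-1., 2.])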

Linear forward

def linearfwd(W, A, b):
    Z = np.dot(W, A) + b
    linear_cache = (W, A, b)
    return Z, linear_cache

Forward activation

def fwdactivation(W, A_prev, b, activation):
    if activation == 'sigmoid':
        Z, linear_cache = linearfwd(W, A_prev, b)
        A, activation_cache = sigmoid(Z)
    elif activation == 'relu':
        Z, linear_cache = linearfwd(W, A_prev, b)
        A, activation_cache = relu(Z)
    cache = (linear_cache, activation_cache)
    return A, cache

Forward model

def fwdmodel(x, params):
    caches = []
    L = len(params)//2
    A = x
    for l in range(1, L):
        A_prev = A
        A, cache = fwdactivation(params['W' + str(l)], A_prev, params['b' + str(l)], 'relu')
        caches.append(cache)
    AL, cache = fwdactivation(params['W' + str(L)], A, params['b' + str(L)], 'sigmoid')
    caches.append(cache)
    return AL, caches

Compute the cost

def J(AL, y):
    return -np.sum(np.multiply(np.log(AL), y) + np.multiply(np.log(1 - AL), (1 - y)))/y.shape[1]
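
One caveat the original code does not handle: if AL saturates to exactly 0 or 1, np.log returns -inf and the cost becomes nan. A common guard (a sketch of an addition, with a hypothetical name J_safe) is to clip the activations first:

def J_safe(AL, y, eps=1e-8):
    AL = np.clip(AL, eps, 1 - eps)  # avoid log(0) when the sigmoid saturates
    return -np.sum(y*np.log(AL) + (1 - y)*np.log(1 - AL))/y.shape[1]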

Backward sigmoid

def sigmoidbkwd(dA, cache):
    Z = cache
    s = 1/(1 + np.exp(-Z))
    dZ = dA*s*(1 - s)
    return dZ

Backward relu

def relubkwd(dA, cache):
    Z = cache
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0  # the gradient of relu is 0 where Z <= 0 and 1 elsewhere
    return dZ

Linear backward

def linearbkwd(dZ, cache):
    W, A_prev, b = cache
    m = A_prev.shape[1]
    dW = np.dot(dZ, A_prev.T)/m
    db = np.sum(dZ, axis=1, keepdims=True)/m
    dA_prev = np.dot(W.T, dZ)
    return dW, dA_prev, db
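
For reference, these three lines are the standard gradients of the linear step Z = W·A_prev + b, averaged over the m training examples:

dW      = (1/m) · dZ · A_prevᵀ
db      = (1/m) · Σⱼ dZ[:, j]
dA_prev = Wᵀ · dZ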

Backward activation

def bkwdactivation(dA, cache, activation):
    linear_cache, activation_cache = cache
    if activation == 'sigmoid':
        dZ = sigmoidbkwd(dA, activation_cache)
        dW, dA_prev, db = linearbkwd(dZ, linear_cache)
    elif activation == 'relu':
        dZ = relubkwd(dA, activation_cache)
        dW, dA_prev, db = linearbkwd(dZ, linear_cache)
    return dW, dA_prev, db

Backward model

def bkwdmodel(AL, y, cache):
    grads = {}
    L = len(cache)
    dAL = -(np.divide(y, AL) - np.divide(1 - y, 1 - AL))
    current_cache = cache[L - 1]
    grads['dW' + str(L)], grads['dA' + str(L - 1)], grads['db' + str(L)] = bkwdactivation(dAL, current_cache, 'sigmoid')
    for l in reversed(range(L - 1)):
        current_cache = cache[l]
        dW_temp, dA_prev_temp, db_temp = bkwdactivation(grads['dA' + str(l + 1)], current_cache, 'relu')
        grads['dW' + str(l + 1)] = dW_temp
        grads['dA' + str(l)] = dA_prev_temp
        grads['db' + str(l + 1)] = db_temp
    return grads

Optimize the parameters with gradient descent

def optimize(grads, params, alpha):
    L = len(params)//2
    for l in range(1, L + 1):
        params['W' + str(l)] = params['W' + str(l)] - alpha*grads['dW' + str(l)]
        params['b' + str(l)] = params['b' + str(l)] - alpha*grads['db' + str(l)]
    return params

The neural network model

def model(x, y, layer_dims, iters):
    costs = []
    params = iniparams(layer_dims)
    for i in range(1, iters):
        AL, caches = fwdmodel(x, params)
        cost = J(AL, y)
        costs.append(cost)
        grads = bkwdmodel(AL, y, caches)
        params = optimize(grads, params, 1.2)
        if i % 100 == 0:
            print('Cost after', i, 'iterations is:', cost)
    return costs, params

Running the training (the cost does decrease, as the Cost vs Iterations (Y, X) curve shows)

costs, params = model(x, y, [30,8,5,4,4,3,1], 3000)

The predict function

def predict(x, params):
    AL, cache = fwdmodel(x, params)
    predictions = AL >= 0.5
    return predictions

Finally, when I run

predictions = predict(xt,params)
predictions

I get:

array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0]])

What am I doing wrong?

Here is the link to the data set

python machine-learning deep-learning neural-network data-science
2 Answers

I don't understand why you transpose the train-test-split outputs. Why use xt.T and x.T at all? You should try printing the params array and the xt array and see what they look like. Are they similar? Does your params output give correct results? Check all of that.
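
A sketch of the inspection suggested here (illustrative, not part of the original answer):

print(xt.shape, xt.min(), xt.max())     # is the test input on a sensible scale?
for k, v in params.items():
    print(k, v.shape, np.abs(v).max())  # did any layer's weights blow up or vanish?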



My problem was that my neural network was too deep. It's a mistake that beginners like me easily make. I found this great resource, which helped me recognize the error: http://theorangeduck.com/page/neural-network-not-working
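
Concretely, one way to apply that advice to the code above (an illustrative call, not from the answer) is to retrain with a much shallower architecture:

costs, params = model(x, y, [30, 8, 1], 3000)  # one hidden layer instead of five
predictions = predict(xt, params)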
