Why does the cost of my deep neural network implementation increase after a few iterations?


I am a beginner in machine learning and neural networks. After watching Andrew Ng's lectures on deep learning, I recently tried to implement a binary classifier with my own deep neural network. The cost is expected to decrease after every iteration, but in my program it drops slightly at the start and then increases rapidly. I have tried changing the learning rate and the number of iterations, but nothing helps. I am stuck. Here is my code.

1. Neural network classifier class

import numpy as np

# The activation helpers (sigmoid, sigmoid_prime, relu, relu_prime) from
# section 3 are assumed to be defined or imported in this module (main.py).

class NeuralNetwork:
    def __init__(self, X, Y, dimensions, alpha=1.2, iter=3000):
        self.X = X
        self.Y = Y
        self.dimensions = dimensions    # Including input layer and output layer. Let example be dimensions=4
        self.alpha = alpha  # Learning rate
        self.iter = iter    # Number of iterations
        self.length = len(self.dimensions)-1
        self.params = {}    # To store parameters W and b for each layer
        self.cache = {}     # To store cache Z and A for each layer
        self.grads = {}     # To store dA, dZ, dW, db
        self.cost = 1       # Initial value does not matter

    def initialize(self):
        np.random.seed(3)
        # If dimensions is 4, then layer 0 and 3 are input and output layers
        # So we only need to initialize w1, w2 and w3
        # There is no need of w0 for input layer
        for l in range(1, len(self.dimensions)):
            self.params['W'+str(l)] = np.random.randn(self.dimensions[l], self.dimensions[l-1])*0.01
            self.params['b'+str(l)] = np.zeros((self.dimensions[l], 1))

    def forward_propagation(self):
        self.cache['A0'] = self.X
        # For last layer, ie, the output layer 3, we need to activate using sigmoid
        # For layer 1 and 2, we need to use relu
        for l in range(1, len(self.dimensions)-1):
            self.cache['Z'+str(l)] = np.dot(self.params['W'+str(l)], self.cache['A'+str(l-1)]) + self.params['b'+str(l)]
            self.cache['A'+str(l)] = relu(self.cache['Z'+str(l)])
        l = len(self.dimensions)-1
        self.cache['Z'+str(l)] = np.dot(self.params['W'+str(l)], self.cache['A'+str(l-1)]) + self.params['b'+str(l)]
        self.cache['A'+str(l)] = sigmoid(self.cache['Z'+str(l)])

    def compute_cost(self):
        m = self.Y.shape[1]
        A = self.cache['A'+str(len(self.dimensions)-1)]
        self.cost = -1/m*np.sum(np.multiply(self.Y, np.log(A)) + np.multiply(1-self.Y, np.log(1-A)))
        self.cost = np.squeeze(self.cost)

    def backward_propagation(self):
        A = self.cache['A' + str(len(self.dimensions) - 1)]
        m = self.X.shape[1]
        self.grads['dA'+str(len(self.dimensions)-1)] = -(np.divide(self.Y, A) - np.divide(1-self.Y, 1-A))
        # Sigmoid derivative for final layer
        l = len(self.dimensions)-1
        self.grads['dZ' + str(l)] = self.grads['dA' + str(l)] * sigmoid_prime(self.cache['Z' + str(l)])
        self.grads['dW' + str(l)] = 1 / m * np.dot(self.grads['dZ' + str(l)], self.cache['A' + str(l - 1)].T)
        self.grads['db' + str(l)] = 1 / m * np.sum(self.grads['dZ' + str(l)], axis=1, keepdims=True)
        self.grads['dA' + str(l - 1)] = np.dot(self.params['W' + str(l)].T, self.grads['dZ' + str(l)])
        # Relu derivative for previous layers
        for l in range(len(self.dimensions)-2, 0, -1):
            self.grads['dZ'+str(l)] = self.grads['dA'+str(l)] * relu_prime(self.cache['Z'+str(l)])
            self.grads['dW'+str(l)] = 1/m*np.dot(self.grads['dZ'+str(l)], self.cache['A'+str(l-1)].T)
            self.grads['db'+str(l)] = 1/m*np.sum(self.grads['dZ'+str(l)], axis=1, keepdims=True)
            self.grads['dA'+str(l-1)] = np.dot(self.params['W'+str(l)].T, self.grads['dZ'+str(l)])

    def update_parameters(self):
        for l in range(1, len(self.dimensions)):
            self.params['W'+str(l)] = self.params['W'+str(l)] - self.alpha*self.grads['dW'+str(l)]
            self.params['b'+str(l)] = self.params['b'+str(l)] - self.alpha*self.grads['db'+str(l)]

    def train(self):
        np.random.seed(1)
        self.initialize()
        for i in range(self.iter):
            #print(self.params)
            self.forward_propagation()
            self.compute_cost()
            self.backward_propagation()
            self.update_parameters()
            if i % 100 == 0:
                print('Cost after {} iterations is {}'.format(i, self.cost))

2. Code to test the odd/even classifier

import numpy as np
from main import NeuralNetwork
X = np.array([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]])
Y = np.array([[1, 0, 1, 0, 1, 0, 1, 0, 1, 0]])
clf = NeuralNetwork(X, Y, [1, 1, 1], alpha=0.003, iter=7000)
clf.train()

3. Helper code

import math
import numpy as np

def sigmoid_scalar(x):
    return 1/(1+math.exp(-x))
def sigmoid_prime_scalar(x):
    return sigmoid_scalar(x)*(1-sigmoid_scalar(x))
def relu_scalar(x):
    if x > 0:
        return x
    else:
        return 0
def relu_prime_scalar(x):
    if x > 0:
        return 1
    else:
        return 0
sigmoid = np.vectorize(sigmoid_scalar)
sigmoid_prime = np.vectorize(sigmoid_prime_scalar)
relu = np.vectorize(relu_scalar)
relu_prime = np.vectorize(relu_prime_scalar)
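
For reference, equivalent array-native helpers could be written directly with NumPy instead of np.vectorize (a sketch of an alternative, functionally the same):

import numpy as np

# Array-native equivalents of the scalar helpers above; they give the same
# results on arrays but avoid np.vectorize's element-by-element Python loop.
def sigmoid(Z):
    return 1 / (1 + np.exp(-Z))

def sigmoid_prime(Z):
    s = sigmoid(Z)
    return s * (1 - s)

def relu(Z):
    return np.maximum(0, Z)

def relu_prime(Z):
    return (Z > 0).astype(float)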

Output: [screenshot of the cost printed every 100 iterations]

Tags: numpy, machine-learning, neural-network, deep-learning, backpropagation
1 Answer

I believe your cross-entropy derivative is wrong. Instead of this:

# WRONG!
self.grads['dA'+str(len(self.dimensions)-1)] = -(np.divide(self.Y, A) - np.divide(1-self.Y, A))

... do this:

# CORRECT
self.grads['dA'+str(len(self.dimensions)-1)] = np.divide(A - self.Y, (1 - A) *  A)

See these lecture notes for details. I think you meant formula (5) but forgot the 1-A. In any case, use formula (6).
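
As a quick sanity check, the corrected derivative can be verified numerically against a finite-difference estimate of the same cross-entropy cost used in compute_cost (a minimal sketch with made-up values for Y and A):

import numpy as np

# Finite-difference check of dL/dA for the binary cross-entropy cost
# L(A) = -1/m * sum(Y*log(A) + (1-Y)*log(1-A)).
# The per-example analytic derivative is (A - Y) / (A * (1 - A)), i.e. formula (6).
np.random.seed(0)
Y = np.array([[1.0, 0.0, 1.0, 0.0]])
A = np.random.uniform(0.1, 0.9, size=Y.shape)   # stand-in for sigmoid outputs
m = Y.shape[1]

def cost(A):
    return -1 / m * np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A))

analytic = np.divide(A - Y, A * (1 - A))

eps = 1e-7
numeric = np.zeros_like(A)
for i in range(m):
    A_plus, A_minus = A.copy(), A.copy()
    A_plus[0, i] += eps
    A_minus[0, i] -= eps
    numeric[0, i] = (cost(A_plus) - cost(A_minus)) / (2 * eps)

# cost() carries a 1/m factor, so the per-example analytic derivative is scaled
# by 1/m before comparing.
print(np.allclose(analytic / m, numeric))   # prints True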
