How to train a network with two or more layers

Problem description

I am implementing a neural network from scratch in Python. I have a Neuron class, a Layer class, and a Network class.

I have successfully trained and used a network with 1 layer, 1 neuron and 3 inputs.

I now want to try 2 or more layers, each with an arbitrary number of neurons. My question is: how do I change the "train" function so it can train such a network?

Currently, if the layer index is 0, the network inputs are fed into that layer's neurons. For any layer above 0, the previous layer's outputs are fed in instead.

But what do I do after that?

Here is the code I am using:


import numpy as np
from numpy import exp, random
import math

from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt

np.random.seed(1)

class Neuron:

    def __init__(self, weights, bias):

        self.weights = weights
        self.bias = bias

    def sigmoid(self, x):

        output = 1/(1+exp(-x))

        return output

    def compute(self, inputs):
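        # weighted sum of the inputs plus the bias, squashed through the sigmoid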

        self.output = self.sigmoid(np.dot(inputs, self.weights) + self.bias)

        return self.output

class Layer: 

    def __init__(self, numberOfNeurons, numberOfInputs):

        self.neurons = []
        self.outputs = []
        self.numberOfNeurons = numberOfNeurons
        self.numberOfInputs = numberOfInputs

        self.initialiseWeightsAndBiases()

        for i in range(0,numberOfNeurons):

            self.neurons.append(Neuron(self.weights, self.biases))

    def initialiseWeightsAndBiases(self):
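        # weights and biases are drawn uniformly from [-1, 1)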

        self.weights = 2 * random.random((self.numberOfInputs, self.numberOfNeurons)) - 1

        self.biases = 2 * random.random((1, self.numberOfNeurons)) - 1

    
    def forward(self, inputs):

        self.outputs = np.array([])

        for i in self.neurons:

            self.outputs = np.append(self.outputs, i.compute(inputs))

class NeuralNetwork:

    def __init__(self, layers):

        self.layers = layers

    def forwardPass(self, inputs):
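        # layer 0 receives the raw network inputs; every later layer
        # receives the previous layer's outputs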

        for i in range(0, len(self.layers)):

            if i == 0:

                self.layers[i].forward(inputs)

            else:

                self.layers[i].forward(self.layers[i-1].outputs)

        return self.layers[-1].outputs

    def calculateError(self, predictedOutputs, trueOutputs):
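        # delta rule for a sigmoid output: (target - output) * sigmoid'(net),
        # where sigmoid'(net) = output * (1 - output)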

        error = (trueOutputs - predictedOutputs) * predictedOutputs * (1 - predictedOutputs)

        return error

    def trainNetwork(self, trainingDataInputs, trainingDataOutputs, numberOfIterations):

        # repeatedly run a forward pass and nudge the weights towards the target outputs

        for y in range(0, numberOfIterations):

            predictedOutputs = self.forwardPass(trainingDataInputs)

            error = self.calculateError(predictedOutputs, trainingDataOutputs)

            for i in self.layers[0].neurons:

                i.weights += np.dot(trainingDataInputs.T, error.T)


    def visualiseNetwork(self):

        pass


#Layer(numberOfNeurons, numberOfInputs)

inputLayer = Layer( 1, 3)

layers = [inputLayer]

network1 = NeuralNetwork(layers)

inputTrainingData = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
outputTrainingData = [[0, 1, 1, 0]]

network1.trainNetwork(inputTrainingData, outputTrainingData, 10000)

outputs = network1.forwardPass([[0,1,1]])

print(outputs)

python numpy machine-learning neural-network artificial-intelligence
1 Answer

You are almost there. I did exactly this as an assignment. You need to store each layer's error on the Layer itself. Starting from the last layer, the error is then backpropagated all the way to the first layer.

def trainNetwork(self, trainingDataInputs, trainingDataOutputs, numberOfIterations):

    for y in range(0, numberOfIterations):

        predictedOutputs = self.forwardPass(trainingDataInputs)

        # delta at the output layer
        error = self.calculateError(predictedOutputs, trainingDataOutputs)

        N = len(predictedOutputs)
        self.layers[-1].error = error

        # walk backwards from the second-to-last layer down to the first,
        # computing each layer's delta from the layer behind it and updating
        # that next layer's weights against the current layer's outputs
        for i in range(len(self.layers) - 2, -1, -1):
            next_layer = self.layers[i + 1]
            current_layer = self.layers[i]
            current_layer.error = np.dot(next_layer.error, next_layer.weights.T) * current_layer.outputs * (1 - current_layer.outputs)
            for neuron in next_layer.neurons:
                neuron.weights += np.dot(current_layer.outputs.T, next_layer.error) / N

        # for the first layer, the weights are updated against the network inputs
        for neuron in self.layers[0].neurons:
            neuron.weights += np.dot(trainingDataInputs.T, self.layers[0].error) / N
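For comparison, here is a minimal, self-contained NumPy sketch of the same idea: two fully vectorised sigmoid layers trained with the delta rule on your toy dataset, biases omitted for brevity. The names (X, t, W1, W2, hidden, learningRate) are illustrative only and are not part of your classes:

import numpy as np

np.random.seed(1)

# the same toy dataset as in the question
X = np.array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
t = np.array([[0, 1, 1, 0]]).T

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# layer 1: 3 inputs -> 4 hidden neurons; layer 2: 4 hidden -> 1 output
W1 = 2 * np.random.random((3, 4)) - 1
W2 = 2 * np.random.random((4, 1)) - 1

learningRate = 1.0
N = len(X)

for _ in range(10000):
    # forward pass
    hidden = sigmoid(np.dot(X, W1))
    output = sigmoid(np.dot(hidden, W2))

    # backward pass: delta at the output, then backpropagated to the hidden layer
    outputDelta = (t - output) * output * (1 - output)
    hiddenDelta = np.dot(outputDelta, W2.T) * hidden * (1 - hidden)

    # weight updates, averaged over the N training samples
    W2 += learningRate * np.dot(hidden.T, outputDelta) / N
    W1 += learningRate * np.dot(X.T, hiddenDelta) / N

print(output)   # should move towards [[0], [1], [1], [0]]

Keeping each layer's weights in a single (inputs, neurons) matrix, rather than one matrix per Neuron, is what lets the backward pass collapse into a couple of matrix products.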

*PS: Correct me if I'm wrong, but your code's weight update formula is np.dot(trainingDataInputs.T, error.T). You could also add a learning rate to the code.*
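For instance, a hypothetical learningRate factor could be folded into the updates from the answer above (sketch only; learningRate is not defined anywhere in the original code):

learningRate = 0.1  # hypothetical hyperparameter, tune as needed

for neuron in next_layer.neurons:
    neuron.weights += learningRate * np.dot(current_layer.outputs.T, next_layer.error) / N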
