Neural network - vector-centric Python implementation

Problem description

Hello, I am trying to fit the very simple function x ** 2 with a vector-centric neural network that has one input node, one output node, and two hidden layers with 3 nodes each - just to verify that it works. For that I am using the code below. As a result I get the orange line, while the blue line is the true function.

[Plot: model prediction (orange) vs. true x**2 curve (blue)]

As you can see, something is not working. I have tried changing the number of iterations as well as the value of the learning rate, but without success. If I plot the loss over the iterations, I get the following picture for 100 iterations:

[Plot: loss over 100 iterations]

I have not added biases yet, but I think this simple function should be learnable without extra bias nodes. Also, I suspect the flaw is most likely in the "Calculate the Gradients with respect to the weights" part of the code...

So in principle I have two questions:

  1. Is there any fundamental flaw in my code that keeps it from working correctly?
  2. If not, why can't my model fit this simple data?

Thanks for your help!

Here is the code - ready to run:

import numpy as np
import matplotlib.pyplot as plt


class Neural_Net:
    """
    """
    def __init__(self, activation_function, learning_rate, runs):
        self.activation_function = activation_function
        self.X_train = np.linspace(0,1,1000)
        self.y_train = self.X_train**2
        plt.plot(self.X_train, self.y_train)
        self.y_pred = None
        self.W_input = np.random.randn(1, 3)
        self.Partials_W_input = np.random.randn(1, 3)
        self.W_hidden = np.random.randn(3,3)
        self.Partials_W_hidden = np.random.randn(3,3)
        self.W_output = np.random.randn(3,1)
        self.Partials_W_output = np.random.randn(3,1)
        self.Activations = np.ones((3,2))
        self.Partials = np.ones((3,2))
        self.Output_Gradient = None
        self.Loss = 0
        self.learning_rate = learning_rate
        self.runs = runs
        self.Losses = []
        self.i = 0

    def apply_activation_function(self, activation_vector):
        return 1/(1+np.exp(-activation_vector))

    def forward_pass(self, training_instance):
        for layer in range(len(self.Activations[0])):
            # For the first layer between X and the first hidden layer
            if layer == 0:
                pre_activation = self.W_input.T @ training_instance.reshape(1,1)
                # print('pre activation: ', pre_activation)

                # Apply the activation function
                self.Activations[:,0] = self.apply_activation_function(pre_activation).ravel()
            else:
                self.Activations[:, layer] = self.W_hidden.T @ self.Activations[:, layer-1]
                # print('Activations: ', self.Activations)
        output = self.W_output.T @ self.Activations[:, -1]
        # print('output: ', output)
        return output

    def backpropagation(self, y_true, training_instance):
        if self.activation_function == 'linear':
            # Calculate the output gradient
            self.Output_Gradient = -(y_true-self.y_pred)
            # print('Output Gradient: ', self.Output_Gradient)

            # Calculate the partial gradients of the Error with respect to the pre-activation values in the nodes
            self.Partials[:, 1] = self.Activations[:, 1]*(1-self.Activations[:, 1])*(self.W_output @ self.Output_Gradient)
            self.Partials[:, 0] = self.Activations[:, 0]*(1-self.Activations[:, 0])*(self.W_hidden @ self.Partials[:, 1])
            # print('Partials: ', self.Partials)

            # Calculate the Gradients with respect to the weights
            self.Partials_W_output = self.Output_Gradient * self.Activations[:, -1]
            # print('Partials_W_output: ', self.Partials_W_output)
            self.Partials_W_hidden = self.Partials[:, -1].reshape(3,1) * self.Activations[:, 0].reshape(1,3)
            # print('Partials_W_hidden: ',self.Partials_W_hidden)
            self.Partials_W_input = (self.Partials[:, 0].reshape(3,1) * training_instance.T).T
            # print('Partials_W_input: ', self.Partials_W_input)

    def weight_update(self, training_instance, learning_rate):

        # Output Layer weights
        w_output_old = self.W_output.copy()
        self.W_output = w_output_old - learning_rate*self.Output_Gradient

        # Hidden Layer weights
        w_hidden_old = self.W_hidden.copy()
        self.W_hidden = w_hidden_old - learning_rate * self.W_hidden
        # print('W_hidden new: ', self.W_hidden)

        # Input Layer weights
        w_input_old = self.W_input.copy()
        self.W_input = w_input_old - learning_rate * self.W_input
        # print('W_input new: ', self.W_input)


    def train_model(self):
        for _ in range(self.runs):
            for instance in range(len(self.X_train)):
                # forward pass
                self.y_pred = self.forward_pass(self.X_train[instance])

                # Calculate loss
                self.Loss = self.calc_loss(self.y_pred, self.y_train[instance])
                # print('Loss: ', self.Loss)

                # Calculate backpropagation
                self.backpropagation(self.y_train[instance], self.X_train[instance])

                # Update weights
                self.weight_update(self.X_train[instance], self.learning_rate)

        # print(self.Losses)
        # plt.plot(range(len(self.Losses)), self.Losses)
        # plt.show()

        # Make predictions on training data to check if the model is basically able to fit the training data
        predictions = []
        for i in np.linspace(0,1,1000):
            predictions.append(self.make_prediction(i))
        plt.plot(np.linspace(0,1,1000), predictions)


    def make_prediction(self, X_new):
        return self.forward_pass(X_new)


    def calc_loss(self, y_pred, y_true):
        loss = (1/2)*(y_true-y_pred)**2
        self.Losses.append(loss[0])
        return (1/2)*(y_true-y_pred)**2

    def accuracy(self):
        pass


Neural_Net('linear', 0.0001, 10).train_model()
plt.show()
1 Answer

As long as your activation function is linear, the whole ANN produces a simple weighted sum, i.e. a linear output. Effectively, you are currently doing linear regression. That is useful to verify that learning works at all (try teaching it some linear function), but nothing more - for the real thing you need nonlinearity.
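For illustration, here is a minimal sketch (using arbitrary example matrices, not the weights from the question) of why stacking purely linear layers collapses into a single linear map:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a 1 -> 3 -> 3 -> 1 network with identity activations
W_input = rng.standard_normal((1, 3))
W_hidden = rng.standard_normal((3, 3))
W_output = rng.standard_normal((3, 1))

x = np.array([[0.7]])  # one input value

# "Deep" forward pass with identity activations: just chained matrix products
deep_output = x @ W_input @ W_hidden @ W_output

# The identical mapping as a single layer with one collapsed weight matrix
W_collapsed = W_input @ W_hidden @ W_output      # shape (1, 1)
shallow_output = x @ W_collapsed

print(np.allclose(deep_output, shallow_output))  # True - the extra layers add nothing

So without a nonlinearity (and without biases) the network can only represent y = w*x, which can never fit x**2.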

See https://en.wikipedia.org/wiki/Activation_function#Comparison_of_activation_functions on Wikipedia for ideas. In fact, the comparison of activation functions starts with an overview of desirable properties, and the very first one is nonlinearity:

Comparison of activation functions

Some desirable properties of an activation function include:

  • Nonlinear - When the activation function is nonlinear, a two-layer neural network can be proven to be a universal function approximator.[6] The identity activation function does not satisfy this property. When multiple layers use the identity activation function, the entire network is equivalent to a single-layer model.
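For comparison, here is a minimal rewrite of the same 1-3-3-1 setup (my own sketch, not the code from the question) that keeps the sigmoid on both hidden layers, adds bias terms, and derives the weight gradients with the chain rule. With the nonlinearity applied consistently, plain batch gradient descent can approximate x**2 reasonably well after a few thousand epochs:

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Training data: x in [0, 1], target x**2
X = np.linspace(0, 1, 200).reshape(-1, 1)   # (N, 1)
Y = X ** 2                                   # (N, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 1 -> 3 -> 3 -> 1 architecture, sigmoid on both hidden layers, linear output.
# Bias terms are included here, which the original code omits.
W1, b1 = rng.standard_normal((1, 3)), np.zeros((1, 3))
W2, b2 = rng.standard_normal((3, 3)), np.zeros((1, 3))
W3, b3 = rng.standard_normal((3, 1)), np.zeros((1, 1))

lr = 0.5
for epoch in range(5000):
    # Forward pass (full batch)
    A1 = sigmoid(X @ W1 + b1)        # (N, 3)
    A2 = sigmoid(A1 @ W2 + b2)       # (N, 3)
    Y_hat = A2 @ W3 + b3             # (N, 1), linear output

    # Backward pass: chain rule for the mean of 0.5*(Y_hat - Y)**2
    dY = (Y_hat - Y) / len(X)
    dW3, db3 = A2.T @ dY, dY.sum(axis=0, keepdims=True)
    dA2 = dY @ W3.T * A2 * (1 - A2)  # sigmoid derivative on layer 2
    dW2, db2 = A1.T @ dA2, dA2.sum(axis=0, keepdims=True)
    dA1 = dA2 @ W2.T * A1 * (1 - A1) # sigmoid derivative on layer 1
    dW1, db1 = X.T @ dA1, dA1.sum(axis=0, keepdims=True)

    # Gradient descent step
    W3 -= lr * dW3; b3 -= lr * db3
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

plt.plot(X, Y, label='x**2')
plt.plot(X, A2 @ W3 + b3, label='network')
plt.legend()
plt.show()

The key change relative to the question's code is that the sigmoid nonlinearity is applied to both hidden layers in the forward pass and its derivative is used consistently in the gradients.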