Why doesn't a simple AND-gate neural network without a bias work?


A simple neural network like this one, with 2 inputs and a single output and no bias, does not seem to work:

|weight1 weight2| |input1| = Z
                  |input2|

Output = Sigmoid(Z)

However, when a bias is added, it works perfectly. Why does the bias make it work, and what is the math behind it?

|weight1 weight2| |input1| = Z
                  |input2|

Output = Sigmoid(Z - BIAS)
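
One quick way to see the obstruction (a minimal sketch, not part of the original question; the `sigmoid` helper below mirrors the one in the post's code): with no bias, the input (0, 0) always produces Sigmoid(0) = 0.5, no matter what the weights are, so the network can never push the AND output for (0, 0) toward 0. The training data below uses 0.1 in place of 0, but the same geometry applies, because the decision boundary weight1*input1 + weight2*input2 = 0 must pass through the origin.

import numpy as np

def sigmoid(z):
    return 1.0 / (1 + np.exp(-z))

# With no bias term, the (0, 0) input yields 0.5 for every weight vector.
for w in [np.array([0.5, 0.5]), np.array([10.0, -3.0]), np.array([-7.0, 2.0])]:
    x = np.array([0.0, 0.0])
    print(w, "->", sigmoid(w @ x))  # prints 0.5 each time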

Here is the working version of the code with the bias:

import numpy as np
import random as r

def sigmoid(ip, derivative=False):
    # When derivative=True, `ip` is assumed to already be a sigmoid output,
    # so the derivative is ip * (1 - ip).
    if derivative:
        return ip * (1 - ip)
    return 1.0 / (1 + np.exp(-ip))

class NeuralNet:

    def __init__(self):
        self.inputLayers = 2
        self.outputLayer = 1
        self.bias = r.random()

    def setup(self):
        # Random initial inputs and weights
        self.i = np.array([r.random(), r.random()], dtype=float).reshape(2,)
        self.w = np.array([r.random(), r.random()], dtype=float).reshape(2,)

    def forward_propagate(self):
        # Z = weight1*input1 + weight2*input2; output = sigmoid(Z - bias)
        self.z = self.w * self.i
        self.o = sigmoid(sum(self.z) - self.bias)

    def optimize_cost(self, desired):
        # Gradient descent on the squared error (desired - o)^2
        for i in range(len(self.w)):
            # dCost/dw_i = -(desired - o) * sigmoid'(o) * input_i
            dpdw = -1 * (desired - self.o) * sigmoid(self.o, derivative=True) * self.i[i]
            self.w[i] = self.w[i] - 2 * dpdw
        # dCost/dBias: the bias enters the output as -bias, hence the trailing -1
        dpdB = -1 * (desired - self.o) * sigmoid(self.o, derivative=True) * -1
        self.bias = self.bias - 2 * dpdB
        self.forward_propagate()

    def train(self, ip, op):
        self.i = np.array(ip).reshape(2,)
        self.forward_propagate()
        self.optimize_cost(op[0])

n = NeuralNet()
n.setup()
success_rate = 0
trial = 0
done = False
while not done:
    # AND truth table; 0.1 stands in for logical 0
    a = [0.1, 1, 0.1, 1]
    b = [0.1, 0.1, 1, 1]
    c = [0, 0, 0, 1]
    for i in range(len(a)):
        trial += 1
        n.train([a[i], b[i]], [c[i]])
        # abs() so that 0-targets are not counted as trivial successes
        if abs(c[i] - n.o) < 0.01:
            success_rate += 1
            print(100 * success_rate / trial, "%")
        if 100 * success_rate / trial > 99 and trial > 4:
            print(100 * success_rate / trial, "%")
            print("Network trained, took: {} trials".format(trial))
            print("Network weights:{}, bias:{}".format(n.w, n.bias))
            done = True
            break
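
A quick sanity check after training (a usage sketch, assuming the loop above has converged): evaluate the trained network on all four AND patterns.

for x1, x2 in [(0.1, 0.1), (0.1, 1), (1, 0.1), (1, 1)]:
    n.i = np.array([x1, x2], dtype=float).reshape(2,)
    n.forward_propagate()
    print((x1, x2), "->", round(float(n.o), 3))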
1 Answer

The bias is just an offset for the intercept. The network you have set up here is a single-layer network with no hidden layer, which is effectively logistic regression, i.e. a linear model. Without the bias term, the decision boundary weight1*input1 + weight2*input2 = 0 is forced to pass through the origin, and no line through the origin can put (1, 1) on one side and the other three AND inputs on the other, so the weights alone can never fit the AND truth table.
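
To make that point concrete (a minimal sketch, not part of the original answer; it uses standard logistic-regression cross-entropy updates rather than the post's squared error, which does not change the conclusion): train the same single unit on the AND data with and without a bias and compare the outputs. Without the bias, the boundary is pinned to the origin and the (0.1, 0.1) pattern can never be driven to 0 while (1, 1) goes to 1.

import numpy as np

def sigmoid(z):
    return 1.0 / (1 + np.exp(-z))

X = np.array([[0.1, 0.1], [0.1, 1], [1, 0.1], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)

def fit(use_bias, steps=20000, lr=0.5):
    w, b = np.zeros(2), 0.0
    for _ in range(steps):
        o = sigmoid(X @ w + b)
        grad = o - y                       # dLoss/dZ for cross-entropy loss
        w -= lr * (X.T @ grad) / len(y)
        if use_bias:
            b -= lr * grad.mean()
    return sigmoid(X @ w + b)

print("with bias:   ", np.round(fit(True), 3))   # approaches [0, 0, 0, 1]
print("without bias:", np.round(fit(False), 3))  # cannot reach the targets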
