NEAT-Python fails to capture extreme values

Problem description

I am using NEAT-Python to mimic the course of a regular sine function, based on the curve's absolute difference from 0. The configuration file was adopted almost entirely from the basic XOR example, except for the number of inputs, which is set to 1. The direction of the offset is inferred from the original data in a step after the actual prediction, so this is really all about predicting offsets in the range [0, 1].

The fitness function and most of the remaining code were likewise adopted from the help pages, which is why I am fairly confident the code is consistent from a technical point of view. As the visualization of observed vs. predicted offsets included below shows, the model produces very good results in most cases. However, it fails to capture the lower and upper ends of the value range.

Any help on how to improve the algorithm's performance, particularly at the lower/upper edges, would be highly appreciated. Or is there a methodical limitation I have not considered so far?


The config-feedforward file, located in the current working directory:

#--- parameters for the XOR-2 experiment ---#

[NEAT]
fitness_criterion     = max
fitness_threshold     = 3.9
pop_size              = 150
reset_on_extinction   = False

[DefaultGenome]
# node activation options
activation_default      = sigmoid
activation_mutate_rate  = 0.0
activation_options      = sigmoid

# node aggregation options
aggregation_default     = sum
aggregation_mutate_rate = 0.0
aggregation_options     = sum

# node bias options
bias_init_mean          = 0.0
bias_init_stdev         = 1.0
bias_max_value          = 30.0
bias_min_value          = -30.0
bias_mutate_power       = 0.5
bias_mutate_rate        = 0.7
bias_replace_rate       = 0.1

# genome compatibility options
compatibility_disjoint_coefficient = 1.0
compatibility_weight_coefficient   = 0.5

# connection add/remove rates
conn_add_prob           = 0.5
conn_delete_prob        = 0.5

# connection enable options
enabled_default         = True
enabled_mutate_rate     = 0.01

feed_forward            = True
initial_connection      = full

# node add/remove rates
node_add_prob           = 0.2
node_delete_prob        = 0.2

# network parameters
num_hidden              = 0
num_inputs              = 1
num_outputs             = 1

# node response options
response_init_mean      = 1.0
response_init_stdev     = 0.0
response_max_value      = 30.0
response_min_value      = -30.0
response_mutate_power   = 0.0
response_mutate_rate    = 0.0
response_replace_rate   = 0.0

# connection weight options
weight_init_mean        = 0.0
weight_init_stdev       = 1.0
weight_max_value        = 30
weight_min_value        = -30
weight_mutate_power     = 0.5
weight_mutate_rate      = 0.8
weight_replace_rate     = 0.1

[DefaultSpeciesSet]
compatibility_threshold = 3.0

[DefaultStagnation]
species_fitness_func = max
max_stagnation       = 20
species_elitism      = 2

[DefaultReproduction]
elitism            = 2
survival_threshold = 0.2

The NEAT functions:

# . fitness function ----

def eval_genomes(genomes, config):
  for genome_id, genome in genomes:
    # start from the maximum attainable fitness and subtract the squared
    # error of each prediction; the target equals the input offset itself
    genome.fitness = 4.0
    net = neat.nn.FeedForwardNetwork.create(genome, config)
    for xi in zip(abs(x)):  # zip() wraps each offset in a 1-tuple of inputs
      output = net.activate(xi)
      genome.fitness -= abs(output[0] - xi[0]) ** 2


# . neat run ----

def run(config_file, n = None):
  # load configuration
  config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                       neat.DefaultSpeciesSet, neat.DefaultStagnation,
                       config_file)
  # create the population, which is the top-level object for a NEAT run
  p = neat.Population(config)
  # add a stdout reporter to show progress in the terminal
  p.add_reporter(neat.StdOutReporter(True))
  stats = neat.StatisticsReporter()
  p.add_reporter(stats)
  p.add_reporter(neat.Checkpointer(5))
  # run for up to n generations
  winner = p.run(eval_genomes, n)
  return winner

The main code:

### ENVIRONMENT ====

### . packages ----

import os
import neat

import numpy as np
import matplotlib.pyplot as plt
import random


### . sample data ----

x = np.sin(np.arange(.01, 4000 * .01, .01))  # 3999 samples of a sine wave


### NEAT ALGORITHM ====

### . model evolution ----

random.seed(1899)
winner = run('config-feedforward', n = 25)


### . prediction ----

## extract winning model
config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     'config-feedforward')

winner_net = neat.nn.FeedForwardNetwork.create(winner, config)

## make predictions
y = []
for xi in zip(abs(x)):  # same 1-tuple input format as in eval_genomes
  y.append(winner_net.activate(xi))

## if required, adjust signs (direction is inferred from the original data)
for i in range(len(y)):
  if x[i] < 0:
    y[i] = [-v for v in y[i]]  # note: -v avoids shadowing the data array x

## display sample vs. predicted data
plt.scatter(range(len(x)), x, color='#3c8dbc', label = 'observed') # blue
plt.scatter(range(len(x)), y, color='#f39c12', label = 'predicted') # orange
plt.hlines(0, xmin = 0, xmax = len(x), colors = 'grey', linestyles = 'dashed')
plt.xlabel("Index")
plt.ylabel("Offset")
plt.legend(bbox_to_anchor = (0., 1.02, 1., .102), loc = 10,
           ncol = 2, mode = None, borderaxespad = 0.)
plt.show()
plt.clf()

[Figure: observed vs. predicted offset]

Tags: python, neural-network, genetic-algorithm, evolutionary-algorithm, neat
1 Answer

Different implementations of NEAT exist, so details can vary.

Normally, NEAT handles bias by including a special input neuron that is always active (post-activation value of 1). I suspect bias_max_value and bias_min_value determine the maximum allowed strength of connections between this bias neuron and hidden neurons. In the NEAT code I use, these two parameters do not exist, and bias-to-hidden connections are treated as normal connections (with their own allowed range, -5 to 5 in our case).
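
For what it's worth, NEAT-Python (the library used in the question) takes a different approach: there is no dedicated bias input neuron, and each node instead carries its own bias gene, clamped to [bias_min_value, bias_max_value]. A minimal sketch of the per-node computation, mirroring neat/nn/feed_forward.py (function names here are illustrative):

import math

def sigmoid(z):
  # NEAT-Python's built-in sigmoid applies a steepness factor of 5
  z = max(-60.0, min(60.0, 5.0 * z))
  return 1.0 / (1.0 + math.exp(-z))

def node_output(inputs, weights, bias, response = 1.0):
  # bias is a per-node gene, not a connection from a bias neuron
  s = sum(v * w for v, w in zip(inputs, weights))  # aggregation = sum
  return sigmoid(bias + response * s)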

If you are working with sigmoid functions, your output neuron will operate in the 0-to-1 range (consider faster activations for hidden neurons, perhaps ReLU).
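
If you want to try this in NEAT-Python, relu is one of its built-in activation functions, so a sketch of the relevant change to the [DefaultGenome] section would be to list it in activation_options and allow mutation between the options (note this applies to all nodes, including the output node):

# node activation options
activation_default      = sigmoid
activation_mutate_rate  = 0.1
activation_options      = sigmoid relu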

If you are trying to predict values close to 0 or 1, this is a problem, because you really need to push your neurons to the limits of their range, and sigmoids only approach those extreme values asymptotically (i.e., slowly):
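
A quick numeric check of how slowly this happens, assuming the steepened sigmoid sketched above and inverting sigmoid(z) = 1 / (1 + exp(-5z)):

import math

# pre-activation needed to hit outputs near the extremes
for target in (0.9, 0.99, 0.999):
  z = math.log(target / (1.0 - target)) / 5.0  # inverse sigmoid
  print(f"output {target}: needs pre-activation z = {z:.2f}")

# output 0.9 needs z ~ 0.44, 0.99 needs z ~ 0.92, 0.999 needs z ~ 1.38:
# each extra decimal place pushes the required weighted sum further out,
# and exact 0 or 1 is never reached.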

Fortunately, there is a very simple way to check whether this is the problem: simply rescale your output! Something like

out = raw_out * 1.2 - 0.1

This puts your theoretical outputs in a range beyond the expected outputs (-0.1 to 1.1 in my example), making it easier to reach 0 and 1 (and, strictly speaking, actually possible).
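
Applied to the code from the question, the rescaling would wrap every net.activate call, e.g. (a sketch; 1.2 and -0.1 are just the example constants from above):

def rescale(raw_out):
  # stretch the sigmoid's (0, 1) output to (-0.1, 1.1) so that
  # 0 and 1 become reachable interior points of the range
  return [o * 1.2 - 0.1 for o in raw_out]

# in eval_genomes:
#   output = rescale(net.activate(xi))
# and in the prediction loop:
#   y.append(rescale(winner_net.activate(xi)))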
