Detecting a neuron's input and output spikes in lava-nc?

Problem description

The Lava software framework documentation describes the Leaky-Integrate-and-Fire neuron as:

class LIF(AbstractProcess):
    """Leaky-Integrate-and-Fire neural process with activation input and spike
    output ports a_in and s_out.

    Realizes the following abstract behavior:
    u[t] = u[t-1] * (1-du) + a_in
    v[t] = v[t-1] * (1-dv) + u[t] + bias
    s_out = v[t] > vth
    v[t] = v[t] - s_out*vth
    """
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        shape = kwargs.get("shape", (1,))
        self.a_in = InPort(shape=shape)
        self.s_out = OutPort(shape=shape)
        self.u = Var(shape=shape, init=0)
        self.v = Var(shape=shape, init=0)
        self.du = Var(shape=(1,), init=kwargs.pop("du", 0))
        self.dv = Var(shape=(1,), init=kwargs.pop("dv", 0))
        self.bias = Var(shape=shape, init=kwargs.pop("b", 0))
        self.vth = Var(shape=(1,), init=kwargs.pop("vth", 10))

and a neuron that spikes at t=6 can be created with the following:

def two_lif_neurons():
    # Instantiate Lava processes to build network
    from lava.proc.dense.process import Dense
    from lava.proc.lif.process import LIF

    lif1 = LIF(u=0, du=3, dv=0, bias=2)
    dense = Dense()
    lif2 = LIF()

    # Connect processes via their directional input and output ports
    lif1.out_ports.s_out.connect(dense.in_ports.s_in)
    dense.out_ports.a_out.connect(lif2.in_ports.a_in)

    # Execute process lif1 and all processes connected to it for fixed number of steps
    from lava.magma.core.run_conditions import RunSteps
    from lava.magma.core.run_configs import Loihi1SimCfg
    for t in range(1, 10):
        lif1.run(condition=RunSteps(num_steps=1), run_cfg=Loihi1SimCfg())
        if t == 6:
            # print the output spike
            print(lif1.s_out)
    lif1.stop()
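As a sanity check, the abstract behavior from the class docstring can be stepped through in plain Python (no Lava required) to see why the spike lands at t=6 with du=3, dv=0, bias=2 and the default vth=10:

```python
# Step the LIF equations from the class docstring by hand.
# Parameters match lif1 above: du=3, dv=0, bias=2, default vth=10.
u, v = 0.0, 0.0
du, dv, bias, vth = 3, 0, 2, 10
a_in = 0.0           # lif1 receives no input spikes
spike_times = []

for t in range(1, 10):
    u = u * (1 - du) + a_in       # u stays 0 without input
    v = v * (1 - dv) + u + bias   # v grows by `bias` each step
    if v > vth:                   # spike condition (s_out)
        spike_times.append(t)
        v -= vth                  # subtract threshold after a spike

print(spike_times)                # [6]
```

The voltage climbs by 2 each step (2, 4, 6, 8, 10, 12); `v > vth` is strict, so 10 at t=5 does not fire, and the first spike occurs at t=6 when v reaches 12.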

The relevant timestep is t=6, when the neuron spikes. I want to verify that the spike was indeed sent. However, print(lif1.s_out) outputs:

So I printed the attributes of the lif1.s_out object:

'__abstractmethods__',
 '__class__',
 '__delattr__',
 '__dict__',
 '__dir__',
 '__doc__',
 '__eq__',
 '__format__',
 '__ge__',
 '__getattribute__',
 '__gt__',
 '__hash__',
 '__init__',
 '__init_subclass__',
 '__le__',
 '__lt__',
 '__module__',
 '__ne__',
 '__new__',
 '__reduce__',
 '__reduce_ex__',
 '__repr__',
 '__setattr__',
 '__sizeof__',
 '__slots__',
 '__str__',
 '__subclasshook__',
 '__weakref__',
 '_abc_impl',
 '_add_inputs',
 '_add_outputs',
 '_connect_backward',
 '_connect_forward',
 '_name',
 '_process',
 '_validate_ports',
 'concat_with',
 'connect',
 'connect_from',
 'flatten',
 'get_dst_ports',
 'get_incoming_virtual_ports',
 'get_outgoing_virtual_ports',
 'get_src_ports',
 'in_connections',
 'name',
 'out_connections',
 'process',
 'reshape',
 'shape',
 'size',
 'transpose']

I looked at the port_in object of the dense() object: https://github.com/lava-nc/lava/blob/81336e63783edf6b27f2f019797728b208458cb8/src/lava/magma/core/process/ports/ports.py. However, I have not found how to print the boolean spike value of the lif1 neuron. So my question is:

How can I print the boolean value of a LIF neuron's output spike in the Intel Lava neuromorphic computing framework?

2 Answers

An example can be found here.

Before starting the simulation of the neuron, you can add a Monitor that observes and stores the neuron's behavior. Afterwards, you can read out what the neuron did.

# Instantiate Lava processes to build network
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg
from lava.proc.dense.process import Dense
from lava.proc.lif.process import LIF
from lava.proc.monitor.process import Monitor
import numpy as np

lif1 = LIF(bias=2, vth=1)
dense = Dense(shape=(1, 1), weights=np.ones((1, 1)))
lif2 = LIF(vth=np.inf, dv=0, du=1)
mon_lif_1_v = Monitor()
mon_lif_2_v = Monitor()
mon_spike = Monitor()

mon_lif_1_v.probe(lif1.v, 20)
mon_lif_2_v.probe(lif2.v, 20)
mon_spike.probe(lif1.s_out, 20)


# This is used to get the name of the process that is used for monitoring.
mon_lif_1_v_process = list(mon_lif_1_v.get_data())[0]
mon_lif_2_v_process = list(mon_lif_2_v.get_data())[0]
mon_spike_process = list(mon_spike.get_data())[0]
# Note: the order follows the declaration/initialisation order of the LIF/Dense processes.
print(f"mon_lif_1_v_process={mon_lif_1_v_process}")
print(f"mon_lif_2_v_process={mon_lif_2_v_process}")
print(f"mon_spike_process={mon_spike_process}")


# Connect processes via their directional input and output ports
lif1.out_ports.s_out.connect(dense.in_ports.s_in)
dense.out_ports.a_out.connect(lif2.in_ports.a_in)

for run in range(10):
    t = run
    # Execute process lif1 and all processes connected to it for fixed number of steps
    lif1.run(condition=RunSteps(num_steps=1), run_cfg=Loihi1SimCfg())

    # Print lif1's voltage and spike output, and the voltage accumulated in the post-synaptic neuron (lif2)
    print(
        f'lif1.v={mon_lif_1_v.get_data()[mon_lif_1_v_process]["v"][t]},lif1.s_out={mon_spike.get_data()[mon_spike_process]["s_out"][t]}, lif2.v={mon_lif_2_v.get_data()[mon_lif_2_v_process]["v"][t]}'
    )

# Change the weights of the synapse (dense) from their initial value of 1 to 2, and print them
dense.weights.set(np.ones((1, 1)) * 2)
print(dense.weights)

for run in range(10):
    t = run + 10
    # Run the simulation for 10 more timesteps
    lif1.run(condition=RunSteps(num_steps=1), run_cfg=Loihi1SimCfg())
    # Show that the voltage increase reflects the increase in the synaptic weights
    print(
        f'lif1.v={mon_lif_1_v.get_data()[mon_lif_1_v_process]["v"][t]},lif1.s_out={mon_spike.get_data()[mon_spike_process]["s_out"][t]}, lif2.v={mon_lif_2_v.get_data()[mon_lif_2_v_process]["v"][t]}'
    )

lif1.stop()
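To answer the original question directly: Monitor.get_data() returns a nested dict of the form {process_name: {var_name: array of shape (num_steps, num_neurons)}}, so the boolean spike value at a given timestep is one indexing expression away. Sketched below with a stand-in dict (the process name "Process_1" and the trace values are illustrative, mirroring the access pattern used in the code above):

```python
import numpy as np

# Stand-in for what mon_spike.get_data() returns after the run:
# {process_name: {var_name: array of shape (num_steps, num_neurons)}}
data = {"Process_1": {"s_out": np.array(
    [[0], [0], [0], [0], [0], [1], [0], [0], [0], [0]])}}

process_name = list(data)[0]          # first (and only) monitored process
s_out = data[process_name]["s_out"]   # spike trace, one row per timestep

t = 6                                 # timestep of interest (1-indexed)
spiked = bool(s_out[t - 1, 0])        # True if the neuron spiked at t
print(spiked)                         # True for this stand-in trace
```

With the monitors from the answer above, the same expression applied to mon_spike.get_data() gives the boolean the question asks for.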


I am trying to run similar deployment code on Loihi 2, but for some reason my code always gets stuck at the part that runs the network:

from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg
# Import monitoring Process.
from lava.proc.monitor.process import Monitor

# Configurations for execution.
num_steps = 1
rcfg = Loihi1SimCfg(select_tag='rate_neurons')
run_cond = RunSteps(num_steps=num_steps)

# Instantiating network and IO processes.
network_balanced = EINetwork(**network_params_balanced)
state_monitor = Monitor()

state_monitor.probe(target=network_balanced.state,  num_steps=num_steps)

# Here it gets stuck:
network_balanced.run(run_cfg=rcfg, condition=run_cond)

I let it run for several hours, but it never finishes. Does anyone know how I can fix this? My machine has a 10th-gen i5 at 1.10 GHz. Could it be a matter of computer speed?

Many thanks in advance!
