DPDK: all TX and RX packets dropped on RPi-CM4 + Intel i210 NIC


I am unable to send packets with testpmd or pktgen on a Raspberry Pi CM4
fitted with an Intel i210 NIC, even though the same setup works fine on an
x86 Dell workstation with the same NIC.

Is this a DPDK compatibility issue on ARM / the RPi, or am I doing something wrong?

Environment:

host:
    Raspberry Pi CM4 + standard IO board

NIC:
    I tried both of the following NICs:
    Intel i210-GE-1T-X1 (1Gb)
    Intel i210-X1-V2 (10Gb)

Kernel:
    I tried both linux-raspi 5.4 & linux-raspi 5.15

DPDK:
    dpdk-23.07
    I tried both the vfio-pci (no-IOMMU) and uio_pci_generic drivers

Test results:

Run testpmd:

./dpdk-testpmd -- -i

EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_e1000_igb (8086:1533) device: 0000:01:00.0 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 98:B7:85:00:89:4D
Checking link statuses...
Done
testpmd> 
Port 0: link state change event

Start TX-only forwarding mode:

set fwd txonly
start <--- At this step the RJ45 link LED turns on, but it does not blink.
stop
txonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  txonly packet forwarding packets/burst=32
  packet len=64 - nb packet segments=1
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=512 - RX free threshold=32
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x0
    TX queue: 0
      TX desc=512 - TX free threshold=0
      TX threshold registers: pthresh=8 hthresh=1  wthresh=16
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 55054369      TX-total: 55054369
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 0              TX-dropped: 55054369      TX-total: 55054369
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
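TX-dropped climbing to tens of millions while TX-packets stays at zero means every burst handed to the PMD is rejected or never completed by the NIC. One low-effort next step (a suggestion, not something from the original post) is to compare the basic counters against the extended, MAC-level counters that the igb PMD exposes; if the MAC TX counters are also zero, the descriptors never reached the wire at all. These are standard testpmd console commands:

```
testpmd> show port stats 0
testpmd> show port xstats 0
testpmd> clear port xstats all
```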

Show port info:

show port info 0

********************* Infos for port 0  *********************
MAC address: 98:B7:85:00:89:4D
Device name: 0000:01:00.0
Driver name: net_e1000_igb
Firmware-version: 3.16, 0x800004ff, 1.304.0
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 1 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 16
Maximum number of MAC addresses of hash filtering: 0
VLAN offload: 
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 128
Supported RSS offload flow types:
  ipv4  ipv4-tcp  ipv4-udp  ipv6  ipv6-tcp  ipv6-udp  ipv6-ex
  ipv6-tcp-ex  ipv6-udp-ex
Minimum size of RX buffer: 256
Maximum configurable length of RX packet: 16383
Maximum configurable size of LRO aggregated packet: 0
Current number of RX queues: 1
Max possible RX queues: 4
Max possible number of RXDs per queue: 4096
Min possible number of RXDs per queue: 32
RXDs number alignment: 8
Current number of TX queues: 1
Max possible TX queues: 4
Max possible number of TXDs per queue: 4096
Min possible number of TXDs per queue: 32
TXDs number alignment: 8
Max segment number per packet: 255
Max segment number per MTU/TSO: 255
Device capabilities: 0x0( )
Device error handling mode: none
Device private info:
  none

Check the Linux VFIO modules:

lsmod


    vfio_pci               16384  0
    vfio_pci_core          77824  1 vfio_pci
    vfio_virqfd            16384  1 vfio_pci_core
    vfio_iommu_type1       49152  0
    vfio                   45056  2 vfio_pci_core,vfio_iommu_type1

Check the device binding:

dpdk-devbind.py -s

    Network devices using DPDK-compatible driver
    ============================================
    0000:01:00.0 'I210 Gigabit Network Connection 1533' drv=vfio-pci unused=igb

Kernel log:

[    0.565384] pci 0000:01:00.0: [8086:1533] type 00 class 0x020000
[    0.565449] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x000fffff]
[    0.565521] pci 0000:01:00.0: reg 0x1c: [mem 0x00000000-0x00003fff]
[    0.565590] pci 0000:01:00.0: reg 0x30: [mem 0x00000000-0x000fffff pref]
[    0.565881] pci 0000:01:00.0: PME# supported from D0 D3hot D3cold
[    0.581511] pci_bus 0000:01: busn_res: [bus 01-ff] end is updated to 01
[    0.581600] pci 0000:00:00.0: BAR 14: assigned [mem 0x600000000-0x6002fffff]
[    0.581628] pci 0000:01:00.0: BAR 0: assigned [mem 0x600000000-0x6000fffff]
[    0.581655] pci 0000:01:00.0: BAR 6: assigned [mem 0x600100000-0x6001fffff pref]
[    0.581673] pci 0000:01:00.0: BAR 3: assigned [mem 0x600200000-0x600203fff]

Kernel log when starting testpmd:

[   54.287336] vfio_pci: unknown parameter 'enable_unsafe_noiommu_mode' ignored
[   82.338189] vfio-pci 0000:01:00.0: Adding to iommu group 0
[   82.338210] vfio-pci 0000:01:00.0: Adding kernel taint for vfio-noiommu group on device
[  121.902266] audit: type=1326 audit(1696353003.891:65): auid=1000 uid=1000 gid=1000 ses=2 subj=snap.snap-store.ubuntu-software pid=1979 comm="pool-org.gnome." exe="/snap/snap-store/639/usr/bin/snap-store" sig=0 arch=c00000b7 syscall=55 compat=0 ip=0xffff9e093ee8 code=0x50000
[  126.390013] vfio-pci 0000:01:00.0: enabling device (0000 -> 0002)
[  126.498474] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:3057)
[  215.791550] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (pktgen:3938)
[ 4084.938242] vfio-pci 0000:01:00.0: timed out waiting for pending transaction; performing function level reset anyway
[ 5253.671006] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5719)
[ 5532.162849] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5754)
[ 5541.958851] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5765)
[ 5647.800353] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5796)
[ 5709.481186] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5819)
[ 5744.089540] vfio-pci 0000:01:00.0: vfio-noiommu device opened by user (dpdk-testpmd:5830)
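One line in this log stands out: `vfio_pci: unknown parameter 'enable_unsafe_noiommu_mode' ignored`. That option belongs to the core `vfio` module, not to `vfio_pci`, so it was silently dropped (the later "vfio-noiommu device opened" messages show no-IOMMU mode did still engage, but the warning is worth cleaning up to rule it out). A hedged sketch of the usual way to set it, using the standard modprobe/sysfs interfaces:

```shell
# 'enable_unsafe_noiommu_mode' is a parameter of the core 'vfio' module,
# not of 'vfio_pci'; loading it on the correct module avoids the warning.
sudo modprobe -r vfio_pci vfio
sudo modprobe vfio enable_unsafe_noiommu_mode=1
sudo modprobe vfio-pci

# It can also be toggled at runtime via sysfs:
echo 1 | sudo tee /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
```

After reloading the modules, re-bind the NIC with dpdk-devbind.py before starting testpmd again.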

The results are the same with the uio_pci_generic driver.

I also tried rxonly mode; all received packets are dropped as well.

I have verified that the RPi-CM4 + i210 works fine through the standard Linux network stack.

Please let me know if I can provide any further information.

raspberry-pi ethernet dpdk nic
1 Answer

I am currently trying to reproduce the same setup, so I don't have an answer to your question yet. However, I cannot even run the testpmd application.

At the moment it gives me:

dpdk-testpmd -- -i
EAL: Detected CPU lcores: 4
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /run/user/1000/dpdk/rte/mp_socket
EAL: FATAL: Cannot use IOVA as 'PA' since physical addresses are not available
EAL: Cannot use IOVA as 'PA' since physical addresses are not available
EAL: Error - exiting with code: 1
  Cause: Cannot init EAL: Invalid argument
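EAL refusing IOVA mode 'PA' usually means the process cannot resolve physical addresses, most often because it is not running as root (EAL needs privileged access to /proc/self/pagemap for virtual-to-physical translation). A hedged sketch of the usual workarounds; note that `--iova-mode=va` generally requires a working IOMMU, which vfio no-IOMMU does not provide, so on this board running as root is the likelier fix:

```shell
# Most common cause: testpmd started without root privileges, so EAL
# cannot translate virtual to physical addresses via /proc/self/pagemap.
sudo ./dpdk-testpmd -- -i

# Alternatively, force virtual-address IOVA mode (needs a real IOMMU,
# so it is unlikely to help with vfio no-IOMMU on this board):
sudo ./dpdk-testpmd --iova-mode=va -- -i
```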

In another application I ran into problems with hugepages. Did you compile the kernel from scratch, or enable any other options to allow hugepages?

The default kernel I am running (6.1.0-rpi7-rpi-v8) does not seem to have hugepages enabled, at least not via sysfs.
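For reference, a minimal sketch of checking and reserving hugepages at runtime, assuming the running kernel was built with CONFIG_HUGETLBFS / CONFIG_HUGETLB_PAGE (stock Raspberry Pi kernels may not be, in which case the sysfs path below simply does not exist):

```shell
if [ -d /sys/kernel/mm/hugepages ]; then
    # Reserve 256 hugepages of the default size
    # (2 MB on a 4K-page arm64 kernel).
    echo 256 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    # Verify HugePages_Total / HugePages_Free:
    grep Huge /proc/meminfo
else
    echo "hugepage support not enabled in this kernel"
fi
```

Hugepages can also be reserved at boot via the kernel command line (e.g. `hugepages=256`), which avoids fragmentation problems that can make runtime reservation fail on a small-memory board.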
