DPDK TestPMD application results in 0 received packets

Problem description

I am testing the DPDK TestPMD application on a Xilinx Alveo U200. I am running the following commands:

dpdk-20.11]$ sudo /home/admin/SmartNIC/dpdk-20.11/usertools/dpdk-devbind.py -b vfio-pci 08:00.0 08:00.1

dpdk-20.11]$ sudo ./build/app/dpdk-testpmd -l 1-3 -n 4 -a 0000:08:00.0 -a 0000:08:00.1 -- --burst=256 -i --nb-cores=1  --forward-mode=io --rxd=2048 --txd=2048 --mbcache=512 --mbuf-size=4096 
EAL: Detected 12 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: No available hugepages reported in hugepages-2048kB
EAL: Debug dataplane logs available - lower performance
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL:   using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_qdma (10ee:903f) device: 0000:08:00.0 (socket 0)
Device Type: Soft IP
IP Type: EQDMA Soft IP
Vivado Release: vivado 2020.2
PMD: qdma_get_hw_version(): QDMA RTL VERSION : RTL Base

PMD: qdma_get_hw_version(): QDMA DEVICE TYPE : Soft IP

PMD: qdma_get_hw_version(): QDMA VIVADO RELEASE ID : vivado 2020.2

PMD: qdma_identify_bars(): QDMA config bar idx :0

PMD: qdma_identify_bars(): QDMA AXI Master Lite bar idx :2

PMD: qdma_identify_bars(): QDMA AXI Bridge Master bar idx :-1

PMD: qdma_eth_dev_init(): QDMA device driver probe:
PMD: qdma_device_attributes_get(): qmax = 512, mm 1, st 1.

PMD: qdma_eth_dev_init(): PCI max bus number : 0x8
PMD: qdma_eth_dev_init(): PF function ID: 0
PMD: QDMA PMD VERSION: 2020.2.1
qdma_dev_entry_create: Created the dev entry successfully
EAL: Probe PCI driver: net_qdma (10ee:913f) device: 0000:08:00.1 (socket 0)
Device Type: Soft IP
IP Type: EQDMA Soft IP
Vivado Release: vivado 2020.2
PMD: qdma_get_hw_version(): QDMA RTL VERSION : RTL Base

PMD: qdma_get_hw_version(): QDMA DEVICE TYPE : Soft IP

PMD: qdma_get_hw_version(): QDMA VIVADO RELEASE ID : vivado 2020.2

PMD: qdma_identify_bars(): QDMA config bar idx :0

PMD: qdma_identify_bars(): QDMA AXI Master Lite bar idx :2

PMD: qdma_identify_bars(): QDMA AXI Bridge Master bar idx :-1

PMD: qdma_eth_dev_init(): QDMA device driver probe:
PMD: qdma_device_attributes_get(): qmax = 512, mm 1, st 1.

PMD: qdma_eth_dev_init(): PCI max bus number : 0x8
PMD: qdma_eth_dev_init(): PF function ID: 1
qdma_dev_entry_create: Created the dev entry successfully
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
Set io packet forwarding mode
testpmd: create a new mbuf pool <mb_pool_0>: n=180224, size=4096, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
PMD: qdma_dev_configure(): Configure the qdma engines

PMD: qdma_dev_configure(): Bus: 0x0, PF-0(DEVFN) queue_base: 0

PMD: qdma_dev_tx_queue_setup(): Configuring Tx queue id:0 with 2048 desc

PMD: qdma_dev_tx_queue_setup(): Tx ring phys addr: 0x1515C6000, Tx Ring virt addr: 0x1515C6000
PMD: qdma_dev_rx_queue_setup(): Configuring Rx queue id:0

PMD: qdma_dev_start(): qdma-dev-start: Starting

PMD: qdma_dev_link_update(): Link update done

Port 0: 15:16:17:18:19:1A
Configuring Port 1 (socket 0)
PMD: qdma_dev_configure(): Configure the qdma engines

PMD: qdma_dev_configure(): Bus: 0x0, PF-1(DEVFN) queue_base: 1

PMD: qdma_dev_tx_queue_setup(): Configuring Tx queue id:0 with 2048 desc

PMD: qdma_dev_tx_queue_setup(): Tx ring phys addr: 0x1515A7000, Tx Ring virt addr: 0x1515A7000
PMD: qdma_dev_rx_queue_setup(): Configuring Rx queue id:0

PMD: qdma_dev_start(): qdma-dev-start: Starting

PMD: qdma_dev_link_update(): Link update done

Port 1: 15:16:17:18:19:1A
Checking link statuses...
PMD: qdma_dev_link_update(): Link update done

PMD: qdma_dev_link_update(): Link update done

Done
Error during enabling promiscuous mode for port 0: Operation not supported - ignore
Error during enabling promiscuous mode for port 1: Operation not supported - ignore
testpmd> start tx_first
io packet forwarding - ports=2 - cores=1 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=1/Q=0 (socket 0) peer=02:00:00:00:00:01
  RX P=1/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

  io packet forwarding packets/burst=256
  nb forwarding cores=1 - nb forwarding ports=2
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=2
      RX Offloads=0x0
    TX queue: 0
      TX desc=2048 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
  port 1: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x0 Tx offloads=0x0
    RX queue: 0
      RX desc=2048 - RX free threshold=0
      RX threshold registers: pthresh=0 hthresh=0  wthresh=2
      RX Offloads=0x0
    TX queue: 0
      TX desc=2048 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x0 - TX RS bit threshold=0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 256            TX-dropped: 0             TX-total: 256
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 256            TX-dropped: 0             TX-total: 256
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 0              RX-dropped: 0             RX-total: 0
  TX-packets: 512            TX-dropped: 0             TX-total: 512
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
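
Note: the EAL log above reports "No available hugepages reported in hugepages-2048kB", yet the mbuf pool was still created, so hugepages of a larger size (presumably 1 GB) must already be reserved on this host. For completeness, 2 MB pages could be reserved roughly like this (the page count and mount point below are just an example):

echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
sudo mkdir -p /mnt/huge
sudo mount -t hugetlbfs nodev /mnt/huge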

When I run the dpdk-devbind.py bind command, the interfaces disappear from "ip link", but they are still listed by

dpdk-devbind.py --status
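
As far as I understand, the interfaces disappearing from "ip link" is expected once the devices are bound to vfio-pci, since the kernel network driver is detached. For reference, the binding can be reverted to the original kernel driver if needed (the driver name below is a placeholder for whatever --status reported before binding):

sudo /home/admin/SmartNIC/dpdk-20.11/usertools/dpdk-devbind.py -b <original_kernel_driver> 08:00.0 08:00.1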

Please help me debug why the RX packet count stays at 0 while TX packets are being sent.

I have tried a loopback setup to run TestPMD.

Any suggestions would be helpful. Thanks in advance.

ethernet xilinx dpdk pci vfio
1 Answer

Try at least "--nb-cores=2", since one core will be handling one port at a time. I suspect your RX queue is not configured correctly: the log shows detailed messages for the TX queue initialization (descriptor count, ring addresses), but no comparable detail for the RX queue initialization.
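
A minimal sketch of the adjusted invocation, with explicit single RX/TX queues per port (the queue counts here are assumptions; the other options are kept as in the original run):

sudo ./build/app/dpdk-testpmd -l 1-3 -n 4 -a 0000:08:00.0 -a 0000:08:00.1 -- -i --nb-cores=2 --rxq=1 --txq=1 --rxd=2048 --txd=2048 --burst=256 --mbcache=512 --mbuf-size=4096 --forward-mode=io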

You are using io mode with tx_first, where the first port generates a burst of packets and sends it to the other port. Those packets then keep traversing between the two ports, using the RX and TX queues of both. Once your RX queues are configured correctly, your problem should be resolved.
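
Once the RX queues come up correctly, the loop described above can be verified from the testpmd prompt with standard console commands, for example:

testpmd> start tx_first
testpmd> show port stats all
testpmd> stop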
