When multiplying two matrices whose data is mapped (Eigen::Map), I noticed a significant performance difference depending on how the memory was allocated. With memory from a custom allocation, the multiplication runs almost twice as fast as with data coming from a std::vector, even though both allocations use Eigen::aligned_allocator.

Minimal benchmark:
#include <Eigen/Core>
#include <Eigen/StdVector>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

using Matrix = Eigen::Matrix<float, Eigen::Dynamic, Eigen::Dynamic, Eigen::ColMajor>;
using Mapped = Eigen::Map<Matrix, Eigen::Aligned16>;
using aligned_vector = std::vector<float, Eigen::aligned_allocator<float>>;

void measure(const std::string& name, const Mapped& a, const Mapped& b, Mapped& c)
{
    using namespace std::chrono;
    const auto start_time = high_resolution_clock::now();
    const std::size_t runs = 10;
    for (std::size_t i = 0; i < runs; ++i)
    {
        c.noalias() = a * b;
    }
    const auto end_time = high_resolution_clock::now();
    // duration_cast avoids assuming the clock's tick period is nanoseconds.
    const auto elapsed_ms = duration_cast<milliseconds>(end_time - start_time).count();
    std::cout << name << ": " << elapsed_ms << " ms" << std::endl;
}
int main()
{
    const unsigned int size_1 = 1;
    const unsigned int size_2 = 8192;
    const unsigned int size_3 = 16384;

    // Memory backed by a std::vector.
    aligned_vector a_vec(size_1 * size_2);
    aligned_vector b_vec(size_2 * size_3);
    aligned_vector c_vec(size_1 * size_3);
    Mapped a_mapped_vec(a_vec.data(), size_1, size_2);
    Mapped b_mapped_vec(b_vec.data(), size_2, size_3);
    Mapped c_mapped_vec(c_vec.data(), size_1, size_3);
    measure("Mapped vector memory", a_mapped_vec, b_mapped_vec, c_mapped_vec);

    // Memory from a manual Eigen::aligned_allocator allocation.
    Eigen::aligned_allocator<float> allocator;
    float* a_mem = allocator.allocate(size_1 * size_2);
    float* b_mem = allocator.allocate(size_2 * size_3);
    float* c_mem = allocator.allocate(size_1 * size_3);
    Mapped a_mapped_mem(a_mem, size_1, size_2);
    Mapped b_mapped_mem(b_mem, size_2, size_3);
    Mapped c_mapped_mem(c_mem, size_1, size_3);
    measure("Mapped custom memory", a_mapped_mem, b_mapped_mem, c_mapped_mem);

    allocator.deallocate(a_mem, size_1 * size_2);
    allocator.deallocate(b_mem, size_2 * size_3);
    allocator.deallocate(c_mem, size_1 * size_3);
}
Output on my machine (Core i5-6600):
Mapped vector memory: 661 ms
Mapped custom memory: 370 ms
Dockerfile to reproduce the effect quickly:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install -y build-essential cmake git wget
RUN git clone -b '3.3.7' --single-branch --depth 1 https://github.com/eigenteam/eigen-git-mirror && cd eigen-git-mirror && mkdir -p build && cd build && cmake .. && make && make install && ln -s /usr/local/include/eigen3/Eigen /usr/local/include/Eigen
RUN wget https://gist.githubusercontent.com/Dobiasd/4b80aa0d5d19f8112656794ab94a061b/raw/c9cca8abc16ab35e71070aed5e779c7a8ebb3a7e/main.cpp
RUN g++ -std=c++14 -O3 -march=native main.cpp -o main
# Fetch random bytes to bust the Docker build cache, so the benchmark below re-runs on every build.
ADD "https://www.random.org/cgi-bin/randbyte?nbytes=10&format=h" skipcache
RUN ./main
Why is there such a big difference? I would assume Eigen has no way of knowing where the memory came from. And, more importantly for me: how can I improve the performance for the memory coming from a std::vector?
As PeterT and chtz pointed out in the comments, the manually allocated version never initializes its memory (in contrast to std::vector, which value-initializes its elements). Reading it is therefore undefined behavior, and the OS/MMU most likely does something clever, i.e., never actually accesses the memory, for example by backing all untouched pages with a single shared zero page.
When the memory in the second part is initialized as well, both versions show similar performance:
// memset requires #include <cstring>
float* a_mem = allocator.allocate(size_1 * size_2);
memset(a_mem, 0, size_1 * size_2 * sizeof(float));
float* b_mem = allocator.allocate(size_2 * size_3);
memset(b_mem, 0, size_2 * size_3 * sizeof(float));
float* c_mem = allocator.allocate(size_1 * size_3);
memset(c_mem, 0, size_1 * size_3 * sizeof(float));
Mapped vector memory: 654 ms
Mapped custom memory: 655 ms