Can't understand or solve a race condition in my program

Question · votes: 2 · answers: 2

I am implementing my own version of malloc, which is very similar to glibc's malloc in that it supports multithreading by creating arenas, memory areas in which a thread can work without risking contention with another thread.

My data structures are as follows:

typedef struct          s_arena {
    pthread_mutex_t     mutex;
    t_pool              *main_pool;
}                       t_arena;

typedef struct          s_arena_data {
    _Atomic int         arena_count;
    t_arena             arenas[M_ARENA_MAX];
}                       t_arena_data;

t_arena_data is a global variable that holds the number of arenas created so far (starting at 0 on the first call and capped at M_ARENA_MAX, which I currently define as 8), along with an array containing all of my arenas.

An arena contains only a mutex, initialized with pthread_mutex_init(), and a pointer to a memory pool. The memory pool is irrelevant here, since the race condition occurs before it ever comes into play.

Here is how my program works: as each thread enters malloc, it tries to pthread_mutex_trylock the first arena's mutex. If that succeeds, all is well and it proceeds to the allocation, which I have not described here. If not, several things can happen.

If the next entry in the array is empty and M_ARENA_MAX has not been reached, a separate mutex is locked in order to create a new arena and add it to the array. That mutex is global, which means no two threads can create an arena at the same time.

If that mutex is already locked, the thread loops back to arenas[0] and keeps looking for an open mutex.
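
For reference, the globals that the code below relies on could be declared roughly like this. This is my own sketch; the question does not show the actual declarations of arena_data and new_arena_mutex:

#include <pthread.h>

#define M_ARENA_MAX 8

/* Assumed declarations -- the real ones are not shown in the question. */
static t_arena_data     arena_data;
static pthread_mutex_t  new_arena_mutex = PTHREAD_MUTEX_INITIALIZER;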

Now, I am fairly sure the race condition is caused by the variable arena_count. Thanks to debug printf statements, I have observed that whenever the function segfaults, M_ARENA_MAX has not yet been reached; if it has been reached, the program does not crash. So I suspect that one thread reads the value of arena_count just before another thread increments it, and by the time it has finished reading, the incrementing thread releases new_arena_mutex and the first thread goes on to create an arena with a wrong index.
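
One possible interleaving that would explain this, sketched out below. The ordering is my reconstruction from the printf output shown further down, not something the code itself confirms:

/*
 * Thread A: checks  arena_index == arena_data.arena_count - 1   (count is 4)
 * Thread B: ++arena_data.arena_count                            (count is 5)
 * Thread B: pthread_mutex_unlock(&new_arena_mutex)
 * Thread A: pthread_mutex_trylock(&new_arena_mutex) succeeds
 * Thread A: creates its arena at arena_index + 1, an index derived from
 *           the stale count of 4, clashing with the arena thread B created
 */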

This is my first multithreaded program, so I apologize if my explanation or my code is unclear, but I have spent the last four hours on this issue, and while I think I have narrowed down the problem, I really don't know how to fix it.

Here is the faulty part of the code:

    current_arena = &arena_data.arenas[0];
    int arena_index = 0;
    while (pthread_mutex_trylock(&current_arena->mutex) != 0) {

        printf("THREAD %p READS[0] ARENA COUNT AT %d\n", (void *)pthread_self(), arena_data.arena_count);
        if (arena_index == arena_data.arena_count - 1) {
            printf("THREAD %p READS[1] ARENA COUNT AT %d\n", (void *)pthread_self(), arena_data.arena_count);

            if (pthread_mutex_trylock(&new_arena_mutex) != 0 || arena_data.arena_count == M_ARENA_MAX) {
                current_arena = &arena_data.arenas[(arena_index = 0)];
                continue;
            }

            creator = true;
            break;
        }

        current_arena = &arena_data.arenas[arena_index++];
    }

    /* All arenas are occupied by other threads but M_ARENA_MAX isn't reached. Let's just create a new one. */
    if (creator == true) {

        printf("THREAD %p READS[2] ARENA COUNT AT %d\n", (void *)pthread_self(), arena_data.arena_count);

        current_pool = create_new_pool(MAIN_POOL, chunk_type, size, pagesize, &new_arena_mutex);
        if (current_pool == MAP_FAILED) return NULL;

        ++arena_data.arena_count;
        arena_data.arenas[arena_index + 1] = (t_arena){ .main_pool = current_pool };
        pthread_mutex_init(&arena_data.arenas[arena_index + 1].mutex, NULL);
        pthread_mutex_lock(&arena_data.arenas[arena_index + 1].mutex);
        pthread_mutex_unlock(&new_arena_mutex);

        return user_area((t_alloc_chunk *)current_pool->chunk, size, &arena_data.arenas[arena_index + 1].mutex);
    }

Here is one of the printf outputs, the one that supports my theory that there is a race condition:

THREAD 0x7f9c3b216700 READS[1] ARENA COUNT AT 4
THREAD 0x7f9c3b216700 READS[2] ARENA COUNT AT 5

These two values should be equal, but they are not.

c multithreading race-condition
2 Answers
1 vote

I can spot three issues in the code.

1. Race condition when two threads create arenas

This is the race condition you describe in your question:

"So I suspect that one thread reads the value of arena_count just before another thread increments it, and by the time it has finished reading, the incrementing thread releases new_arena_mutex and the first thread goes on to create an arena with a wrong index."

Yes, this can happen. The load from arena_data.arena_count happens atomically, but in general the thread may not assume that the value is (still) correct. The modified version in your answer does not fix the problem.

In order to fix it, the following guarantee may help: every store to arena_data.arena_count happens while holding the new_arena_mutex. As a consequence, the thread holding the mutex can safely load arena_data.arena_count (while holding the mutex, of course) and can be sure that its value does not change until it unlocks the mutex. Let me try to explain by changing and commenting your updated code:

  while (pthread_mutex_trylock(&current_arena->mutex) != 0) {

    if (arena_index == arena_data.arena_count - 1) {
      // This thread checks the condition above _without_ holding the
      // `new_arena_mutex`. Another thread may hold the mutex (and hence it
      // may increment `arena_count`).

      if (pthread_mutex_trylock(&new_arena_mutex) == 0) {
        // Now, this thread can assume that no other thread writes to
        // `arena_data.arena_count`. However, the condition
        //
        //     arena_index == arena_data.arena_count - 1
        //
        // may no longer be true (because it had been checked before locking).

        if (arena_data.arena_count < M_ARENA_MAX) {
          // This thread may create a new arena at index
          // `arena_data.arena_count`. That is safe because this thread holds
          // the `new_arena_mutex` (preventing other threads from modifying
          // `arena_count`).
          //
          // However, it is possible that `arena_index` is not at the position
          // of the most recently created arena (checked _before_ locking). Let
          // us just assume that all the recently created arenas are still
          // locked. Hence we just skip the check and directly jump to the most
          // recently created arena (as if we failed locking).
          arena_index = arena_data.arena_count - 1;
          current_arena = &arena_data.arenas[arena_index];
          ++arena_data.arena_count;
          assert(
            arena_index + 1 == arena_data.arena_count &&
            "... and this thread is holding the mutex, so it stays true."
          );
          creator = true;
          break;
        } else {
          pthread_mutex_unlock(&new_arena_mutex);
        }

In my opinion, the code becomes more readable if you extract those actions into functions such as the following (a sketch of possible implementations follows below):

// both functions return `arena_index` or `-1`
int try_find_and_lock_arena();
int try_create_and_lock_arena();
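
A minimal sketch of what those two helpers could look like, following the locking rule above. This is my own illustration, not the original poster's code; it relies on the globals assumed earlier (arena_data, new_arena_mutex, M_ARENA_MAX), and the caller is expected to keep retrying, as in the original loop, when both helpers return -1:

/* Returns the index of an arena whose mutex this thread now holds, or -1. */
int try_find_and_lock_arena(void)
{
    /* Snapshot the count once; it may grow concurrently, but entries
     * below the snapshot are guaranteed to exist. */
    int count = arena_data.arena_count;

    for (int i = 0; i < count; ++i) {
        if (pthread_mutex_trylock(&arena_data.arenas[i].mutex) == 0)
            return i;
    }
    return -1;
}

/* Creates a new arena, returns its index with its mutex held, or -1. */
int try_create_and_lock_arena(void)
{
    if (pthread_mutex_trylock(&new_arena_mutex) != 0)
        return -1;                        /* someone else is creating one */

    if (arena_data.arena_count == M_ARENA_MAX) {
        pthread_mutex_unlock(&new_arena_mutex);
        return -1;                        /* arena table is full */
    }

    int index = arena_data.arena_count;   /* safe: we hold new_arena_mutex */
    arena_data.arenas[index] = (t_arena){ 0 };  /* main_pool setup omitted here */
    pthread_mutex_init(&arena_data.arenas[index].mutex, NULL);
    pthread_mutex_lock(&arena_data.arenas[index].mutex);
    ++arena_data.arena_count;             /* publish only after initialization */

    pthread_mutex_unlock(&new_arena_mutex);
    return index;
}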

2. Suspicious (wrong?) post-increment operator

The post-increment operator in the following line looks wrong to me:

current_arena = &arena_data.arenas[arena_index++];// post-increment
// now, `&arena_data.arenas[arena_index]` is one beyond `current_arena`.

Written as two lines, it may be easier to reason about the behavior:

assert(
  current_arena == &arena_data.arenas[arena_index] &&
  "this is an invariant I expect to hold"
);

current_arena = &arena_data.arenas[arena_index];// this is a no-op...
arena_index++;// ... and now, they are out of sync

assert(
  current_arena == &arena_data.arenas[arena_index] &&
  "now, the invariant is broken (and this assert should fire)"
);
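
If the intent was to move on to the next arena, one way to keep the index and the pointer in sync is to increment first and then take the address. This is only a sketch; whether it matches the intended loop logic is for the original poster to judge:

++arena_index;                                     // advance the index first
current_arena = &arena_data.arenas[arena_index];   // pointer matches the index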

3. Readability of mutex lock/unlock pairs

I find it hard to match the mutexes' lock/unlock operations across all possible paths, because they happen in different scopes.

    // [new_arena_mutex is locked]

    current_pool = create_new_pool(/* ... */, &new_arena_mutex);

    if (current_pool == MAP_FAILED) return NULL;// error-path return
    // `create_new_pool` unlocks iff it returns `MAP_FAILED`...

    /* ... */

    pthread_mutex_unlock(&new_arena_mutex);
    // ... otherwise, the mutex is unlocked here

    return user_area(/* ... */);
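
One way to make the pairing easier to audit would be to unlock in the caller on every path, instead of having create_new_pool unlock only when it fails. This is a sketch; it assumes create_new_pool can be changed so that it no longer takes or touches the mutex:

    /* [new_arena_mutex is locked] */

    current_pool = create_new_pool(MAIN_POOL, chunk_type, size, pagesize);

    if (current_pool == MAP_FAILED) {
        pthread_mutex_unlock(&new_arena_mutex);   /* unlock on the error path */
        return NULL;
    }

    /* ... set up the new arena ... */

    pthread_mutex_unlock(&new_arena_mutex);       /* unlock on the success path */
    return user_area(/* ... */);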

0 votes

(Edit): no, it did not.

This seemed to have fixed the issue:

    /* Look for an open arena. */
    current_arena = &arena_data.arenas[0];
    int arena_index = 0;
    while (pthread_mutex_trylock(&current_arena->mutex) != 0) {

        if (arena_index == arena_data.arena_count - 1) {

            if (pthread_mutex_trylock(&new_arena_mutex) == 0) {

                if (arena_data.arena_count < M_ARENA_MAX) {
                    ++arena_data.arena_count;
                    creator = true;
                    break;
                } else {
                    pthread_mutex_unlock(&new_arena_mutex);
                }
            }

            current_arena = &arena_data.arenas[(arena_index = 0)];
            continue;
        }

        current_arena = &arena_data.arenas[arena_index++];
    }

    /* All arenas are occupied by other threads but M_ARENA_MAX isn't reached. Let's just create a new one. */
    if (creator == true) {

        current_pool = create_new_pool(MAIN_POOL, chunk_type, size, pagesize, &new_arena_mutex);
        if (current_pool == MAP_FAILED) return NULL;

        arena_data.arenas[arena_index + 1] = (t_arena){ .main_pool = current_pool };
        pthread_mutex_init(&arena_data.arenas[arena_index + 1].mutex, NULL);
        pthread_mutex_lock(&arena_data.arenas[arena_index + 1].mutex);
        pthread_mutex_unlock(&new_arena_mutex);

        return user_area((t_alloc_chunk *)current_pool->chunk, size, &arena_data.arenas[arena_index + 1].mutex);
    }