This article looks at a real-world example where std::atomic::compare_exchange is used with two memory_order parameters.

Problem description

Can you give a real-world example where the two-memory_order-parameter version of std::atomic::compare_exchange is used for a reason (so the one-memory_order-parameter version is not adequate)?

Recommended answer

In many cases, the second memory ordering parameter on compare_exchange is set to memory_order_relaxed. In those cases, it is usually not wrong to omit it, just potentially less efficient.
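As an illustration (this sketch is not part of the original answer, and the helper name is hypothetical), here is the common pattern where the failure ordering can safely be memory_order_relaxed: the loop merely retries with the reloaded value and never dereferences memory published by another thread.

#include <atomic>

// Hypothetical helper: atomically OR 'bits' into 'flags' and return the
// previous value. On failure the loop only retries with the freshly
// reloaded value; nothing is dereferenced, so memory_order_relaxed on the
// failure path introduces no data race.
inline unsigned fetch_or_bits(std::atomic<unsigned> &flags, unsigned bits)
{
    unsigned current = flags.load(std::memory_order_relaxed);

    while (!flags.compare_exchange_weak(current, current | bits,
                                        std::memory_order_release,    // success
                                        std::memory_order_relaxed))   // failure: just retry
    {
        // 'current' now holds the latest value of 'flags'.
    }

    return current;
}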

Here is an example of a simple, lock-free list/stack that requires a second, different ordering parameter on compare_exchange_weak in order to be data-race-free.

Calls to push can be executed concurrently, but to avoid the complexities of lock-free data manipulation, the assumption is made that nodes cannot be removed from the stack while calls to push are executed; i.e. to avoid dangling pointers.

#include <atomic>
#include <iostream>
#include <utility>

template<typename T>
class mystack {

    struct node {
        node *next = nullptr;

        T data;
        int id;

        node(int id) : id{id} { }
    };

    std::atomic<node *> head{nullptr};

public:
    void push(T data, int id);
    bool pop(T &data); // not implemented
};


template<typename T>
void mystack<T>::push(T data, int id)
{
    node *newnode = new node{id};

    newnode->data = std::move(data);

    node *current_head = head.load(std::memory_order_relaxed);   // A

    for (;;)
    {
        newnode->next = current_head;

        if (head.compare_exchange_weak(current_head, newnode,
                                       std::memory_order_release,   // B
                                       std::memory_order_acquire))  // C
        {
            /*
             * 'current_head' may not be dereferenced here, since the initial load (at A)
             * does not order the memory 'current_head' is pointing at.
             *
             * A release barrier (at B) is necessary to make 'newnode' available
             * to other threads.
             */
            std::cout << "Insertion successful\n";

            break;

        } else
        {
            /*
             * 'current_head' is the updated head pointer after 'compare_exchange' failed.
             * Since it was inserted by another thread (the CAS failed),
             * an acquire barrier must be set (at C) in order to be able to access the data
             * 'current_head' is pointing at.
             */
            std::cout << "Insertion failed after head changed to id: " <<
                          current_head->id << std::endl;
        }
    }
}

In push, the initial load (at A) is a relaxed operation, meaning that even though the head pointer is loaded atomically, it may not be dereferenced, since the memory it refers to is unordered in this thread.

In case compare_exchange_weak returns success, newnode is inserted at the head of the list and made available to other threads by setting a release barrier (at B). Another thread that accesses this data (later, via pop) needs to set an acquire barrier.
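The answer leaves pop unimplemented; purely as an assumption-laden sketch of the acquire side that pairs with the release at B (it ignores the ABA problem and safe memory reclamation, which a real lock-free pop would have to solve), it could look like this:

// Hypothetical sketch only, not part of the original answer: it shows the
// acquire ordering that pairs with the release at B, but side-steps ABA and
// reclamation issues by relying on the no-concurrent-removal assumption above.
template<typename T>
bool mystack<T>::pop(T &data)
{
    node *current_head = head.load(std::memory_order_acquire);   // pairs with B

    while (current_head != nullptr)
    {
        // Acquire on failure keeps 'current_head' dereferenceable on the retry.
        if (head.compare_exchange_weak(current_head, current_head->next,
                                       std::memory_order_acquire,
                                       std::memory_order_acquire))
        {
            data = std::move(current_head->data);
            delete current_head;   // only safe without concurrent readers of this node
            return true;
        }
    }

    return false;
}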

In case compare_exchange_weak returns failure (ignoring spurious failures), another thread has just inserted a new node instance and current_head is updated with the new value of head. Since current_head is now pointing at data that was allocated and released in another thread, an acquire barrier is necessary (at C) if current_head is going to be dereferenced.
That is the case here, since the cout failure message includes current_head->id.

Had the last parameter been omitted, the failure ordering would have been derived from the first (success) ordering parameter; since memory_order_release cannot apply to the failure load, it decays to memory_order_relaxed, causing a data race on current_head->id.
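A short comparison sketch (assumed for illustration, not part of the original answer; the function and parameter names are placeholders) of how the single-parameter overload derives the failure ordering:

#include <atomic>

// memory_order_release maps to memory_order_relaxed on failure, and
// memory_order_acq_rel maps to memory_order_acquire.
void overload_comparison(std::atomic<int *> &target, int *expected, int *desired)
{
    // Single-parameter overload: behaves as if the failure ordering
    // were memory_order_relaxed.
    target.compare_exchange_weak(expected, desired,
                                 std::memory_order_release);

    // Two-parameter form used in the answer: explicit acquire on failure,
    // so memory published by the winning thread may be read afterwards.
    target.compare_exchange_weak(expected, desired,
                                 std::memory_order_release,
                                 std::memory_order_acquire);

    // Single-parameter alternative: memory_order_acq_rel derives
    // memory_order_acquire for the failure case.
    target.compare_exchange_weak(expected, desired,
                                 std::memory_order_acq_rel);
}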
