>in Advanced Programming course at uni
>all C++, which is fine
>used to developing in C# and Python, but C++ seems fine
>first miniproject
>has to be completed in order to be allowed to go to the exam
>implement a list data structure
>with atomic operations
>no semaphores
>no mutex
>for each operation, you can only use a single semi-colon (????????)
I've been working at it for the last couple of days and I have literally no idea how to do this. Am I fucking retarded? Are you seriously expected to be able to do this in the industry??
std::atomic? Spin lock? What complexities are expected for the list? Your specifications are kinda fucky
Caleb Ramirez
Use std::atomic and compare_and_swap primitives.
>Am I fucking retarded?
Nah, you're simply not used to it yet.
>Are you seriously expected to be able to do this in the industry??
It's already more than the average programmer is capable of nowadays.
Here is a very simple concurrent lock-free queue I implemented some time ago for a project of mine, to give you an idea without outright giving you the solution.

template <typename T, std::size_t size>
class SynchronizedQueue {
    std::array<T, size> queue_buffer;
    std::atomic<std::size_t> head{0};
    std::atomic<std::size_t> tail{0};

    static constexpr std::size_t get_next_index(std::size_t i) {
        return (i + 1) % size;
    }

public:
    inline T get() {
        const auto current_head = head.load();
        while (current_head == tail.load())
            ;
        auto element = std::move(queue_buffer[current_head]);
        head.store(get_next_index(current_head));
        return element;
    }

    inline void put(T &&element) {
        const auto current_tail = tail.load();
        const auto next_tail = get_next_index(current_tail);
        while (next_tail == head.load())
            ;
        queue_buffer[current_tail] = std::move(element);
        tail.store(next_tail);
    }
};
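And since OP's assignment is a list, here's the shape of the CAS loop on the head of a linked list (Treiber-stack style). A sketch with made-up names, not a drop-in solution; the delete in pop is only safe without concurrent poppers, since ABA and memory reclamation are their own rabbit hole:

```cpp
#include <atomic>
#include <cassert>
#include <optional>
#include <utility>

// Treiber-style lock-free stack: every operation is a single
// compare_exchange loop on the head pointer. No mutex, no semaphore.
template <typename T>
class LockFreeStack {
    struct Node {
        T value;
        Node* next;
    };
    std::atomic<Node*> head{nullptr};

public:
    void push(T value) {
        Node* node = new Node{std::move(value), head.load(std::memory_order_relaxed)};
        // Retry until head is swung from node->next to node; on failure
        // compare_exchange_weak refreshes node->next with the current head.
        while (!head.compare_exchange_weak(node->next, node,
                                           std::memory_order_release,
                                           std::memory_order_relaxed))
            ;
    }

    std::optional<T> pop() {
        Node* node = head.load(std::memory_order_acquire);
        // Retry until head is swung from node to node->next.
        while (node && !head.compare_exchange_weak(node, node->next,
                                                   std::memory_order_acquire,
                                                   std::memory_order_relaxed))
            ;
        if (!node)
            return std::nullopt;
        std::optional<T> value{std::move(node->value)};
        delete node;  // NOT safe with concurrent poppers (ABA); sketch only
        return value;
    }
};
```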
Leo Phillips
>>with atomic operations
>>no semaphores
>>no mutex
Impossible without non-portable code.
Thomas Cruz
You can build those with atomic integers. Mutexes aren't portable either, since there are CPU architectures without atomic instructions altogether, where you avoid data races by disabling interrupts.
Samuel Morales
>atomic integers
Those are just mutexes in disguise.
Chase Roberts
Depends on the architecture. It could actually be some non-blocking magic bytefuckering.
I am not. Besides, that thing is not actually lock-free because of this line:
while (current_head == head.load());
Christopher Edwards
Glad that line isn't actually there. There are no locks or mutexes used. Apart from std::atomic_flag, which is _guaranteed_ to be lock-free, std::atomic is _almost always_ implemented without locks. In my case, it was. So, this implementation is either fully lock-free, or it isn't because of the way atomic operations are implemented. It's not because of "one line". Continuously checking a condition with atomic operations is not the same thing as using a spin lock.
You could argue that, in that particular example, a client reading from an empty queue would block, but that's intended behavior, since it was a one-way queue to communicate between exactly two threads.
Aaron Scott
>You could argue that, in that particular example, a client reading from an empty queue would block
That is my point. It is NOT lock-free. Lock-/wait-freedom is a progress guarantee. If the queue is empty/full, there is no progress guarantee.
>block
It is not blocking, it is busy waiting.
>but that's intended behavior
How the fuck can that be intended? Burning CPU cycles like that forever is madness.
>since it was a one-way queue to communicate between exactly two threads.
I think it is irresponsible of you to post that without explicitly pointing out that it is only correct in the case of a single producer and a single consumer. Someone might just try to use it without knowing so.
Also "get" and "put" are the wrong terms to use. They should be "push" and "pop".
Ethan Green
>That is my point. It is NOT lock free. Lock/Wait-freedom is a progress guarantee. If the queue is empty/full, there is no progress guarantee.
Stop arguing semantics, you know very well what we all meant by "lock-free": not using locks or semaphores. "Progress guarantee" or anything else is something entirely different and unrelated.
>It is not blocking, it is busy waiting
Yes, but it's not using locks, which was the point.
>How the fuck can that be intended. Burning CPU cycles like that forever is madness.
Context switching and acquiring/releasing locks usually costs much more total time than repeatedly checking a condition. Such techniques are used all the time in HPC.
>I think it is irresponsible of you to post that without explicitly pointing out that it is only correct in the case of a single producer and a single consumer. Someone might just try to use it without knowing so.
Alright, I'll give you that. That's from a very old assignment, and it was the first thing I had at hand to show as an example.
>Also "get" and "put" are the wrong terms to use. They should be "push" and "pop".
This is nitpicking, but no, that structure is a queue, not a stack. The first element inserted in the queue is also the first one to be removed.
Aiden Taylor
>atomic operations
>no semaphores
>no mutex
easy peasy
every push operation returns a new list object.
done.
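In case that sounds like trolling, it isn't; a persistent list really does sidestep the whole problem. A sketch with made-up names: push never mutates, it shares the old nodes and hands back a new head, so there is nothing to lock.

```cpp
#include <cassert>
#include <memory>
#include <utility>

// Persistent (immutable) singly-linked list: every "modifying"
// operation returns a new list object sharing the old tail.
template <typename T>
class PersistentList {
    struct Node {
        T value;
        std::shared_ptr<const Node> next;
    };
    std::shared_ptr<const Node> head;

    explicit PersistentList(std::shared_ptr<const Node> h) : head(std::move(h)) {}

public:
    PersistentList() = default;

    // Returns a NEW list; *this is left untouched.
    PersistentList push(T value) const {
        return PersistentList(
            std::make_shared<const Node>(Node{std::move(value), head}));
    }

    bool empty() const { return head == nullptr; }
    const T& front() const { return head->value; }
    PersistentList pop() const { return PersistentList(head->next); }
};
```

Publishing a new head to other threads then takes exactly one atomic store, or one compare_exchange loop if several writers race.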
Anthony Bailey
Use STM
Christian Myers
STM is implemented with locks under the hood, so that's cheating.
Justin Hughes
Not necessarily with optimistic STM; the objects just check whether they are committing against a modified version, and roll back if so.
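Roughly this shape, shrunk down to a single word (toy sketch, names mine; a real optimistic STM tracks a version number per object, but the retry-on-conflict structure is the same):

```cpp
#include <atomic>
#include <cassert>

// Toy optimistic "transaction" on one atomic word: snapshot the value,
// compute speculatively with no locks held, then commit only if nobody
// else committed in the meantime; otherwise roll back and retry.
template <typename T, typename F>
T optimistic_update(std::atomic<T>& shared, F transform) {
    T snapshot = shared.load(std::memory_order_acquire);
    T result;
    do {
        result = transform(snapshot);  // speculative work
        // On conflict, compare_exchange_weak fails and refreshes
        // snapshot with the other writer's committed value.
    } while (!shared.compare_exchange_weak(snapshot, result,
                                           std::memory_order_acq_rel,
                                           std::memory_order_acquire));
    return result;
}
```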
Isaiah Nguyen
brainlet here, what's atomic operations?
Gavin Carter
Operations that either completely succeed or completely fail; no half-done intermediate state is ever visible to other threads.
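Classic example (sketch, counts are arbitrary): two threads hammering a counter. fetch_add makes each increment one indivisible step; with a plain int you'd lose updates, and it would be a data race, i.e. undefined behavior.

```cpp
#include <atomic>
#include <cassert>
#include <thread>

// Two threads each perform 100000 increments. Because fetch_add is an
// atomic read-modify-write, no increment can be lost: the result is
// always exactly 200000. Replace the atomic with a plain int and ++
// and you get a data race with a smaller, nondeterministic total.
int count_with_atomics() {
    std::atomic<int> counter{0};
    auto work = [&counter] {
        for (int i = 0; i < 100000; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);  // indivisible
    };
    std::thread a(work), b(work);
    a.join();
    b.join();
    return counter.load();
}
```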
Connor Bailey
Lock-free means not blocking, retard. And yours is slower than a mutex since you're not even using relaxed atomics.
Isaiah Butler
>Lock-free means not blocking, retard.
It means not blocking via mechanisms that cause context switches, such as locks and mutexes. Busy waiting is legitimately lock-free; in fact, I have yet to see a lock-free algorithm that didn't use some kind of condition checking or busy waiting via atomic operations.
>And yours is slower than a mutex
Benchmarks said otherwise.
>since you're not even using relaxed atomics.
Have fun debugging badly ordered allocations.
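For the curious, the ordering that actually matters in that queue is a release/acquire pair on the index: fully relaxed atomics would let the consumer observe the index advance before the element write. Minimal sketch of the pattern (names mine):

```cpp
#include <atomic>
#include <cassert>

// The producer writes the payload with a plain store, then "publishes"
// it with a release store on the flag. The consumer's acquire load
// creates a happens-before edge, so once it sees ready == true it is
// guaranteed to also see data == 42. With memory_order_relaxed on both
// sides, that guarantee disappears.
int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                     // plain write
    ready.store(true, std::memory_order_release);  // publish
}

int consumer() {
    while (!ready.load(std::memory_order_acquire))  // wait for publish
        ;
    return data;  // guaranteed to observe 42
}
```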
Gavin Walker
you just change the interface to make it non-blocking:
bool push(T elem);
bool pop(T *elem);
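i.e. something like this (sketch, names mine, same single-producer/single-consumer assumption as the original queue): instead of spinning inside, try_push/try_pop report full/empty and the caller decides whether to retry, sleep, or give up.

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <utility>

// Non-blocking SPSC ring buffer: the busy-wait loops of get/put are
// replaced by a bool return value. One slot is sacrificed so that
// "full" and "empty" are distinguishable.
template <typename T, std::size_t Size>
class TryQueue {
    std::array<T, Size> buffer{};
    std::atomic<std::size_t> head{0};
    std::atomic<std::size_t> tail{0};

    static constexpr std::size_t next(std::size_t i) { return (i + 1) % Size; }

public:
    bool try_push(T element) {
        const auto current_tail = tail.load(std::memory_order_relaxed);
        if (next(current_tail) == head.load(std::memory_order_acquire))
            return false;  // full: report instead of spinning
        buffer[current_tail] = std::move(element);
        tail.store(next(current_tail), std::memory_order_release);
        return true;
    }

    bool try_pop(T* out) {
        const auto current_head = head.load(std::memory_order_relaxed);
        if (current_head == tail.load(std::memory_order_acquire))
            return false;  // empty
        *out = std::move(buffer[current_head]);
        head.store(next(current_head), std::memory_order_release);
        return true;
    }
};
```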
Why do you link papers you didn't even read? Also, the algorithm proposed (and "changing the interface", as you suggest) merely moves the retrying to the caller. It will still involve trying to retrieve the item again in case of failure. In my particular case, it was simply more convenient to move the retrying directly into the put and get functions. The caller would have just called those again.
>In computer science, an algorithm is called non-blocking if failure or suspension of any thread cannot cause failure or suspension of another thread;
By "suspension", we mean suspension by the operating system, a context switch. Continuously checking a condition does not qualify as "suspension", don't try to twist it. In the example I posted, threads are never suspended, whereas they would be with locks. Even if there is no waiting, merely acquiring and releasing the locks causes a small overhead, which becomes non-negligible when done many times.
>A non-blocking algorithm is _lock-free_ if there is guaranteed system-wide progress, and _wait-free_ if there is also guaranteed per-thread progress.
By this definition, my example fully qualifies as lock-free. It's not wait-free, sure, but it's still lock-free, which was the entire point.