Rework get_task and steal_task to better interact with out_of_work checks #1779
base: master
Conversation
Force-pushed from 0ec492d to 0e277fe
Force-pushed from 0e277fe to 13d8ce9
Force-pushed from abde43c to deda35d
@kboyarinov @isaevil @dnmokhov Please take a look.
    if ( tasks_skipped ) {
        __TBB_ASSERT( is_task_pool_published(), nullptr ); // the pool was not reset
        tail.store(T0, std::memory_order_release);
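        // restoring the original tail T0 re-publishes the skipped tasks;
        // the head is left for thieves to maintain (see the discussion below)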
Do I understand correctly that we do not need to restore the head here, since that is the stealing thread's responsibility and is done in steal_task?
That is correct.
Generally, H0 represents the state of the pool head as it was last seen by the owner; it might become outdated at any time. The core principle, therefore, is that the owner works only with the tail and does not change the head.
Indeed, if there was no conflict over the last task, the owner has no idea what the proper value for the head should be. And in the case of a conflict, the pool lock is taken and the head is re-read; then we can be sure that there are no skipped tasks beyond the head, so there is no need to change anything.
Prior to the patch, there was a store of H0 to the head, but it was done at a point where the pool was temporarily quiescent, and therefore it was safe. It "optimized" the case where the task at the head was taken while others were skipped. With the patch, the pool is not reset if tasks were skipped, as that would also mislead observers, so this optimization cannot safely be performed anymore.
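To illustrate, here is a minimal sketch of that discipline, assuming a simple bounded pool; the names H0 and T0 follow the discussion, while the structure, member names, and memory orderings are assumptions rather than the actual TBB sources:

```cpp
#include <atomic>
#include <cstddef>
#include <mutex>

struct task;

// Illustrative only: the owner pops from the tail, thieves advance the head,
// and the owner never writes the head. Thieves are assumed to take pool_lock
// as well when they contend for the last task.
struct pool_sketch {
    std::atomic<std::size_t> head{0};  // moved forward by stealing threads
    std::atomic<std::size_t> tail{0};  // moved only by the owning thread
    std::mutex pool_lock;              // taken when the last task is contended
    task* item[64] = {};

    task* owner_take() {
        std::size_t T0 = tail.load(std::memory_order_relaxed);
        if (T0 == head.load(std::memory_order_relaxed)) return nullptr;
        tail.store(T0 - 1, std::memory_order_seq_cst);          // tentatively claim item[T0-1]
        std::size_t H0 = head.load(std::memory_order_seq_cst);  // a snapshot; may go stale
        if (H0 < T0 - 1)
            return item[T0 - 1];   // at least one task still separates us from thieves
        // Possible conflict over the last task: take the lock and re-read the
        // head, which can no longer move. Note that even when the race is
        // lost, the owner restores the tail and never stores to the head.
        std::lock_guard<std::mutex> g(pool_lock);
        if (head.load(std::memory_order_relaxed) < T0)
            return item[T0 - 1];                     // the task is still ours
        tail.store(T0, std::memory_order_relaxed);   // lost the race; undo the claim
        return nullptr;
    }
};
```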
Co-authored-by: Konstantin Boyarinov <[email protected]>
Force-pushed from 1da1b25 to 796db43
The commit 796db43 is for code refactoring and is not strictly necessary. It significantly changes
Force-pushed from 5469345 to 6f0d728
isaevil left a comment
The PR looks good to me, though I think we should probably do some performance evaluation prior to merging. Do we need to review #1805 first, or can we merge this one and rebase #1805 on the latest master?
I also have one more observation. The inspecting thread checks whether the task pools are empty during the out_of_work call. If, during the thorough inspection done in the case of skipped tasks, it determines that the arena in fact still has some tasks (they will become available once the owning thread restores the tail), the thread won't leave the arena, since waiter.continue_execution(slot, t) will return true and the thread will enter the stealing loop again. But as I understand it, the tail might still not be restored at that point, meaning that waiter.pause(slot) will be invoked again. However, the backoff of the waiter object has already reached its threshold, so out_of_work will be called instantly. Shouldn't the backoff be reset once the thread returns, or is that not a big deal?
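To make the scenario concrete, here is a stand-in sketch of the backoff behavior in question; waiter, pause, and out_of_work are named after the entities mentioned above, and the real TBB waiter is considerably more involved:

```cpp
#include <functional>

// A stand-in sketch, not the actual TBB waiter: pause() backs off, and once
// the threshold has been reached it runs the arena inspection on every call.
struct waiter_sketch {
    int backoff = 0;
    static constexpr int threshold = 16;

    void reset() { backoff = 0; }

    void pause(const std::function<void()>& out_of_work) {
        if (++backoff >= threshold)
            out_of_work();  // a saturated backoff triggers the inspection instantly
        // else: spin or yield briefly before retrying the stealing loop
    }
};

// The observation: if out_of_work finds skipped tasks and the thread returns
// to the stealing loop without reset() being called, the very next pause()
// invokes out_of_work again, since backoff is already at the threshold.
```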
I tend to think that it's not a big deal and will in practice happen rather rarely, as there is seemingly much more for that inspecting thread to do in
Unless this patch adds much overhead, which I do not think it does, merging it first seems better to me.
Agreed.
This PR "revives" the improvement originated in #417, but takes a different approach to the problem.
To remind: when a thread looks for a task to take from a task pool, it might skip some tasks due to affinity or isolation restrictions. The task pool still contains pointers to those tasks, but the observable limits of the pool (the head, modified by thieves, and the tail, modified by the owning thread) might temporarily exclude the skipped tasks. Because of that, another thread that inspects the arena for work availability might find the task pool "empty" and potentially mark the whole arena empty, causing worker threads to leave prematurely. The current implementation mitigates that by issuing a "work advertisement" signal when the skipped tasks are "returned" to the observable pool.
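As an illustration of the effect (assumed names and structure, not the actual TBB get_task): while the owner scans past unsuitable tasks, the stored tail excludes them, so a plain head/tail comparison made at that moment reports an empty pool.

```cpp
#include <atomic>
#include <cstddef>

struct task;
bool suitable(task*, int isolation);  // isolation/affinity check; assumed helper

// The owner scans down from the tail looking for a suitable task; while the
// scan is in progress, the stored tail temporarily excludes the skipped tasks.
task* get_task_sketch(std::atomic<std::size_t>& head,
                      std::atomic<std::size_t>& tail,
                      task* item[], int isolation) {
    std::size_t T0 = tail.load(std::memory_order_relaxed);
    std::size_t T = T0;
    while (T > head.load(std::memory_order_relaxed)) {
        tail.store(T - 1, std::memory_order_seq_cst);  // skipped tasks sit above the tail now
        if (suitable(item[T - 1], isolation)) {
            task* t = item[T - 1];
            item[T - 1] = nullptr;  // a real pool must later skip this emptied slot
            if (T != T0)
                tail.store(T0, std::memory_order_release);  // re-publish the skipped tasks
            return t;
        }
        --T;
    }
    if (T != T0)
        tail.store(T0, std::memory_order_release);  // nothing taken; restore the tail
    return nullptr;
}

// Meanwhile, an inspecting thread that only compares the limits can be misled
// while the scan above is in flight:
bool slot_looks_empty(const std::atomic<std::size_t>& head,
                      const std::atomic<std::size_t>& tail) {
    return head.load(std::memory_order_acquire) >= tail.load(std::memory_order_acquire);
}
```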
PR #417 tried to improve the implementation by adding "shadow" head and tail indexes for slot inspection, which are not changed until an operation on the task pool is complete and so should never exclude the skipped tasks. In my opinion, however, it puts the burden on the wrong side and complicates the arbitration protocol between the pool owner and thieves. As implemented, it also does not achieve the goal: in the case of pool "exhaustion", the shadow limits would be temporarily reset, similar to the real limits.
This PR takes a different approach and puts more of the burden on the inspecting thread, which has no tasks to execute anyway. If that thread suspects the task pool to be empty after comparing its head and tail, it locks the pool and re-reads its state. The locking prevents any temporary modifications by stealing threads. To coordinate the inspection with changes made by the owning thread, a new flag is added to arena_slot. The flag is set by the owning thread in get_task if it skips one or more tasks, and is reset once the pool limits are restored. The flag is read and tested by the inspecting thread, and the slot is only considered empty when both the pool limits show no tasks and the skipping flag is not set.
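As a rough illustration of that protocol (member names such as tasks_skipped and the locking details are assumptions; the actual arena_slot differs):

```cpp
#include <atomic>
#include <cstddef>
#include <mutex>

// Illustrative sketch of the inspection protocol described above.
struct arena_slot_sketch {
    std::atomic<std::size_t> head{0};
    std::atomic<std::size_t> tail{0};
    std::atomic<bool> tasks_skipped{false};  // set and reset by the owner in get_task
    std::mutex pool_lock;

    // Called by the inspecting thread during out_of_work.
    bool definitely_empty() {
        if (head.load(std::memory_order_acquire) < tail.load(std::memory_order_acquire))
            return false;  // visibly non-empty; no locking needed
        // The pool looks empty: lock out thieves' temporary changes and re-read.
        std::lock_guard<std::mutex> g(pool_lock);
        return head.load(std::memory_order_relaxed) >= tail.load(std::memory_order_relaxed)
            && !tasks_skipped.load(std::memory_order_relaxed);
    }
};
```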
Tests
Documentation
Breaks backward compatibility?