Commit 3c97ea9 (parent 42b833f): Added numa_support rfc

File changed: rfcs/proposed/simplified_numa_support (+179, -0)

# Simplified NUMA support in oneTBB

## Introduction

In Non-Uniform Memory Access (NUMA) systems, the cost of memory accesses depends on the
*nearness* of the processor to the memory resource on which the accessed data resides.
While oneTBB has core support that enables developers to tune for Non-Uniform Memory
Access (NUMA) systems, we believe this support can be simplified and improved to provide
a better user experience.

This early proposal recommends addressing four areas for improvement:

1. improved reliability of HWLOC-dependent topology detection and pinning support,
2. addition of NUMA-aware allocation,
3. simplified approaches to associate task distribution with data placement, and
4. where possible, improved out-of-the-box performance for high-level oneTBB features.

We expect that this draft proposal may be broken into smaller proposals based on feedback
and prioritization of the suggested features.

The features for NUMA tuning already available in the oneTBB 1.3 specification include:

- Functions in the `tbb::info` namespace **[info_namespace]**
  - `std::vector<numa_node_id> numa_nodes()`
  - `int default_concurrency(numa_node_id id = oneapi::tbb::task_arena::automatic)`
- `tbb::task_arena::constraints` in **[scheduler.task_arena]**

Below is an example that demonstrates the use of these APIs to create a thread-pinned
arena for each of the NUMA nodes available on a system, submit work across those `task_arena`
objects and into associated `task_group` objects, and then wait for the work using both
the `task_arena` and `task_group` objects.

#include "oneapi/tbb/task_group.h"
34+
#include "oneapi/tbb/task_arena.h"
35+
36+
#include <vector>
37+
38+
int main() {
39+
std::vector<oneapi::tbb::numa_node_id> numa_nodes = oneapi::tbb::info::numa_nodes();
40+
std::vector<oneapi::tbb::task_arena> arenas(numa_nodes.size());
41+
std::vector<oneapi::tbb::task_group> task_groups(numa_nodes.size());
42+
43+
// Initialize the arenas and place memory
44+
for (int i = 0; i < numa_nodes.size(); i++) {
45+
arenas[i].initialize(oneapi::tbb::task_arena::constraints(numa_nodes[i]));
46+
arenas[i].execute([i] {
47+
// allocate/place memory on NUMA node i
48+
});
49+
}
50+
51+
for (int j 0; j < NUM_STEPS; ++i) {
52+
53+
// Distribute work across the arenas / NUMA nodes
54+
for (int i = 0; i < numa_nodes.size(); i++) {
55+
arenas[i].execute([&task_groups, i] {
56+
task_groups[i].run([] {
57+
/* executed by the thread pinned to specified NUMA node */
58+
});
59+
});
60+
}
61+
62+
// Wait for the work in each arena / NUMA node to complete
63+
for (int i = 0; i < numa_nodes.size(); i++) {
64+
arenas[i].execute([&task_groups, i] {
65+
task_groups[i].wait();
66+
});
67+
}
68+
}
69+
70+
return 0;
71+
}
72+
### The need for application-specific knowledge

In general, when tuning a parallel application for NUMA systems, the goal is to expose sufficient
parallelism while minimizing (or at least controlling) data access and communication costs. The
tradeoffs involved in this tuning often rely on application-specific knowledge.

In particular, NUMA tuning typically involves:

1. Understanding the overall application problem and its use of algorithms and data containers
2. Placement of data container objects onto memory resources
3. Distribution of tasks to hardware resources that optimize for data placement

As shown in the previous example, the oneTBB 1.3 specification only provides low-level
support for NUMA optimization. The `tbb::info` namespace provides topology discovery, and the
combination of `task_arena`, `task_arena::constraints`, and `task_group` provides a mechanism for
placing tasks onto specific processors. There is no high-level support for memory allocation
or placement, or for guiding the task distribution of algorithms.

### Issues that should be resolved in the oneTBB library

**The behavior of existing features is not always predictable.** There is a note in
section **[info_namespace]** of the oneTBB specification that describes
the function `std::vector<numa_node_id> numa_nodes()`: "If error occurs during system topology
parsing, returns vector containing single element that equals to `task_arena::automatic`."

In practice, the error often occurs because HWLOC is not detected on the system. While the
oneTBB documentation states in several places that HWLOC is required for NUMA support and
even provides guidance on
[how to check for HWLOC](https://www.intel.com/content/www/us/en/docs/onetbb/get-started-guide/2021-12/next-steps.html),
a failure to resolve HWLOC at runtime silently returns a default of `task_arena::automatic`. This
default does not pin threads to NUMA nodes. It is too easy to write code similar to the preceding
example and be unaware that a HWLOC installation error (or the lack of HWLOC) has undone all of the
tuning effort.

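Until the library provides a stronger guarantee, applications can at least detect the fallback.
The sketch below relies only on the documented behavior quoted above; the warning message and
handling policy are illustrative, not part of any proposed API.

```cpp
#include "oneapi/tbb/info.h"
#include "oneapi/tbb/task_arena.h"

#include <cstdio>
#include <vector>

// Returns the detected NUMA nodes, warning when the result is the single
// automatic element that signals failed topology detection (for example,
// because HWLOC could not be loaded).
std::vector<oneapi::tbb::numa_node_id> checked_numa_nodes() {
    std::vector<oneapi::tbb::numa_node_id> nodes = oneapi::tbb::info::numa_nodes();
    if (nodes.size() == 1 && nodes[0] == oneapi::tbb::task_arena::automatic) {
        std::fprintf(stderr, "warning: NUMA topology was not detected; "
                             "threads will not be pinned to NUMA nodes\n");
    }
    return nodes;
}
```
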
**Getting good performance using these tools requires notable manual coding effort by users.** As we
can see in the preceding example, if we want to spread work across the NUMA nodes in
a system, we need to query the topology using functions in the `tbb::info` namespace, create
one `task_arena` per NUMA node, along with one `task_group` per NUMA node, and then add an
extra loop that iterates over these `task_arena` and `task_group` objects to execute the
work on the desired NUMA nodes. We also need to handle all container allocations using OS-specific
APIs (or behaviors, such as first-touch) to allocate or place them on the appropriate NUMA nodes.

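To make the placement half of that effort concrete, the sketch below continues the earlier
example using the first-touch behavior mentioned above: each arena's pinned thread writes the
pages it will later use, so an OS with a first-touch policy (such as Linux) maps them to that
node. The partition layout and size are illustrative assumptions, not a recommended scheme.

```cpp
// Continuing the earlier example: one data partition per NUMA node.
const std::size_t N_PER_NODE = 1 << 20; // illustrative partition size
std::vector<std::vector<double>> partitions(numa_nodes.size());
for (int i = 0; i < numa_nodes.size(); i++) {
    arenas[i].execute([&partitions, i] {
        // Pages are allocated and first touched by a thread pinned to
        // node i, so a first-touch policy places them on that node.
        partitions[i].assign(N_PER_NODE, 0.0);
    });
}
```
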
**The out-of-the-box performance of the generic TBB APIs on NUMA systems is not good enough.**
Should the oneTBB library do anything special by default if the system is a NUMA system? Or should
regular random stealing distribute the work across all of the cores, regardless of which NUMA node
first touched the data?

Is it reasonable for a developer to expect that a series of loops, such as the ones that follow, will
try to create a NUMA-friendly distribution of tasks so that accesses to the same elements of `b` and `c`
in the two loops are from the same NUMA nodes? Or is this too much to expect without providing hints?

```cpp
tbb::parallel_for(0, N,
  [](int i) {
    b[i] = f(i);
    c[i] = g(i);
  });

tbb::parallel_for(0, N,
  [](int i) {
    a[i] = b[i] + c[i];
  });
```

## Proposal

### Increased availability of NUMA support

The oneTBB 1.3 specification states for `tbb::info::numa_nodes`, "If error occurs during system
topology parsing, returns vector containing single element that equals to `task_arena::automatic`."

Since the oneTBB library dynamically loads the HWLOC library, a misconfiguration can cause HWLOC
to fail to be found. In that case, a call like:

```cpp
std::vector<oneapi::tbb::numa_node_id> numa_nodes = oneapi::tbb::info::numa_nodes();
```

will return a vector with a single element of `task_arena::automatic`. This behavior, as we have noticed
through user questions, can lead to unexpected performance from NUMA optimizations. When running
on a NUMA system, a developer who has not fully read the documentation may expect that `numa_nodes()`
will give a proper accounting of the NUMA nodes. When the call, without raising any alarm, returns only
a single, valid element due to the environment's configuration (such as the lack of HWLOC), it is too easy
for developers not to notice that the code is acting in a valid, but unexpected, way.

We propose that the oneTBB library implementation include, wherever possible, a statically-linked fallback
to decrease the likelihood of such failures. The oneTBB specification will remain unchanged.

### NUMA-aware allocation

We will define allocators or other features that simplify the process of allocating or placing data onto
specific NUMA nodes.

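As one strawman for discussion, such an allocator could follow the standard C++ allocator
interface and bind to a node id at construction, as sketched below. This class is hypothetical,
not an existing oneTBB API, and its plain `operator new` calls are placeholders for real
placement through HWLOC or OS-specific facilities.

```cpp
#include "oneapi/tbb/info.h" // for oneapi::tbb::numa_node_id

#include <cstddef>
#include <new>

// Hypothetical sketch of a NUMA-node-bound C++ allocator. Not an existing
// oneTBB API; the allocation calls are placeholders for HWLOC/OS placement.
template <typename T>
class numa_node_allocator {
public:
    using value_type = T;

    explicit numa_node_allocator(oneapi::tbb::numa_node_id node) : my_node(node) {}

    template <typename U>
    numa_node_allocator(const numa_node_allocator<U>& other) : my_node(other.node()) {}

    T* allocate(std::size_t n) {
        // Placeholder: a real implementation would bind this memory to my_node.
        return static_cast<T*>(::operator new(n * sizeof(T)));
    }

    void deallocate(T* p, std::size_t) { ::operator delete(p); }

    oneapi::tbb::numa_node_id node() const { return my_node; }

private:
    oneapi::tbb::numa_node_id my_node;
};

template <typename T, typename U>
bool operator==(const numa_node_allocator<T>& a, const numa_node_allocator<U>& b) {
    return a.node() == b.node();
}

template <typename T, typename U>
bool operator!=(const numa_node_allocator<T>& a, const numa_node_allocator<U>& b) {
    return !(a == b);
}

// Possible usage, pairing data with the arenas from the earlier example:
//   std::vector<double, numa_node_allocator<double>>
//       v{numa_node_allocator<double>(numa_nodes[i])};
```
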
### Simplified approaches to associate task distribution with data placement

As discussed earlier, NUMA-aware allocation is just the first step in optimizing for NUMA architectures.
We also need to deliver mechanisms to guide task distribution so that tasks are executed on execution
resources that are near the data they access. oneTBB already provides low-level support through
`tbb::info` and `tbb::task_arena`, but we should up-level this support into the high-level algorithms,
flow graph, and containers where appropriate. One possible direction is sketched below.

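The helper below is a hypothetical illustration, not a proposed API: it statically splits an
iteration space across the NUMA nodes and runs each chunk in an arena constrained to its node,
wrapping the boilerplate from the earlier example. Its name, signature, and even-chunking policy
are assumptions made for this sketch; a library version would also need to coordinate with
NUMA-aware containers so that the chunking matches data placement.

```cpp
#include "oneapi/tbb/info.h"
#include "oneapi/tbb/parallel_for.h"
#include "oneapi/tbb/task_arena.h"
#include "oneapi/tbb/task_group.h"

#include <algorithm>
#include <vector>

// Hypothetical sketch: run body(i) for i in [0, n), with the range split
// evenly across NUMA nodes. Body must be callable as void(int).
template <typename Body>
void numa_parallel_for(int n, const Body& body) {
    std::vector<oneapi::tbb::numa_node_id> nodes = oneapi::tbb::info::numa_nodes();
    std::vector<oneapi::tbb::task_arena> arenas(nodes.size());
    std::vector<oneapi::tbb::task_group> groups(nodes.size());
    int num_nodes = static_cast<int>(nodes.size());
    int chunk = (n + num_nodes - 1) / num_nodes;

    // Submit one contiguous chunk per NUMA-constrained arena.
    for (int i = 0; i < num_nodes; i++) {
        arenas[i].initialize(oneapi::tbb::task_arena::constraints(nodes[i]));
        int begin = std::min(n, i * chunk);
        int end = std::min(n, begin + chunk);
        arenas[i].execute([&groups, &body, i, begin, end] {
            groups[i].run([&body, begin, end] {
                oneapi::tbb::parallel_for(begin, end, body);
            });
        });
    }

    // Wait for every chunk to complete.
    for (int i = 0; i < num_nodes; i++) {
        arenas[i].execute([&groups, i] { groups[i].wait(); });
    }
}
```

With a deterministic index-to-node mapping like this one, the two loops from the earlier
out-of-the-box question would touch and then reuse `b[i]` and `c[i]` from the same node.
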
### Improved out-of-the-box performance for high-level oneTBB features

For high-level oneTBB features that are modified to provide improved NUMA support, we should try to
align default behaviors for those features with user expectations when used on NUMA systems.

## Open Questions

1. Do we need simplified support, or are users who want NUMA support in oneTBB
willing to, or do they perhaps even prefer to, manage the details manually?
2. Is it reasonable to expect good out-of-the-box performance on NUMA systems
without user hints or guidance?