In the original wq implementation, a multi threaded (MT) wq had one worker thread per CPU and a single threaded (ST) wq had one worker thread system-wide. A single MT wq needed to keep around the same number of workers as the number of CPUs. The kernel grew a lot of MT wq users over the years and, with the number of CPU cores continuously rising, some systems saturated the default 32k PID space just booting up.
Although MT wq wasted a lot of resource, the level of concurrency provided was unsatisfactory. Each wq maintained its own separate worker pool: an MT wq could provide only one execution context per CPU while an ST wq one for the whole system, and work items had to compete for those very limited execution contexts.
The tension between the provided level of concurrency and resource usage forced users into unnecessary tradeoffs, like libata choosing ST wq for polling PIOs and accepting the limitation that no two polling PIOs can progress at the same time. As MT wq didn't provide much better concurrency, users which required a higher level of concurrency, like async or fscache, had to implement their own thread pool.
Concurrency Managed Workqueue (cmwq) is a reimplementation of wq with focus on the following goals:

* Maintain compatibility with the original workqueue API.

* Use per-CPU unified worker pools shared by all wq to provide a flexible level of concurrency on demand without wasting a lot of resource.

* Automatically regulate worker pool and level of concurrency so that the API users don't need to worry about such details.
In order to ease the asynchronous execution of functions a new abstraction, the work item, is introduced. A work item is a simple struct that holds a pointer to the function that is to be executed asynchronously. Whenever a driver or subsystem wants a function to be executed asynchronously, it sets up a work item pointing to that function and queues that work item on a workqueue.
Special purpose threads, called worker threads, execute the functions off of the queue, one after the other. If no work is queued, the worker threads become idle. These worker threads are managed in so-called worker-pools.
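As a rough sketch (the function and work item names here are illustrative, not part of the kernel API), a driver would set up and queue a work item like this::

  #include <linux/workqueue.h>

  /* runs asynchronously in a worker thread */
  static void my_work_fn(struct work_struct *work)
  {
          pr_info("hello from a worker\n");
  }

  static DECLARE_WORK(my_work, my_work_fn);

  static void kick_async_work(void)
  {
          /* queue on the system wq; an idle worker will pick it up */
          schedule_work(&my_work);
  }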
There are two worker-pools, one for normal work items and the other for high priority ones, for each possible CPU, and some extra worker-pools to serve work items queued on unbound workqueues - the number of these backing pools is dynamic.
Subsystems and drivers can influence some aspects of the way the work items are executed by setting flags on the workqueue they are putting the work item on. These flags include things like CPU locality, concurrency limits, priority and more. To get a detailed overview refer to the API description of alloc_workqueue() below.
When a work item is queued to a workqueue, the target worker-pool is determined according to the queue parameters and workqueue attributes and the work item is appended on the shared worklist of the worker-pool. For example, unless specifically overridden, a work item of a bound workqueue will be queued on the worklist of either the normal or the highpri worker-pool that is associated to the CPU the issuer is running on.
Each worker-pool bound to an actual CPU implements concurrency management by hooking into the scheduler. The worker-pool is notified whenever an active worker wakes up or sleeps and keeps track of the number of the currently runnable workers. Generally, work items are not expected to hog a CPU and consume many cycles, so maintaining just enough concurrency to prevent work processing from stalling is optimal. As long as there are one or more runnable workers on the CPU, the worker-pool doesn't start execution of a new work item, but, when the last running worker goes to sleep, it immediately schedules a new worker so that the CPU doesn't sit idle while there are pending work items. This allows using a minimal number of workers without losing execution bandwidth.
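A much simplified sketch of that scheduler hook (the function and field names below are illustrative pseudocode, not the actual kernel/workqueue.c implementation)::

  /* called by the scheduler when a busy worker is about to sleep */
  void worker_sleeping(struct worker_pool *pool)
  {
          pool->nr_running--;

          /*
           * Last runnable worker going to sleep: wake an idle worker
           * so pending work items keep the CPU busy.
           */
          if (!pool->nr_running && !list_empty(&pool->worklist))
                  wake_up_worker(pool);
  }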
For unbound workqueues, the number of backing pools is dynamic. An unbound workqueue can be assigned custom attributes using apply_workqueue_attrs() and the workqueue will automatically create backing worker pools matching the attributes. The responsibility of regulating the concurrency level is on the users.
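A sketch of the attribute API (error handling trimmed; the wq name and nice value are arbitrary, and exact signatures vary across kernel versions)::

  struct workqueue_attrs *attrs;
  struct workqueue_struct *wq;

  wq = alloc_workqueue("my_unbound", WQ_UNBOUND, 0);

  attrs = alloc_workqueue_attrs();
  attrs->nice = -10;                  /* run workers at elevated priority */
  apply_workqueue_attrs(wq, attrs);   /* creates backing pools matching attrs */
  free_workqueue_attrs(attrs);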
Forward progress guarantee relies on workers being created when more execution contexts are necessary, which in turn is guaranteed through the use of rescue workers. All work items which might be used on code paths that handle memory reclaim are required to be queued on wq's that have a rescue worker reserved for execution under memory pressure; otherwise the worker-pool can deadlock waiting for execution contexts to free up.
alloc_workqueue() allocates a wq. It takes three arguments - @name, @flags and @max_active. @name is the name of the wq and is also used as the name of the rescuer thread if there is one.
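For instance, a wq serving an I/O path that must make progress during memory reclaim could be allocated as follows (the name is illustrative)::

  struct workqueue_struct *io_wq;

  io_wq = alloc_workqueue("my_io", WQ_MEM_RECLAIM, 0);
  if (!io_wq)
          return -ENOMEM;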
@flags include the following:

``WQ_UNBOUND``
  Work items queued to an unbound wq are served by worker-pools whose workers are not bound to any specific CPU. The wq behaves as a simple execution context provider without concurrency management; unbound worker-pools try to start execution of work items as soon as possible. Unbound wq sacrifices locality but is useful when:

  * wide fluctuation in the concurrency level is expected, and a bound wq may end up creating a large number of mostly unused workers across different CPUs as the issuer hops between CPUs;

  * long running, CPU intensive workloads can be better managed by the system scheduler.
``WQ_FREEZABLE``
  A freezable wq participates in the freeze phase of the system suspend operations. Work items on the wq are drained and no new work item starts execution until thawed.
``WQ_MEM_RECLAIM``
  All wq which might be used in the memory reclaim paths **MUST** have this flag set. The wq is guaranteed to have at least one execution context regardless of memory pressure.
``WQ_HIGHPRI``
  Work items of a highpri wq are queued to the highpri worker-pool of the target cpu. Highpri worker-pools are served by worker threads with elevated nice level.

  Note that normal and highpri worker-pools don't interact with each other. Each maintains its separate pool of workers and implements concurrency management among its workers.
``WQ_CPU_INTENSIVE``
  Work items of a CPU intensive wq do not contribute to the concurrency level. In other words, runnable CPU intensive work items will not prevent other work items in the same worker-pool from starting execution. This is useful for bound work items which are expected to hog CPU cycles, so that their execution is regulated by the system scheduler.

  Although CPU intensive work items don't contribute to the concurrency level, start of their executions is still regulated by the concurrency management, and runnable non-CPU-intensive work items can delay execution of CPU intensive work items.

  This flag is meaningless for unbound wq.
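Flags can be combined. For example, a sketch of a wq for latency-sensitive work that must also make progress under memory pressure (the name is illustrative)::

  wq = alloc_workqueue("my_fast", WQ_HIGHPRI | WQ_MEM_RECLAIM, 0);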
@max_active determines the maximum number of execution contexts per CPU which can be assigned to the work items of a wq. For example, with @max_active of 16, at most 16 work items of the wq can be executing at the same time per CPU.

Currently, for a bound wq, the maximum limit for @max_active is 512 and the default value used when 0 is specified is 256. For an unbound wq, the limit is the higher of 512 and 4 * num_possible_cpus(). These values are chosen sufficiently high such that they are not the limiting factor while providing protection in runaway cases.

The number of active work items of a wq is usually regulated by the users of the wq, more specifically, by how many work items the users may queue at the same time. Unless there is a specific need for throttling the number of active work items, specifying '0' is recommended.
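For example (names illustrative)::

  /* 0 selects the default @max_active */
  wq = alloc_workqueue("my_wq", 0, 0);

  /* throttle to a single in-flight work item per CPU */
  wq = alloc_workqueue("my_throttled", 0, 1);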
Some users depend on the strict execution ordering of ST wq. The combination of @max_active of 1 and WQ_UNBOUND is used to achieve this behavior: work items on such a wq are always queued to the unbound worker-pools and only one work item can be active at any given time, thus providing the same ordering property as ST wq.
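The alloc_ordered_workqueue() helper expresses this directly; a sketch (the name is illustrative)::

  /* at most one work item executes at any time, in queueing order */
  wq = alloc_ordered_workqueue("my_ordered", 0);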
Whatever the configured concurrency level, the resulting behavior stays within the bounds of possible sequences of events with the original wq.
* A wq serves as a domain for forward progress guarantee (``WQ_MEM_RECLAIM``), flush and work item attributes. Work items which are not involved in memory reclaim, don't need to be flushed as a part of a group of work items, and don't require any special attribute can use one of the system wq (see the sketch after this list). There is no difference in execution characteristics between using a dedicated wq and a system wq.
* Unless work items are expected to consume a huge amount of CPU cycles, using a bound wq is usually beneficial due to the increased level of locality in wq operations and work item execution.
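A minimal sketch of the system-wq case mentioned above (the work item is the illustrative one from earlier)::

  /* no reclaim, flush-group or attribute requirements: reuse a system wq */
  queue_work(system_wq, &my_work);    /* equivalent to schedule_work(&my_work) */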
Work functions are executed by generic kworker threads, so a few tricks are needed to shed light on misbehaving workqueue users. If kworkers are going crazy (using too much cpu), there are two types of possible problems:

1. Something being scheduled in rapid succession
2. A single work item that consumes lots of cpu cycles
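The first type can be tracked using tracing; the commands below follow the upstream workqueue document (the tracefs mount point may differ on your system)::

  $ echo workqueue:workqueue_queue_work > /sys/kernel/tracing/set_event
  $ cat /sys/kernel/tracing/trace_pipe > out.txt
  (wait a few secs)

If something is busy looping on work queueing, it would be dominating the output and the offender can be determined from the work item function.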
For the second type of problem it should be possible to just check the stack trace of the offending worker thread.
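For example::

  $ cat /proc/THE_OFFENDING_KWORKER/stack

The work item's function should be trivially visible in the stack trace.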