RSS (Receive Side Scaling) distributes receive processing across the queues of a
multi-queue NIC by applying a filter to each packet that assigns it to one of a
small number of logical flows; packets for each flow are then steered to a
separate receive queue, which can be processed by a separate CPU. Multi-queue
distribution can also be used for traffic prioritization, but that is not the
focus of these techniques.

A suggested RSS configuration is to allocate one receive queue for each memory
domain, where a memory domain is a set of CPUs that share a particular memory
level (L1, L2, NUMA node, and so on).

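As a hedged illustration (eth0 and the queue count of four are placeholders,
and option support varies by NIC driver), the queue layout and the RSS
indirection table can usually be inspected and adjusted with ethtool:

  ethtool -l eth0              # show how many queues ("channels") the NIC exposes
  ethtool -L eth0 combined 4   # request four combined queues, e.g. one per memory domain
  ethtool -x eth0              # dump the RSS hash key and indirection table
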
Each receive queue is signaled by its own IRQ; on PCIe devices these are
message signaled interrupts (MSI-X), which can route each interrupt to a
particular CPU. The active mapping of queues to IRQs can be determined from
/proc/interrupts. Note that some systems will be running irqbalance, a daemon
that dynamically optimizes IRQ assignments and as a result may override any
manual settings.

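As a sketch (the IRQ number 30 and the mask are illustrative; take the real
numbers from /proc/interrupts), an interrupt can be pinned to a CPU through
procfs, stopping irqbalance first so it does not rewrite the mask:

  grep eth0 /proc/interrupts          # find the IRQ number of each receive queue
  systemctl stop irqbalance           # if present, keep it from overriding manual affinity
  echo 2 > /proc/irq/30/smp_affinity  # hex bitmap: 0x2 pins IRQ 30 to CPU 1
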
Per-CPU load can be observed using the mpstat utility, but note that on
processors with hyperthreading (HT), each hyperthread is represented as a
separate CPU.

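For example, to print per-CPU utilization once per second:

  mpstat -P ALL 1
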
Whereas RSS selects the queue, and hence the CPU that will run the hardware
interrupt handler, RPS (Receive Packet Steering) selects the CPU that performs
protocol processing above the interrupt handler. RPS is logically a software
implementation of RSS: a flow hash computed in the stack plays the role of the
hardware filter that selects the queue that should process a packet.

An index into each queue's list of target CPUs is computed from the flow hash,
and the packet is queued to the tail of that CPU's backlog queue. At the end
of the bottom-half routine, IPIs are sent to any CPUs that had packets
enqueued onto their backlog, triggering processing of the backlog on those
remote CPUs.

For a single-queue device, a typical RPS configuration sets rps_cpus to the
CPUs in the same memory domain as the interrupting CPU; at high interrupt
rates it might be wise to exclude the interrupting CPU from the map, since
that CPU already performs much work. For a multi-queue system, if RSS is
configured so that a hardware receive queue is mapped to each CPU, then RPS is
redundant and unnecessary. If there are fewer hardware queues than CPUs, RPS
might be beneficial if the rps_cpus for each queue are the ones that share the
same memory domain as the interrupting CPU for that queue.

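As a hedged example (eth0 and the mask f, covering CPUs 0-3, are placeholders),
RPS is enabled per receive queue by writing a hex CPU bitmap to the queue's
rps_cpus file:

  # let CPUs 0-3 do protocol processing for packets arriving on rx-0 of eth0
  echo f > /sys/class/net/eth0/queues/rx-0/rps_cpus
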
Flow Limit is an optional RPS feature that prioritizes small flows during CPU
contention by dropping packets from large flows slightly ahead of packets from
small flows.

Per-flow rates are estimated by hashing each packet into a bucket of a hash
table and incrementing a per-bucket counter. The hash function is the same one
that selects a CPU in RPS, but as the number of buckets can be much larger
than the number of CPUs, flow limit identifies large flows at a finer
granularity and with fewer false positives.

Flow limit is useful mainly on servers facing many concurrent connections. In
such environments, enable the feature on all CPUs that handle network receive
interrupts (as set in /proc/irq/N/smp_affinity).

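A minimal sketch, assuming CPUs 0-3 handle receive interrupts (the mask and
table size are placeholders):

  # optional: size the per-CPU flow counting tables (consulted on next allocation)
  echo 8192 > /proc/sys/net/core/flow_limit_table_len
  # enable flow limit on CPUs 0-3 (requires CONFIG_NET_FLOW_LIMIT)
  echo f > /proc/sys/net/core/flow_limit_cpu_bitmap
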
RFS (Receive Flow Steering) relies on the same RPS mechanisms to enqueue
packets onto the backlog of another CPU and to wake up that CPU, but instead
of steering purely by hash it steers packets to the CPU where the application
thread consuming the flow is running.

In RFS, the flow hash is used as an index into flow lookup tables that map
flows to CPUs. If an entry does not hold a valid CPU, then packets mapped to
that entry are steered using plain RPS. Multiple table entries may point to
the same CPU; indeed, with many flows and few CPUs, it is very likely that a
single application thread handles flows with many different flow hashes.
rps_sock_flow_table is a global flow table that contains the *desired* CPU for
each flow: the CPU that is currently processing the flow in userspace. Each
table value is a CPU index that is updated during calls to recvmsg and
sendmsg.

To avoid out-of-order packets, RFS keeps a second, per-queue table,
rps_dev_flow_table, that records the *current* CPU for each flow. Each backlog
queue has a head counter that is incremented on dequeue, and a tail counter
computed as head counter plus queue length; the counter in rps_dev_flow[i]
records the last element in flow i that has been enqueued onto the currently
designated CPU. When selecting the CPU for a packet, the desired CPU (from the
rps_sock_flow table) and the rps_dev_flow table of the queue that the packet
was received on are consulted. If the two agree (or the current CPU is unset
in the device flow table), the packet is enqueued onto that CPU's backlog. If
they differ, the current CPU is updated to the desired CPU only when the old
CPU's backlog head counter has passed the recorded tail counter, the current
CPU is unset, or the current CPU is offline; the packet is then sent to the
resulting CPU. These rules aim to ensure that a flow only moves to a new CPU
when no packets for the flow are still outstanding on the old CPU, since
outstanding packets could otherwise be processed after packets delivered to
the new CPU.

The suggested flow count depends on the expected number of simultaneously
active connections, which may be significantly less than the number of open
connections. We have found that a value of 32768 for rps_sock_flow_entries
works fairly well on a moderately loaded server.

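As a hedged sketch (eth0 is a placeholder; with N receive queues, rps_flow_cnt
per queue is commonly set to rps_sock_flow_entries divided by N), RFS is sized
through the global socket flow table and a per-queue flow count:

  # size the global table of desired CPUs
  echo 32768 > /proc/sys/net/core/rps_sock_flow_entries
  # size the per-queue device flow table; for a single-queue device use the same value
  echo 32768 > /sys/class/net/eth0/queues/rx-0/rps_flow_cnt
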
Accelerated RFS is to RFS what RSS is to RPS: a hardware-accelerated load
balancing mechanism that uses soft state to steer flows based on where the
application thread consuming each flow's packets is running.

Accelerated RFS is only available if the kernel is compiled with
CONFIG_RFS_ACCEL and the NIC device and driver support it. It also requires
that ntuple filtering is enabled via ethtool. The map of CPUs to queues is
deduced automatically from the IRQ affinities configured by the driver for
each receive queue, so no additional configuration should be necessary.

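For example (eth0 is again a placeholder):

  ethtool -K eth0 ntuple on      # enable ntuple filters, a prerequisite for accelerated RFS
  ethtool -k eth0 | grep ntuple  # verify that ntuple-filters is reported as "on"
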
With XPS (Transmit Packet Steering), contention on the device's transmit queue
locks and the cache miss rate on transmit completions are reduced, in
particular for data cache lines that hold the sk_buff structures.

XPS is configured per transmit queue by setting a bitmap of CPUs that may use
that queue to transmit. The reverse mapping, from CPUs to transmit queues, is
computed and maintained for each network device.

To pick a queue for the first packet in a flow, the ID of the running CPU is
looked up in this reverse map. If the ID matches a single queue, that queue is
used for transmission. If multiple queues match, one is selected by using the
flow hash to compute an index into the matching set.

The chosen queue is recorded in the flow's socket structure and reused for
subsequent packets to avoid reordering; it can change only after skb->ooo_okay
is set for a packet in the flow. This flag indicates that there are no
outstanding packets in the flow, so the transmit queue can change without the
risk of generating out-of-order packets.

XPS is only available if the CONFIG_XPS kconfig symbol is enabled, and it
remains inactive until explicitly configured. To enable XPS, the bitmap of
CPUs that may use a transmit queue is written to that queue's xps_cpus sysfs
file.

For a device with a single transmit queue, XPS configuration has no effect. In
a multi-queue system, XPS is preferably configured so that each CPU maps onto
one queue. If there are as many queues as CPUs in the system, each queue can
also map onto one CPU, resulting in exclusive pairings that experience no
contention. If there are fewer queues than CPUs, the best CPUs to share a
given queue are probably those that share the cache with the CPU that
processes transmit completions for that queue (transmit interrupts).

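As a hedged sketch for a hypothetical four-CPU, four-queue device named eth0,
exclusive pairings are created by writing one-bit masks to each queue's
xps_cpus file:

  echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus  # CPU 0 -> tx-0
  echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus  # CPU 1 -> tx-1
  echo 4 > /sys/class/net/eth0/queues/tx-2/xps_cpus  # CPU 2 -> tx-2
  echo 8 > /sys/class/net/eth0/queues/tx-3/xps_cpus  # CPU 3 -> tx-3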