
Kernel Probes (Kprobes)

1. Concepts: Kprobes, Jprobes, Return Probes

Kprobes enables you to dynamically break into any kernel routine and
collect debugging and performance information non-disruptively. You
can trap at almost any kernel code address(*), specifying a handler
routine to be invoked when the breakpoint is hit.
(*: some parts of the kernel code can not be trapped, see 1.5 Blacklist)
There are currently three types of probes: kprobes, jprobes, and
kretprobes (also called return probes). A kprobe can be inserted
on virtually any instruction in the kernel. A jprobe is inserted at
the entry to a kernel function, and provides convenient access to the
function's arguments. A return probe fires when a specified function
returns.
In the typical case, Kprobes-based instrumentation is packaged as
a kernel module. The module's init function installs ("registers")
one or more probes, and the exit function unregisters them. A
registration function such as register_kprobe() specifies where
the probe is to be inserted and what handler is to be called when
the probe is hit.
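
As a minimal sketch of such a module (the handler body and the probed
symbol "do_fork" are illustrative choices, not required by the API):

	#include <linux/kernel.h>
	#include <linux/module.h>
	#include <linux/kprobes.h>

	/* Illustrative pre_handler: just log each hit. */
	static int handler_pre(struct kprobe *p, struct pt_regs *regs)
	{
		pr_info("kprobe hit at %p\n", p->addr);
		return 0;	/* let execution continue normally */
	}

	static struct kprobe kp = {
		.symbol_name	= "do_fork",	/* assumed example symbol */
		.pre_handler	= handler_pre,
	};

	static int __init probe_init(void)
	{
		return register_kprobe(&kp);
	}

	static void __exit probe_exit(void)
	{
		unregister_kprobe(&kp);
	}

	module_init(probe_init);
	module_exit(probe_exit);
	MODULE_LICENSE("GPL");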
The next four subsections explain how the different types of
probes work and how jump optimization works. They explain certain
things that you'll need to know in order to make the best use of
Kprobes -- e.g., the difference between a pre_handler and
a post_handler, and how to use the maxactive and nmissed fields of
a kretprobe. But if you're in a hurry to start using Kprobes, you
can skip ahead to section 2.
1.1 How Does a Kprobe Work?

When a kprobe is registered, Kprobes makes a copy of the probed
instruction and replaces the first byte(s) of the probed instruction
with a breakpoint instruction (e.g., int3 on i386 and x86_64).

When a CPU hits the breakpoint instruction, a trap occurs, the CPU's
registers are saved, and control passes to Kprobes via the
notifier_call_chain mechanism. Kprobes executes the "pre_handler"
associated with the kprobe, passing the handler the addresses of the
kprobe struct and the saved registers.
Next, Kprobes single-steps its copy of the probed instruction.
(It would be simpler to single-step the actual instruction in place,
but then Kprobes would have to temporarily remove the breakpoint
instruction. This would open a small time window when another CPU
could sail right past the probepoint.)
After the instruction is single-stepped, Kprobes executes the
"post_handler," if any, that is associated with the kprobe.
Execution then continues with the instruction following the probepoint.
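
To make the pre_handler/post_handler distinction concrete, here is a
sketch of the two hooks (the printed messages are purely illustrative):

	#include <linux/kprobes.h>
	#include <linux/ptrace.h>

	/* Runs when the breakpoint is hit, before the copied instruction
	 * is single-stepped. */
	static int my_pre(struct kprobe *p, struct pt_regs *regs)
	{
		pr_info("pre_handler: addr = %p\n", p->addr);
		return 0;
	}

	/* Runs after the copied instruction has been single-stepped. */
	static void my_post(struct kprobe *p, struct pt_regs *regs,
			    unsigned long flags)
	{
		pr_info("post_handler: addr = %p, flags = 0x%lx\n",
			p->addr, flags);
	}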
1.2 How Does a Jprobe Work?

A jprobe is implemented using a kprobe that is placed on a function's
entry point. It employs a simple mirroring principle to allow
seamless access to the probed function's arguments. The jprobe
handler routine should have the same signature (arg list and return
type) as the function being probed, and must always end by calling
the Kprobes function jprobe_return().
Here's how it works. When the probe is hit, Kprobes makes a copy of
the saved registers and a generous portion of the stack (see below).
Kprobes then points the saved instruction pointer at the jprobe's
handler routine, and returns from the trap. As a result, control
passes to the handler, which is presented with the same register and
stack contents as the probed function. When it is done, the handler
calls jprobe_return(), which traps again to restore the original stack
contents and processor state and switch to the probed function.
By convention, the callee owns its arguments, so gcc may produce code
that unexpectedly modifies that portion of the stack. This is why
Kprobes saves a copy of the stack and restores it after the jprobe
handler has run.
Note that the probed function's args may be passed on the stack
or in registers. The jprobe will work in either case, so long as the
handler's prototype matches that of the probed function.
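
For illustration, suppose a kernel function with the hypothetical
prototype "long my_target(unsigned long arg0, int arg1)" is being
probed; a matching jprobe might look like this sketch:

	#include <linux/kernel.h>
	#include <linux/kprobes.h>

	/* Mirror of the (hypothetical) probed function's prototype. */
	static long jprobe_handler(unsigned long arg0, int arg1)
	{
		pr_info("my_target called: arg0=%lu arg1=%d\n", arg0, arg1);
		jprobe_return();	/* mandatory: hand control back to Kprobes */
		return 0;		/* never reached */
	}

	static struct jprobe jp = {
		.entry = jprobe_handler,
		.kp = {
			.symbol_name = "my_target",	/* hypothetical symbol */
		},
	};

	/* register_jprobe(&jp) goes in the module's init function,
	 * unregister_jprobe(&jp) in its exit function. */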
1.3 Return Probes

1.3.1 How Does a Return Probe Work?

When you call register_kretprobe(), Kprobes establishes a kprobe at
the entry to the function. When the probed function is called and this
probe is hit, Kprobes saves a copy of the return address, and replaces
the return address with the address of a "trampoline." The trampoline
is an arbitrary piece of code -- typically just a nop instruction.
At boot time, Kprobes registers a kprobe at the trampoline.
When the probed function executes its return instruction, control
passes to the trampoline and that probe is hit. Kprobes' trampoline
handler calls the user-specified return handler associated with the
kretprobe, then sets the saved instruction pointer to the saved return
address, and that's where execution resumes upon return from the trap.
While the probed function is executing, its return address is
stored in an object of type kretprobe_instance. Before calling
register_kretprobe(), the user sets the maxactive field of the
kretprobe struct to specify how many instances of the specified
function can be probed simultaneously. register_kretprobe()
pre-allocates the indicated number of kretprobe_instance objects.
For example, if the function is non-recursive and is called with a
spinlock held, maxactive = 1 should be enough. If the function is
non-recursive and can never relinquish the CPU (e.g., via a semaphore
or preemption), NR_CPUS should be enough. If maxactive <= 0, it is
set to a default value. If CONFIG_PREEMPT is enabled, the default
is max(10, 2*NR_CPUS). Otherwise, the default is NR_CPUS.
It's not a disaster if you set maxactive too low; you'll just miss
some probes. In the kretprobe struct, the nmissed field is set to
zero when the return probe is registered, and is incremented every
time the probed function is entered but there is no kretprobe_instance
object available for establishing the return probe.
1.3.2 Kretprobe entry-handler

Kretprobes also provides an optional user-specified handler which runs
on function entry. This handler is specified by setting the entry_handler
field of the kretprobe struct. Whenever the kprobe placed by kretprobe at the
function entry is hit, the user-defined entry_handler, if any, is invoked.
If the entry_handler returns 0 (success) then a corresponding return handler
is guaranteed to be called upon function return. If the entry_handler
returns a non-zero error then Kprobes leaves the return address as is, and
the kretprobe has no further effect for that particular function instance.
Multiple entry and return handler invocations are matched using the unique
kretprobe_instance object associated with them. Additionally, users can
specify per return-instance private data to be part of each
kretprobe_instance object. This is especially useful when sharing private
data between corresponding entry and return handlers. The size of each
private data object can be specified at kretprobe registration time by
setting the data_size field of the kretprobe struct. This data can be
accessed through the data field of each kretprobe_instance object.
In case the probed function is entered but there is no kretprobe_instance
object available, then in addition to incrementing the nmissed count,
the user entry_handler invocation is also skipped.
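
A sketch in the spirit of samples/kprobes/kretprobe_example.c, assuming
"do_fork" as the probed symbol and an arbitrary maxactive value, shows
how entry_handler, handler, data_size and maxactive fit together:

	#include <linux/kernel.h>
	#include <linux/kprobes.h>
	#include <linux/ktime.h>
	#include <linux/ptrace.h>

	/* Per-instance private data, sized via data_size below. */
	struct my_data {
		ktime_t entry_stamp;
	};

	static int entry_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
	{
		struct my_data *data = (struct my_data *)ri->data;

		data->entry_stamp = ktime_get();
		return 0;	/* 0 => the return handler will be called */
	}

	static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
	{
		struct my_data *data = (struct my_data *)ri->data;
		s64 delta = ktime_to_ns(ktime_sub(ktime_get(), data->entry_stamp));

		pr_info("do_fork returned %lu; took %lld ns\n",
			regs_return_value(regs), delta);
		return 0;
	}

	static struct kretprobe my_kretprobe = {
		.entry_handler	= entry_handler,
		.handler	= ret_handler,
		.data_size	= sizeof(struct my_data),
		.maxactive	= 20,		/* arbitrary example value */
		.kp.symbol_name	= "do_fork",	/* assumed example symbol */
	};

	/* register_kretprobe(&my_kretprobe) in init,
	 * unregister_kretprobe(&my_kretprobe) in exit. */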
171 the "debug.kprobes_optimization" kernel parameter is set to 1 (see
When a probe is registered, before attempting this optimization,
Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
address. So, even if it's not possible to optimize this particular
probepoint, there'll be a probe there.
Before optimizing a probe, Kprobes performs the following safety checks:

- Kprobes verifies that the region that will be replaced by the jump
instruction (the "optimized region") lies entirely within one function.
(A jump instruction is multiple bytes, and so may overlay multiple
instructions.)

- Kprobes analyzes the entire function and verifies that there is no
jump into the optimized region. Specifically:
  - the function contains no indirect jump;
  - the function contains no instruction that causes an exception (since
  the fixup code triggered by the exception could jump back into the
  optimized region -- Kprobes checks the exception tables to verify this);
  - there is no near jump to the optimized region (other than to the first
  byte of the optimized region).

- For each instruction in the optimized region, Kprobes verifies that
the instruction can be executed out of line.
Next, Kprobes prepares a "detour" buffer, which contains the following
instruction sequence:
- code to push the CPU's registers (emulating a breakpoint trap)
- a call to the trampoline code which calls user's probe handlers.
- code to restore registers
- the instructions from the optimized region
- a jump back to the original execution path.
After preparing the detour buffer, Kprobes verifies that none of the
following situations exist:
- The probe has either a break_handler (i.e., it's a jprobe) or a
post_handler.
- Other instructions in the optimized region are probed.
- The probe is disabled.
In any of the above cases, Kprobes won't start optimizing the probe.
Since these are temporary situations, Kprobes tries to start
optimizing it again if the situation is changed.
If the kprobe can be optimized, Kprobes enqueues the kprobe to an
optimizing list, and kicks the kprobe-optimizer workqueue to optimize
it. If the to-be-optimized probepoint is hit before being optimized,
Kprobes returns control to the original instruction path by setting
the CPU's instruction pointer to the copied code in the detour buffer
-- thus at least avoiding the single-step.
The Kprobe-optimizer doesn't insert the jump instruction immediately;
rather, it first calls synchronize_sched() for safety, because it's
possible for a CPU to be interrupted in the middle of executing the
optimized region(*).
After that, the Kprobe-optimizer calls stop_machine() to replace
the optimized region with a jump instruction to the detour buffer,
using text_poke_smp().
When an optimized kprobe is unregistered, disabled, or blocked by
another kprobe, it will be unoptimized. If this happens before
the optimization is complete, the kprobe is just dequeued from the
optimizing list. If the optimization has been done, the jump is
replaced with the original code (except for an int3 breakpoint in
the first byte) by using text_poke_smp().
(*)Please imagine that the 2nd instruction is interrupted and then
the optimizer replaces the 2nd instruction with the jump *address*
while the interrupt handler is running. When the interrupt
returns to the original execution path, there is no valid instruction
there, and the result is unpredictable.
(**)This optimization-safety checking may be replaced with the
stop-machine method that ksplice uses for supporting a
CONFIG_PREEMPT=y kernel.
NOTE for geeks:
The jump optimization changes the kprobe's pre_handler behavior.
Without optimization, the pre_handler can change the kernel's execution
path by changing regs->ip and returning 1. However, when the probe
is optimized, that modification is ignored. Thus, if you want to
tweak the kernel's execution path, you need to suppress optimization,
using one of the following techniques:
- Specify an empty function for the kprobe's post_handler or break_handler.
 or
- Execute 'sysctl -w debug.kprobes_optimization=0'
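
A sketch of the first technique (my_pre_handler is a hypothetical
pre_handler that modifies regs->ip and returns 1):

	/* An empty post_handler is enough to keep the kprobe unoptimized,
	 * so the pre_handler's change to regs->ip takes effect. */
	static void dummy_post_handler(struct kprobe *p, struct pt_regs *regs,
				       unsigned long flags)
	{
	}

	static struct kprobe kp = {
		.symbol_name	= "my_target",		/* hypothetical symbol */
		.pre_handler	= my_pre_handler,	/* may change regs->ip */
		.post_handler	= dummy_post_handler,	/* suppresses optimization */
	};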
1.5 Blacklist

Kprobes can probe most of the kernel except itself. This means
that there are some functions where Kprobes cannot probe. Probing
(trapping) such functions can cause a recursive trap (e.g. double
fault) or the nested probe handler may never be called. Kprobes
manages such functions as a blacklist.
If you want to add a function into the blacklist, you just need
to (1) include linux/kprobes.h and (2) use the NOKPROBE_SYMBOL() macro
to specify the blacklisted function.
Kprobes checks the given probe address against the blacklist and
rejects registering it, if the given address is in the blacklist.
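
For example, a hypothetical helper that must never be probed would be
marked like this:

	#include <linux/kprobes.h>

	/* Hypothetical function that participates in probe handling and
	 * therefore must not itself be probed. */
	static int critical_helper(void *arg)
	{
		/* ... */
		return 0;
	}
	NOKPROBE_SYMBOL(critical_helper);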
2. Architectures Supported

Kprobes, jprobes, and return probes are implemented on the following
architectures:

- i386 (Supports jump optimization)
- x86_64 (AMD-64, EM64T) (Supports jump optimization)
- ppc64
- ia64 (Does not support probes on instruction slot1.)
- sparc64 (Return probes not yet implemented.)
- arm
- ppc
- mips
3. Configuring Kprobes

When configuring the kernel using make menuconfig/xconfig/oldconfig,
ensure that CONFIG_KPROBES is set to "y".

Also make sure that CONFIG_KALLSYMS and perhaps even CONFIG_KALLSYMS_ALL
are set to "y", since kallsyms_lookup_name() is used by the in-kernel
kprobe address resolution code.
If you need to insert a probe in the middle of a function, you may find
it useful to "Compile the kernel with debug info" (CONFIG_DEBUG_INFO),
so you can use "objdump -d -l vmlinux" to see the source-to-object
code mapping.
4. API Reference

The Kprobes API includes a "register" function and an "unregister"
function for each type of probe, as well as "register_*probes" and
"unregister_*probes" functions for (un)registering arrays of probes.
What follows are terse, mini-man-page specifications for these functions
and the associated probe handlers that you'll write. See the files in the
samples/kprobes/ sub-directory for examples.
4.1 register_kprobe

Sets a breakpoint at the address kp->addr. When the breakpoint is
hit, Kprobes calls kp->pre_handler. After the probed instruction
has been single-stepped, Kprobes calls kp->post_handler. If a fault
occurs during execution of kp->pre_handler or kp->post_handler,
or during single-stepping of the probed instruction, Kprobes calls
kp->fault_handler. Any or all handlers can be NULL.
NOTE:
1. With the introduction of the "symbol_name" field to struct kprobe,
the probepoint address resolution will now be taken care of by the kernel.
2. Use the "offset" field of struct kprobe if the offset into the symbol
to install a probepoint is known. This field is used to calculate the
probepoint.
3. Specify either the kprobe "symbol_name" OR the "addr". If both are
specified, kprobe registration will fail with -EINVAL.
4. With CISC architectures (such as i386 and x86_64), the kprobes code
does not validate if the kprobe.addr is at an instruction boundary.
Use "offset" with caution.

register_kprobe() returns 0 on success, or a negative errno otherwise.
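
For instance, the following sketch (the symbol name and offset are only
illustrative, and handler_pre is the handler sketched in section 1) lets
the kernel resolve the probe address:

	static struct kprobe kp = {
		.symbol_name	= "vfs_read",	/* illustrative symbol */
		.offset		= 0x10,		/* illustrative offset; must fall
						 * on an instruction boundary */
		.pre_handler	= handler_pre,
	};

	/* register_kprobe(&kp) returns 0 on success or a negative errno. */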
User's pre-handler (kp->pre_handler):

Called with p pointing to the kprobe associated with the breakpoint,
and regs pointing to the struct containing the registers saved when
the breakpoint was hit. Return 0 here unless you're a Kprobes geek.
User's post-handler (kp->post_handler):

p and regs are as described for the pre_handler. flags always seems
to be zero.
User's fault-handler (kp->fault_handler):

p and regs are as described for the pre_handler. trapnr is the
architecture-specific trap number associated with the fault (e.g.,
on i386, 13 for a general protection fault or 14 for a page fault).
Returns 1 if it successfully handled the exception.
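
For reference, the three handler hooks have the following shapes (the
names pre_handler, post_handler and fault_handler are placeholders; any
function with a matching prototype can be used):

	#include <linux/kprobes.h>
	#include <linux/ptrace.h>

	int pre_handler(struct kprobe *p, struct pt_regs *regs);
	void post_handler(struct kprobe *p, struct pt_regs *regs,
			  unsigned long flags);
	int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);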
4.2 register_jprobe

Sets a breakpoint at the address jp->kp.addr, which must be the address
of the first instruction of a function. When the breakpoint is hit,
Kprobes runs the handler whose address is jp->entry.
The handler should have the same arg list and return type as the probed
function; and just before it returns, it must call jprobe_return().
(The handler never actually returns, since jprobe_return() returns
control to Kprobes.) If the probed function is declared asmlinkage
or anything else that affects how args are passed, the handler's
declaration must match.

register_jprobe() returns 0 on success, or a negative errno otherwise.
4.3 register_kretprobe

Establishes a return probe for the function whose address is
rp->kp.addr. When that function returns, Kprobes calls rp->handler.
You must set rp->maxactive appropriately before you call
register_kretprobe(); see "How Does a Return Probe Work?" for details.

register_kretprobe() returns 0 on success, or a negative errno otherwise.
User's return-probe handler (rp->handler):

regs is as described for kprobe.pre_handler. ri points to the
kretprobe_instance object, of which the following fields may be
of interest:
- ret_addr: the return address
- rp: points to the corresponding kretprobe object
- task: points to the corresponding task struct

The regs_return_value(regs) macro provides a simple abstraction to
extract the return value from the appropriate register as defined by
the architecture's ABI.

The handler's return value is currently ignored.
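
A minimal return-handler sketch (the printed message is illustrative):

	static int my_ret_handler(struct kretprobe_instance *ri,
				  struct pt_regs *regs)
	{
		/* Return value of the probed function, per the arch ABI. */
		unsigned long retval = regs_return_value(regs);

		pr_info("probed function returned %lu to %p\n",
			retval, ri->ret_addr);
		return 0;	/* the handler's return value is ignored */
	}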
4.4 unregister_*probe

Removes the specified probe. The unregister function can be called
at any time after the probe has been registered.

NOTE:
If the functions find an incorrect probe (ex. an unregistered probe),
they clear the addr field of the probe.
4.5 register_*probes

Registers each of the num probes in the specified array. If any
error occurs during registration, all probes in the array, up to
the bad probe, are safely unregistered before the register_*probes
function returns.
- kps/rps/jps: an array of pointers to *probe data structures
- num: the number of the array entries.

NOTE:
You have to allocate (or define) an array of pointers and set all
of the array entries before using these functions.
4.6 unregister_*probes

Removes each of the num probes in the specified array at once.

NOTE:
If the functions find some incorrect probes (ex. unregistered
probes) in the specified array, they clear the addr field of those
incorrect probes. However, other probes in the array are
unregistered correctly.
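
A combined sketch of batch registration and unregistration, assuming
kp1 and kp2 are kprobes defined as in the earlier examples:

	static struct kprobe *my_kprobes[] = { &kp1, &kp2 };

	/* Registers both probes; on error, probes registered so far
	 * are unregistered before the call returns. */
	int ret = register_kprobes(my_kprobes, ARRAY_SIZE(my_kprobes));

	/* Later, remove both at once: */
	unregister_kprobes(my_kprobes, ARRAY_SIZE(my_kprobes));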
4.7 disable_*probe

Temporarily disables the specified *probe. You can enable it again by using
enable_*probe(). You must specify the probe which has been registered.

4.8 enable_*probe

Enables a *probe which has been disabled temporarily. You must specify
the probe which has been registered.
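
For example, to mute and later re-arm the kprobe kp from the earlier
sketches:

	/* Handlers stop firing, but the probe stays registered. */
	disable_kprobe(&kp);

	/* ... */

	/* Handlers fire again. */
	enable_kprobe(&kp);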
5. Kprobes Features and Limitations

Kprobes allows multiple probes at the same address. Currently,
however, there cannot be multiple jprobes on the same function at
the same time. Also, a probepoint for which there is a jprobe or
a post_handler cannot be optimized. So if you install a jprobe,
or a kprobe with a post_handler, at an optimized probepoint, the
probepoint will be unoptimized automatically.
In general, you can install a probe anywhere in the kernel.
In particular, you can probe interrupt handlers. Known exceptions
are discussed in this section.

The register_*probe functions will return -EINVAL if you attempt
to install a probe in the code that implements Kprobes (mostly
kernel/kprobes.c and arch/*/kernel/kprobes.c, but also functions such
as do_page_fault and notifier_call_chain).
If you install a probe in an inline-able function, Kprobes makes
no attempt to chase down all inline instances of the function and
install probes there. gcc may inline a function without being asked,
so keep this in mind if you're not seeing the probe hits you expect.
A probe handler can modify the environment of the probed function
-- e.g., by modifying kernel data structures, or by modifying the
contents of the pt_regs struct (which are restored to the registers
upon return from the breakpoint). So Kprobes can be used, for example,
to install a bug fix or to inject faults for testing. Kprobes, of
course, has no way to distinguish the deliberately injected faults
from the accidental ones. Don't drink and probe.
Kprobes makes no attempt to prevent probe handlers from stepping on
each other -- e.g., probing printk() and then calling printk() from a
probe handler. If a probe handler hits a probe, that second probe's
handlers won't be run in that instance, and the kprobe.nmissed member
of the second probe will be incremented.

Multiple probe handlers (or multiple instances of
the same handler) may run concurrently on different CPUs.
Probe handlers are run with preemption disabled. Depending on the
architecture and optimization state, handlers may also run with
interrupts disabled. In any case,
your handler should not yield the CPU (e.g., by attempting to acquire
a semaphore).
Since a return probe is implemented by replacing the return
address with the trampoline's address, stack backtraces and calls
to __builtin_return_address() will typically yield the trampoline's
address instead of the real return address for kretprobed functions.
If the number of times a function is called does not match the number
of times it returns, registering a return probe on that function may
produce undesirable results. In such a case, a line of the form
"kretprobe BUG!: Processing kretprobe <address> @ <address>"
gets printed. With this information, one will be able to correlate the
exact instance of the kretprobe that caused the problem. We have the
do_exit() case covered; do_execve() and do_fork() are not an issue.
If, upon entry to or exit from a function, the CPU is running on
a stack other than that of the current task, registering a return
probe on that function may produce undesirable results. For this
reason, Kprobes doesn't support return probes (or kprobes or jprobes)
on the x86_64 version of __switch_to(); the registration functions
return -EINVAL.
On x86/x86-64, since the Jump Optimization of Kprobes modifies
instructions widely, there are some limitations to optimization. In the
description below, DCR is the "Detoured Code Region" (the bytes at the
probepoint that are replaced by the jump) and JTPR is the "Jump Target
Prohibition Region" inside it.

The instructions in DCR are copied to the out-of-line buffer
of the kprobe, because the bytes in DCR are replaced by
a 5-byte jump instruction. So there are several limitations:

a) The instructions in DCR must be relocatable.
b) The instructions in DCR must not include a call instruction.
c) JTPR must not be targeted by any jump or call instruction.
d) DCR must not straddle the border between functions.
Anyway, these limitations are checked by the in-kernel instruction
decoder, so you don't need to worry about them.
6. Probe Overhead

On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
microseconds to process. Specifically, a benchmark that hits the same
probepoint repeatedly, firing a simple handler each time, reports 1 to 2
million hits per second, depending on the architecture. A jprobe or
return-probe hit typically takes 50-75% longer than a kprobe hit.
When you have a return probe set on a function, adding a kprobe at
the entry to that function adds essentially no overhead.
Appendix A: The kprobes debugfs interface

With recent kernels (> 2.6.20) the list of registered kprobes is visible
under the /sys/kernel/debug/kprobes/ directory (assuming debugfs is
mounted at /sys/kernel/debug).

/sys/kernel/debug/kprobes/list: Lists all registered probes on the system
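
For example (the addresses and symbols shown are illustrative):

	c015d71a  k  vfs_read+0x0
	c011a316  j  do_fork+0x0
	c03dedc5  r  tcp_v4_rcv+0x0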
The first column provides the kernel address where the probe is inserted.
The second column identifies the type of probe (k - kprobe, r - kretprobe
and j - jprobe), while the third column specifies the symbol+offset of
the probe. If the probed function belongs to a module, the module name
is also specified. Following columns show probe status. If the probe is on
a virtual address that is no longer valid (e.g., a module init section, or
the address of a module that has since been unloaded),
such probes are marked with [GONE]. If the probe is temporarily disabled,
such probes are marked with [DISABLED]. If the probe is optimized, it is
marked with [OPTIMIZED]. If the probe is ftrace-based, it is marked with
[FTRACE].
Appendix B: The kprobes sysctl interface

/proc/sys/debug/kprobes-optimization: Turn kprobes optimization ON/OFF.

Note that this
knob *changes* the optimized state. This means that optimized probes
(marked [OPTIMIZED]) will be unoptimized ([OPTIMIZED] tag will be
removed). If the knob is turned on, they will be optimized again.