Lines Matching refs:kprobe

30 kretprobes (also called return probes).  A kprobe can be inserted
58 When a kprobe is registered, Kprobes makes a copy of the probed
65 associated with the kprobe, passing the handler the addresses of the
66 kprobe struct and the saved registers.
75 "post_handler," if any, that is associated with the kprobe.
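The registration-and-handler flow described by the fragments above can be sketched as a minimal kernel module. The probed symbol ("do_fork") and the log messages are illustrative choices, not taken from this listing:

```c
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/kprobes.h>

/* pre_handler: runs when the breakpoint at the probepoint is hit,
 * before the copied original instruction is single-stepped. */
static int handler_pre(struct kprobe *p, struct pt_regs *regs)
{
	printk(KERN_INFO "kprobe hit at %p\n", p->addr);
	return 0;	/* let Kprobes proceed with the probed insn */
}

/* post_handler: runs after the single-stepped instruction completes. */
static void handler_post(struct kprobe *p, struct pt_regs *regs,
			 unsigned long flags)
{
	printk(KERN_INFO "kprobe at %p completed\n", p->addr);
}

static struct kprobe kp = {
	.symbol_name	= "do_fork",	/* illustrative probe target */
	.pre_handler	= handler_pre,
	.post_handler	= handler_post,
};

static int __init probe_init(void)
{
	return register_kprobe(&kp);
}

static void __exit probe_exit(void)
{
	unregister_kprobe(&kp);
}

module_init(probe_init);
module_exit(probe_exit);
MODULE_LICENSE("GPL");
```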
80 A jprobe is implemented using a kprobe that is placed on a function's
110 When you call register_kretprobe(), Kprobes establishes a kprobe at
115 At boot time, Kprobes registers a kprobe at the trampoline.
147 field of the kretprobe struct. Whenever the kprobe placed by kretprobe at the
178 Kprobes inserts an ordinary, breakpoint-based kprobe at the specified
226 If the kprobe can be optimized, Kprobes enqueues the kprobe to an
227 optimizing list, and kicks the kprobe-optimizer workqueue to optimize
241 of kprobe optimization supports only kernels with CONFIG_PREEMPT=n.(**)
249 When an optimized kprobe is unregistered, disabled, or blocked by
250 another kprobe, it will be unoptimized. If this happens before
251 the optimization is complete, the kprobe is just dequeued from the
267 The jump optimization changes the kprobe's pre_handler behavior.
273 - Specify an empty function for the kprobe's post_handler or break_handler.
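The suggestion above (an empty post_handler) can be sketched as follows; because optimized probes cannot run post_handlers, a do-nothing post_handler forces the breakpoint-based path. The probe target is a placeholder:

```c
/* Kprobes will not jump-optimize a probe that has a post_handler,
 * so an empty one keeps this probe on the unoptimized path. */
static void noop_post(struct kprobe *p, struct pt_regs *regs,
		      unsigned long flags)
{
}

static int trace_pre(struct kprobe *p, struct pt_regs *regs)
{
	return 0;
}

static struct kprobe kp_unopt = {
	.symbol_name	= "do_fork",	/* placeholder symbol */
	.pre_handler	= trace_pre,
	.post_handler	= noop_post,	/* suppresses optimization */
};
```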
317 kprobe address resolution code.
336 int register_kprobe(struct kprobe *kp);
348 1. With the introduction of the "symbol_name" field to struct kprobe,
357 2. Use the "offset" field of struct kprobe if the offset into the symbol
361 3. Specify either the kprobe "symbol_name" OR the "addr". If both are
362 specified, kprobe registration will fail with -EINVAL.
365 does not validate whether kprobe.addr is at an instruction boundary.
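The addressing rules above, sketched as struct initializations (symbol, offset, and address values are illustrative):

```c
/* Name-based lookup plus an offset into the symbol.  Note that
 * Kprobes does not check that the offset lands on an instruction
 * boundary; the caller must ensure it does. */
static struct kprobe kp_by_name = {
	.symbol_name	= "do_fork",
	.offset		= 0x10,		/* illustrative offset */
};

/* Address-based registration instead.  Supplying both .symbol_name
 * and .addr makes register_kprobe() fail with -EINVAL. */
static struct kprobe kp_by_addr = {
	.addr		= (kprobe_opcode_t *)0xc011a316, /* illustrative */
};
```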
373 int pre_handler(struct kprobe *p, struct pt_regs *regs);
375 Called with p pointing to the kprobe associated with the breakpoint,
382 void post_handler(struct kprobe *p, struct pt_regs *regs,
391 int fault_handler(struct kprobe *p, struct pt_regs *regs, int trapnr);
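A hedged sketch of a fault_handler matching the signature on the line above; the message is illustrative:

```c
/* Called if the probed instruction or a probe handler faults; trapnr
 * is the architecture-specific trap number.  Returning 1 tells
 * Kprobes the fault was handled; returning 0 lets the kernel handle
 * it normally. */
static int handler_fault(struct kprobe *p, struct pt_regs *regs, int trapnr)
{
	printk(KERN_INFO "fault %d while probing %p\n", trapnr, p->addr);
	return 0;
}
```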
434 regs is as described for kprobe.pre_handler. ri points to the
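A minimal kretprobe sketch to go with the ri/regs description above; the probed function and the maxactive value are illustrative:

```c
/* Runs when the probed function returns; ri carries per-instance
 * data such as the real return address, and regs holds the register
 * state at the return point. */
static int ret_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	printk(KERN_INFO "probed function returned to %p\n", ri->ret_addr);
	return 0;
}

static struct kretprobe rp = {
	.handler	= ret_handler,
	.kp.symbol_name	= "do_fork",	/* illustrative target */
	.maxactive	= 20,	/* concurrent activations to track */
};

/* Call register_kretprobe(&rp) in module init and
 * unregister_kretprobe(&rp) in module exit. */
```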
452 void unregister_kprobe(struct kprobe *kp);
466 int register_kprobes(struct kprobe **kps, int num);
484 void unregister_kprobes(struct kprobe **kps, int num);
499 int disable_kprobe(struct kprobe *kp);
509 int enable_kprobe(struct kprobe *kp);
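The batch registration and enable/disable calls above, combined in one sketch (symbols are illustrative, and the probes deliberately have no handlers, which is legal):

```c
static struct kprobe kp_a = { .symbol_name = "do_fork" };
static struct kprobe kp_b = { .symbol_name = "do_exit" };
static struct kprobe *kps[] = { &kp_a, &kp_b };

static int __init probes_init(void)
{
	int ret = register_kprobes(kps, 2);	/* register both at once */
	if (ret < 0)
		return ret;

	disable_kprobe(&kp_a);	/* keep it registered but inactive */
	enable_kprobe(&kp_a);	/* re-arm it later */
	return 0;
}
```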
522 or a kprobe with a post_handler, at an optimized probepoint, the
550 handlers won't be run in that instance, and the kprobe.nmissed member
561 interrupts disabled (e.g., kretprobe handlers and optimized kprobe
610 of the kprobe, because the bytes in DCR are replaced by
623 On a typical CPU in use in 2005, a kprobe hit takes 0.5 to 1.0
627 return-probe hit typically takes 50-75% longer than a kprobe hit.
628 When you have a return probe set on a function, adding a kprobe at
632 k = kprobe; j = jprobe; r = return probe; kr = kprobe + return probe
646 Typically, an optimized kprobe hit takes 0.07 to 0.1 microseconds to
648 k = unoptimized kprobe, b = boosted (single-step skipped), o = optimized kprobe,
697 The second column identifies the type of probe (k - kprobe, r - kretprobe
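An illustrative listing in the format described above (addresses invented): each line shows the probe address, the one-letter type column, and symbol+offset.

```
cat /sys/kernel/debug/kprobes/list
c015d71a  k  vfs_read+0x0
c011a316  r  do_fork+0x0
```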