1		ftrace - Function Tracer
2		========================
3
4Copyright 2008 Red Hat Inc.
5   Author:   Steven Rostedt <srostedt@redhat.com>
6  License:   The GNU Free Documentation License, Version 1.2
7               (dual licensed under the GPL v2)
8Reviewers:   Elias Oltmanns, Randy Dunlap, Andrew Morton,
9	     John Kacur, and David Teigland.
10Written for: 2.6.28-rc2
11Updated for: 3.10
12
13Introduction
14------------
15
Ftrace is an internal tracer designed to help developers and
system designers see what is going on inside the kernel.
It can be used for debugging or analyzing latencies and
performance issues that take place outside of user-space.
20
Although ftrace is typically considered the function tracer, it
is really a framework of several assorted tracing utilities.
There's latency tracing to examine what occurs between interrupts
being disabled and enabled, as well as for preemption, and from the
time a task is woken to the time it is actually scheduled in.
26
One of the most common uses of ftrace is event tracing.
Throughout the kernel there are hundreds of static event points that
can be enabled via the debugfs file system to see what is
going on in certain parts of the kernel.
31
32
33Implementation Details
34----------------------
35
See ftrace-design.txt for the details relevant to architecture porters and the like.
37
38
39The File System
40---------------
41
42Ftrace uses the debugfs file system to hold the control files as
43well as the files to display output.
44
When debugfs is configured into the kernel (which selecting any ftrace
option will do), the directory /sys/kernel/debug will be created. To mount
this directory, you can add the following line to your /etc/fstab file:
48
49 debugfs       /sys/kernel/debug          debugfs defaults        0       0
50
51Or you can mount it at run time with:
52
53 mount -t debugfs nodev /sys/kernel/debug
54
55For quicker access to that directory you may want to make a soft link to
56it:
57
58 ln -s /sys/kernel/debug /debug
59
60Any selected ftrace option will also create a directory called tracing
61within the debugfs. The rest of the document will assume that you are in
62the ftrace directory (cd /sys/kernel/debug/tracing) and will only concentrate
63on the files within that directory and not distract from the content with
64the extended "/sys/kernel/debug/tracing" path name.
65
66That's it! (assuming that you have ftrace configured into your kernel)
67
68After mounting debugfs, you can see a directory called
69"tracing".  This directory contains the control and output files
70of ftrace. Here is a list of some of the key files:
71
72
73 Note: all time values are in microseconds.
74
75  current_tracer:
76
77	This is used to set or display the current tracer
78	that is configured.
79
80  available_tracers:
81
82	This holds the different types of tracers that
83	have been compiled into the kernel. The
84	tracers listed here can be configured by
85	echoing their name into current_tracer.
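
	For example, to select the function tracer (assuming it was
	compiled into the kernel):

	  cat available_tracers
	  echo function > current_tracer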
86
87  tracing_on:
88
	This sets or displays whether writing to the trace
	ring buffer is enabled. Echo 0 into this file to disable
	the tracer or 1 to enable it. Note, this only disables
	writing to the ring buffer; the tracing overhead may
	still be incurred.
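
	For example, to stop recording while examining a trace and
	then resume:

	  echo 0 > tracing_on
	  cat trace
	  echo 1 > tracing_on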
94
95  trace:
96
97	This file holds the output of the trace in a human
98	readable format (described below).
99
100  trace_pipe:
101
102	The output is the same as the "trace" file but this
103	file is meant to be streamed with live tracing.
104	Reads from this file will block until new data is
105	retrieved.  Unlike the "trace" file, this file is a
106	consumer. This means reading from this file causes
107	sequential reads to display more current data. Once
108	data is read from this file, it is consumed, and
109	will not be read again with a sequential read. The
	"trace" file is static, and if the tracer is not
	adding more data, it will display the same
	information every time it is read.
113
114  trace_options:
115
116	This file lets the user control the amount of data
117	that is displayed in one of the above output
118	files. Options also exist to modify how a tracer
119	or events work (stack traces, timestamps, etc).
120
121  options:
122
123	This is a directory that has a file for every available
124	trace option (also in trace_options). Options may also be set
125	or cleared by writing a "1" or "0" respectively into the
126	corresponding file with the option name.
127
128  tracing_max_latency:
129
	Some of the tracers record the max latency.
	For example, the time for which interrupts are disabled.
	The maximum time is saved in this file. The max trace
	will also be stored, and displayed by "trace".
	A new max trace will only be recorded if the
	latency is greater than the value in this
	file (in microseconds).
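
	To reset the saved max so that a new one will be recorded:

	  echo 0 > tracing_max_latency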
137
138  tracing_thresh:
139
140	Some latency tracers will record a trace whenever the
141	latency is greater than the number in this file.
142	Only active when the file contains a number greater than 0.
143	(in microseconds)
144
145  buffer_size_kb:
146
147	This sets or displays the number of kilobytes each CPU
148	buffer holds. By default, the trace buffers are the same size
149	for each CPU. The displayed number is the size of the
150	CPU buffer and not total size of all buffers. The
151	trace buffers are allocated in pages (blocks of memory
152	that the kernel uses for allocation, usually 4 KB in size).
153	If the last page allocated has room for more bytes
154	than requested, the rest of the page will be used,
155	making the actual allocation bigger than requested.
156	( Note, the size may not be a multiple of the page size
157	  due to buffer management meta-data. )
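
	For example, to give each CPU buffer roughly 4 MB:

	  echo 4096 > buffer_size_kb
	  cat buffer_size_kb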
158
159  buffer_total_size_kb:
160
161	This displays the total combined size of all the trace buffers.
162
163  free_buffer:
164
	If a process is performing tracing, and the ring buffer
	should be shrunk "freed" when the process is finished, even
	if it were to be killed by a signal, this file can be used
	for that purpose. On close of this file, the ring buffer will
	be resized to its minimum size. Have the process that is tracing
	also open this file; when the process exits, its file descriptor
	for this file will be closed, and in doing so, the ring buffer
	will be "freed".

	It may also stop tracing if the disable_on_free option is set.
175
176  tracing_cpumask:
177
178	This is a mask that lets the user only trace
179	on specified CPUs. The format is a hex string
180	representing the CPUs.
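
	For example, to limit tracing to CPUs 0 and 1 (mask 0x3):

	  echo 3 > tracing_cpumask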
181
182  set_ftrace_filter:
183
184	When dynamic ftrace is configured in (see the
185	section below "dynamic ftrace"), the code is dynamically
186	modified (code text rewrite) to disable calling of the
187	function profiler (mcount). This lets tracing be configured
188	in with practically no overhead in performance.  This also
189	has a side effect of enabling or disabling specific functions
190	to be traced. Echoing names of functions into this file
191	will limit the trace to only those functions.
192
193	This interface also allows for commands to be used. See the
194	"Filter commands" section for more details.
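
	For example (the function names here are only illustrative;
	any names listed in available_filter_functions may be used,
	and glob patterns are accepted):

	  echo schedule > set_ftrace_filter
	  echo 'hrtimer_*' >> set_ftrace_filter

	Echoing nothing into the file clears the filter again:

	  echo > set_ftrace_filter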
195
196  set_ftrace_notrace:
197
198	This has an effect opposite to that of
199	set_ftrace_filter. Any function that is added here will not
200	be traced. If a function exists in both set_ftrace_filter
201	and set_ftrace_notrace,	the function will _not_ be traced.
202
203  set_ftrace_pid:
204
205	Have the function tracer only trace a single thread.
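
	For example, to limit function tracing to the current shell:

	  echo $$ > set_ftrace_pid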
206
207  set_graph_function:
208
209	Set a "trigger" function where tracing should start
210	with the function graph tracer (See the section
211	"dynamic ftrace" for more details).
212
213  available_filter_functions:
214
215	This lists the functions that ftrace
216	has processed and can trace. These are the function
217	names that you can pass to "set_ftrace_filter" or
218	"set_ftrace_notrace". (See the section "dynamic ftrace"
219	below for more details.)
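
	For example, to see which hrtimer related functions can be
	traced (the pattern is only illustrative):

	  grep hrtimer available_filter_functions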
220
221  enabled_functions:
222
223	This file is more for debugging ftrace, but can also be useful
224	in seeing if any function has a callback attached to it.
225	Not only does the trace infrastructure use ftrace function
226	trace utility, but other subsystems might too. This file
227	displays all functions that have a callback attached to them
228	as well as the number of callbacks that have been attached.
229	Note, a callback may also call multiple functions which will
230	not be listed in this count.
231
	If a callback is registered to be traced by a function with
	the "save regs" attribute (thus even more overhead), an 'R'
	will be displayed on the same line as the function that
	is returning registers.

	If a callback is registered to be traced by a function with
	the "ip modify" attribute (thus the regs->ip can be changed),
	an 'I' will be displayed on the same line as the function that
	can be overridden.
241
242  function_profile_enabled:
243
	When set it will enable all functions with either the function
	tracer, or if enabled, the function graph tracer. It will
	keep a histogram of the number of times each function was called,
	and if run with the function graph tracer, it will also keep
	track of the time spent in those functions. The histogram
	content can be displayed in the files:

	trace_stat/function<cpu> ( function0, function1, etc).
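
	For example:

	  echo 1 > function_profile_enabled
	  cat trace_stat/function0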
252
  trace_stat:
254
255	A directory that holds different tracing stats.
256
257  kprobe_events:
258 
259	Enable dynamic trace points. See kprobetrace.txt.
260
261  kprobe_profile:
262
263	Dynamic trace points stats. See kprobetrace.txt.
264
265  max_graph_depth:
266
267	Used with the function graph tracer. This is the max depth
268	it will trace into a function. Setting this to a value of
269	one will show only the first kernel function that is called
270	from user space.
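
	For example, to record only the top-level kernel functions
	entered from user space:

	  echo 1 > max_graph_depth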
271
272  printk_formats:
273
274	This is for tools that read the raw format files. If an event in
275	the ring buffer references a string (currently only trace_printk()
276	does this), only a pointer to the string is recorded into the buffer
277	and not the string itself. This prevents tools from knowing what
278	that string was. This file displays the string and address for
279	the string allowing tools to map the pointers to what the
280	strings were.
281
282  saved_cmdlines:
283
284	Only the pid of the task is recorded in a trace event unless
285	the event specifically saves the task comm as well. Ftrace
286	makes a cache of pid mappings to comms to try to display
287	comms for events. If a pid for a comm is not listed, then
288	"<...>" is displayed in the output.
289
290  snapshot:
291
292	This displays the "snapshot" buffer and also lets the user
293	take a snapshot of the current running trace.
294	See the "Snapshot" section below for more details.
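
	For example, to take a snapshot of the live trace and read it
	back (see the "Snapshot" section for the full semantics):

	  echo 1 > snapshot
	  cat snapshot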
295
296  stack_max_size:
297
298	When the stack tracer is activated, this will display the
299	maximum stack size it has encountered.
300	See the "Stack Trace" section below.
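
	The stack tracer itself is typically activated with:

	  echo 1 > /proc/sys/kernel/stack_tracer_enabled

	as described in the "Stack Trace" section below.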
301
302  stack_trace:
303
304	This displays the stack back trace of the largest stack
305	that was encountered when the stack tracer is activated.
306	See the "Stack Trace" section below.
307
308  stack_trace_filter:
309
310	This is similar to "set_ftrace_filter" but it limits what
311	functions the stack tracer will check.
312
313  trace_clock:
314
315	Whenever an event is recorded into the ring buffer, a
316	"timestamp" is added. This stamp comes from a specified
317	clock. By default, ftrace uses the "local" clock. This
318	clock is very fast and strictly per cpu, but on some
319	systems it may not be monotonic with respect to other
320	CPUs. In other words, the local clocks may not be in sync
321	with local clocks on other CPUs.
322
323	Usual clocks for tracing:
324
325	  # cat trace_clock
326	  [local] global counter x86-tsc
327
328	  local: Default clock, but may not be in sync across CPUs
329
330	  global: This clock is in sync with all CPUs but may
331	  	  be a bit slower than the local clock.
332
333	  counter: This is not a clock at all, but literally an atomic
334	  	   counter. It counts up one by one, but is in sync
335		   with all CPUs. This is useful when you need to
336		   know exactly the order events occurred with respect to
337		   each other on different CPUs.
338
339	  uptime: This uses the jiffies counter and the time stamp
340	  	  is relative to the time since boot up.
341
342	  perf: This makes ftrace use the same clock that perf uses.
343	  	Eventually perf will be able to read ftrace buffers
344		and this will help out in interleaving the data.
345
346	  x86-tsc: Architectures may define their own clocks. For
347	  	   example, x86 uses its own TSC cycle clock here.
348
349	To set a clock, simply echo the clock name into this file.
350
351	  echo global > trace_clock
352
353  trace_marker:
354
	This is a very useful file for synchronizing user space
	with events happening in the kernel. Strings written into
	this file will be recorded in the ftrace buffer.
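
	For example, from the shell:

	  echo 'hello world' > trace_marker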
358
359	It is useful in applications to open this file at the start
360	of the application and just reference the file descriptor
361	for the file.
362
	#include <fcntl.h>
	#include <stdarg.h>
	#include <stdio.h>
	#include <unistd.h>

	int trace_fd = -1;

	void trace_write(const char *fmt, ...)
	{
		va_list ap;
		char buf[256];
		int n;

		if (trace_fd < 0)
			return;

		va_start(ap, fmt);
		n = vsnprintf(buf, 256, fmt, ap);
		va_end(ap);

		if (n < 0)
			return;
		/*
		 * vsnprintf() returns the length it would have written,
		 * which may exceed the buffer; clamp to what is in buf.
		 */
		if (n > 255)
			n = 255;

		write(trace_fd, buf, n);
	}

	At the start of the application, open the file:

		trace_fd = open("trace_marker", O_WRONLY);
382
383  uprobe_events:
384 
385	Add dynamic tracepoints in programs.
386	See uprobetracer.txt
387
388  uprobe_profile:
389
	Uprobe statistics. See uprobetracer.txt.
391
392  instances:
393
394	This is a way to make multiple trace buffers where different
395	events can be recorded in different buffers.
396	See "Instances" section below.
397
398  events:
399
400	This is the trace event directory. It holds event tracepoints
401	(also known as static tracepoints) that have been compiled
402	into the kernel. It shows what event tracepoints exist
403	and how they are grouped by system. There are "enable"
404	files at various levels that can enable the tracepoints
405	when a "1" is written to them.
406
407	See events.txt for more information.
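
	For example, to enable the sched_switch tracepoint, all
	scheduler tracepoints, or every event in the system:

	  echo 1 > events/sched/sched_switch/enable
	  echo 1 > events/sched/enable
	  echo 1 > events/enable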
408
409  per_cpu:
410
411	This is a directory that contains the trace per_cpu information.
412
413  per_cpu/cpu0/buffer_size_kb:
414
	The ftrace buffer is defined per_cpu. That is, there's a separate
	buffer for each CPU to allow writes to be done atomically,
	and free from cache bouncing. The buffers for different CPUs
	may be set to different sizes. This file is similar to the
	buffer_size_kb file, but it only displays or sets the buffer
	size for the specific CPU (here cpu0).
421
422  per_cpu/cpu0/trace:
423
424	This is similar to the "trace" file, but it will only display
425	the data specific for the CPU. If written to, it only clears
426	the specific CPU buffer.
427
428  per_cpu/cpu0/trace_pipe
429
430	This is similar to the "trace_pipe" file, and is a consuming
431	read, but it will only display (and consume) the data specific
432	for the CPU.
433
434  per_cpu/cpu0/trace_pipe_raw
435
436	For tools that can parse the ftrace ring buffer binary format,
437	the trace_pipe_raw file can be used to extract the data
438	from the ring buffer directly. With the use of the splice()
439	system call, the buffer data can be quickly transferred to
440	a file or to the network where a server is collecting the
441	data.
442
443	Like trace_pipe, this is a consuming reader, where multiple
444	reads will always produce different data.
445
446  per_cpu/cpu0/snapshot:
447
448	This is similar to the main "snapshot" file, but will only
449	snapshot the current CPU (if supported). It only displays
450	the content of the snapshot for a given CPU, and if
451	written to, only clears this CPU buffer.
452
453  per_cpu/cpu0/snapshot_raw:
454
455	Similar to the trace_pipe_raw, but will read the binary format
456	from the snapshot buffer for the given CPU.
457
458  per_cpu/cpu0/stats:
459
460	This displays certain stats about the ring buffer:
461
462	 entries: The number of events that are still in the buffer.
463
464	 overrun: The number of lost events due to overwriting when
465	 	  the buffer was full.
466
467	 commit overrun: Should always be zero.
468	 	This gets set if so many events happened within a nested
469		event (ring buffer is re-entrant), that it fills the
470		buffer and starts dropping events.
471
472	 bytes: Bytes actually read (not overwritten).
473
474	 oldest event ts: The oldest timestamp in the buffer
475
476	 now ts: The current timestamp
477
478	 dropped events: Events lost due to overwrite option being off.
479
480	 read events: The number of events read.
481
482The Tracers
483-----------
484
485Here is the list of current tracers that may be configured.
486
487  "function"
488
489	Function call tracer to trace all kernel functions.
490
491  "function_graph"
492
493	Similar to the function tracer except that the
494	function tracer probes the functions on their entry
495	whereas the function graph tracer traces on both entry
496	and exit of the functions. It then provides the ability
497	to draw a graph of function calls similar to C code
498	source.
499
500  "irqsoff"
501
502	Traces the areas that disable interrupts and saves
503	the trace with the longest max latency.
504	See tracing_max_latency. When a new max is recorded,
505	it replaces the old trace. It is best to view this
506	trace with the latency-format option enabled.
507
508  "preemptoff"
509
510	Similar to irqsoff but traces and records the amount of
511	time for which preemption is disabled.
512
513  "preemptirqsoff"
514
515	Similar to irqsoff and preemptoff, but traces and
516	records the largest time for which irqs and/or preemption
517	is disabled.
518
519  "wakeup"
520
521	Traces and records the max latency that it takes for
522	the highest priority task to get scheduled after
523	it has been woken up.
        It traces all tasks, which is what an average developer
        would expect.
525
526  "wakeup_rt"
527
        Traces and records the max latency that it takes for just
        RT tasks (in the same way that "wakeup" does for all tasks).
        This is useful for those interested in wake up timings of
        RT tasks.
531
532  "nop"
533
534	This is the "trace nothing" tracer. To remove all
535	tracers from tracing simply echo "nop" into
536	current_tracer.
537
538
539Examples of using the tracer
540----------------------------
541
542Here are typical examples of using the tracers when controlling
543them only with the debugfs interface (without using any
544user-land utilities).
545
546Output format:
547--------------
548
549Here is an example of the output format of the file "trace"
550
551                             --------
552# tracer: function
553#
554# entries-in-buffer/entries-written: 140080/250280   #P:4
555#
556#                              _-----=> irqs-off
557#                             / _----=> need-resched
558#                            | / _---=> hardirq/softirq
559#                            || / _--=> preempt-depth
560#                            ||| /     delay
561#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
562#              | |       |   ||||       |         |
563            bash-1977  [000] .... 17284.993652: sys_close <-system_call_fastpath
564            bash-1977  [000] .... 17284.993653: __close_fd <-sys_close
565            bash-1977  [000] .... 17284.993653: _raw_spin_lock <-__close_fd
566            sshd-1974  [003] .... 17284.993653: __srcu_read_unlock <-fsnotify
567            bash-1977  [000] .... 17284.993654: add_preempt_count <-_raw_spin_lock
568            bash-1977  [000] ...1 17284.993655: _raw_spin_unlock <-__close_fd
569            bash-1977  [000] ...1 17284.993656: sub_preempt_count <-_raw_spin_unlock
570            bash-1977  [000] .... 17284.993657: filp_close <-__close_fd
571            bash-1977  [000] .... 17284.993657: dnotify_flush <-filp_close
572            sshd-1974  [003] .... 17284.993658: sys_select <-system_call_fastpath
573                             --------
574
A header is printed with the name of the tracer that produced the
trace. In this case the tracer is "function". Then it shows the
577number of events in the buffer as well as the total number of entries
578that were written. The difference is the number of entries that were
579lost due to the buffer filling up (250280 - 140080 = 110200 events
580lost).
581
The header explains the content of the events: the task name "bash",
the task PID "1977", the CPU that it was running on "000", the latency
format fields (explained below), the timestamp in <secs>.<usecs>
format, the function name that was traced "sys_close", and the parent
function that called this function "system_call_fastpath". The
timestamp is the time at which the function was entered.
588
589Latency trace format
590--------------------
591
592When the latency-format option is enabled or when one of the latency
593tracers is set, the trace file gives somewhat more information to see
594why a latency happened. Here is a typical trace.
595
596# tracer: irqsoff
597#
598# irqsoff latency trace v1.1.5 on 3.8.0-test+
599# --------------------------------------------------------------------
600# latency: 259 us, #4/4, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
601#    -----------------
602#    | task: ps-6143 (uid:0 nice:0 policy:0 rt_prio:0)
603#    -----------------
604#  => started at: __lock_task_sighand
605#  => ended at:   _raw_spin_unlock_irqrestore
606#
607#
608#                  _------=> CPU#            
609#                 / _-----=> irqs-off        
610#                | / _----=> need-resched    
611#                || / _---=> hardirq/softirq 
612#                ||| / _--=> preempt-depth   
613#                |||| /     delay             
614#  cmd     pid   ||||| time  |   caller      
615#     \   /      |||||  \    |   /           
616      ps-6143    2d...    0us!: trace_hardirqs_off <-__lock_task_sighand
617      ps-6143    2d..1  259us+: trace_hardirqs_on <-_raw_spin_unlock_irqrestore
618      ps-6143    2d..1  263us+: time_hardirqs_on <-_raw_spin_unlock_irqrestore
619      ps-6143    2d..1  306us : <stack trace>
620 => trace_hardirqs_on_caller
621 => trace_hardirqs_on
622 => _raw_spin_unlock_irqrestore
623 => do_task_stat
624 => proc_tgid_stat
625 => proc_single_show
626 => seq_read
627 => vfs_read
628 => sys_read
629 => system_call_fastpath
630
631
This shows that the current tracer is "irqsoff" tracing the time
for which interrupts were disabled. It gives the trace version (which
never changes) and the version of the kernel upon which this was
executed (3.8.0-test+). Then it displays the max latency in
microseconds (259 us). The number of trace entries displayed and the
total number (both are four: #4/4). VP, KP, SP, and HP are always zero
and are reserved for later use.
#P is the number of online CPUs (#P:4).
639
640The task is the process that was running when the latency
641occurred. (ps pid: 6143).
642
643The start and stop (the functions in which the interrupts were
644disabled and enabled respectively) that caused the latencies:
645
646 __lock_task_sighand is where the interrupts were disabled.
647 _raw_spin_unlock_irqrestore is where they were enabled again.
648
649The next lines after the header are the trace itself. The header
650explains which is which.
651
652  cmd: The name of the process in the trace.
653
654  pid: The PID of that process.
655
656  CPU#: The CPU which the process was running on.
657
658  irqs-off: 'd' interrupts are disabled. '.' otherwise.
659	    Note: If the architecture does not support a way to
660		  read the irq flags variable, an 'X' will always
661		  be printed here.
662
663  need-resched:
664	'N' both TIF_NEED_RESCHED and PREEMPT_NEED_RESCHED is set,
665	'n' only TIF_NEED_RESCHED is set,
666	'p' only PREEMPT_NEED_RESCHED is set,
667	'.' otherwise.
668
669  hardirq/softirq:
670	'H' - hard irq occurred inside a softirq.
671	'h' - hard irq is running
672	's' - soft irq is running
673	'.' - normal context.
674
  preempt-depth: The level of preempt_disable nesting.
676
677The above is mostly meaningful for kernel developers.
678
679  time: When the latency-format option is enabled, the trace file
680	output includes a timestamp relative to the start of the
681	trace. This differs from the output when latency-format
682	is disabled, which includes an absolute timestamp.
683
684  delay: This is just to help catch your eye a bit better. And
685	 needs to be fixed to be only relative to the same CPU.
686	 The marks are determined by the difference between this
687	 current trace and the next trace.
	  '$' - greater than 1 second
	  '#' - greater than 1000 microseconds
	  '!' - greater than 100 microseconds
	  '+' - greater than 10 microseconds
	  ' ' - less than or equal to 10 microseconds.
693
694  The rest is the same as the 'trace' file.
695
696  Note, the latency tracers will usually end with a back trace
697  to easily find where the latency occurred.
698
699trace_options
700-------------
701
702The trace_options file (or the options directory) is used to control
703what gets printed in the trace output, or manipulate the tracers.
704To see what is available, simply cat the file:
705
706  cat trace_options
707print-parent
708nosym-offset
709nosym-addr
710noverbose
711noraw
712nohex
713nobin
714noblock
715nostacktrace
716trace_printk
717noftrace_preempt
718nobranch
719annotate
720nouserstacktrace
721nosym-userobj
722noprintk-msg-only
723context-info
724latency-format
725sleep-time
726graph-time
727record-cmd
728overwrite
729nodisable_on_free
730irq-info
731markers
732function-trace
733
734To disable one of the options, echo in the option prepended with
735"no".
736
737  echo noprint-parent > trace_options
738
739To enable an option, leave off the "no".
740
741  echo sym-offset > trace_options
742
743Here are the available options:
744
745  print-parent - On function traces, display the calling (parent)
746		 function as well as the function being traced.
747
748  print-parent:
749   bash-4000  [01]  1477.606694: simple_strtoul <-kstrtoul
750
751  noprint-parent:
752   bash-4000  [01]  1477.606694: simple_strtoul
753
754
755  sym-offset - Display not only the function name, but also the
756	       offset in the function. For example, instead of
757	       seeing just "ktime_get", you will see
758	       "ktime_get+0xb/0x20".
759
760  sym-offset:
761   bash-4000  [01]  1477.606694: simple_strtoul+0x6/0xa0
762
763  sym-addr - this will also display the function address as well
764	     as the function name.
765
766  sym-addr:
767   bash-4000  [01]  1477.606694: simple_strtoul <c0339346>
768
769  verbose - This deals with the trace file when the
770            latency-format option is enabled.
771
772    bash  4000 1 0 00000000 00010a95 [58127d26] 1720.415ms \
773    (+0.000ms): simple_strtoul (kstrtoul)
774
775  raw - This will display raw numbers. This option is best for
776	use with user applications that can translate the raw
777	numbers better than having it done in the kernel.
778
779  hex - Similar to raw, but the numbers will be in a hexadecimal
780	format.
781
782  bin - This will print out the formats in raw binary.
783
784  block - When set, reading trace_pipe will not block when polled.
785
786  stacktrace - This is one of the options that changes the trace
787	       itself. When a trace is recorded, so is the stack
788	       of functions. This allows for back traces of
789	       trace sites.
790
791  trace_printk - Can disable trace_printk() from writing into the buffer.
792
793  branch - Enable branch tracing with the tracer.
794
795  annotate - It is sometimes confusing when the CPU buffers are full
796  	     and one CPU buffer had a lot of events recently, thus
	     a shorter time frame, whereas another CPU may have only had
798	     a few events, which lets it have older events. When
799	     the trace is reported, it shows the oldest events first,
800	     and it may look like only one CPU ran (the one with the
801	     oldest events). When the annotate option is set, it will
802	     display when a new CPU buffer started:
803
804          <idle>-0     [001] dNs4 21169.031481: wake_up_idle_cpu <-add_timer_on
805          <idle>-0     [001] dNs4 21169.031482: _raw_spin_unlock_irqrestore <-add_timer_on
806          <idle>-0     [001] .Ns4 21169.031484: sub_preempt_count <-_raw_spin_unlock_irqrestore
807##### CPU 2 buffer started ####
808          <idle>-0     [002] .N.1 21169.031484: rcu_idle_exit <-cpu_idle
809          <idle>-0     [001] .Ns3 21169.031484: _raw_spin_unlock <-clocksource_watchdog
810          <idle>-0     [001] .Ns3 21169.031485: sub_preempt_count <-_raw_spin_unlock
811
812  userstacktrace - This option changes the trace. It records a
813		   stacktrace of the current userspace thread.
814
  sym-userobj - when user stacktraces are enabled, look up which
		object the address belongs to, and print a
		relative address. This is especially useful when
		ASLR is on, otherwise you don't get a chance to
		resolve the address to object/file/line after
		the app is no longer running.

		The lookup is performed when you read
		trace or trace_pipe. Example:

		a.out-1623  [000] 40874.465068: /root/a.out[+0x480] <-/root/a.out[+0x494]
		<- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
827
828
829  printk-msg-only - When set, trace_printk()s will only show the format
830  		    and not their parameters (if trace_bprintk() or
831		    trace_bputs() was used to save the trace_printk()).
832
  context-info - Show the comm, PID, timestamp, CPU, and other
  	         context data with each event. When this option is
	         disabled, only the event data is shown.
835
836  latency-format - This option changes the trace. When
837                   it is enabled, the trace displays
838                   additional information about the
839                   latencies, as described in "Latency
840                   trace format".
841
  sleep-time - When running the function graph tracer, include
  	       the time a task schedules out in its function.
	       When enabled, it will account the time the task has been
	       scheduled out as part of the function call.
846
  graph-time - When running the function graph tracer, include the
  	       time spent in nested function calls. When this is not set,
849	       the time reported for the function will only include
850	       the time the function itself executed for, not the time
851	       for functions that it called.
852
853  record-cmd - When any event or tracer is enabled, a hook is enabled
854  	       in the sched_switch trace point to fill comm cache
855	       with mapped pids and comms. But this may cause some
856	       overhead, and if you only care about pids, and not the
857	       name of the task, disabling this option can lower the
858	       impact of tracing.
859
860  overwrite - This controls what happens when the trace buffer is
861              full. If "1" (default), the oldest events are
862              discarded and overwritten. If "0", then the newest
863              events are discarded.
864	        (see per_cpu/cpu0/stats for overrun and dropped)
865
866  disable_on_free - When the free_buffer is closed, tracing will
867  		    stop (tracing_on set to 0).
868
869  irq-info - Shows the interrupt, preempt count, need resched data.
870  	     When disabled, the trace looks like:
871
872# tracer: function
873#
874# entries-in-buffer/entries-written: 144405/9452052   #P:4
875#
876#           TASK-PID   CPU#      TIMESTAMP  FUNCTION
877#              | |       |          |         |
878          <idle>-0     [002]  23636.756054: ttwu_do_activate.constprop.89 <-try_to_wake_up
879          <idle>-0     [002]  23636.756054: activate_task <-ttwu_do_activate.constprop.89
880          <idle>-0     [002]  23636.756055: enqueue_task <-activate_task
881
882
883  markers - When set, the trace_marker is writable (only by root).
884  	    When disabled, the trace_marker will error with EINVAL
885	    on write.
886
887
  function-trace - The latency tracers will enable function tracing
  	    when this option is enabled (which it is by default). When
890	    it is disabled, the latency tracers do not trace
891	    functions. This keeps the overhead of the tracer down
892	    when performing latency tests.
893
894 Note: Some tracers have their own options. They only appear
895       when the tracer is active.
896
897
898
899irqsoff
900-------
901
902When interrupts are disabled, the CPU can not react to any other
903external event (besides NMIs and SMIs). This prevents the timer
904interrupt from triggering or the mouse interrupt from letting
the kernel know of a new mouse event. The result is added latency
in reaction time.
907
908The irqsoff tracer tracks the time for which interrupts are
909disabled. When a new maximum latency is hit, the tracer saves
910the trace leading up to that latency point so that every time a
911new maximum is reached, the old saved trace is discarded and the
912new trace is saved.
913
914To reset the maximum, echo 0 into tracing_max_latency. Here is
915an example:
916
917 # echo 0 > options/function-trace
918 # echo irqsoff > current_tracer
919 # echo 1 > tracing_on
920 # echo 0 > tracing_max_latency
921 # ls -ltr
922 [...]
923 # echo 0 > tracing_on
924 # cat trace
925# tracer: irqsoff
926#
927# irqsoff latency trace v1.1.5 on 3.8.0-test+
928# --------------------------------------------------------------------
929# latency: 16 us, #4/4, CPU#0 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
930#    -----------------
931#    | task: swapper/0-0 (uid:0 nice:0 policy:0 rt_prio:0)
932#    -----------------
933#  => started at: run_timer_softirq
934#  => ended at:   run_timer_softirq
935#
936#
937#                  _------=> CPU#            
938#                 / _-----=> irqs-off        
939#                | / _----=> need-resched    
940#                || / _---=> hardirq/softirq 
941#                ||| / _--=> preempt-depth   
942#                |||| /     delay             
943#  cmd     pid   ||||| time  |   caller      
944#     \   /      |||||  \    |   /           
945  <idle>-0       0d.s2    0us+: _raw_spin_lock_irq <-run_timer_softirq
946  <idle>-0       0dNs3   17us : _raw_spin_unlock_irq <-run_timer_softirq
947  <idle>-0       0dNs3   17us+: trace_hardirqs_on <-run_timer_softirq
948  <idle>-0       0dNs3   25us : <stack trace>
949 => _raw_spin_unlock_irq
950 => run_timer_softirq
951 => __do_softirq
952 => call_softirq
953 => do_softirq
954 => irq_exit
955 => smp_apic_timer_interrupt
956 => apic_timer_interrupt
957 => rcu_idle_exit
958 => cpu_idle
959 => rest_init
960 => start_kernel
961 => x86_64_start_reservations
962 => x86_64_start_kernel
963
Here we see that we had a latency of 16 microseconds (which is
965very good). The _raw_spin_lock_irq in run_timer_softirq disabled
966interrupts. The difference between the 16 and the displayed
967timestamp 25us occurred because the clock was incremented
968between the time of recording the max latency and the time of
969recording the function that had that latency.
970
971Note the above example had function-trace not set. If we set
972function-trace, we get a much larger output:
973
974 with echo 1 > options/function-trace
975
976# tracer: irqsoff
977#
978# irqsoff latency trace v1.1.5 on 3.8.0-test+
979# --------------------------------------------------------------------
980# latency: 71 us, #168/168, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
981#    -----------------
982#    | task: bash-2042 (uid:0 nice:0 policy:0 rt_prio:0)
983#    -----------------
984#  => started at: ata_scsi_queuecmd
985#  => ended at:   ata_scsi_queuecmd
986#
987#
988#                  _------=> CPU#            
989#                 / _-----=> irqs-off        
990#                | / _----=> need-resched    
991#                || / _---=> hardirq/softirq 
992#                ||| / _--=> preempt-depth   
993#                |||| /     delay             
994#  cmd     pid   ||||| time  |   caller      
995#     \   /      |||||  \    |   /           
996    bash-2042    3d...    0us : _raw_spin_lock_irqsave <-ata_scsi_queuecmd
997    bash-2042    3d...    0us : add_preempt_count <-_raw_spin_lock_irqsave
998    bash-2042    3d..1    1us : ata_scsi_find_dev <-ata_scsi_queuecmd
999    bash-2042    3d..1    1us : __ata_scsi_find_dev <-ata_scsi_find_dev
1000    bash-2042    3d..1    2us : ata_find_dev.part.14 <-__ata_scsi_find_dev
1001    bash-2042    3d..1    2us : ata_qc_new_init <-__ata_scsi_queuecmd
1002    bash-2042    3d..1    3us : ata_sg_init <-__ata_scsi_queuecmd
1003    bash-2042    3d..1    4us : ata_scsi_rw_xlat <-__ata_scsi_queuecmd
1004    bash-2042    3d..1    4us : ata_build_rw_tf <-ata_scsi_rw_xlat
1005[...]
1006    bash-2042    3d..1   67us : delay_tsc <-__delay
1007    bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1008    bash-2042    3d..2   67us : sub_preempt_count <-delay_tsc
1009    bash-2042    3d..1   67us : add_preempt_count <-delay_tsc
1010    bash-2042    3d..2   68us : sub_preempt_count <-delay_tsc
1011    bash-2042    3d..1   68us+: ata_bmdma_start <-ata_bmdma_qc_issue
1012    bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1013    bash-2042    3d..1   71us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1014    bash-2042    3d..1   72us+: trace_hardirqs_on <-ata_scsi_queuecmd
1015    bash-2042    3d..1  120us : <stack trace>
1016 => _raw_spin_unlock_irqrestore
1017 => ata_scsi_queuecmd
1018 => scsi_dispatch_cmd
1019 => scsi_request_fn
1020 => __blk_run_queue_uncond
1021 => __blk_run_queue
1022 => blk_queue_bio
1023 => generic_make_request
1024 => submit_bio
1025 => submit_bh
1026 => __ext3_get_inode_loc
1027 => ext3_iget
1028 => ext3_lookup
1029 => lookup_real
1030 => __lookup_hash
1031 => walk_component
1032 => lookup_last
1033 => path_lookupat
1034 => filename_lookup
1035 => user_path_at_empty
1036 => user_path_at
1037 => vfs_fstatat
1038 => vfs_stat
1039 => sys_newstat
1040 => system_call_fastpath
1041
1042
1043Here we traced a 71 microsecond latency. But we also see all the
1044functions that were called during that time. Note that by
1045enabling function tracing, we incur an added overhead. This
1046overhead may extend the latency times. But nevertheless, this
1047trace has provided some very helpful debugging information.
1048
1049
1050preemptoff
1051----------
1052
When preemption is disabled, we may be able to receive
interrupts, but the task cannot be preempted; a higher
priority task must wait for preemption to be enabled again
before it can preempt a lower priority task.
1057
1058The preemptoff tracer traces the places that disable preemption.
1059Like the irqsoff tracer, it records the maximum latency for
1060which preemption was disabled. The control of preemptoff tracer
1061is much like the irqsoff tracer.
1062
1063 # echo 0 > options/function-trace
1064 # echo preemptoff > current_tracer
1065 # echo 1 > tracing_on
1066 # echo 0 > tracing_max_latency
1067 # ls -ltr
1068 [...]
1069 # echo 0 > tracing_on
1070 # cat trace
1071# tracer: preemptoff
1072#
1073# preemptoff latency trace v1.1.5 on 3.8.0-test+
1074# --------------------------------------------------------------------
1075# latency: 46 us, #4/4, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1076#    -----------------
1077#    | task: sshd-1991 (uid:0 nice:0 policy:0 rt_prio:0)
1078#    -----------------
1079#  => started at: do_IRQ
1080#  => ended at:   do_IRQ
1081#
1082#
1083#                  _------=> CPU#            
1084#                 / _-----=> irqs-off        
1085#                | / _----=> need-resched    
1086#                || / _---=> hardirq/softirq 
1087#                ||| / _--=> preempt-depth   
1088#                |||| /     delay             
1089#  cmd     pid   ||||| time  |   caller      
1090#     \   /      |||||  \    |   /           
1091    sshd-1991    1d.h.    0us+: irq_enter <-do_IRQ
1092    sshd-1991    1d..1   46us : irq_exit <-do_IRQ
1093    sshd-1991    1d..1   47us+: trace_preempt_on <-do_IRQ
1094    sshd-1991    1d..1   52us : <stack trace>
1095 => sub_preempt_count
1096 => irq_exit
1097 => do_IRQ
1098 => ret_from_intr
1099
1100
1101This has some more changes. Preemption was disabled when an
1102interrupt came in (notice the 'h'), and was enabled on exit.
1103But we also see that interrupts have been disabled when entering
1104the preempt off section and leaving it (the 'd'). We do not know if
interrupts were enabled in the meantime or shortly after this
1106was over.
1107
1108# tracer: preemptoff
1109#
1110# preemptoff latency trace v1.1.5 on 3.8.0-test+
1111# --------------------------------------------------------------------
1112# latency: 83 us, #241/241, CPU#1 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1113#    -----------------
1114#    | task: bash-1994 (uid:0 nice:0 policy:0 rt_prio:0)
1115#    -----------------
1116#  => started at: wake_up_new_task
1117#  => ended at:   task_rq_unlock
1118#
1119#
1120#                  _------=> CPU#            
1121#                 / _-----=> irqs-off        
1122#                | / _----=> need-resched    
1123#                || / _---=> hardirq/softirq 
1124#                ||| / _--=> preempt-depth   
1125#                |||| /     delay             
1126#  cmd     pid   ||||| time  |   caller      
1127#     \   /      |||||  \    |   /           
1128    bash-1994    1d..1    0us : _raw_spin_lock_irqsave <-wake_up_new_task
1129    bash-1994    1d..1    0us : select_task_rq_fair <-select_task_rq
1130    bash-1994    1d..1    1us : __rcu_read_lock <-select_task_rq_fair
1131    bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1132    bash-1994    1d..1    1us : source_load <-select_task_rq_fair
1133[...]
1134    bash-1994    1d..1   12us : irq_enter <-smp_apic_timer_interrupt
1135    bash-1994    1d..1   12us : rcu_irq_enter <-irq_enter
1136    bash-1994    1d..1   13us : add_preempt_count <-irq_enter
1137    bash-1994    1d.h1   13us : exit_idle <-smp_apic_timer_interrupt
1138    bash-1994    1d.h1   13us : hrtimer_interrupt <-smp_apic_timer_interrupt
1139    bash-1994    1d.h1   13us : _raw_spin_lock <-hrtimer_interrupt
1140    bash-1994    1d.h1   14us : add_preempt_count <-_raw_spin_lock
1141    bash-1994    1d.h2   14us : ktime_get_update_offsets <-hrtimer_interrupt
1142[...]
1143    bash-1994    1d.h1   35us : lapic_next_event <-clockevents_program_event
1144    bash-1994    1d.h1   35us : irq_exit <-smp_apic_timer_interrupt
1145    bash-1994    1d.h1   36us : sub_preempt_count <-irq_exit
1146    bash-1994    1d..2   36us : do_softirq <-irq_exit
1147    bash-1994    1d..2   36us : __do_softirq <-call_softirq
1148    bash-1994    1d..2   36us : __local_bh_disable <-__do_softirq
1149    bash-1994    1d.s2   37us : add_preempt_count <-_raw_spin_lock_irq
1150    bash-1994    1d.s3   38us : _raw_spin_unlock <-run_timer_softirq
1151    bash-1994    1d.s3   39us : sub_preempt_count <-_raw_spin_unlock
1152    bash-1994    1d.s2   39us : call_timer_fn <-run_timer_softirq
1153[...]
1154    bash-1994    1dNs2   81us : cpu_needs_another_gp <-rcu_process_callbacks
1155    bash-1994    1dNs2   82us : __local_bh_enable <-__do_softirq
1156    bash-1994    1dNs2   82us : sub_preempt_count <-__local_bh_enable
1157    bash-1994    1dN.2   82us : idle_cpu <-irq_exit
1158    bash-1994    1dN.2   83us : rcu_irq_exit <-irq_exit
1159    bash-1994    1dN.2   83us : sub_preempt_count <-irq_exit
1160    bash-1994    1.N.1   84us : _raw_spin_unlock_irqrestore <-task_rq_unlock
1161    bash-1994    1.N.1   84us+: trace_preempt_on <-task_rq_unlock
1162    bash-1994    1.N.1  104us : <stack trace>
1163 => sub_preempt_count
1164 => _raw_spin_unlock_irqrestore
1165 => task_rq_unlock
1166 => wake_up_new_task
1167 => do_fork
1168 => sys_clone
1169 => stub_clone
1170
1171
The above is an example of the preemptoff trace with
function-trace set. Here we see that interrupts were not disabled
the entire time. The irq_enter code lets us know that we entered
an interrupt 'h'. Before irq_enter is called, the flags do not yet
show that we are in an interrupt, but the functions being traced
make it clear that we are.
1178
1179preemptirqsoff
1180--------------
1181
1182Knowing the locations that have interrupts disabled or
1183preemption disabled for the longest times is helpful. But
sometimes we would like to know the total time for which either
preemption or interrupts, or both, are disabled.
1186
1187Consider the following code:
1188
1189    local_irq_disable();
1190    call_function_with_irqs_off();
1191    preempt_disable();
1192    call_function_with_irqs_and_preemption_off();
1193    local_irq_enable();
1194    call_function_with_preemption_off();
1195    preempt_enable();
1196
1197The irqsoff tracer will record the total length of
1198call_function_with_irqs_off() and
1199call_function_with_irqs_and_preemption_off().
1200
1201The preemptoff tracer will record the total length of
1202call_function_with_irqs_and_preemption_off() and
1203call_function_with_preemption_off().
1204
But neither will trace the full time that either interrupts or
preemption, or both, are disabled. This total time is the time during
which we can not schedule. To record this time, use the preemptirqsoff
1208tracer.
1209
1210Again, using this trace is much like the irqsoff and preemptoff
1211tracers.
1212
1213 # echo 0 > options/function-trace
1214 # echo preemptirqsoff > current_tracer
1215 # echo 1 > tracing_on
1216 # echo 0 > tracing_max_latency
1217 # ls -ltr
1218 [...]
1219 # echo 0 > tracing_on
1220 # cat trace
1221# tracer: preemptirqsoff
1222#
1223# preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1224# --------------------------------------------------------------------
1225# latency: 100 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1226#    -----------------
1227#    | task: ls-2230 (uid:0 nice:0 policy:0 rt_prio:0)
1228#    -----------------
1229#  => started at: ata_scsi_queuecmd
1230#  => ended at:   ata_scsi_queuecmd
1231#
1232#
1233#                  _------=> CPU#            
1234#                 / _-----=> irqs-off        
1235#                | / _----=> need-resched    
1236#                || / _---=> hardirq/softirq 
1237#                ||| / _--=> preempt-depth   
1238#                |||| /     delay             
1239#  cmd     pid   ||||| time  |   caller      
1240#     \   /      |||||  \    |   /           
1241      ls-2230    3d...    0us+: _raw_spin_lock_irqsave <-ata_scsi_queuecmd
1242      ls-2230    3...1  100us : _raw_spin_unlock_irqrestore <-ata_scsi_queuecmd
1243      ls-2230    3...1  101us+: trace_preempt_on <-ata_scsi_queuecmd
1244      ls-2230    3...1  111us : <stack trace>
1245 => sub_preempt_count
1246 => _raw_spin_unlock_irqrestore
1247 => ata_scsi_queuecmd
1248 => scsi_dispatch_cmd
1249 => scsi_request_fn
1250 => __blk_run_queue_uncond
1251 => __blk_run_queue
1252 => blk_queue_bio
1253 => generic_make_request
1254 => submit_bio
1255 => submit_bh
1256 => ext3_bread
1257 => ext3_dir_bread
1258 => htree_dirblock_to_tree
1259 => ext3_htree_fill_tree
1260 => ext3_readdir
1261 => vfs_readdir
1262 => sys_getdents
1263 => system_call_fastpath
1264
1265
1266The trace_hardirqs_off_thunk is called from assembly on x86 when
1267interrupts are disabled in the assembly code. Without the
1268function tracing, we do not know if interrupts were enabled
1269within the preemption points. We do see that it started with
1270preemption enabled.
1271
1272Here is a trace with function-trace set:
1273
1274# tracer: preemptirqsoff
1275#
1276# preemptirqsoff latency trace v1.1.5 on 3.8.0-test+
1277# --------------------------------------------------------------------
1278# latency: 161 us, #339/339, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1279#    -----------------
1280#    | task: ls-2269 (uid:0 nice:0 policy:0 rt_prio:0)
1281#    -----------------
1282#  => started at: schedule
1283#  => ended at:   mutex_unlock
1284#
1285#
1286#                  _------=> CPU#            
1287#                 / _-----=> irqs-off        
1288#                | / _----=> need-resched    
1289#                || / _---=> hardirq/softirq 
1290#                ||| / _--=> preempt-depth   
1291#                |||| /     delay             
1292#  cmd     pid   ||||| time  |   caller      
1293#     \   /      |||||  \    |   /           
1294kworker/-59      3...1    0us : __schedule <-schedule
1295kworker/-59      3d..1    0us : rcu_preempt_qs <-rcu_note_context_switch
1296kworker/-59      3d..1    1us : add_preempt_count <-_raw_spin_lock_irq
1297kworker/-59      3d..2    1us : deactivate_task <-__schedule
1298kworker/-59      3d..2    1us : dequeue_task <-deactivate_task
1299kworker/-59      3d..2    2us : update_rq_clock <-dequeue_task
1300kworker/-59      3d..2    2us : dequeue_task_fair <-dequeue_task
1301kworker/-59      3d..2    2us : update_curr <-dequeue_task_fair
1302kworker/-59      3d..2    2us : update_min_vruntime <-update_curr
1303kworker/-59      3d..2    3us : cpuacct_charge <-update_curr
1304kworker/-59      3d..2    3us : __rcu_read_lock <-cpuacct_charge
1305kworker/-59      3d..2    3us : __rcu_read_unlock <-cpuacct_charge
1306kworker/-59      3d..2    3us : update_cfs_rq_blocked_load <-dequeue_task_fair
1307kworker/-59      3d..2    4us : clear_buddies <-dequeue_task_fair
1308kworker/-59      3d..2    4us : account_entity_dequeue <-dequeue_task_fair
1309kworker/-59      3d..2    4us : update_min_vruntime <-dequeue_task_fair
1310kworker/-59      3d..2    4us : update_cfs_shares <-dequeue_task_fair
1311kworker/-59      3d..2    5us : hrtick_update <-dequeue_task_fair
1312kworker/-59      3d..2    5us : wq_worker_sleeping <-__schedule
1313kworker/-59      3d..2    5us : kthread_data <-wq_worker_sleeping
1314kworker/-59      3d..2    5us : put_prev_task_fair <-__schedule
1315kworker/-59      3d..2    6us : pick_next_task_fair <-pick_next_task
1316kworker/-59      3d..2    6us : clear_buddies <-pick_next_task_fair
1317kworker/-59      3d..2    6us : set_next_entity <-pick_next_task_fair
1318kworker/-59      3d..2    6us : update_stats_wait_end <-set_next_entity
1319      ls-2269    3d..2    7us : finish_task_switch <-__schedule
1320      ls-2269    3d..2    7us : _raw_spin_unlock_irq <-finish_task_switch
1321      ls-2269    3d..2    8us : do_IRQ <-ret_from_intr
1322      ls-2269    3d..2    8us : irq_enter <-do_IRQ
1323      ls-2269    3d..2    8us : rcu_irq_enter <-irq_enter
1324      ls-2269    3d..2    9us : add_preempt_count <-irq_enter
1325      ls-2269    3d.h2    9us : exit_idle <-do_IRQ
1326[...]
1327      ls-2269    3d.h3   20us : sub_preempt_count <-_raw_spin_unlock
1328      ls-2269    3d.h2   20us : irq_exit <-do_IRQ
1329      ls-2269    3d.h2   21us : sub_preempt_count <-irq_exit
1330      ls-2269    3d..3   21us : do_softirq <-irq_exit
1331      ls-2269    3d..3   21us : __do_softirq <-call_softirq
1332      ls-2269    3d..3   21us+: __local_bh_disable <-__do_softirq
1333      ls-2269    3d.s4   29us : sub_preempt_count <-_local_bh_enable_ip
1334      ls-2269    3d.s5   29us : sub_preempt_count <-_local_bh_enable_ip
1335      ls-2269    3d.s5   31us : do_IRQ <-ret_from_intr
1336      ls-2269    3d.s5   31us : irq_enter <-do_IRQ
1337      ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1338[...]
1339      ls-2269    3d.s5   31us : rcu_irq_enter <-irq_enter
1340      ls-2269    3d.s5   32us : add_preempt_count <-irq_enter
1341      ls-2269    3d.H5   32us : exit_idle <-do_IRQ
1342      ls-2269    3d.H5   32us : handle_irq <-do_IRQ
1343      ls-2269    3d.H5   32us : irq_to_desc <-handle_irq
1344      ls-2269    3d.H5   33us : handle_fasteoi_irq <-handle_irq
1345[...]
1346      ls-2269    3d.s5  158us : _raw_spin_unlock_irqrestore <-rtl8139_poll
1347      ls-2269    3d.s3  158us : net_rps_action_and_irq_enable.isra.65 <-net_rx_action
1348      ls-2269    3d.s3  159us : __local_bh_enable <-__do_softirq
1349      ls-2269    3d.s3  159us : sub_preempt_count <-__local_bh_enable
1350      ls-2269    3d..3  159us : idle_cpu <-irq_exit
1351      ls-2269    3d..3  159us : rcu_irq_exit <-irq_exit
1352      ls-2269    3d..3  160us : sub_preempt_count <-irq_exit
1353      ls-2269    3d...  161us : __mutex_unlock_slowpath <-mutex_unlock
1354      ls-2269    3d...  162us+: trace_hardirqs_on <-mutex_unlock
1355      ls-2269    3d...  186us : <stack trace>
1356 => __mutex_unlock_slowpath
1357 => mutex_unlock
1358 => process_output
1359 => n_tty_write
1360 => tty_write
1361 => vfs_write
1362 => sys_write
1363 => system_call_fastpath
1364
1365This is an interesting trace. It started with kworker running and
1366scheduling out and ls taking over. But as soon as ls released the
1367rq lock and enabled interrupts (but not preemption) an interrupt
1368triggered. When the interrupt finished, it started running softirqs.
1369But while the softirq was running, another interrupt triggered.
1370When an interrupt is running inside a softirq, the annotation is 'H'.
1371
1372
1373wakeup
1374------
1375
One common case that people are interested in tracing is the
time it takes for a task that has been woken up to actually start
running. For non Real-Time tasks this can be arbitrary, but tracing
it nonetheless can be interesting.
1380
1381Without function tracing:
1382
1383 # echo 0 > options/function-trace
1384 # echo wakeup > current_tracer
1385 # echo 1 > tracing_on
1386 # echo 0 > tracing_max_latency
1387 # chrt -f 5 sleep 1
1388 # echo 0 > tracing_on
1389 # cat trace
1390# tracer: wakeup
1391#
1392# wakeup latency trace v1.1.5 on 3.8.0-test+
1393# --------------------------------------------------------------------
1394# latency: 15 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1395#    -----------------
1396#    | task: kworker/3:1H-312 (uid:0 nice:-20 policy:0 rt_prio:0)
1397#    -----------------
1398#
1399#                  _------=> CPU#            
1400#                 / _-----=> irqs-off        
1401#                | / _----=> need-resched    
1402#                || / _---=> hardirq/softirq 
1403#                ||| / _--=> preempt-depth   
1404#                |||| /     delay             
1405#  cmd     pid   ||||| time  |   caller      
1406#     \   /      |||||  \    |   /           
1407  <idle>-0       3dNs7    0us :      0:120:R   + [003]   312:100:R kworker/3:1H
1408  <idle>-0       3dNs7    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
1409  <idle>-0       3d..3   15us : __schedule <-schedule
1410  <idle>-0       3d..3   15us :      0:120:R ==> [003]   312:100:R kworker/3:1H
1411
The tracer only traces the highest priority task in the system
to avoid recording ordinary scheduling activity. Here we see that
the kworker with a nice priority of -20 (not very nice) took
just 15 microseconds from the time it woke up to the time it
ran.
1417
1418Non Real-Time tasks are not that interesting. A more interesting
1419trace is to concentrate only on Real-Time tasks.
1420
1421wakeup_rt
1422---------
1423
1424In a Real-Time environment it is very important to know the
1425wakeup time it takes for the highest priority task that is woken
1426up to the time that it executes. This is also known as "schedule
1427latency". I stress the point that this is about RT tasks. It is
1428also important to know the scheduling latency of non-RT tasks,
but for non-RT tasks the average schedule latency is usually the
more relevant number. Tools like LatencyTop are more appropriate
for such measurements.
1432
1433Real-Time environments are interested in the worst case latency.
1434That is the longest latency it takes for something to happen,
1435and not the average. We can have a very fast scheduler that may
1436only have a large latency once in a while, but that would not
1437work well with Real-Time tasks.  The wakeup_rt tracer was designed
1438to record the worst case wakeups of RT tasks. Non-RT tasks are
1439not recorded because the tracer only records one worst case and
1440tracing non-RT tasks that are unpredictable will overwrite the
1441worst case latency of RT tasks (just run the normal wakeup
1442tracer for a while to see that effect).
1443
1444Since this tracer only deals with RT tasks, we will run this
1445slightly differently than we did with the previous tracers.
1446Instead of performing an 'ls', we will run 'sleep 1' under
1447'chrt' which changes the priority of the task.
1448
1449 # echo 0 > options/function-trace
1450 # echo wakeup_rt > current_tracer
1451 # echo 1 > tracing_on
1452 # echo 0 > tracing_max_latency
1453 # chrt -f 5 sleep 1
1454 # echo 0 > tracing_on
1455 # cat trace
1458# tracer: wakeup_rt
1459#
1460# wakeup_rt latency trace v1.1.5 on 3.8.0-test+
1461# --------------------------------------------------------------------
1462# latency: 5 us, #4/4, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1463#    -----------------
1464#    | task: sleep-2389 (uid:0 nice:0 policy:1 rt_prio:5)
1465#    -----------------
1466#
1467#                  _------=> CPU#            
1468#                 / _-----=> irqs-off        
1469#                | / _----=> need-resched    
1470#                || / _---=> hardirq/softirq 
1471#                ||| / _--=> preempt-depth   
1472#                |||| /     delay             
1473#  cmd     pid   ||||| time  |   caller      
1474#     \   /      |||||  \    |   /           
1475  <idle>-0       3d.h4    0us :      0:120:R   + [003]  2389: 94:R sleep
1476  <idle>-0       3d.h4    1us+: ttwu_do_activate.constprop.87 <-try_to_wake_up
1477  <idle>-0       3d..3    5us : __schedule <-schedule
1478  <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
1479
1480
Running this on an idle system, we see that it only took 5 microseconds
to perform the task switch.  Note, since the trace point in the scheduler
is before the actual "switch", we stop the tracing when the recorded task
is about to schedule in. This may change if we add a new marker at the
end of the scheduler.
1486
Notice that the recorded task is 'sleep' with the PID of 2389
and it has an rt_prio of 5. This priority is the user-space priority
and not the internal kernel priority. The policy is 1 for
SCHED_FIFO and 2 for SCHED_RR.
1491
Note that the trace data shows the internal priority (99 - rtprio).
1493
1494  <idle>-0       3d..3    5us :      0:120:R ==> [003]  2389: 94:R sleep
1495
The 0:120:R means idle was running with a nice priority of 0 (the
internal priority 120 maps to nice 0, since nice = priority - 120)
and was in the running state 'R'. The sleep task was scheduled in with
2389: 94:R. That is, the priority shown is the kernel rtprio (99 - 5 = 94)
and it too is in the running state.
1500
Doing the same run again, but this time with function-trace enabled:

 # echo 1 > options/function-trace
1504
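The rest of the sequence is the same as before (repeated here for
completeness; chrt -f, i.e. SCHED_FIFO, matches the policy:1 shown
in the output below):

 # echo wakeup_rt > current_tracer
 # echo 1 > tracing_on
 # echo 0 > tracing_max_latency
 # chrt -f 5 sleep 1
 # echo 0 > tracing_on
 # cat trace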
1505# tracer: wakeup_rt
1506#
1507# wakeup_rt latency trace v1.1.5 on 3.8.0-test+
1508# --------------------------------------------------------------------
1509# latency: 29 us, #85/85, CPU#3 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1510#    -----------------
1511#    | task: sleep-2448 (uid:0 nice:0 policy:1 rt_prio:5)
1512#    -----------------
1513#
1514#                  _------=> CPU#            
1515#                 / _-----=> irqs-off        
1516#                | / _----=> need-resched    
1517#                || / _---=> hardirq/softirq 
1518#                ||| / _--=> preempt-depth   
1519#                |||| /     delay             
1520#  cmd     pid   ||||| time  |   caller      
1521#     \   /      |||||  \    |   /           
1522  <idle>-0       3d.h4    1us+:      0:120:R   + [003]  2448: 94:R sleep
1523  <idle>-0       3d.h4    2us : ttwu_do_activate.constprop.87 <-try_to_wake_up
1524  <idle>-0       3d.h3    3us : check_preempt_curr <-ttwu_do_wakeup
1525  <idle>-0       3d.h3    3us : resched_curr <-check_preempt_curr
1526  <idle>-0       3dNh3    4us : task_woken_rt <-ttwu_do_wakeup
1527  <idle>-0       3dNh3    4us : _raw_spin_unlock <-try_to_wake_up
1528  <idle>-0       3dNh3    4us : sub_preempt_count <-_raw_spin_unlock
1529  <idle>-0       3dNh2    5us : ttwu_stat <-try_to_wake_up
1530  <idle>-0       3dNh2    5us : _raw_spin_unlock_irqrestore <-try_to_wake_up
1531  <idle>-0       3dNh2    6us : sub_preempt_count <-_raw_spin_unlock_irqrestore
1532  <idle>-0       3dNh1    6us : _raw_spin_lock <-__run_hrtimer
1533  <idle>-0       3dNh1    6us : add_preempt_count <-_raw_spin_lock
1534  <idle>-0       3dNh2    7us : _raw_spin_unlock <-hrtimer_interrupt
1535  <idle>-0       3dNh2    7us : sub_preempt_count <-_raw_spin_unlock
1536  <idle>-0       3dNh1    7us : tick_program_event <-hrtimer_interrupt
1537  <idle>-0       3dNh1    7us : clockevents_program_event <-tick_program_event
1538  <idle>-0       3dNh1    8us : ktime_get <-clockevents_program_event
1539  <idle>-0       3dNh1    8us : lapic_next_event <-clockevents_program_event
1540  <idle>-0       3dNh1    8us : irq_exit <-smp_apic_timer_interrupt
1541  <idle>-0       3dNh1    9us : sub_preempt_count <-irq_exit
1542  <idle>-0       3dN.2    9us : idle_cpu <-irq_exit
1543  <idle>-0       3dN.2    9us : rcu_irq_exit <-irq_exit
1544  <idle>-0       3dN.2   10us : rcu_eqs_enter_common.isra.45 <-rcu_irq_exit
1545  <idle>-0       3dN.2   10us : sub_preempt_count <-irq_exit
1546  <idle>-0       3.N.1   11us : rcu_idle_exit <-cpu_idle
1547  <idle>-0       3dN.1   11us : rcu_eqs_exit_common.isra.43 <-rcu_idle_exit
1548  <idle>-0       3.N.1   11us : tick_nohz_idle_exit <-cpu_idle
1549  <idle>-0       3dN.1   12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
1550  <idle>-0       3dN.1   12us : ktime_get <-tick_nohz_idle_exit
1551  <idle>-0       3dN.1   12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
1552  <idle>-0       3dN.1   13us : update_cpu_load_nohz <-tick_nohz_idle_exit
1553  <idle>-0       3dN.1   13us : _raw_spin_lock <-update_cpu_load_nohz
1554  <idle>-0       3dN.1   13us : add_preempt_count <-_raw_spin_lock
1555  <idle>-0       3dN.2   13us : __update_cpu_load <-update_cpu_load_nohz
1556  <idle>-0       3dN.2   14us : sched_avg_update <-__update_cpu_load
1557  <idle>-0       3dN.2   14us : _raw_spin_unlock <-update_cpu_load_nohz
1558  <idle>-0       3dN.2   14us : sub_preempt_count <-_raw_spin_unlock
1559  <idle>-0       3dN.1   15us : calc_load_exit_idle <-tick_nohz_idle_exit
1560  <idle>-0       3dN.1   15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
1561  <idle>-0       3dN.1   15us : hrtimer_cancel <-tick_nohz_idle_exit
1562  <idle>-0       3dN.1   15us : hrtimer_try_to_cancel <-hrtimer_cancel
1563  <idle>-0       3dN.1   16us : lock_hrtimer_base.isra.18 <-hrtimer_try_to_cancel
1564  <idle>-0       3dN.1   16us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
1565  <idle>-0       3dN.1   16us : add_preempt_count <-_raw_spin_lock_irqsave
1566  <idle>-0       3dN.2   17us : __remove_hrtimer <-remove_hrtimer.part.16
1567  <idle>-0       3dN.2   17us : hrtimer_force_reprogram <-__remove_hrtimer
1568  <idle>-0       3dN.2   17us : tick_program_event <-hrtimer_force_reprogram
1569  <idle>-0       3dN.2   18us : clockevents_program_event <-tick_program_event
1570  <idle>-0       3dN.2   18us : ktime_get <-clockevents_program_event
1571  <idle>-0       3dN.2   18us : lapic_next_event <-clockevents_program_event
1572  <idle>-0       3dN.2   19us : _raw_spin_unlock_irqrestore <-hrtimer_try_to_cancel
1573  <idle>-0       3dN.2   19us : sub_preempt_count <-_raw_spin_unlock_irqrestore
1574  <idle>-0       3dN.1   19us : hrtimer_forward <-tick_nohz_idle_exit
1575  <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
1576  <idle>-0       3dN.1   20us : ktime_add_safe <-hrtimer_forward
1577  <idle>-0       3dN.1   20us : hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
1578  <idle>-0       3dN.1   20us : __hrtimer_start_range_ns <-hrtimer_start_range_ns
1579  <idle>-0       3dN.1   21us : lock_hrtimer_base.isra.18 <-__hrtimer_start_range_ns
1580  <idle>-0       3dN.1   21us : _raw_spin_lock_irqsave <-lock_hrtimer_base.isra.18
1581  <idle>-0       3dN.1   21us : add_preempt_count <-_raw_spin_lock_irqsave
1582  <idle>-0       3dN.2   22us : ktime_add_safe <-__hrtimer_start_range_ns
1583  <idle>-0       3dN.2   22us : enqueue_hrtimer <-__hrtimer_start_range_ns
1584  <idle>-0       3dN.2   22us : tick_program_event <-__hrtimer_start_range_ns
1585  <idle>-0       3dN.2   23us : clockevents_program_event <-tick_program_event
1586  <idle>-0       3dN.2   23us : ktime_get <-clockevents_program_event
1587  <idle>-0       3dN.2   23us : lapic_next_event <-clockevents_program_event
1588  <idle>-0       3dN.2   24us : _raw_spin_unlock_irqrestore <-__hrtimer_start_range_ns
1589  <idle>-0       3dN.2   24us : sub_preempt_count <-_raw_spin_unlock_irqrestore
1590  <idle>-0       3dN.1   24us : account_idle_ticks <-tick_nohz_idle_exit
1591  <idle>-0       3dN.1   24us : account_idle_time <-account_idle_ticks
1592  <idle>-0       3.N.1   25us : sub_preempt_count <-cpu_idle
1593  <idle>-0       3.N..   25us : schedule <-cpu_idle
1594  <idle>-0       3.N..   25us : __schedule <-preempt_schedule
1595  <idle>-0       3.N..   26us : add_preempt_count <-__schedule
1596  <idle>-0       3.N.1   26us : rcu_note_context_switch <-__schedule
1597  <idle>-0       3.N.1   26us : rcu_sched_qs <-rcu_note_context_switch
1598  <idle>-0       3dN.1   27us : rcu_preempt_qs <-rcu_note_context_switch
1599  <idle>-0       3.N.1   27us : _raw_spin_lock_irq <-__schedule
1600  <idle>-0       3dN.1   27us : add_preempt_count <-_raw_spin_lock_irq
1601  <idle>-0       3dN.2   28us : put_prev_task_idle <-__schedule
1602  <idle>-0       3dN.2   28us : pick_next_task_stop <-pick_next_task
1603  <idle>-0       3dN.2   28us : pick_next_task_rt <-pick_next_task
1604  <idle>-0       3dN.2   29us : dequeue_pushable_task <-pick_next_task_rt
1605  <idle>-0       3d..3   29us : __schedule <-preempt_schedule
1606  <idle>-0       3d..3   30us :      0:120:R ==> [003]  2448: 94:R sleep
1607
1608This isn't that big of a trace, even with function tracing enabled,
1609so I included the entire trace.
1610
The interrupt went off while the system was idle. Somewhere
before task_woken_rt() was called, the NEED_RESCHED flag was set;
this is indicated by the first occurrence of the 'N' flag.
1614
1615Latency tracing and events
1616--------------------------

Function tracing can induce a much larger latency, but without
seeing what happens within the latency window it is hard to know
what caused it. There is a middle ground, and that is to enable
events.
1621
1622 # echo 0 > options/function-trace
1623 # echo wakeup_rt > current_tracer
1624 # echo 1 > events/enable
1625 # echo 1 > tracing_on
1626 # echo 0 > tracing_max_latency
1627 # chrt -f 5 sleep 1
1628 # echo 0 > tracing_on
1629 # cat trace
1630# tracer: wakeup_rt
1631#
1632# wakeup_rt latency trace v1.1.5 on 3.8.0-test+
1633# --------------------------------------------------------------------
1634# latency: 6 us, #12/12, CPU#2 | (M:preempt VP:0, KP:0, SP:0 HP:0 #P:4)
1635#    -----------------
1636#    | task: sleep-5882 (uid:0 nice:0 policy:1 rt_prio:5)
1637#    -----------------
1638#
1639#                  _------=> CPU#            
1640#                 / _-----=> irqs-off        
1641#                | / _----=> need-resched    
1642#                || / _---=> hardirq/softirq 
1643#                ||| / _--=> preempt-depth   
1644#                |||| /     delay             
1645#  cmd     pid   ||||| time  |   caller      
1646#     \   /      |||||  \    |   /           
1647  <idle>-0       2d.h4    0us :      0:120:R   + [002]  5882: 94:R sleep
1648  <idle>-0       2d.h4    0us : ttwu_do_activate.constprop.87 <-try_to_wake_up
1649  <idle>-0       2d.h4    1us : sched_wakeup: comm=sleep pid=5882 prio=94 success=1 target_cpu=002
1650  <idle>-0       2dNh2    1us : hrtimer_expire_exit: hrtimer=ffff88007796feb8
1651  <idle>-0       2.N.2    2us : power_end: cpu_id=2
1652  <idle>-0       2.N.2    3us : cpu_idle: state=4294967295 cpu_id=2
1653  <idle>-0       2dN.3    4us : hrtimer_cancel: hrtimer=ffff88007d50d5e0
1654  <idle>-0       2dN.3    4us : hrtimer_start: hrtimer=ffff88007d50d5e0 function=tick_sched_timer expires=34311211000000 softexpires=34311211000000
1655  <idle>-0       2.N.2    5us : rcu_utilization: Start context switch
1656  <idle>-0       2.N.2    5us : rcu_utilization: End context switch
1657  <idle>-0       2d..3    6us : __schedule <-schedule
1658  <idle>-0       2d..3    6us :      0:120:R ==> [002]  5882: 94:R sleep
1659
1660
1661function
1662--------
1663
This tracer is the function tracer. Enabling the function tracer
can be done from the debugfs file system. Make sure the
ftrace_enabled sysctl is set; otherwise this tracer is a nop.
See the "ftrace_enabled" section below.
1668
1669 # sysctl kernel.ftrace_enabled=1
1670 # echo function > current_tracer
1671 # echo 1 > tracing_on
1672 # usleep 1
1673 # echo 0 > tracing_on
1674 # cat trace
1675# tracer: function
1676#
1677# entries-in-buffer/entries-written: 24799/24799   #P:4
1678#
1679#                              _-----=> irqs-off
1680#                             / _----=> need-resched
1681#                            | / _---=> hardirq/softirq
1682#                            || / _--=> preempt-depth
1683#                            ||| /     delay
1684#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
1685#              | |       |   ||||       |         |
1686            bash-1994  [002] ....  3082.063030: mutex_unlock <-rb_simple_write
1687            bash-1994  [002] ....  3082.063031: __mutex_unlock_slowpath <-mutex_unlock
1688            bash-1994  [002] ....  3082.063031: __fsnotify_parent <-fsnotify_modify
1689            bash-1994  [002] ....  3082.063032: fsnotify <-fsnotify_modify
1690            bash-1994  [002] ....  3082.063032: __srcu_read_lock <-fsnotify
1691            bash-1994  [002] ....  3082.063032: add_preempt_count <-__srcu_read_lock
1692            bash-1994  [002] ...1  3082.063032: sub_preempt_count <-__srcu_read_lock
1693            bash-1994  [002] ....  3082.063033: __srcu_read_unlock <-fsnotify
1694[...]
1695
1696
Note: the function tracer uses ring buffers to store the above
1698entries. The newest data may overwrite the oldest data.
1699Sometimes using echo to stop the trace is not sufficient because
1700the tracing could have overwritten the data that you wanted to
1701record. For this reason, it is sometimes better to disable
1702tracing directly from a program. This allows you to stop the
1703tracing at the point that you hit the part that you are
1704interested in. To disable the tracing directly from a C program,
something like the following code snippet can be used:
1706
#include <fcntl.h>      /* open(), O_WRONLY */
#include <unistd.h>     /* write() */

int trace_fd;
[...]
int main(int argc, char *argv[]) {
	[...]
	/* tracing_file() is the small helper defined later in this
	   document; it resolves a name inside the tracing directory. */
	trace_fd = open(tracing_file("tracing_on"), O_WRONLY);
	[...]
	if (condition_hit()) {
		write(trace_fd, "0", 1);
	}
	[...]
}
1718
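If the condition can be detected from a shell script instead, the
same effect can be had with a plain redirect (a minimal sketch;
condition_hit here is only a stand-in for whatever check applies,
just as in the C snippet above):

------
#!/bin/bash
# Poll for the interesting condition, then stop writing to the
# ring buffer so the data of interest is not overwritten.
while ! condition_hit; do
	sleep 0.1
done
echo 0 > /sys/kernel/debug/tracing/tracing_on
------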
1719
1720Single thread tracing
1721---------------------
1722
1723By writing into set_ftrace_pid you can trace a
1724single thread. For example:
1725
1726# cat set_ftrace_pid
1727no pid
1728# echo 3111 > set_ftrace_pid
1729# cat set_ftrace_pid
17303111
1731# echo function > current_tracer
1732# cat trace | head
1733 # tracer: function
1734 #
1735 #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
1736 #              | |       |          |         |
1737     yum-updatesd-3111  [003]  1637.254676: finish_task_switch <-thread_return
1738     yum-updatesd-3111  [003]  1637.254681: hrtimer_cancel <-schedule_hrtimeout_range
1739     yum-updatesd-3111  [003]  1637.254682: hrtimer_try_to_cancel <-hrtimer_cancel
1740     yum-updatesd-3111  [003]  1637.254683: lock_hrtimer_base <-hrtimer_try_to_cancel
1741     yum-updatesd-3111  [003]  1637.254685: fget_light <-do_sys_poll
1742     yum-updatesd-3111  [003]  1637.254686: pipe_poll <-do_sys_poll
1743# echo > set_ftrace_pid
1744# cat trace |head
1745 # tracer: function
1746 #
1747 #           TASK-PID    CPU#    TIMESTAMP  FUNCTION
1748 #              | |       |          |         |
1749 ##### CPU 3 buffer started ####
1750     yum-updatesd-3111  [003]  1701.957688: free_poll_entry <-poll_freewait
1751     yum-updatesd-3111  [003]  1701.957689: remove_wait_queue <-free_poll_entry
1752     yum-updatesd-3111  [003]  1701.957691: fput <-free_poll_entry
1753     yum-updatesd-3111  [003]  1701.957692: audit_syscall_exit <-sysret_audit
1754     yum-updatesd-3111  [003]  1701.957693: path_put <-audit_syscall_exit
1755
1756If you want to trace a function when executing, you could use
1757something like this simple program:
1758
1759#include <stdio.h>
1760#include <stdlib.h>
1761#include <sys/types.h>
1762#include <sys/stat.h>
1763#include <fcntl.h>
1764#include <unistd.h>
1765#include <string.h>
1766
1767#define _STR(x) #x
1768#define STR(x) _STR(x)
1769#define MAX_PATH 256
1770
1771const char *find_debugfs(void)
1772{
1773       static char debugfs[MAX_PATH+1];
1774       static int debugfs_found;
1775       char type[100];
1776       FILE *fp;
1777
1778       if (debugfs_found)
1779               return debugfs;
1780
1781       if ((fp = fopen("/proc/mounts","r")) == NULL) {
1782               perror("/proc/mounts");
1783               return NULL;
1784       }
1785
1786       while (fscanf(fp, "%*s %"
1787                     STR(MAX_PATH)
1788                     "s %99s %*s %*d %*d\n",
1789                     debugfs, type) == 2) {
1790               if (strcmp(type, "debugfs") == 0)
1791                       break;
1792       }
1793       fclose(fp);
1794
1795       if (strcmp(type, "debugfs") != 0) {
1796               fprintf(stderr, "debugfs not mounted");
1797               return NULL;
1798       }
1799
1800       strcat(debugfs, "/tracing/");
1801       debugfs_found = 1;
1802
1803       return debugfs;
1804}
1805
1806const char *tracing_file(const char *file_name)
1807{
1808       static char trace_file[MAX_PATH+1];
1809       snprintf(trace_file, MAX_PATH, "%s/%s", find_debugfs(), file_name);
1810       return trace_file;
1811}
1812
1813int main (int argc, char **argv)
1814{
1815        if (argc < 1)
1816                exit(-1);
1817
1818        if (fork() > 0) {
1819                int fd, ffd;
1820                char line[64];
1821                int s;
1822
1823                ffd = open(tracing_file("current_tracer"), O_WRONLY);
1824                if (ffd < 0)
1825                        exit(-1);
1826                write(ffd, "nop", 3);
1827
1828                fd = open(tracing_file("set_ftrace_pid"), O_WRONLY);
1829                s = sprintf(line, "%d\n", getpid());
1830                write(fd, line, s);
1831
1832                write(ffd, "function", 8);
1833
1834                close(fd);
1835                close(ffd);
1836
1837                execvp(argv[1], argv+1);
1838        }
1839
1840        return 0;
1841}
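
Compiled and run, the program above might be used like this (the
file and binary names are only placeholders):

 # gcc -o tracedexec tracedexec.c
 # ./tracedexec ls -l /tmp
 # cat trace | head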
1842
1843Or this simple script!
1844
1845------
1846#!/bin/bash
1847
1848debugfs=`sed -ne 's/^debugfs \(.*\) debugfs.*/\1/p' /proc/mounts`
1849echo nop > $debugfs/tracing/current_tracer
1850echo 0 > $debugfs/tracing/tracing_on
1851echo $$ > $debugfs/tracing/set_ftrace_pid
1852echo function > $debugfs/tracing/current_tracer
1853echo 1 > $debugfs/tracing/tracing_on
1854exec "$@"
1855------
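
Saved as, say, ftrace-it.sh (a made-up name) and made executable,
the script wraps any command in the same way:

 # chmod +x ftrace-it.sh
 # ./ftrace-it.sh ls -l /tmp
 # cat trace | head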
1856
1857
1858function graph tracer
1859---------------------------
1860
1861This tracer is similar to the function tracer except that it
1862probes a function on its entry and its exit. This is done by
1863using a dynamically allocated stack of return addresses in each
1864task_struct. On function entry the tracer overwrites the return
1865address of each function traced to set a custom probe. Thus the
original return address is stored on the stack of return
addresses in the task_struct.
1868
Probing on both ends of a function leads to special features
such as:

- measuring a function's execution time
- having a reliable call stack to draw a graph of function calls
1874
This tracer is useful in several situations:

- you want to find the reason for strange kernel behavior and
  need to see what happens in detail in any area (or in specific
  ones).

- you are experiencing weird latencies but it's difficult to
  find their origin.

- you want to quickly find which path is taken by a specific
  function.

- you just want to peek inside a working kernel and see
  what happens there.
1889
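The tracer is enabled in the same way as the others; a minimal
sequence looks like this (the excerpt that follows is representative
output, and the exact functions seen depend on what the system is
doing):

 # echo function_graph > current_tracer
 # echo 1 > tracing_on
 # ls > /dev/null
 # echo 0 > tracing_on
 # cat trace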
1890# tracer: function_graph
1891#
1892# CPU  DURATION                  FUNCTION CALLS
1893# |     |   |                     |   |   |   |
1894
1895 0)               |  sys_open() {
1896 0)               |    do_sys_open() {
1897 0)               |      getname() {
1898 0)               |        kmem_cache_alloc() {
1899 0)   1.382 us    |          __might_sleep();
1900 0)   2.478 us    |        }
1901 0)               |        strncpy_from_user() {
1902 0)               |          might_fault() {
1903 0)   1.389 us    |            __might_sleep();
1904 0)   2.553 us    |          }
1905 0)   3.807 us    |        }
1906 0)   7.876 us    |      }
1907 0)               |      alloc_fd() {
1908 0)   0.668 us    |        _spin_lock();
1909 0)   0.570 us    |        expand_files();
1910 0)   0.586 us    |        _spin_unlock();
1911
1912
1913There are several columns that can be dynamically
1914enabled/disabled. You can use every combination of options you
1915want, depending on your needs.
1916
- The cpu number on which the function executed is enabled by
  default.  It is sometimes better to only trace one cpu (see the
  tracing_cpumask file) or you might sometimes see unordered
  function calls while the trace switches between cpus.
1921
1922	hide: echo nofuncgraph-cpu > trace_options
1923	show: echo funcgraph-cpu > trace_options
1924
- The duration (function's time of execution) is displayed on
  the closing bracket line of a function or, for a leaf function,
  on the same line as the function itself. It is enabled by
  default.
1929
1930	hide: echo nofuncgraph-duration > trace_options
1931	show: echo funcgraph-duration > trace_options
1932
- The overhead field precedes the duration field when the
  duration exceeds certain thresholds.
1935
1936	hide: echo nofuncgraph-overhead > trace_options
1937	show: echo funcgraph-overhead > trace_options
1938	depends on: funcgraph-duration
1939
1940  ie:
1941
1942  0)               |    up_write() {
1943  0)   0.646 us    |      _spin_lock_irqsave();
1944  0)   0.684 us    |      _spin_unlock_irqrestore();
1945  0)   3.123 us    |    }
1946  0)   0.548 us    |    fput();
1947  0) + 58.628 us   |  }
1948
1949  [...]
1950
1951  0)               |      putname() {
1952  0)               |        kmem_cache_free() {
1953  0)   0.518 us    |          __phys_addr();
1954  0)   1.757 us    |        }
1955  0)   2.861 us    |      }
1956  0) ! 115.305 us  |    }
1957  0) ! 116.402 us  |  }
1958
1959  + means that the function exceeded 10 usecs.
1960  ! means that the function exceeded 100 usecs.
1961  # means that the function exceeded 1000 usecs.
1962  $ means that the function exceeded 1 sec.
1963
1964
1965- The task/pid field displays the thread cmdline and pid which
1966  executed the function. It is default disabled.
1967
1968	hide: echo nofuncgraph-proc > trace_options
1969	show: echo funcgraph-proc > trace_options
1970
1971  ie:
1972
1973  # tracer: function_graph
1974  #
1975  # CPU  TASK/PID        DURATION                  FUNCTION CALLS
1976  # |    |    |           |   |                     |   |   |   |
1977  0)    sh-4802     |               |                  d_free() {
1978  0)    sh-4802     |               |                    call_rcu() {
1979  0)    sh-4802     |               |                      __call_rcu() {
1980  0)    sh-4802     |   0.616 us    |                        rcu_process_gp_end();
1981  0)    sh-4802     |   0.586 us    |                        check_for_new_grace_period();
1982  0)    sh-4802     |   2.899 us    |                      }
1983  0)    sh-4802     |   4.040 us    |                    }
1984  0)    sh-4802     |   5.151 us    |                  }
1985  0)    sh-4802     | + 49.370 us   |                }
1986
1987
1988- The absolute time field is an absolute timestamp given by the
1989  system clock since it started. A snapshot of this time is
1990  given on each entry/exit of functions
1991
1992	hide: echo nofuncgraph-abstime > trace_options
1993	show: echo funcgraph-abstime > trace_options
1994
1995  ie:
1996
1997  #
1998  #      TIME       CPU  DURATION                  FUNCTION CALLS
1999  #       |         |     |   |                     |   |   |   |
2000  360.774522 |   1)   0.541 us    |                                          }
2001  360.774522 |   1)   4.663 us    |                                        }
2002  360.774523 |   1)   0.541 us    |                                        __wake_up_bit();
2003  360.774524 |   1)   6.796 us    |                                      }
2004  360.774524 |   1)   7.952 us    |                                    }
2005  360.774525 |   1)   9.063 us    |                                  }
2006  360.774525 |   1)   0.615 us    |                                  journal_mark_dirty();
2007  360.774527 |   1)   0.578 us    |                                  __brelse();
2008  360.774528 |   1)               |                                  reiserfs_prepare_for_journal() {
2009  360.774528 |   1)               |                                    unlock_buffer() {
2010  360.774529 |   1)               |                                      wake_up_bit() {
2011  360.774529 |   1)               |                                        bit_waitqueue() {
2012  360.774530 |   1)   0.594 us    |                                          __phys_addr();
2013
2014
2015The function name is always displayed after the closing bracket
2016for a function if the start of that function is not in the
2017trace buffer.
2018
2019Display of the function name after the closing bracket may be
2020enabled for functions whose start is in the trace buffer,
2021allowing easier searching with grep for function durations.
2022It is default disabled.
2023
2024	hide: echo nofuncgraph-tail > trace_options
2025	show: echo funcgraph-tail > trace_options
2026
2027  Example with nofuncgraph-tail (default):
2028  0)               |      putname() {
2029  0)               |        kmem_cache_free() {
2030  0)   0.518 us    |          __phys_addr();
2031  0)   1.757 us    |        }
2032  0)   2.861 us    |      }
2033
2034  Example with funcgraph-tail:
2035  0)               |      putname() {
2036  0)               |        kmem_cache_free() {
2037  0)   0.518 us    |          __phys_addr();
2038  0)   1.757 us    |        } /* kmem_cache_free() */
2039  0)   2.861 us    |      } /* putname() */
2040
You can put some comments on specific functions by using
trace_printk(). For example, if you want to put a comment inside
the __might_sleep() function, you just have to include
<linux/ftrace.h> and call trace_printk() inside __might_sleep():

trace_printk("I'm a comment!\n");
2047
2048will produce:
2049
2050 1)               |             __might_sleep() {
2051 1)               |                /* I'm a comment! */
2052 1)   1.449 us    |             }
2053
2054
2055You might find other useful features for this tracer in the
2056following "dynamic ftrace" section such as tracing only specific
2057functions or tasks.
2058
2059dynamic ftrace
2060--------------
2061
If CONFIG_DYNAMIC_FTRACE is set, the system will run with
virtually no overhead when function tracing is disabled. The way
this works is that the mcount function call (placed at the start of
every kernel function, produced by the -pg switch in gcc)
starts off pointing to a simple return. (Enabling FTRACE will
include the -pg switch when compiling the kernel.)
2068
2069At compile time every C file object is run through the
2070recordmcount program (located in the scripts directory). This
2071program will parse the ELF headers in the C object to find all
2072the locations in the .text section that call mcount. (Note, only
whitelisted .text sections are processed, since processing other
2074sections like .init.text may cause races due to those sections
2075being freed unexpectedly).
2076
2077A new section called "__mcount_loc" is created that holds
2078references to all the mcount call sites in the .text section.
2079The recordmcount program re-links this section back into the
2080original object. The final linking stage of the kernel will add all these
2081references into a single table.
2082
2083On boot up, before SMP is initialized, the dynamic ftrace code
2084scans this table and updates all the locations into nops. It
2085also records the locations, which are added to the
2086available_filter_functions list.  Modules are processed as they
2087are loaded and before they are executed.  When a module is
2088unloaded, it also removes its functions from the ftrace function
2089list. This is automatic in the module unload code, and the
2090module author does not need to worry about it.
2091
2092When tracing is enabled, the process of modifying the function
2093tracepoints is dependent on architecture. The old method is to use
2094kstop_machine to prevent races with the CPUs executing code being
2095modified (which can cause the CPU to do undesirable things, especially
2096if the modified code crosses cache (or page) boundaries), and the nops are
2097patched back to calls. But this time, they do not call mcount
2098(which is just a function stub). They now call into the ftrace
2099infrastructure.
2100
The new method of modifying the function tracepoints is to place
a breakpoint at the location to be modified and sync all CPUs, then
modify the rest of the instruction not covered by the breakpoint,
sync all CPUs again, and finally remove the breakpoint, leaving the
finished version of the ftrace call site in place.
2106
2107Some archs do not even need to monkey around with the synchronization,
2108and can just slap the new code on top of the old without any
2109problems with other CPUs executing it at the same time.
2110
2111One special side-effect to the recording of the functions being
2112traced is that we can now selectively choose which functions we
2113wish to trace and which ones we want the mcount calls to remain
2114as nops.
2115
2116Two files are used, one for enabling and one for disabling the
2117tracing of specified functions. They are:
2118
2119  set_ftrace_filter
2120
2121and
2122
2123  set_ftrace_notrace
2124
2125A list of available functions that you can add to these files is
2126listed in:
2127
2128   available_filter_functions
2129
2130 # cat available_filter_functions
2131put_prev_task_idle
2132kmem_cache_create
2133pick_next_task_rt
2134get_online_cpus
2135pick_next_task_fair
2136mutex_lock
2137[...]
2138
2139If I am only interested in sys_nanosleep and hrtimer_interrupt:
2140
2141 # echo sys_nanosleep hrtimer_interrupt > set_ftrace_filter
2142 # echo function > current_tracer
2143 # echo 1 > tracing_on
2144 # usleep 1
2145 # echo 0 > tracing_on
2146 # cat trace
2147# tracer: function
2148#
2149# entries-in-buffer/entries-written: 5/5   #P:4
2150#
2151#                              _-----=> irqs-off
2152#                             / _----=> need-resched
2153#                            | / _---=> hardirq/softirq
2154#                            || / _--=> preempt-depth
2155#                            ||| /     delay
2156#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2157#              | |       |   ||||       |         |
2158          usleep-2665  [001] ....  4186.475355: sys_nanosleep <-system_call_fastpath
2159          <idle>-0     [001] d.h1  4186.475409: hrtimer_interrupt <-smp_apic_timer_interrupt
2160          usleep-2665  [001] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
2161          <idle>-0     [003] d.h1  4186.475426: hrtimer_interrupt <-smp_apic_timer_interrupt
2162          <idle>-0     [002] d.h1  4186.475427: hrtimer_interrupt <-smp_apic_timer_interrupt
2163
2164To see which functions are being traced, you can cat the file:
2165
2166 # cat set_ftrace_filter
2167hrtimer_interrupt
2168sys_nanosleep
2169
2170
Perhaps this is not enough. The filters also allow simple wild
cards. Only the following are currently available:
2173
2174  <match>*  - will match functions that begin with <match>
2175  *<match>  - will match functions that end with <match>
2176  *<match>* - will match functions that have <match> in it
2177
2178These are the only wild cards which are supported.
2179
2180  <match>*<match> will not work.
2181
2182Note: It is better to use quotes to enclose the wild cards,
2183      otherwise the shell may expand the parameters into names
2184      of files in the local directory.
2185
2186 # echo 'hrtimer_*' > set_ftrace_filter
2187
2188Produces:
2189
2190# tracer: function
2191#
2192# entries-in-buffer/entries-written: 897/897   #P:4
2193#
2194#                              _-----=> irqs-off
2195#                             / _----=> need-resched
2196#                            | / _---=> hardirq/softirq
2197#                            || / _--=> preempt-depth
2198#                            ||| /     delay
2199#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2200#              | |       |   ||||       |         |
2201          <idle>-0     [003] dN.1  4228.547803: hrtimer_cancel <-tick_nohz_idle_exit
2202          <idle>-0     [003] dN.1  4228.547804: hrtimer_try_to_cancel <-hrtimer_cancel
2203          <idle>-0     [003] dN.2  4228.547805: hrtimer_force_reprogram <-__remove_hrtimer
2204          <idle>-0     [003] dN.1  4228.547805: hrtimer_forward <-tick_nohz_idle_exit
2205          <idle>-0     [003] dN.1  4228.547805: hrtimer_start_range_ns <-hrtimer_start_expires.constprop.11
2206          <idle>-0     [003] d..1  4228.547858: hrtimer_get_next_event <-get_next_timer_interrupt
2207          <idle>-0     [003] d..1  4228.547859: hrtimer_start <-__tick_nohz_idle_enter
2208          <idle>-0     [003] d..2  4228.547860: hrtimer_force_reprogram <-__rem
2209
2210Notice that we lost the sys_nanosleep.
2211
2212 # cat set_ftrace_filter
2213hrtimer_run_queues
2214hrtimer_run_pending
2215hrtimer_init
2216hrtimer_cancel
2217hrtimer_try_to_cancel
2218hrtimer_forward
2219hrtimer_start
2220hrtimer_reprogram
2221hrtimer_force_reprogram
2222hrtimer_get_next_event
2223hrtimer_interrupt
2224hrtimer_nanosleep
2225hrtimer_wakeup
2226hrtimer_get_remaining
2227hrtimer_get_res
2228hrtimer_init_sleeper
2229
2230
2231This is because the '>' and '>>' act just like they do in bash.
2232To rewrite the filters, use '>'
2233To append to the filters, use '>>'
2234
2235To clear out a filter so that all functions will be recorded
2236again:
2237
2238 # echo > set_ftrace_filter
2239 # cat set_ftrace_filter
2240 #
2241
2242Again, now we want to append.
2243
2244 # echo sys_nanosleep > set_ftrace_filter
2245 # cat set_ftrace_filter
2246sys_nanosleep
2247 # echo 'hrtimer_*' >> set_ftrace_filter
2248 # cat set_ftrace_filter
2249hrtimer_run_queues
2250hrtimer_run_pending
2251hrtimer_init
2252hrtimer_cancel
2253hrtimer_try_to_cancel
2254hrtimer_forward
2255hrtimer_start
2256hrtimer_reprogram
2257hrtimer_force_reprogram
2258hrtimer_get_next_event
2259hrtimer_interrupt
2260sys_nanosleep
2261hrtimer_nanosleep
2262hrtimer_wakeup
2263hrtimer_get_remaining
2264hrtimer_get_res
2265hrtimer_init_sleeper
2266
2267
Echoing function names into set_ftrace_notrace prevents those
functions from being traced.
2270
2271 # echo '*preempt*' '*lock*' > set_ftrace_notrace
2272
2273Produces:
2274
2275# tracer: function
2276#
2277# entries-in-buffer/entries-written: 39608/39608   #P:4
2278#
2279#                              _-----=> irqs-off
2280#                             / _----=> need-resched
2281#                            | / _---=> hardirq/softirq
2282#                            || / _--=> preempt-depth
2283#                            ||| /     delay
2284#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2285#              | |       |   ||||       |         |
2286            bash-1994  [000] ....  4342.324896: file_ra_state_init <-do_dentry_open
2287            bash-1994  [000] ....  4342.324897: open_check_o_direct <-do_last
2288            bash-1994  [000] ....  4342.324897: ima_file_check <-do_last
2289            bash-1994  [000] ....  4342.324898: process_measurement <-ima_file_check
2290            bash-1994  [000] ....  4342.324898: ima_get_action <-process_measurement
2291            bash-1994  [000] ....  4342.324898: ima_match_policy <-ima_get_action
2292            bash-1994  [000] ....  4342.324899: do_truncate <-do_last
2293            bash-1994  [000] ....  4342.324899: should_remove_suid <-do_truncate
2294            bash-1994  [000] ....  4342.324899: notify_change <-do_truncate
2295            bash-1994  [000] ....  4342.324900: current_fs_time <-notify_change
2296            bash-1994  [000] ....  4342.324900: current_kernel_time <-current_fs_time
2297            bash-1994  [000] ....  4342.324900: timespec_trunc <-current_fs_time
2298
2299We can see that there's no more lock or preempt tracing.
2300
2301
2302Dynamic ftrace with the function graph tracer
2303---------------------------------------------
2304
2305Although what has been explained above concerns both the
2306function tracer and the function-graph-tracer, there are some
2307special features only available in the function-graph tracer.
2308
2309If you want to trace only one function and all of its children,
2310you just have to echo its name into set_graph_function:
2311
2312 echo __do_fault > set_graph_function
2313
2314will produce the following "expanded" trace of the __do_fault()
2315function:
2316
2317 0)               |  __do_fault() {
2318 0)               |    filemap_fault() {
2319 0)               |      find_lock_page() {
2320 0)   0.804 us    |        find_get_page();
2321 0)               |        __might_sleep() {
2322 0)   1.329 us    |        }
2323 0)   3.904 us    |      }
2324 0)   4.979 us    |    }
2325 0)   0.653 us    |    _spin_lock();
2326 0)   0.578 us    |    page_add_file_rmap();
2327 0)   0.525 us    |    native_set_pte_at();
2328 0)   0.585 us    |    _spin_unlock();
2329 0)               |    unlock_page() {
2330 0)   0.541 us    |      page_waitqueue();
2331 0)   0.639 us    |      __wake_up_bit();
2332 0)   2.786 us    |    }
2333 0) + 14.237 us   |  }
2334 0)               |  __do_fault() {
2335 0)               |    filemap_fault() {
2336 0)               |      find_lock_page() {
2337 0)   0.698 us    |        find_get_page();
2338 0)               |        __might_sleep() {
2339 0)   1.412 us    |        }
2340 0)   3.950 us    |      }
2341 0)   5.098 us    |    }
2342 0)   0.631 us    |    _spin_lock();
2343 0)   0.571 us    |    page_add_file_rmap();
2344 0)   0.526 us    |    native_set_pte_at();
2345 0)   0.586 us    |    _spin_unlock();
2346 0)               |    unlock_page() {
2347 0)   0.533 us    |      page_waitqueue();
2348 0)   0.638 us    |      __wake_up_bit();
2349 0)   2.793 us    |    }
2350 0) + 14.012 us   |  }
2351
2352You can also expand several functions at once:
2353
2354 echo sys_open > set_graph_function
2355 echo sys_close >> set_graph_function
2356
2357Now if you want to go back to trace all functions you can clear
2358this special filter via:
2359
2360 echo > set_graph_function
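
Putting the pieces together, a complete session with the function
graph tracer might look like this (a minimal sketch; 'ls' is just a
stand-in workload, and any command that exercises the traced path
will do):

 # echo __do_fault > set_graph_function
 # echo function_graph > current_tracer
 # echo 1 > tracing_on
 # ls > /dev/null
 # echo 0 > tracing_on
 # cat trace
 # echo > set_graph_function
 # echo nop > current_tracer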
2361
2362
2363ftrace_enabled
2364--------------
2365
Note, the proc sysctl ftrace_enabled is a big on/off switch for the
2367function tracer. By default it is enabled (when function tracing is
2368enabled in the kernel). If it is disabled, all function tracing is
2369disabled. This includes not only the function tracers for ftrace, but
2370also for any other uses (perf, kprobes, stack tracing, profiling, etc).
2371
2372Please disable this with care.
2373
This can be disabled (and enabled) with:
2375
2376  sysctl kernel.ftrace_enabled=0
2377  sysctl kernel.ftrace_enabled=1
2378
2379 or
2380
2381  echo 0 > /proc/sys/kernel/ftrace_enabled
2382  echo 1 > /proc/sys/kernel/ftrace_enabled
2383
2384
2385Filter commands
2386---------------
2387
2388A few commands are supported by the set_ftrace_filter interface.
2389Trace commands have the following format:
2390
2391<function>:<command>:<parameter>
2392
2393The following commands are supported:
2394
2395- mod
2396  This command enables function filtering per module. The
2397  parameter defines the module. For example, if only the write*
2398  functions in the ext3 module are desired, run:
2399
2400   echo 'write*:mod:ext3' > set_ftrace_filter
2401
2402  This command interacts with the filter in the same way as
2403  filtering based on function names. Thus, adding more functions
2404  in a different module is accomplished by appending (>>) to the
2405  filter file. Remove specific module functions by prepending
2406  '!':
2407
2408   echo '!writeback*:mod:ext3' >> set_ftrace_filter
2409
2410- traceon/traceoff
2411  These commands turn tracing on and off when the specified
2412  functions are hit. The parameter determines how many times the
2413  tracing system is turned on and off. If unspecified, there is
2414  no limit. For example, to disable tracing when a schedule bug
2415  is hit the first 5 times, run:
2416
2417   echo '__schedule_bug:traceoff:5' > set_ftrace_filter
2418
2419  To always disable tracing when __schedule_bug is hit:
2420
2421   echo '__schedule_bug:traceoff' > set_ftrace_filter
2422
2423  These commands are cumulative whether or not they are appended
2424  to set_ftrace_filter. To remove a command, prepend it by '!'
2425  and drop the parameter:
2426
2427   echo '!__schedule_bug:traceoff:0' > set_ftrace_filter
2428
    The above removes the traceoff command for __schedule_bug
    that has a counter. To remove commands without counters:
2431
2432   echo '!__schedule_bug:traceoff' > set_ftrace_filter
2433
2434- snapshot
2435  Will cause a snapshot to be triggered when the function is hit.
2436
2437   echo 'native_flush_tlb_others:snapshot' > set_ftrace_filter
2438
2439  To only snapshot once:
2440
2441   echo 'native_flush_tlb_others:snapshot:1' > set_ftrace_filter
2442
2443  To remove the above commands:
2444
2445   echo '!native_flush_tlb_others:snapshot' > set_ftrace_filter
2446   echo '!native_flush_tlb_others:snapshot:0' > set_ftrace_filter
2447
2448- enable_event/disable_event
2449  These commands can enable or disable a trace event. Note, because
2450  function tracing callbacks are very sensitive, when these commands
2451  are registered, the trace point is activated, but disabled in
2452  a "soft" mode. That is, the tracepoint will be called, but
2453  just will not be traced. The event tracepoint stays in this mode
2454  as long as there's a command that triggers it.
2455
2456   echo 'try_to_wake_up:enable_event:sched:sched_switch:2' > \
2457   	 set_ftrace_filter
2458
2459  The format is:
2460
2461    <function>:enable_event:<system>:<event>[:count]
2462    <function>:disable_event:<system>:<event>[:count]
2463
2464  To remove the events commands:
2465
2466
2467   echo '!try_to_wake_up:enable_event:sched:sched_switch:0' > \
2468   	 set_ftrace_filter
2469   echo '!schedule:disable_event:sched:sched_switch' > \
2470   	 set_ftrace_filter
2471
2472- dump
2473  When the function is hit, it will dump the contents of the ftrace
2474  ring buffer to the console. This is useful if you need to debug
2475  something, and want to dump the trace when a certain function
  is hit. Perhaps it's a function that is called before a triple
  fault happens and does not allow you to get a regular dump.
2478
2479- cpudump
2480  When the function is hit, it will dump the contents of the ftrace
2481  ring buffer for the current CPU to the console. Unlike the "dump"
2482  command, it only prints out the contents of the ring buffer for the
2483  CPU that executed the function that triggered the dump.
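
Both commands take the same <function>:<command> form as the other
filter commands; for example (the function name is only an
illustration, as in the traceoff examples above):

 # echo '__schedule_bug:dump' > set_ftrace_filter

and to remove it again:

 # echo '!__schedule_bug:dump' > set_ftrace_filter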
2484
2485trace_pipe
2486----------
2487
2488The trace_pipe outputs the same content as the trace file, but
2489the effect on the tracing is different. Every read from
2490trace_pipe is consumed. This means that subsequent reads will be
2491different. The trace is live.
2492
2493 # echo function > current_tracer
2494 # cat trace_pipe > /tmp/trace.out &
2495[1] 4153
2496 # echo 1 > tracing_on
2497 # usleep 1
2498 # echo 0 > tracing_on
2499 # cat trace
2500# tracer: function
2501#
2502# entries-in-buffer/entries-written: 0/0   #P:4
2503#
2504#                              _-----=> irqs-off
2505#                             / _----=> need-resched
2506#                            | / _---=> hardirq/softirq
2507#                            || / _--=> preempt-depth
2508#                            ||| /     delay
2509#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2510#              | |       |   ||||       |         |
2511
2512 #
2513 # cat /tmp/trace.out
2514            bash-1994  [000] ....  5281.568961: mutex_unlock <-rb_simple_write
2515            bash-1994  [000] ....  5281.568963: __mutex_unlock_slowpath <-mutex_unlock
2516            bash-1994  [000] ....  5281.568963: __fsnotify_parent <-fsnotify_modify
2517            bash-1994  [000] ....  5281.568964: fsnotify <-fsnotify_modify
2518            bash-1994  [000] ....  5281.568964: __srcu_read_lock <-fsnotify
2519            bash-1994  [000] ....  5281.568964: add_preempt_count <-__srcu_read_lock
2520            bash-1994  [000] ...1  5281.568965: sub_preempt_count <-__srcu_read_lock
2521            bash-1994  [000] ....  5281.568965: __srcu_read_unlock <-fsnotify
2522            bash-1994  [000] ....  5281.568967: sys_dup2 <-system_call_fastpath
2523
2524
2525Note, reading the trace_pipe file will block until more input is
2526added.
2527
2528trace entries
2529-------------
2530
Having too much or not enough data can be troublesome in
diagnosing an issue in the kernel. The file buffer_size_kb is
used to modify the size of the internal trace buffers. The
number listed is the number of kilobytes each per-CPU buffer
can hold. To know the full size, multiply the number of possible
CPUs by this number.
2537
2538 # cat buffer_size_kb
25391408 (units kilobytes)
2540
2541Or simply read buffer_total_size_kb
2542
2543 # cat buffer_total_size_kb 
25445632
2545
To modify the buffer, simply echo in a number (in 1024 byte segments).
2547
2548 # echo 10000 > buffer_size_kb
2549 # cat buffer_size_kb
255010000 (units kilobytes)
2551
It will try to allocate as much as possible. If you allocate too
much, it can trigger the Out-Of-Memory killer.
2554
2555 # echo 1000000000000 > buffer_size_kb
2556-bash: echo: write error: Cannot allocate memory
2557 # cat buffer_size_kb
255885
2559
2560The per_cpu buffers can be changed individually as well:
2561
2562 # echo 10000 > per_cpu/cpu0/buffer_size_kb
2563 # echo 100 > per_cpu/cpu1/buffer_size_kb
2564
2565When the per_cpu buffers are not the same, the buffer_size_kb
2566at the top level will just show an X
2567
2568 # cat buffer_size_kb
2569X
2570
2571This is where the buffer_total_size_kb is useful:
2572
2573 # cat buffer_total_size_kb 
257412916
2575
2576Writing to the top level buffer_size_kb will reset all the buffers
2577to be the same again.
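
For example, on the same 4-CPU machine used above (the numbers are
only illustrative):

 # echo 4096 > buffer_size_kb
 # cat buffer_size_kb
4096 (units kilobytes)
 # cat buffer_total_size_kb
16384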
2578
2579Snapshot
2580--------
2581CONFIG_TRACER_SNAPSHOT makes a generic snapshot feature
2582available to all non latency tracers. (Latency tracers which
2583record max latency, such as "irqsoff" or "wakeup", can't use
2584this feature, since those are already using the snapshot
2585mechanism internally.)
2586
2587Snapshot preserves a current trace buffer at a particular point
2588in time without stopping tracing. Ftrace swaps the current
2589buffer with a spare buffer, and tracing continues in the new
2590current (=previous spare) buffer.
2591
2592The following debugfs files in "tracing" are related to this
2593feature:
2594
2595  snapshot:
2596
2597	This is used to take a snapshot and to read the output
2598	of the snapshot. Echo 1 into this file to allocate a
2599	spare buffer and to take a snapshot (swap), then read
2600	the snapshot from this file in the same format as
2601	"trace" (described above in the section "The File
	System"). Reading the snapshot and tracing can proceed
	in parallel. When the spare buffer is allocated, echoing
	0 frees it, and echoing any other (positive) value clears
	the snapshot contents.
2606	More details are shown in the table below.
2607
2608	status\input  |     0      |     1      |    else    |
2609	--------------+------------+------------+------------+
2610	not allocated |(do nothing)| alloc+swap |(do nothing)|
2611	--------------+------------+------------+------------+
2612	allocated     |    free    |    swap    |   clear    |
2613	--------------+------------+------------+------------+
2614
2615Here is an example of using the snapshot feature.
2616
2617 # echo 1 > events/sched/enable
2618 # echo 1 > snapshot
2619 # cat snapshot
2620# tracer: nop
2621#
2622# entries-in-buffer/entries-written: 71/71   #P:8
2623#
2624#                              _-----=> irqs-off
2625#                             / _----=> need-resched
2626#                            | / _---=> hardirq/softirq
2627#                            || / _--=> preempt-depth
2628#                            ||| /     delay
2629#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2630#              | |       |   ||||       |         |
2631          <idle>-0     [005] d...  2440.603828: sched_switch: prev_comm=swapper/5 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2242 next_prio=120
2632           sleep-2242  [005] d...  2440.603846: sched_switch: prev_comm=snapshot-test-2 prev_pid=2242 prev_prio=120 prev_state=R ==> next_comm=kworker/5:1 next_pid=60 next_prio=120
2633[...]
2634          <idle>-0     [002] d...  2440.707230: sched_switch: prev_comm=swapper/2 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2229 next_prio=120
2635
2636 # cat trace
2637# tracer: nop
2638#
2639# entries-in-buffer/entries-written: 77/77   #P:8
2640#
2641#                              _-----=> irqs-off
2642#                             / _----=> need-resched
2643#                            | / _---=> hardirq/softirq
2644#                            || / _--=> preempt-depth
2645#                            ||| /     delay
2646#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
2647#              | |       |   ||||       |         |
2648          <idle>-0     [007] d...  2440.707395: sched_switch: prev_comm=swapper/7 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=snapshot-test-2 next_pid=2243 next_prio=120
2649 snapshot-test-2-2229  [002] d...  2440.707438: sched_switch: prev_comm=snapshot-test-2 prev_pid=2229 prev_prio=120 prev_state=S ==> next_comm=swapper/2 next_pid=0 next_prio=120
2650[...]
2651
2652
If you try to use this snapshot feature when the current tracer is
one of the latency tracers, you will get the following results.
2655
2656 # echo wakeup > current_tracer
2657 # echo 1 > snapshot
2658bash: echo: write error: Device or resource busy
2659 # cat snapshot
2660cat: snapshot: Device or resource busy
2661
2662
2663Instances
2664---------
In the debugfs tracing directory is a directory called "instances".
New directories can be created inside of it using mkdir, and removed
with rmdir. A directory created with mkdir here will already contain
files and other directories after it is created.
2670
2671 # mkdir instances/foo
2672 # ls instances/foo
2673buffer_size_kb  buffer_total_size_kb  events  free_buffer  per_cpu
2674set_event  snapshot  trace  trace_clock  trace_marker  trace_options
2675trace_pipe  tracing_on
2676
As you can see, the new directory looks similar to the tracing directory
itself. In fact, it is very similar, except that the buffer and
events are independent of the main directory and of any other
instances that are created.
2681
The files in the new directory work just like the files with the
same name in the tracing directory except that the buffer used
is a separate, new buffer. The files affect that buffer but do not
affect the main buffer, with the exception of trace_options. Currently,
the trace_options affect all instances and the top level buffer
the same, but this may change in future releases. That is, options
may become specific to the instance they reside in.
2689
Notice that none of the function tracer files are there, nor are
current_tracer and available_tracers. This is because the buffers
can currently only have events enabled for them.
2693
2694 # mkdir instances/foo
2695 # mkdir instances/bar
2696 # mkdir instances/zoot
2697 # echo 100000 > buffer_size_kb
2698 # echo 1000 > instances/foo/buffer_size_kb
2699 # echo 5000 > instances/bar/per_cpu/cpu1/buffer_size_kb
 # echo function > current_tracer
2701 # echo 1 > instances/foo/events/sched/sched_wakeup/enable
2702 # echo 1 > instances/foo/events/sched/sched_wakeup_new/enable
2703 # echo 1 > instances/foo/events/sched/sched_switch/enable
2704 # echo 1 > instances/bar/events/irq/enable
2705 # echo 1 > instances/zoot/events/syscalls/enable
2706 # cat trace_pipe
2707CPU:2 [LOST 11745 EVENTS]
2708            bash-2044  [002] .... 10594.481032: _raw_spin_lock_irqsave <-get_page_from_freelist
2709            bash-2044  [002] d... 10594.481032: add_preempt_count <-_raw_spin_lock_irqsave
2710            bash-2044  [002] d..1 10594.481032: __rmqueue <-get_page_from_freelist
            bash-2044  [002] d..1 10594.481033: _raw_spin_unlock <-get_page_from_freelist
            bash-2044  [002] d..1 10594.481033: sub_preempt_count <-_raw_spin_unlock
            bash-2044  [002] d... 10594.481033: get_pageblock_flags_group <-get_pageblock_migratetype
            bash-2044  [002] d... 10594.481034: __mod_zone_page_state <-get_page_from_freelist
            bash-2044  [002] d... 10594.481034: zone_statistics <-get_page_from_freelist
            bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
            bash-2044  [002] d... 10594.481034: __inc_zone_state <-zone_statistics
            bash-2044  [002] .... 10594.481035: arch_dup_task_struct <-copy_process
[...]

 # cat instances/foo/trace_pipe
            bash-1998  [000] d..4   136.676759: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
            bash-1998  [000] dN.4   136.676760: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
          <idle>-0     [003] d.h3   136.676906: sched_wakeup: comm=rcu_preempt pid=9 prio=120 success=1 target_cpu=003
          <idle>-0     [003] d..3   136.676909: sched_switch: prev_comm=swapper/3 prev_pid=0 prev_prio=120 prev_state=R ==> next_comm=rcu_preempt next_pid=9 next_prio=120
     rcu_preempt-9     [003] d..3   136.676916: sched_switch: prev_comm=rcu_preempt prev_pid=9 prev_prio=120 prev_state=S ==> next_comm=swapper/3 next_pid=0 next_prio=120
            bash-1998  [000] d..4   136.677014: sched_wakeup: comm=kworker/0:1 pid=59 prio=120 success=1 target_cpu=000
            bash-1998  [000] dN.4   136.677016: sched_wakeup: comm=bash pid=1998 prio=120 success=1 target_cpu=000
            bash-1998  [000] d..3   136.677018: sched_switch: prev_comm=bash prev_pid=1998 prev_prio=120 prev_state=R+ ==> next_comm=kworker/0:1 next_pid=59 next_prio=120
     kworker/0:1-59    [000] d..4   136.677022: sched_wakeup: comm=sshd pid=1995 prio=120 success=1 target_cpu=001
     kworker/0:1-59    [000] d..3   136.677025: sched_switch: prev_comm=kworker/0:1 prev_pid=59 prev_prio=120 prev_state=S ==> next_comm=bash next_pid=1998 next_prio=120
[...]

 # cat instances/bar/trace_pipe
     migration/1-14    [001] d.h3   138.732674: softirq_raise: vec=3 [action=NET_RX]
          <idle>-0     [001] dNh3   138.732725: softirq_raise: vec=3 [action=NET_RX]
            bash-1998  [000] d.h1   138.733101: softirq_raise: vec=1 [action=TIMER]
            bash-1998  [000] d.h1   138.733102: softirq_raise: vec=9 [action=RCU]
            bash-1998  [000] ..s2   138.733105: softirq_entry: vec=1 [action=TIMER]
            bash-1998  [000] ..s2   138.733106: softirq_exit: vec=1 [action=TIMER]
            bash-1998  [000] ..s2   138.733106: softirq_entry: vec=9 [action=RCU]
            bash-1998  [000] ..s2   138.733109: softirq_exit: vec=9 [action=RCU]
            sshd-1995  [001] d.h1   138.733278: irq_handler_entry: irq=21 name=uhci_hcd:usb4
            sshd-1995  [001] d.h1   138.733280: irq_handler_exit: irq=21 ret=unhandled
            sshd-1995  [001] d.h1   138.733281: irq_handler_entry: irq=21 name=eth0
            sshd-1995  [001] d.h1   138.733283: irq_handler_exit: irq=21 ret=handled
[...]

 # cat instances/zoot/trace
# tracer: nop
#
# entries-in-buffer/entries-written: 18996/18996   #P:4
#
#                              _-----=> irqs-off
#                             / _----=> need-resched
#                            | / _---=> hardirq/softirq
#                            || / _--=> preempt-depth
#                            ||| /     delay
#           TASK-PID   CPU#  ||||    TIMESTAMP  FUNCTION
#              | |       |   ||||       |         |
            bash-1998  [000] d...   140.733501: sys_write -> 0x2
            bash-1998  [000] d...   140.733504: sys_dup2(oldfd: a, newfd: 1)
            bash-1998  [000] d...   140.733506: sys_dup2 -> 0x1
            bash-1998  [000] d...   140.733508: sys_fcntl(fd: a, cmd: 1, arg: 0)
            bash-1998  [000] d...   140.733509: sys_fcntl -> 0x1
            bash-1998  [000] d...   140.733510: sys_close(fd: a)
            bash-1998  [000] d...   140.733510: sys_close -> 0x0
            bash-1998  [000] d...   140.733514: sys_rt_sigprocmask(how: 0, nset: 0, oset: 6e2768, sigsetsize: 8)
            bash-1998  [000] d...   140.733515: sys_rt_sigprocmask -> 0x0
            bash-1998  [000] d...   140.733516: sys_rt_sigaction(sig: 2, act: 7fff718846f0, oact: 7fff71884650, sigsetsize: 8)
            bash-1998  [000] d...   140.733516: sys_rt_sigaction -> 0x0

You can see that the trace of the topmost trace buffer shows only
the function tracing. The foo instance displays wakeups and task
switches, the bar instance shows the softirq and irq events, and
the zoot instance shows the system call entry and exit events.

To remove the instances, simply delete their directories:

 # rmdir instances/foo
 # rmdir instances/bar
 # rmdir instances/zoot

Note, if a process has a trace file open in one of the instance
directories, the rmdir will fail with EBUSY.
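
For example, if a reader were still attached to one of the foo
instance's files, the removal would be refused until the reader
goes away. An illustrative session (the exact error text comes
from your shell's rmdir and may differ slightly):

 # cat instances/foo/trace_pipe &
 # rmdir instances/foo
 rmdir: failed to remove 'instances/foo': Device or resource busy
 # kill %1
 # rmdir instances/foo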


Stack trace
-----------
Since the kernel has a fixed-size stack, it is important not to
waste it in functions. A kernel developer must be conscious of
what they allocate on the stack. If they add too much, the system
can be in danger of a stack overflow, and corruption will occur,
usually leading to a system panic.

There are some tools that check this, usually by having an
interrupt periodically check stack usage. But a check performed at
every function call is much more thorough. Since ftrace provides a
function tracer, it is convenient to check the stack size at every
function call. This is enabled via the stack tracer.

CONFIG_STACK_TRACER enables the ftrace stack tracing functionality.
To enable it, write a '1' into /proc/sys/kernel/stack_tracer_enabled.

 # echo 1 > /proc/sys/kernel/stack_tracer_enabled
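
Reading the same file back is a quick way to confirm the current
state (a trivial check, shown only for convenience):

 # cat /proc/sys/kernel/stack_tracer_enabled
 1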

You can also enable it from the kernel command line, to trace the
stack size of the kernel during boot up, by adding "stacktrace"
to the kernel command line.
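
For example, a boot entry might look something like the following
(illustrative only; the kernel image name and the other parameters
are placeholders for whatever your system already uses):

 linux /boot/vmlinuz-3.10.0 root=/dev/sda1 ro stacktrace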

After running it for a few minutes, the output looks like:

 # cat stack_max_size
2928

 # cat stack_trace
        Depth    Size   Location    (18 entries)
        -----    ----   --------
  0)     2928     224   update_sd_lb_stats+0xbc/0x4ac
  1)     2704     160   find_busiest_group+0x31/0x1f1
  2)     2544     256   load_balance+0xd9/0x662
  3)     2288      80   idle_balance+0xbb/0x130
  4)     2208     128   __schedule+0x26e/0x5b9
  5)     2080      16   schedule+0x64/0x66
  6)     2064     128   schedule_timeout+0x34/0xe0
  7)     1936     112   wait_for_common+0x97/0xf1
  8)     1824      16   wait_for_completion+0x1d/0x1f
  9)     1808     128   flush_work+0xfe/0x119
 10)     1680      16   tty_flush_to_ldisc+0x1e/0x20
 11)     1664      48   input_available_p+0x1d/0x5c
 12)     1616      48   n_tty_poll+0x6d/0x134
 13)     1568      64   tty_poll+0x64/0x7f
 14)     1504     880   do_select+0x31e/0x511
 15)      624     400   core_sys_select+0x177/0x216
 16)      224      96   sys_select+0x91/0xb9
 17)      128     128   system_call_fastpath+0x16/0x1b
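
When you want to start a fresh measurement, turn the tracer off
again through the same sysctl that enabled it; writing a zero into
stack_max_size should also reset the recorded maximum (a small
sketch):

 # echo 0 > /proc/sys/kernel/stack_tracer_enabled
 # echo 0 > stack_max_size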

Note, if -mfentry is being used by gcc, functions get traced before
they set up the stack frame. This means that leaf-level functions
are not tested by the stack tracer when -mfentry is used.

Currently, -mfentry is used by gcc 4.6.0 and above on x86 only.
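
If you are unsure whether your compiler even accepts the flag, a
quick (purely illustrative) test is to compile a trivial file with
it, similar in spirit to the option test the kernel build performs:

 # echo 'int main(void) { return 0; }' | gcc -pg -mfentry -x c -c -o /dev/null -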

---------

More details can be found in the source code, in the
kernel/trace/*.c files.