Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'trace-v6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing fixes from Steven Rostedt:

- Hide get_vm_area() from MMUless builds

The function get_vm_area() is not defined when CONFIG_MMU is not
defined. Hide that function within #ifdef CONFIG_MMU.

- Fix output of synthetic events when they have dynamic strings

The print fmt of the synthetic event's format file used to have "%.*s"
for dynamic size strings, even though the arguments exported to user
space had only the __get_str() macro, which provides just a nul
terminated string. This was fixed so that user space could parse the
format properly.

But the reason it had "%.*s" was that the internal code passed the
maximum size of the string as one of the arguments. The fix that
replaced "%.*s" with "%s" caused the trace output (when the kernel
reads the event) to write "(efault)", as "%s" would now try to read
the leftover length argument as the string.

As the string provided is always nul terminated, there's no reason
for the internal code to use "%.*s" anyway. Just remove the length
argument to match the "%s" that is now in the format.

- Fix the ftrace subops hash logic of the manager ops hash

The function_graph tracer uses the ftrace subops code. The subops code
is a way to have a single ftrace_ops registered with ftrace determine
what functions will call the ftrace_ops callback. More than one user
of function graph can register an ftrace_ops with it. The function
graph infrastructure will then add this ftrace_ops as a subops of the
main ftrace_ops it registers with ftrace. This is because the
functions will always call the function graph callback, which in turn
calls the subops ftrace_ops callbacks.

The main ftrace_ops must add a callback to all the functions that the
subops want a callback from. When a subops is registered, it will
update the main ftrace_ops hash to include the functions it wants.
This is the logic that was broken.

The ftrace_ops hash has a "filter_hash" and a "notrace_hash", where
all the functions in the filter_hash but not in the notrace_hash are
attached by ftrace. The original logic would have the main ftrace_ops
filter_hash be a union of all the subops filter_hashes, and the main
notrace_hash would be an intersection of all the subops filter hashes.
But this was incorrect because the notrace hash depends on the
filter_hash it is associated with, and not the union of all
filter_hashes.

Instead, when a subops is added, just include all the functions of
the subops hash that are in its filter_hash but not in its
notrace_hash. The main ops hash should not use its notrace hash,
unless all of its subops hashes have an empty filter_hash (which
means to attach to all functions), and then, and only then, the main
ftrace_ops notrace hash can be the intersection of all the subops
notrace hashes.

This not only fixes the bug, but also simplifies the code.

- Add a selftest to better test the subops filtering

Add a selftest that would catch the bug fixed by the above change.

- Fix extra newline printed in function tracing with retval

The function parameter code changed the output logic slightly: it
called print_graph_retval() and also printed a newline, but
print_graph_retval() prints a newline itself, which caused blank
lines to be printed in the function graph tracer when retval was
added. This caused one of the selftests to fail when retvals were
enabled. Instead, remove the newline output from print_graph_retval()
and have the callers always print the newline, so they do not need
special logic for whether print_graph_retval() was called or not.

- Fix out-of-bound memory access in the runtime verifier

When rv_is_container_monitor() is called on the last entry of the
linked list, it references the next entry, which is the list head,
causing an out-of-bound memory access.

* tag 'trace-v6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
rv: Fix out-of-bound memory access in rv_is_container_monitor()
ftrace: Do not have print_graph_retval() add a newline
tracing/selftest: Add test to better test subops filtering of function graph
ftrace: Fix accounting of subop hashes
ftrace: Properly merge notrace hashes
tracing: Do not add length to print format in synthetic events
tracing: Hide get_vm_area() from MMUless builds

+372 -145
+177 -137
kernel/trace/ftrace.c
···
 }

 /*
+ * Remove functions from @hash that are in @notrace_hash
+ */
+static void remove_hash(struct ftrace_hash *hash, struct ftrace_hash *notrace_hash)
+{
+	struct ftrace_func_entry *entry;
+	struct hlist_node *tmp;
+	int size;
+	int i;
+
+	/* If the notrace hash is empty, there's nothing to do */
+	if (ftrace_hash_empty(notrace_hash))
+		return;
+
+	size = 1 << hash->size_bits;
+	for (i = 0; i < size; i++) {
+		hlist_for_each_entry_safe(entry, tmp, &hash->buckets[i], hlist) {
+			if (!__ftrace_lookup_ip(notrace_hash, entry->ip))
+				continue;
+			remove_hash_entry(hash, entry);
+			kfree(entry);
+		}
+	}
+}
+
+/*
  * Add to @hash only those that are in both @new_hash1 and @new_hash2
  *
  * The notrace_hash updates uses just the intersect_hash() function
···
 		*hash = EMPTY_HASH;
 	}
 	return 0;
-}
-
-/* Return a new hash that has a union of all @ops->filter_hash entries */
-static struct ftrace_hash *append_hashes(struct ftrace_ops *ops)
-{
-	struct ftrace_hash *new_hash = NULL;
-	struct ftrace_ops *subops;
-	int size_bits;
-	int ret;
-
-	if (ops->func_hash->filter_hash)
-		size_bits = ops->func_hash->filter_hash->size_bits;
-	else
-		size_bits = FTRACE_HASH_DEFAULT_BITS;
-
-	list_for_each_entry(subops, &ops->subop_list, list) {
-		ret = append_hash(&new_hash, subops->func_hash->filter_hash, size_bits);
-		if (ret < 0) {
-			free_ftrace_hash(new_hash);
-			return NULL;
-		}
-		/* Nothing more to do if new_hash is empty */
-		if (ftrace_hash_empty(new_hash))
-			break;
-	}
-	/* Can't return NULL as that means this failed */
-	return new_hash ?: EMPTY_HASH;
-}
-
-/* Make @ops trace evenything except what all its subops do not trace */
-static struct ftrace_hash *intersect_hashes(struct ftrace_ops *ops)
-{
-	struct ftrace_hash *new_hash = NULL;
-	struct ftrace_ops *subops;
-	int size_bits;
-	int ret;
-
-	list_for_each_entry(subops, &ops->subop_list, list) {
-		struct ftrace_hash *next_hash;
-
-		if (!new_hash) {
-			size_bits = subops->func_hash->notrace_hash->size_bits;
-			new_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->notrace_hash);
-			if (!new_hash)
-				return NULL;
-			continue;
-		}
-		size_bits = new_hash->size_bits;
-		next_hash = new_hash;
-		new_hash = alloc_ftrace_hash(size_bits);
-		ret = intersect_hash(&new_hash, next_hash, subops->func_hash->notrace_hash);
-		free_ftrace_hash(next_hash);
-		if (ret < 0) {
-			free_ftrace_hash(new_hash);
-			return NULL;
-		}
-		/* Nothing more to do if new_hash is empty */
-		if (ftrace_hash_empty(new_hash))
-			break;
-	}
-	return new_hash;
 }

 static bool ops_equal(struct ftrace_hash *A, struct ftrace_hash *B)
···
 	return 0;
 }

+static int add_first_hash(struct ftrace_hash **filter_hash, struct ftrace_hash **notrace_hash,
+			  struct ftrace_ops_hash *func_hash)
+{
+	/* If the filter hash is not empty, simply remove the nohash from it */
+	if (!ftrace_hash_empty(func_hash->filter_hash)) {
+		*filter_hash = copy_hash(func_hash->filter_hash);
+		if (!*filter_hash)
+			return -ENOMEM;
+		remove_hash(*filter_hash, func_hash->notrace_hash);
+		*notrace_hash = EMPTY_HASH;
+
+	} else {
+		*notrace_hash = copy_hash(func_hash->notrace_hash);
+		if (!*notrace_hash)
+			return -ENOMEM;
+		*filter_hash = EMPTY_HASH;
+	}
+	return 0;
+}
+
+static int add_next_hash(struct ftrace_hash **filter_hash, struct ftrace_hash **notrace_hash,
+			 struct ftrace_ops_hash *ops_hash, struct ftrace_ops_hash *subops_hash)
+{
+	int size_bits;
+	int ret;
+
+	/* If the subops trace all functions so must the main ops */
+	if (ftrace_hash_empty(ops_hash->filter_hash) ||
+	    ftrace_hash_empty(subops_hash->filter_hash)) {
+		*filter_hash = EMPTY_HASH;
+	} else {
+		/*
+		 * The main ops filter hash is not empty, so its
+		 * notrace_hash had better be, as the notrace hash
+		 * is only used for empty main filter hashes.
+		 */
+		WARN_ON_ONCE(!ftrace_hash_empty(ops_hash->notrace_hash));
+
+		size_bits = max(ops_hash->filter_hash->size_bits,
+				subops_hash->filter_hash->size_bits);
+
+		/* Copy the subops hash */
+		*filter_hash = alloc_and_copy_ftrace_hash(size_bits, subops_hash->filter_hash);
+		if (!filter_hash)
+			return -ENOMEM;
+		/* Remove any notrace functions from the copy */
+		remove_hash(*filter_hash, subops_hash->notrace_hash);
+
+		ret = append_hash(filter_hash, ops_hash->filter_hash,
+				  size_bits);
+		if (ret < 0) {
+			free_ftrace_hash(*filter_hash);
+			return ret;
+		}
+	}
+
+	/*
+	 * Only process notrace hashes if the main filter hash is empty
+	 * (tracing all functions), otherwise the filter hash will just
+	 * remove the notrace hash functions, and the notrace hash is
+	 * not needed.
+	 */
+	if (ftrace_hash_empty(*filter_hash)) {
+		/*
+		 * Intersect the notrace functions. That is, if two
+		 * subops are not tracing a set of functions, the
+		 * main ops will only not trace the functions that are
+		 * in both subops, but has to trace the functions that
+		 * are only notrace in one of the subops, for the other
+		 * subops to be able to trace them.
+		 */
+		size_bits = max(ops_hash->notrace_hash->size_bits,
+				subops_hash->notrace_hash->size_bits);
+		*notrace_hash = alloc_ftrace_hash(size_bits);
+		if (!*notrace_hash)
+			return -ENOMEM;
+
+		ret = intersect_hash(notrace_hash, ops_hash->notrace_hash,
+				     subops_hash->notrace_hash);
+		if (ret < 0) {
+			free_ftrace_hash(*notrace_hash);
+			return ret;
+		}
+	}
+	return 0;
+}
+
 /**
  * ftrace_startup_subops - enable tracing for subops of an ops
  * @ops: Manager ops (used to pick all the functions of its subops)
···
 	struct ftrace_hash *notrace_hash;
 	struct ftrace_hash *save_filter_hash;
 	struct ftrace_hash *save_notrace_hash;
-	int size_bits;
 	int ret;

 	if (unlikely(ftrace_disabled))
···
 	/* For the first subops to ops just enable it normally */
 	if (list_empty(&ops->subop_list)) {
-		/* Just use the subops hashes */
-		filter_hash = copy_hash(subops->func_hash->filter_hash);
-		notrace_hash = copy_hash(subops->func_hash->notrace_hash);
-		if (!filter_hash || !notrace_hash) {
-			free_ftrace_hash(filter_hash);
-			free_ftrace_hash(notrace_hash);
-			return -ENOMEM;
-		}
+
+		/* The ops was empty, should have empty hashes */
+		WARN_ON_ONCE(!ftrace_hash_empty(ops->func_hash->filter_hash));
+		WARN_ON_ONCE(!ftrace_hash_empty(ops->func_hash->notrace_hash));
+
+		ret = add_first_hash(&filter_hash, &notrace_hash, subops->func_hash);
+		if (ret < 0)
+			return ret;

 		save_filter_hash = ops->func_hash->filter_hash;
 		save_notrace_hash = ops->func_hash->notrace_hash;
···
 	/*
 	 * Here there's already something attached. Here are the rules:
-	 * o If either filter_hash is empty then the final stays empty
-	 * o Otherwise, the final is a superset of both hashes
-	 * o If either notrace_hash is empty then the final stays empty
-	 * o Otherwise, the final is an intersection between the hashes
+	 * If the new subops and main ops filter hashes are not empty:
+	 *   o Make a copy of the subops filter hash
+	 *   o Remove all functions in the nohash from it.
+	 *   o Add in the main hash filter functions
+	 *   o Remove any of these functions from the main notrace hash
 	 */
-	if (ftrace_hash_empty(ops->func_hash->filter_hash) ||
-	    ftrace_hash_empty(subops->func_hash->filter_hash)) {
-		filter_hash = EMPTY_HASH;
-	} else {
-		size_bits = max(ops->func_hash->filter_hash->size_bits,
-				subops->func_hash->filter_hash->size_bits);
-		filter_hash = alloc_and_copy_ftrace_hash(size_bits, ops->func_hash->filter_hash);
-		if (!filter_hash)
-			return -ENOMEM;
-		ret = append_hash(&filter_hash, subops->func_hash->filter_hash,
-				  size_bits);
-		if (ret < 0) {
-			free_ftrace_hash(filter_hash);
-			return ret;
-		}
-	}

-	if (ftrace_hash_empty(ops->func_hash->notrace_hash) ||
-	    ftrace_hash_empty(subops->func_hash->notrace_hash)) {
-		notrace_hash = EMPTY_HASH;
-	} else {
-		size_bits = max(ops->func_hash->filter_hash->size_bits,
-				subops->func_hash->filter_hash->size_bits);
-		notrace_hash = alloc_ftrace_hash(size_bits);
-		if (!notrace_hash) {
-			free_ftrace_hash(filter_hash);
-			return -ENOMEM;
-		}
-
-		ret = intersect_hash(&notrace_hash, ops->func_hash->filter_hash,
-				     subops->func_hash->filter_hash);
-		if (ret < 0) {
-			free_ftrace_hash(filter_hash);
-			free_ftrace_hash(notrace_hash);
-			return ret;
-		}
-	}
+	ret = add_next_hash(&filter_hash, &notrace_hash, ops->func_hash, subops->func_hash);
+	if (ret < 0)
+		return ret;

 	list_add(&subops->list, &ops->subop_list);
···
 		subops->managed = ops;
 	}
 	return ret;
+}
+
+static int rebuild_hashes(struct ftrace_hash **filter_hash, struct ftrace_hash **notrace_hash,
+			  struct ftrace_ops *ops)
+{
+	struct ftrace_ops_hash temp_hash;
+	struct ftrace_ops *subops;
+	bool first = true;
+	int ret;
+
+	temp_hash.filter_hash = EMPTY_HASH;
+	temp_hash.notrace_hash = EMPTY_HASH;
+
+	list_for_each_entry(subops, &ops->subop_list, list) {
+		*filter_hash = EMPTY_HASH;
+		*notrace_hash = EMPTY_HASH;
+
+		if (first) {
+			ret = add_first_hash(filter_hash, notrace_hash, subops->func_hash);
+			if (ret < 0)
+				return ret;
+			first = false;
+		} else {
+			ret = add_next_hash(filter_hash, notrace_hash,
+					    &temp_hash, subops->func_hash);
+			if (ret < 0) {
+				free_ftrace_hash(temp_hash.filter_hash);
+				free_ftrace_hash(temp_hash.notrace_hash);
+				return ret;
+			}
+		}
+
+		temp_hash.filter_hash = *filter_hash;
+		temp_hash.notrace_hash = *notrace_hash;
+	}
+	return 0;
 }

 /**
···
 	}

 	/* Rebuild the hashes without subops */
-	filter_hash = append_hashes(ops);
-	notrace_hash = intersect_hashes(ops);
-	if (!filter_hash || !notrace_hash) {
-		free_ftrace_hash(filter_hash);
-		free_ftrace_hash(notrace_hash);
-		list_add(&subops->list, &ops->subop_list);
-		return -ENOMEM;
-	}
+	ret = rebuild_hashes(&filter_hash, &notrace_hash, ops);
+	if (ret < 0)
+		return ret;

 	ret = ftrace_update_ops(ops, filter_hash, notrace_hash);
 	if (ret < 0) {
···
 static int ftrace_hash_move_and_update_subops(struct ftrace_ops *subops,
 					      struct ftrace_hash **orig_subhash,
-					      struct ftrace_hash *hash,
-					      int enable)
+					      struct ftrace_hash *hash)
 {
 	struct ftrace_ops *ops = subops->managed;
-	struct ftrace_hash **orig_hash;
+	struct ftrace_hash *notrace_hash;
+	struct ftrace_hash *filter_hash;
 	struct ftrace_hash *save_hash;
 	struct ftrace_hash *new_hash;
 	int ret;
···
 		return -ENOMEM;
 	}

-	/* Create a new_hash to hold the ops new functions */
-	if (enable) {
-		orig_hash = &ops->func_hash->filter_hash;
-		new_hash = append_hashes(ops);
-	} else {
-		orig_hash = &ops->func_hash->notrace_hash;
-		new_hash = intersect_hashes(ops);
-	}
-
-	/* Move the hash over to the new hash */
-	ret = __ftrace_hash_move_and_update_ops(ops, orig_hash, new_hash, enable);
-
-	free_ftrace_hash(new_hash);
+	ret = rebuild_hashes(&filter_hash, &notrace_hash, ops);
+	if (!ret)
+		ret = ftrace_update_ops(ops, filter_hash, notrace_hash);

 	if (ret) {
 		/* Put back the original hash */
-		free_ftrace_hash_rcu(*orig_subhash);
+		new_hash = *orig_subhash;
 		*orig_subhash = save_hash;
+		free_ftrace_hash_rcu(new_hash);
 	} else {
 		free_ftrace_hash_rcu(save_hash);
 	}
···
 				   int enable)
 {
 	if (ops->flags & FTRACE_OPS_FL_SUBOP)
-		return ftrace_hash_move_and_update_subops(ops, orig_hash, hash, enable);
+		return ftrace_hash_move_and_update_subops(ops, orig_hash, hash);

 	/*
 	 * If this ops is not enabled, it could be sharing its filters
···
 	list_for_each_entry(subops, &op->subop_list, list) {
 		if ((subops->flags & FTRACE_OPS_FL_ENABLED) &&
 		    subops->func_hash == ops->func_hash) {
-			return ftrace_hash_move_and_update_subops(subops, orig_hash, hash, enable);
+			return ftrace_hash_move_and_update_subops(subops, orig_hash, hash);
 		}
 	}
 } while_for_each_ftrace_op(op);
+6 -1
kernel/trace/rv/rv.c
···
  */
 bool rv_is_container_monitor(struct rv_monitor_def *mdef)
 {
-	struct rv_monitor_def *next = list_next_entry(mdef, list);
+	struct rv_monitor_def *next;
+
+	if (list_is_last(&mdef->list, &rv_monitors_list))
+		return false;
+
+	next = list_next_entry(mdef, list);

 	return next->parent == mdef->monitor || !mdef->monitor->enable;
 }
+7
kernel/trace/trace.c
···
 		return ret;
 }

+#ifdef CONFIG_MMU
 static u64 map_pages(unsigned long start, unsigned long size)
 {
 	unsigned long vmap_start, vmap_end;
···

 	return (u64)vmap_start;
 }
+#else
+static inline u64 map_pages(unsigned long start, unsigned long size)
+{
+	return 0;
+}
+#endif

 /**
  * trace_array_get_by_name - Create/Lookup a trace array, given its name.
-1
kernel/trace/trace_events_synth.c
···
 		union trace_synth_field *data = &entry->fields[n_u64];

 		trace_seq_printf(s, print_fmt, se->fields[i]->name,
-				 STR_VAR_LEN_MAX,
 				 (char *)entry + data->as_dynamic.offset,
 				 i == se->n_fields - 1 ? "" : " ");
 		n_u64++;
+5 -6
kernel/trace/trace_functions_graph.c
···
 		if (print_retval || print_retaddr)
 			trace_seq_puts(s, " /*");
-		else
-			trace_seq_putc(s, '\n');
 	} else {
 		print_retaddr = false;
 		trace_seq_printf(s, "} /* %ps", func);
···
 	}

 	if (!entry || print_retval || print_retaddr)
-		trace_seq_puts(s, " */\n");
+		trace_seq_puts(s, " */");
 }

 #else
···
 		} else
 			trace_seq_puts(s, "();");
 	}
-	trace_seq_printf(s, "\n");
+	trace_seq_putc(s, '\n');

 	print_graph_irq(iter, graph_ret->func, TRACE_GRAPH_RET,
 			cpu, iter->ent->pid, flags);
···
 	 * that if the funcgraph-tail option is enabled.
 	 */
 	if (func_match && !(flags & TRACE_GRAPH_PRINT_TAIL))
-		trace_seq_puts(s, "}\n");
+		trace_seq_puts(s, "}");
 	else
-		trace_seq_printf(s, "} /* %ps */\n", (void *)func);
+		trace_seq_printf(s, "} /* %ps */", (void *)func);
 	}
+	trace_seq_putc(s, '\n');

 	/* Overrun */
 	if (flags & TRACE_GRAPH_PRINT_OVERRUN)
+177
tools/testing/selftests/ftrace/test.d/ftrace/fgraph-multi-filter.tc
··· (new file)
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: ftrace - function graph filters
# requires: set_ftrace_filter function_graph:tracer

# Make sure that function graph filtering works

INSTANCE1="instances/test1_$$"
INSTANCE2="instances/test2_$$"

WD=`pwd`

do_reset() {
    cd $WD
    if [ -d $INSTANCE1 ]; then
        echo nop > $INSTANCE1/current_tracer
        rmdir $INSTANCE1
    fi
    if [ -d $INSTANCE2 ]; then
        echo nop > $INSTANCE2/current_tracer
        rmdir $INSTANCE2
    fi
}

mkdir $INSTANCE1
if ! grep -q function_graph $INSTANCE1/available_tracers; then
    echo "function_graph not allowed with instances"
    rmdir $INSTANCE1
    exit_unsupported
fi

mkdir $INSTANCE2

fail() { # msg
    do_reset
    echo $1
    exit_fail
}

disable_tracing
clear_trace

function_count() {
    search=$1
    vsearch=$2

    if [ -z "$search" ]; then
        cat enabled_functions | wc -l
    elif [ -z "$vsearch" ]; then
        grep $search enabled_functions | wc -l
    else
        grep $search enabled_functions | grep $vsearch | wc -l
    fi
}

set_fgraph() {
    instance=$1
    filter="$2"
    notrace="$3"

    echo "$filter" > $instance/set_ftrace_filter
    echo "$notrace" > $instance/set_ftrace_notrace
    echo function_graph > $instance/current_tracer
}

check_functions() {
    orig_cnt=$1
    test=$2

    cnt=`function_count $test`
    if [ $cnt -gt $orig_cnt ]; then
        fail
    fi
}

check_cnt() {
    orig_cnt=$1
    search=$2
    vsearch=$3

    cnt=`function_count $search $vsearch`
    if [ $cnt -gt $orig_cnt ]; then
        fail
    fi
}

reset_graph() {
    instance=$1
    echo nop > $instance/current_tracer
}

# get any functions that were enabled before the test
total_cnt=`function_count`
sched_cnt=`function_count sched`
lock_cnt=`function_count lock`
time_cnt=`function_count time`
clock_cnt=`function_count clock`
locks_clock_cnt=`function_count locks clock`
clock_locks_cnt=`function_count clock locks`

# Trace functions with "sched" but not "time"
set_fgraph $INSTANCE1 '*sched*' '*time*'

# Make sure "time" isn't listed
check_functions $time_cnt 'time'
instance1_cnt=`function_count`

# Trace functions with "lock" but not "clock"
set_fgraph $INSTANCE2 '*lock*' '*clock*'
instance1_2_cnt=`function_count`

# Turn off the first instance
reset_graph $INSTANCE1

# The second instance doesn't trace "clock" functions
check_functions $clock_cnt 'clock'
instance2_cnt=`function_count`

# Start from a clean slate
reset_graph $INSTANCE2
check_functions $total_cnt

# Trace functions with "lock" but not "clock"
set_fgraph $INSTANCE2 '*lock*' '*clock*'

# This should match the last time instance 2 was by itself
cnt=`function_count`
if [ $instance2_cnt -ne $cnt ]; then
    fail
fi

# And it should not be tracing "clock" functions
check_functions $clock_cnt 'clock'

# Trace functions with "sched" but not "time"
set_fgraph $INSTANCE1 '*sched*' '*time*'

# This should match the last time both instances were enabled
cnt=`function_count`
if [ $instance1_2_cnt -ne $cnt ]; then
    fail
fi

# Turn off the second instance
reset_graph $INSTANCE2

# This should match the last time instance 1 was by itself
cnt=`function_count`
if [ $instance1_cnt -ne $cnt ]; then
    fail
fi

# And it should not be tracing "time" functions
check_functions $time_cnt 'time'

# Start from a clean slate
reset_graph $INSTANCE1
check_functions $total_cnt

# Enable all functions but those that have "locks"
set_fgraph $INSTANCE1 '' '*locks*'

# Enable all functions but those that have "clock"
set_fgraph $INSTANCE2 '' '*clock*'

# If a function has "locks" it should not have "clock"
check_cnt $locks_clock_cnt locks clock

# If a function has "clock" it should not have "locks"
check_cnt $clock_locks_cnt clock locks

reset_graph $INSTANCE1
reset_graph $INSTANCE2

do_reset

exit 0