Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'trace-rtla-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull RTLA updates from Steven Rostedt:

- Simplify option parsing

Auto-generate the getopt_long() optstring for short options from the long
options array, avoiding the need to specify it manually and reducing the
surface for mistakes.

- Add unit tests

Implement unit tests (make unit-tests) using libcheck, alongside the
existing runtime tests (make check). Currently, three functions from
utils.c are tested.

- Add --stack-format option

In addition to stopping stack trace decoding (with the -s/--stack option)
at the first unresolvable pointer, also allow skipping unresolvable
pointers and displaying everything, configurable with a new option.
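The semantics of the three modes can be modeled abstractly: given which stack entries resolve to a symbol, each mode prints a different subset. This is an illustrative model only; the function and enum names are assumptions, not RTLA code.

```c
/* Illustrative model of the --stack-format modes. */
enum stack_format { STACK_TRUNCATE, STACK_SKIP, STACK_FULL };

/*
 * Given resolved[] flags for each stack entry (1 = symbol known,
 * 0 = unresolvable pointer), return how many entries would be printed.
 */
static int frames_printed(enum stack_format fmt, const int *resolved, int n)
{
	int printed = 0;

	for (int i = 0; i < n; i++) {
		if (!resolved[i]) {
			if (fmt == STACK_TRUNCATE)
				break;		/* stop at first unknown address */
			if (fmt == STACK_SKIP)
				continue;	/* drop unknown addresses */
			/* STACK_FULL: the raw pointer is printed */
		}
		printed++;
	}
	return printed;
}
```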

- Unify number of CPUs into one global variable

Use one global variable, nr_cpus, to store the number of CPUs instead of
retrieving it and passing it around in multiple places.
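A sketch of the unification pattern, assuming glibc's get_nprocs_conf(): one global initialized once at startup, then referenced everywhere, including by an iteration macro in the spirit of RTLA's for_each_monitored_cpu(). The macro name here is illustrative.

```c
#include <sys/sysinfo.h>

/* One definition, initialized once at startup instead of repeated
 * sysconf(_SC_NPROCESSORS_CONF) calls scattered through the tools. */
int nr_cpus;

/* illustrative iteration helper built on the global */
#define for_each_cpu(cpu) for ((cpu) = 0; (cpu) < nr_cpus; (cpu)++)

static void init_nr_cpus(void)
{
	nr_cpus = get_nprocs_conf();
}
```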

- Fix behavior in various corner cases

Make RTLA behave correctly in several corner cases: memory allocation
failure, an invalid value read from the kernel side, thread creation
failure, malformed time value input, and read/write failure or
interruption by a signal.

- Improve string handling

Simplify several places in the code that handle strings, including the
parsing of action arguments. A few new helper functions and variables are
added for that purpose.
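One of the helpers added in this series is a prefix check. A minimal sketch of the idea, in the spirit of the kernel's str_has_prefix(); the actual RTLA signature and return value may differ:

```c
#include <stdbool.h>
#include <string.h>

/* Sketch: does str start with prefix? Replaces repeated open-coded
 * strlen()/strncmp() pairs with magic length constants. */
static bool str_has_prefix(const char *str, const char *prefix)
{
	return strncmp(str, prefix, strlen(prefix)) == 0;
}
```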

- Get rid of magic numbers

A few places handling paths use the magic number 1024. Replace it with
MAX_PATH and the ARRAY_SIZE() macro.
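The replacement pattern can be sketched as below: size the buffer with MAX_PATH and derive the snprintf() bound via ARRAY_SIZE(), so no literal 1024 can drift out of sync with the declaration. The function name and path are illustrative, not taken from the RTLA source.

```c
#include <stdio.h>
#include <string.h>

#define MAX_PATH 1024
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

/* Sketch: the bound tracks the array, not a repeated magic number. */
static int format_path_len(const char *dir, const char *file)
{
	char buffer[MAX_PATH];

	snprintf(buffer, ARRAY_SIZE(buffer), "%s/%s", dir, file);
	return (int)strlen(buffer);
}
```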

- Unify threshold handling

Code that handles the response to a latency threshold is duplicated
between tools, which has led to bugs in the past. Unify it into a new
helper as much as possible.
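The shape of such a shared handler can be sketched abstractly: both tools call one helper that performs the configured actions and decides whether tracing should be restarted. The struct and names below are hypothetical simplifications, not the RTLA code.

```c
#include <stdbool.h>

/* Hypothetical, simplified state for illustration. */
struct tool_state {
	bool continue_flag;	/* set by --on-threshold continue */
	int actions_run;
	int restarts;
};

/*
 * Shared threshold handler: returns 1 if the caller should keep
 * tracing, 0 if it should break out of its main loop.
 */
static int threshold_handler(struct tool_state *t)
{
	t->actions_run++;		/* perform configured actions */
	if (!t->continue_flag)
		return 0;		/* stop: caller breaks */
	t->restarts++;			/* re-enable tracing, keep going */
	return 1;
}
```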

- Fix segfault on SIGINT during cleanup

The SIGINT handler touches dynamically allocated memory. Detach it before
freeing that memory during cleanup to prevent a segmentation fault and the
discarding of output buffers. Also, properly document SIGINT handling
while at it.
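The ordering fix can be sketched as follows: restore the default SIGINT disposition before freeing any state the handler might touch, so a late signal falls through to the default handler instead of dereferencing freed memory. Variable and function names are illustrative.

```c
#include <signal.h>
#include <stdlib.h>

/* Illustrative state a SIGINT handler might touch. */
static char *output_buf;

static void cleanup(void)
{
	signal(SIGINT, SIG_DFL);	/* detach the handler first */
	free(output_buf);		/* now safe: a late SIGINT takes
					 * the default path, not ours */
	output_buf = NULL;
}
```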

* tag 'trace-rtla-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (28 commits)
Documentation/rtla: Document SIGINT behavior
rtla: Fix segfault on multiple SIGINTs
rtla/utils: Fix loop condition in PID validation
rtla/utils: Fix resource leak in set_comm_sched_attr()
rtla/trace: Fix I/O handling in save_trace_to_file()
rtla/trace: Fix write loop in trace_event_save_hist()
rtla/timerlat: Simplify RTLA_NO_BPF environment variable check
rtla: Use str_has_prefix() for option prefix check
rtla: Enforce exact match for time unit suffixes
rtla: Use str_has_prefix() for prefix checks
rtla: Add str_has_prefix() helper function
rtla: Handle pthread_create() failure properly
rtla/timerlat: Add bounds check for softirq vector
rtla: Simplify code by caching string lengths
rtla: Replace magic number with MAX_PATH
rtla: Introduce common_threshold_handler() helper
rtla/actions: Simplify argument parsing
rtla: Use strdup() to simplify code
rtla: Exit on memory allocation failures during initialization
tools/rtla: Remove unneeded nr_cpus from for_each_monitored_cpu
...

+769 -402
+21
Documentation/tools/rtla/common_appendix.txt
 .. SPDX-License-Identifier: GPL-2.0
 
+SIGINT BEHAVIOR
+===============
+
+On the first SIGINT, RTLA exits after collecting all outstanding samples up to
+the point of receiving the signal.
+
+When receiving more than one SIGINT, RTLA discards any outstanding samples, and
+exits while displaying only samples that have already been processed.
+
+If SIGINT is received during RTLA cleanup, RTLA exits immediately via
+the default signal handler.
+
+Note: For the purpose of SIGINT behavior, the expiry of duration specified via
+the -d/--duration option is treated as equivalent to receiving a SIGINT. For
+example, a SIGINT received after duration expired but samples have not been
+processed yet will drop any outstanding samples.
+
+Also note that when using the timerlat tool in BPF mode, samples are processed
+in-kernel; RTLA only copies them out to display them to the user. A second
+SIGINT does not affect in-kernel sample aggregation.
+
 EXIT STATUS
 ===========
+12
Documentation/tools/rtla/common_timerlat_options.txt
 **Note**: BPF actions require BPF support to be available. If BPF is not available
 or disabled, the tool falls back to tracefs mode and BPF actions are not supported.
+
+**--stack-format** *format*
+
+Adjust the format of the stack trace printed during auto-analysis.
+
+The supported values for *format* are:
+
+* **truncate** Print the stack trace up to the first unknown address (default).
+* **skip** Skip unknown addresses.
+* **full** Print the entire stack trace, including unknown addresses.
+
+For unknown addresses, the raw pointer is printed.
+3
tools/build/Makefile.feature
··· 115 115 hello \ 116 116 libbabeltrace \ 117 117 libcapstone \ 118 + libcheck \ 118 119 libbfd-liberty \ 119 120 libbfd-liberty-z \ 120 121 libopencsd \ ··· 176 175 ifneq ($(PKG_CONFIG),) 177 176 $(foreach package,$(FEATURE_PKG_CONFIG),$(call feature_pkg_config,$(package))) 178 177 endif 178 + 179 + FEATURE_CHECK_LDFLAGS-libcheck = -lcheck 179 180 180 181 # Set FEATURE_CHECK_(C|LD)FLAGS-all for all FEATURE_TESTS features. 181 182 # If in the future we need per-feature checks/flags for features not
+4
tools/build/feature/Makefile
··· 50 50 test-timerfd.bin \ 51 51 test-libbabeltrace.bin \ 52 52 test-libcapstone.bin \ 53 + test-libcheck.bin \ 53 54 test-compile-32.bin \ 54 55 test-compile-x32.bin \ 55 56 test-zlib.bin \ ··· 307 306 308 307 $(OUTPUT)test-libcapstone.bin: 309 308 $(BUILD) # -lcapstone provided by $(FEATURE_CHECK_LDFLAGS-libcapstone) 309 + 310 + $(OUTPUT)test-libcheck.bin: 311 + $(BUILD) # -lcheck is provided by $(FEATURE_CHECK_LDFLAGS-libcheck) 310 312 311 313 $(OUTPUT)test-compile-32.bin: 312 314 $(CC) -m32 -Wall -Werror -o $@ test-compile.c
+8
tools/build/feature/test-libcheck.c
+// SPDX-License-Identifier: GPL-2.0
+#include <check.h>
+
+int main(void)
+{
+	Suite *s = suite_create("test");
+	return s == 0;
+}
+1
tools/tracing/rtla/Build
 rtla-y += src/
+unit_tests-y += tests/unit/
+5
tools/tracing/rtla/Makefile
··· 33 33 FEATURE_TESTS := libtraceevent 34 34 FEATURE_TESTS += libtracefs 35 35 FEATURE_TESTS += libcpupower 36 + FEATURE_TESTS += libcheck 36 37 FEATURE_TESTS += libbpf 37 38 FEATURE_TESTS += clang-bpf-co-re 38 39 FEATURE_TESTS += bpftool-skeletons 39 40 FEATURE_DISPLAY := libtraceevent 40 41 FEATURE_DISPLAY += libtracefs 41 42 FEATURE_DISPLAY += libcpupower 43 + FEATURE_DISPLAY += libcheck 42 44 FEATURE_DISPLAY += libbpf 43 45 FEATURE_DISPLAY += clang-bpf-co-re 44 46 FEATURE_DISPLAY += bpftool-skeletons ··· 49 47 50 48 include $(srctree)/tools/build/Makefile.include 51 49 include Makefile.rtla 50 + include tests/unit/Makefile.unit 52 51 53 52 # check for dependencies only on required targets 54 53 NON_CONFIG_TARGETS := clean install tarball doc doc_clean doc_install ··· 112 109 $(Q)rm -f rtla rtla-static fixdep FEATURE-DUMP rtla-* 113 110 $(Q)rm -rf feature 114 111 $(Q)rm -f src/timerlat.bpf.o src/timerlat.skel.h example/timerlat_bpf_action.o 112 + $(Q)rm -f $(UNIT_TESTS) 113 + 115 114 check: $(RTLA) tests/bpf/bpf_action_map.o 116 115 RTLA=$(RTLA) BPFTOOL=$(SYSTEM_BPFTOOL) prove -o -f -v tests/ 117 116 examples: example/timerlat_bpf_action.o
+8
tools/tracing/rtla/Makefile.config
 $(info Please install libcpupower-dev/kernel-tools-libs-devel)
 endif
 
+$(call feature_check,libcheck)
+ifeq ($(feature-libcheck), 1)
+$(call detected,CONFIG_LIBCHECK)
+else
+$(info libcheck is missing, building without unit tests support.)
+$(info Please install check-devel/check)
+endif
+
 ifndef BUILD_BPF_SKEL
 # BPF skeletons are used to implement improved sample collection, enable
 # them by default.
+1
tools/tracing/rtla/README.txt
 - libtracefs
 - libtraceevent
 - libcpupower (optional, for --deepest-idle-state)
+- libcheck (optional, for unit tests)
 
 For BPF sample collection support, the following extra dependencies are
 required:
+63 -42
tools/tracing/rtla/src/actions.c
··· 15 15 actions_init(struct actions *self) 16 16 { 17 17 self->size = action_default_size; 18 - self->list = calloc(self->size, sizeof(struct action)); 18 + self->list = calloc_fatal(self->size, sizeof(struct action)); 19 19 self->len = 0; 20 20 self->continue_flag = false; 21 21 ··· 50 50 actions_new(struct actions *self) 51 51 { 52 52 if (self->len >= self->size) { 53 - self->size *= 2; 54 - self->list = realloc(self->list, self->size * sizeof(struct action)); 53 + const size_t new_size = self->size * 2; 54 + 55 + self->list = reallocarray_fatal(self->list, new_size, sizeof(struct action)); 56 + self->size = new_size; 55 57 } 56 58 57 59 return &self->list[self->len++]; ··· 62 60 /* 63 61 * actions_add_trace_output - add an action to output trace 64 62 */ 65 - int 63 + void 66 64 actions_add_trace_output(struct actions *self, const char *trace_output) 67 65 { 68 66 struct action *action = actions_new(self); 69 67 70 68 self->present[ACTION_TRACE_OUTPUT] = true; 71 69 action->type = ACTION_TRACE_OUTPUT; 72 - action->trace_output = calloc(strlen(trace_output) + 1, sizeof(char)); 73 - if (!action->trace_output) 74 - return -1; 75 - strcpy(action->trace_output, trace_output); 76 - 77 - return 0; 70 + action->trace_output = strdup_fatal(trace_output); 78 71 } 79 72 80 73 /* 81 74 * actions_add_trace_output - add an action to send signal to a process 82 75 */ 83 - int 76 + void 84 77 actions_add_signal(struct actions *self, int signal, int pid) 85 78 { 86 79 struct action *action = actions_new(self); ··· 84 87 action->type = ACTION_SIGNAL; 85 88 action->signal = signal; 86 89 action->pid = pid; 87 - 88 - return 0; 89 90 } 90 91 91 92 /* 92 93 * actions_add_shell - add an action to execute a shell command 93 94 */ 94 - int 95 + void 95 96 actions_add_shell(struct actions *self, const char *command) 96 97 { 97 98 struct action *action = actions_new(self); 98 99 99 100 self->present[ACTION_SHELL] = true; 100 101 action->type = ACTION_SHELL; 101 - action->command = 
calloc(strlen(command) + 1, sizeof(char)); 102 - if (!action->command) 103 - return -1; 104 - strcpy(action->command, command); 105 - 106 - return 0; 102 + action->command = strdup_fatal(command); 107 103 } 108 104 109 105 /* 110 106 * actions_add_continue - add an action to resume measurement 111 107 */ 112 - int 108 + void 113 109 actions_add_continue(struct actions *self) 114 110 { 115 111 struct action *action = actions_new(self); 116 112 117 113 self->present[ACTION_CONTINUE] = true; 118 114 action->type = ACTION_CONTINUE; 119 - 120 - return 0; 121 115 } 116 + 117 + static inline const char *__extract_arg(const char *token, const char *opt, size_t opt_len) 118 + { 119 + const size_t tok_len = strlen(token); 120 + 121 + if (tok_len <= opt_len) 122 + return NULL; 123 + 124 + if (strncmp(token, opt, opt_len)) 125 + return NULL; 126 + 127 + return token + opt_len; 128 + } 129 + 130 + /* 131 + * extract_arg - extract argument value from option token 132 + * @token: option token (e.g., "file=trace.txt") 133 + * @opt: option name to match (e.g., "file") 134 + * 135 + * Returns pointer to argument value after "=" if token matches "opt=", 136 + * otherwise returns NULL. 
137 + */ 138 + #define extract_arg(token, opt) __extract_arg(token, opt "=", STRING_LENGTH(opt "=")) 122 139 123 140 /* 124 141 * actions_parse - add an action based on text specification ··· 143 132 enum action_type type = ACTION_NONE; 144 133 const char *token; 145 134 char trigger_c[strlen(trigger) + 1]; 135 + const char *arg_value; 146 136 147 137 /* For ACTION_SIGNAL */ 148 138 int signal = 0, pid = 0; ··· 176 164 if (token == NULL) 177 165 trace_output = tracefn; 178 166 else { 179 - if (strlen(token) > 5 && strncmp(token, "file=", 5) == 0) { 180 - trace_output = token + 5; 181 - } else { 167 + trace_output = extract_arg(token, "file"); 168 + if (!trace_output) 182 169 /* Invalid argument */ 183 170 return -1; 184 - } 185 171 186 172 token = strtok(NULL, ","); 187 173 if (token != NULL) 188 174 /* Only one argument allowed */ 189 175 return -1; 190 176 } 191 - return actions_add_trace_output(self, trace_output); 177 + actions_add_trace_output(self, trace_output); 178 + break; 192 179 case ACTION_SIGNAL: 193 180 /* Takes two arguments, num (signal) and pid */ 194 181 while (token != NULL) { 195 - if (strlen(token) > 4 && strncmp(token, "num=", 4) == 0) { 196 - if (strtoi(token + 4, &signal)) 197 - return -1; 198 - } else if (strlen(token) > 4 && strncmp(token, "pid=", 4) == 0) { 199 - if (strncmp(token + 4, "parent", 7) == 0) 200 - pid = -1; 201 - else if (strtoi(token + 4, &pid)) 182 + arg_value = extract_arg(token, "num"); 183 + if (arg_value) { 184 + if (strtoi(arg_value, &signal)) 202 185 return -1; 203 186 } else { 204 - /* Invalid argument */ 205 - return -1; 187 + arg_value = extract_arg(token, "pid"); 188 + if (arg_value) { 189 + if (strncmp_static(arg_value, "parent") == 0) 190 + pid = -1; 191 + else if (strtoi(arg_value, &pid)) 192 + return -1; 193 + } else { 194 + /* Invalid argument */ 195 + return -1; 196 + } 206 197 } 207 198 208 199 token = strtok(NULL, ","); ··· 215 200 /* Missing argument */ 216 201 return -1; 217 202 218 - return 
actions_add_signal(self, signal, pid); 203 + actions_add_signal(self, signal, pid); 204 + break; 219 205 case ACTION_SHELL: 220 206 if (token == NULL) 221 207 return -1; 222 - if (strlen(token) > 8 && strncmp(token, "command=", 8) == 0) 223 - return actions_add_shell(self, token + 8); 224 - return -1; 208 + arg_value = extract_arg(token, "command"); 209 + if (!arg_value) 210 + return -1; 211 + actions_add_shell(self, arg_value); 212 + break; 225 213 case ACTION_CONTINUE: 226 214 /* Takes no argument */ 227 215 if (token != NULL) 228 216 return -1; 229 - return actions_add_continue(self); 217 + actions_add_continue(self); 218 + break; 230 219 default: 231 220 return -1; 232 221 } 222 + 223 + return 0; 233 224 } 234 225 235 226 /*
+4 -4
tools/tracing/rtla/src/actions.h
··· 49 49 50 50 void actions_init(struct actions *self); 51 51 void actions_destroy(struct actions *self); 52 - int actions_add_trace_output(struct actions *self, const char *trace_output); 53 - int actions_add_signal(struct actions *self, int signal, int pid); 54 - int actions_add_shell(struct actions *self, const char *command); 55 - int actions_add_continue(struct actions *self); 52 + void actions_add_trace_output(struct actions *self, const char *trace_output); 53 + void actions_add_signal(struct actions *self, int signal, int pid); 54 + void actions_add_shell(struct actions *self, const char *command); 55 + void actions_add_continue(struct actions *self); 56 56 int actions_parse(struct actions *self, const char *trigger, const char *tracefn); 57 57 int actions_perform(struct actions *self);
+97 -23
tools/tracing/rtla/src/common.c
··· 5 5 #include <signal.h> 6 6 #include <stdlib.h> 7 7 #include <string.h> 8 - #include <unistd.h> 9 8 #include <getopt.h> 9 + #include <sys/sysinfo.h> 10 + 10 11 #include "common.h" 11 12 12 13 struct trace_instance *trace_inst; 13 14 volatile int stop_tracing; 15 + int nr_cpus; 14 16 15 17 static void stop_trace(int sig) 16 18 { ··· 39 37 signal(SIGALRM, stop_trace); 40 38 alarm(params->duration); 41 39 } 40 + } 41 + 42 + /* 43 + * unset_signals - unsets the signals to stop the tool 44 + */ 45 + static void unset_signals(struct common_params *params) 46 + { 47 + signal(SIGINT, SIG_DFL); 48 + if (params->duration) { 49 + alarm(0); 50 + signal(SIGALRM, SIG_DFL); 51 + } 52 + } 53 + 54 + /* 55 + * getopt_auto - auto-generates optstring from long_options 56 + */ 57 + int getopt_auto(int argc, char **argv, const struct option *long_opts) 58 + { 59 + char opts[256]; 60 + int n = 0; 61 + 62 + for (int i = 0; long_opts[i].name; i++) { 63 + if (long_opts[i].val < 32 || long_opts[i].val > 127) 64 + continue; 65 + 66 + if (n + 4 >= sizeof(opts)) 67 + fatal("optstring buffer overflow"); 68 + 69 + opts[n++] = long_opts[i].val; 70 + 71 + if (long_opts[i].has_arg == required_argument) 72 + opts[n++] = ':'; 73 + else if (long_opts[i].has_arg == optional_argument) { 74 + opts[n++] = ':'; 75 + opts[n++] = ':'; 76 + } 77 + } 78 + 79 + opts[n] = '\0'; 80 + 81 + return getopt_long(argc, argv, opts, long_opts, NULL); 42 82 } 43 83 44 84 /* ··· 113 69 }; 114 70 115 71 opterr = 0; 116 - c = getopt_long(argc, argv, "c:C::Dd:e:H:P:", long_options, NULL); 72 + c = getopt_auto(argc, argv, long_options); 117 73 opterr = 1; 118 74 119 75 switch (c) { ··· 179 135 } 180 136 181 137 if (!params->cpus) { 182 - for (i = 0; i < sysconf(_SC_NPROCESSORS_CONF); i++) 138 + for (i = 0; i < nr_cpus; i++) 183 139 CPU_SET(i, &params->monitored_cpus); 184 140 } 185 141 ··· 219 175 } 220 176 221 177 178 + /** 179 + * common_threshold_handler - handle latency threshold overflow 180 + * @tool: pointer to the 
osnoise_tool instance containing trace contexts 181 + * 182 + * Executes the configured threshold actions (e.g., saving trace, printing, 183 + * sending signals). If the continue flag is set (--on-threshold continue), 184 + * restarts the auxiliary trace instances to continue monitoring. 185 + * 186 + * Return: 0 for success, -1 for error. 187 + */ 188 + int 189 + common_threshold_handler(const struct osnoise_tool *tool) 190 + { 191 + actions_perform(&tool->params->threshold_actions); 192 + 193 + if (!should_continue_tracing(tool->params)) 194 + /* continue flag not set, break */ 195 + return 0; 196 + 197 + /* continue action reached, re-enable tracing */ 198 + if (tool->record && trace_instance_start(&tool->record->trace)) 199 + goto err; 200 + if (tool->aa && trace_instance_start(&tool->aa->trace)) 201 + goto err; 202 + 203 + return 0; 204 + 205 + err: 206 + err_msg("Error restarting trace\n"); 207 + return -1; 208 + } 209 + 222 210 int run_tool(struct tool_ops *ops, int argc, char *argv[]) 223 211 { 224 212 struct common_params *params; ··· 259 183 bool stopped; 260 184 int retval; 261 185 186 + nr_cpus = get_nprocs_conf(); 262 187 params = ops->parse_args(argc, argv); 263 188 if (!params) 264 189 exit(1); ··· 348 271 params->user.cgroup_name = params->cgroup_name; 349 272 350 273 retval = pthread_create(&user_thread, NULL, timerlat_u_dispatcher, &params->user); 351 - if (retval) 274 + if (retval) { 352 275 err_msg("Error creating timerlat user-space threads\n"); 276 + goto out_trace; 277 + } 353 278 } 354 279 355 280 retval = ops->enable(tool); ··· 363 284 364 285 retval = ops->main(tool); 365 286 if (retval) 366 - goto out_trace; 287 + goto out_signals; 367 288 368 289 if (params->user_workload && !params->user.stopped_running) { 369 290 params->user.should_run = 0; ··· 385 306 if (ops->analyze) 386 307 ops->analyze(tool, stopped); 387 308 309 + out_signals: 310 + unset_signals(params); 388 311 out_trace: 389 312 trace_events_destroy(&tool->record->trace, 
params->events); 390 313 params->events = NULL; ··· 433 352 /* stop tracing requested, do not perform actions */ 434 353 return 0; 435 354 436 - actions_perform(&params->threshold_actions); 355 + retval = common_threshold_handler(tool); 356 + if (retval) 357 + return retval; 437 358 438 - if (!params->threshold_actions.continue_flag) 439 - /* continue flag not set, break */ 359 + 360 + if (!should_continue_tracing(params)) 440 361 return 0; 441 362 442 - /* continue action reached, re-enable tracing */ 443 - if (record) 444 - trace_instance_start(&record->trace); 445 - if (tool->aa) 446 - trace_instance_start(&tool->aa->trace); 447 363 trace_instance_start(trace); 448 364 } 449 365 ··· 481 403 /* stop tracing requested, do not perform actions */ 482 404 break; 483 405 484 - actions_perform(&params->threshold_actions); 406 + retval = common_threshold_handler(tool); 407 + if (retval) 408 + return retval; 485 409 486 - if (!params->threshold_actions.continue_flag) 487 - /* continue flag not set, break */ 488 - break; 410 + if (!should_continue_tracing(params)) 411 + return 0; 489 412 490 - /* continue action reached, re-enable tracing */ 491 - if (tool->record) 492 - trace_instance_start(&tool->record->trace); 493 - if (tool->aa) 494 - trace_instance_start(&tool->aa->trace); 495 - trace_instance_start(&tool->trace); 413 + trace_instance_start(trace); 496 414 } 497 415 498 416 /* is there still any user-threads ? */
+23 -1
tools/tracing/rtla/src/common.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 #pragma once 3 3 4 + #include <getopt.h> 4 5 #include "actions.h" 5 6 #include "timerlat_u.h" 6 7 #include "trace.h" ··· 108 107 struct timerlat_u_params user; 109 108 }; 110 109 111 - #define for_each_monitored_cpu(cpu, nr_cpus, common) \ 110 + extern int nr_cpus; 111 + 112 + #define for_each_monitored_cpu(cpu, common) \ 112 113 for (cpu = 0; cpu < nr_cpus; cpu++) \ 113 114 if (!(common)->cpus || CPU_ISSET(cpu, &(common)->monitored_cpus)) 114 115 ··· 146 143 void (*free)(struct osnoise_tool *tool); 147 144 }; 148 145 146 + /** 147 + * should_continue_tracing - check if tracing should continue after threshold 148 + * @params: pointer to the common parameters structure 149 + * 150 + * Returns true if the continue action was configured (--on-threshold continue), 151 + * indicating that tracing should be restarted after handling the threshold event. 152 + * 153 + * Return: 1 if tracing should continue, 0 otherwise. 154 + */ 155 + static inline int 156 + should_continue_tracing(const struct common_params *params) 157 + { 158 + return params->threshold_actions.continue_flag; 159 + } 160 + 161 + int 162 + common_threshold_handler(const struct osnoise_tool *tool); 163 + 149 164 int osnoise_set_cpus(struct osnoise_context *context, char *cpus); 150 165 void osnoise_restore_cpus(struct osnoise_context *context); 151 166 ··· 177 156 int osnoise_set_stop_total_us(struct osnoise_context *context, 178 157 long long stop_total_us); 179 158 159 + int getopt_auto(int argc, char **argv, const struct option *long_opts); 180 160 int common_parse_options(int argc, char **argv, struct common_params *common); 181 161 int common_apply_config(struct osnoise_tool *tool, struct common_params *params); 182 162 int top_main_loop(struct osnoise_tool *tool);
+8 -18
tools/tracing/rtla/src/osnoise.c
··· 62 62 if (!context->curr_cpus) 63 63 return -1; 64 64 65 - snprintf(buffer, 1024, "%s\n", cpus); 65 + snprintf(buffer, ARRAY_SIZE(buffer), "%s\n", cpus); 66 66 67 67 debug_msg("setting cpus to %s from %s", cpus, context->orig_cpus); 68 68 ··· 938 938 { 939 939 struct osnoise_context *context; 940 940 941 - context = calloc(1, sizeof(*context)); 942 - if (!context) 943 - return NULL; 941 + context = calloc_fatal(1, sizeof(*context)); 944 942 945 943 context->orig_stop_us = OSNOISE_OPTION_INIT_VAL; 946 944 context->stop_us = OSNOISE_OPTION_INIT_VAL; ··· 1015 1017 struct osnoise_tool *osnoise_init_tool(char *tool_name) 1016 1018 { 1017 1019 struct osnoise_tool *top; 1018 - int retval; 1019 1020 1020 - top = calloc(1, sizeof(*top)); 1021 - if (!top) 1022 - return NULL; 1023 - 1021 + top = calloc_fatal(1, sizeof(*top)); 1024 1022 top->context = osnoise_context_alloc(); 1025 - if (!top->context) 1026 - goto out_err; 1027 1023 1028 - retval = trace_instance_init(&top->trace, tool_name); 1029 - if (retval) 1030 - goto out_err; 1024 + if (trace_instance_init(&top->trace, tool_name)) { 1025 + osnoise_destroy_tool(top); 1026 + return NULL; 1027 + } 1031 1028 1032 1029 return top; 1033 - out_err: 1034 - osnoise_destroy_tool(top); 1035 - return NULL; 1036 1030 } 1037 1031 1038 1032 /* ··· 1210 1220 1211 1221 if ((strcmp(argv[1], "-h") == 0) || (strcmp(argv[1], "--help") == 0)) { 1212 1222 osnoise_usage(0); 1213 - } else if (strncmp(argv[1], "-", 1) == 0) { 1223 + } else if (str_has_prefix(argv[1], "-")) { 1214 1224 /* the user skipped the tool, call the default one */ 1215 1225 run_tool(&osnoise_top_ops, argc, argv); 1216 1226 exit(0);
+18 -33
tools/tracing/rtla/src/osnoise_hist.c
··· 29 29 struct osnoise_hist_cpu *hist; 30 30 int entries; 31 31 int bucket_size; 32 - int nr_cpus; 33 32 }; 34 33 35 34 /* ··· 40 41 int cpu; 41 42 42 43 /* one histogram for IRQ and one for thread, per CPU */ 43 - for (cpu = 0; cpu < data->nr_cpus; cpu++) { 44 + for (cpu = 0; cpu < nr_cpus; cpu++) { 44 45 if (data->hist[cpu].samples) 45 46 free(data->hist[cpu].samples); 46 47 } ··· 61 62 * osnoise_alloc_histogram - alloc runtime data 62 63 */ 63 64 static struct osnoise_hist_data 64 - *osnoise_alloc_histogram(int nr_cpus, int entries, int bucket_size) 65 + *osnoise_alloc_histogram(int entries, int bucket_size) 65 66 { 66 67 struct osnoise_hist_data *data; 67 68 int cpu; ··· 72 73 73 74 data->entries = entries; 74 75 data->bucket_size = bucket_size; 75 - data->nr_cpus = nr_cpus; 76 76 77 77 data->hist = calloc(1, sizeof(*data->hist) * nr_cpus); 78 78 if (!data->hist) ··· 244 246 if (!params->common.hist.no_index) 245 247 trace_seq_printf(s, "Index"); 246 248 247 - for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) { 249 + for_each_monitored_cpu(cpu, &params->common) { 248 250 249 251 if (!data->hist[cpu].count) 250 252 continue; ··· 273 275 if (!params->common.hist.no_index) 274 276 trace_seq_printf(trace->seq, "count:"); 275 277 276 - for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) { 277 - 278 + for_each_monitored_cpu(cpu, &params->common) { 278 279 if (!data->hist[cpu].count) 279 280 continue; 280 281 ··· 284 287 if (!params->common.hist.no_index) 285 288 trace_seq_printf(trace->seq, "min: "); 286 289 287 - for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) { 290 + for_each_monitored_cpu(cpu, &params->common) { 288 291 289 292 if (!data->hist[cpu].count) 290 293 continue; ··· 297 300 if (!params->common.hist.no_index) 298 301 trace_seq_printf(trace->seq, "avg: "); 299 302 300 - for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) { 303 + for_each_monitored_cpu(cpu, &params->common) { 301 304 302 305 if 
(!data->hist[cpu].count) 303 306 continue; ··· 313 316 if (!params->common.hist.no_index) 314 317 trace_seq_printf(trace->seq, "max: "); 315 318 316 - for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) { 319 + for_each_monitored_cpu(cpu, &params->common) { 317 320 318 321 if (!data->hist[cpu].count) 319 322 continue; ··· 348 351 trace_seq_printf(trace->seq, "%-6d", 349 352 bucket * data->bucket_size); 350 353 351 - for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) { 354 + for_each_monitored_cpu(cpu, &params->common) { 352 355 353 356 if (!data->hist[cpu].count) 354 357 continue; ··· 384 387 if (!params->common.hist.no_index) 385 388 trace_seq_printf(trace->seq, "over: "); 386 389 387 - for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) { 390 + for_each_monitored_cpu(cpu, &params->common) { 388 391 389 392 if (!data->hist[cpu].count) 390 393 continue; ··· 463 466 int c; 464 467 char *trace_output = NULL; 465 468 466 - params = calloc(1, sizeof(*params)); 467 - if (!params) 468 - exit(1); 469 + params = calloc_fatal(1, sizeof(*params)); 469 470 470 471 actions_init(&params->common.threshold_actions); 471 472 actions_init(&params->common.end_actions); ··· 501 506 if (common_parse_options(argc, argv, &params->common)) 502 507 continue; 503 508 504 - c = getopt_long(argc, argv, "a:b:E:hp:r:s:S:t::T:01234:5:6:7:", 505 - long_options, NULL); 509 + c = getopt_auto(argc, argv, long_options); 506 510 507 511 /* detect the end of the options. 
*/ 508 512 if (c == -1) ··· 573 579 params->common.hist.with_zeros = 1; 574 580 break; 575 581 case '4': /* trigger */ 576 - if (params->common.events) { 577 - retval = trace_event_add_trigger(params->common.events, optarg); 578 - if (retval) 579 - fatal("Error adding trigger %s", optarg); 580 - } else { 582 + if (params->common.events) 583 + trace_event_add_trigger(params->common.events, optarg); 584 + else 581 585 fatal("--trigger requires a previous -e"); 582 - } 583 586 break; 584 587 case '5': /* filter */ 585 - if (params->common.events) { 586 - retval = trace_event_add_filter(params->common.events, optarg); 587 - if (retval) 588 - fatal("Error adding filter %s", optarg); 589 - } else { 588 + if (params->common.events) 589 + trace_event_add_filter(params->common.events, optarg); 590 + else 590 591 fatal("--filter requires a previous -e"); 591 - } 592 592 break; 593 593 case '6': 594 594 params->common.warmup = get_llong_from_str(optarg); ··· 635 647 *osnoise_init_hist(struct common_params *params) 636 648 { 637 649 struct osnoise_tool *tool; 638 - int nr_cpus; 639 - 640 - nr_cpus = sysconf(_SC_NPROCESSORS_CONF); 641 650 642 651 tool = osnoise_init_tool("osnoise_hist"); 643 652 if (!tool) 644 653 return NULL; 645 654 646 - tool->data = osnoise_alloc_histogram(nr_cpus, params->hist.entries, 655 + tool->data = osnoise_alloc_histogram(params->hist.entries, 647 656 params->hist.bucket_size); 648 657 if (!tool->data) 649 658 goto out_err;
+11 -30
tools/tracing/rtla/src/osnoise_top.c
··· 31 31 32 32 struct osnoise_top_data { 33 33 struct osnoise_top_cpu *cpu_data; 34 - int nr_cpus; 35 34 }; 36 35 37 36 /* ··· 50 51 /* 51 52 * osnoise_alloc_histogram - alloc runtime data 52 53 */ 53 - static struct osnoise_top_data *osnoise_alloc_top(int nr_cpus) 54 + static struct osnoise_top_data *osnoise_alloc_top(void) 54 55 { 55 56 struct osnoise_top_data *data; 56 57 57 58 data = calloc(1, sizeof(*data)); 58 59 if (!data) 59 60 return NULL; 60 - 61 - data->nr_cpus = nr_cpus; 62 61 63 62 /* one set of histograms per CPU */ 64 63 data->cpu_data = calloc(1, sizeof(*data->cpu_data) * nr_cpus); ··· 229 232 { 230 233 struct osnoise_params *params = to_osnoise_params(top->params); 231 234 struct trace_instance *trace = &top->trace; 232 - static int nr_cpus = -1; 233 235 int i; 234 - 235 - if (nr_cpus == -1) 236 - nr_cpus = sysconf(_SC_NPROCESSORS_CONF); 237 236 238 237 if (!params->common.quiet) 239 238 clear_terminal(trace->seq); 240 239 241 240 osnoise_top_header(top); 242 241 243 - for_each_monitored_cpu(i, nr_cpus, &params->common) { 242 + for_each_monitored_cpu(i, &params->common) { 244 243 osnoise_top_print(top, i); 245 244 } 246 245 ··· 312 319 int c; 313 320 char *trace_output = NULL; 314 321 315 - params = calloc(1, sizeof(*params)); 316 - if (!params) 317 - exit(1); 322 + params = calloc_fatal(1, sizeof(*params)); 318 323 319 324 actions_init(&params->common.threshold_actions); 320 325 actions_init(&params->common.end_actions); ··· 349 358 if (common_parse_options(argc, argv, &params->common)) 350 359 continue; 351 360 352 - c = getopt_long(argc, argv, "a:hp:qr:s:S:t::T:0:1:2:3:", 353 - long_options, NULL); 361 + c = getopt_auto(argc, argv, long_options); 354 362 355 363 /* Detect the end of the options. 
*/ 356 364 if (c == -1) ··· 400 410 params->threshold = get_llong_from_str(optarg); 401 411 break; 402 412 case '0': /* trigger */ 403 - if (params->common.events) { 404 - retval = trace_event_add_trigger(params->common.events, optarg); 405 - if (retval) 406 - fatal("Error adding trigger %s", optarg); 407 - } else { 413 + if (params->common.events) 414 + trace_event_add_trigger(params->common.events, optarg); 415 + else 408 416 fatal("--trigger requires a previous -e"); 409 - } 410 417 break; 411 418 case '1': /* filter */ 412 - if (params->common.events) { 413 - retval = trace_event_add_filter(params->common.events, optarg); 414 - if (retval) 415 - fatal("Error adding filter %s", optarg); 416 - } else { 419 + if (params->common.events) 420 + trace_event_add_filter(params->common.events, optarg); 421 + else 417 422 fatal("--filter requires a previous -e"); 418 - } 419 423 break; 420 424 case '2': 421 425 params->common.warmup = get_llong_from_str(optarg); ··· 479 495 struct osnoise_tool *osnoise_init_top(struct common_params *params) 480 496 { 481 497 struct osnoise_tool *tool; 482 - int nr_cpus; 483 - 484 - nr_cpus = sysconf(_SC_NPROCESSORS_CONF); 485 498 486 499 tool = osnoise_init_tool("osnoise_top"); 487 500 if (!tool) 488 501 return NULL; 489 502 490 - tool->data = osnoise_alloc_top(nr_cpus); 503 + tool->data = osnoise_alloc_top(); 491 504 if (!tool->data) { 492 505 osnoise_destroy_tool(tool); 493 506 return NULL;
+7 -9
tools/tracing/rtla/src/timerlat.c
···
 timerlat_apply_config(struct osnoise_tool *tool, struct timerlat_params *params)
 {
         int retval;
+        const char *const rtla_no_bpf = getenv("RTLA_NO_BPF");
 
         /*
          * Try to enable BPF, unless disabled explicitly.
          * If BPF enablement fails, fall back to tracefs mode.
          */
-        if (getenv("RTLA_NO_BPF") && strncmp(getenv("RTLA_NO_BPF"), "1", 2) == 0) {
+        if (rtla_no_bpf && strncmp_static(rtla_no_bpf, "1") == 0) {
                 debug_msg("RTLA_NO_BPF set, disabling BPF\n");
                 params->mode = TRACING_MODE_TRACEFS;
         } else if (!tep_find_event_by_name(tool->trace.tep, "osnoise", "timerlat_sample")) {
···
 int timerlat_enable(struct osnoise_tool *tool)
 {
         struct timerlat_params *params = to_timerlat_params(tool->params);
-        int retval, nr_cpus, i;
+        int retval, i;
 
         if (params->dma_latency >= 0) {
                 dma_latency_fd = set_cpu_dma_latency(params->dma_latency);
···
                 return -1;
         }
 
-        nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
-
-        for_each_monitored_cpu(i, nr_cpus, &params->common) {
+        for_each_monitored_cpu(i, &params->common) {
                 if (save_cpu_idle_disable_state(i) < 0) {
                         err_msg("Could not save cpu idle state.\n");
                         return -1;
···
         if (!tool->aa)
                 return -1;
 
-        retval = timerlat_aa_init(tool->aa, params->dump_tasks);
+        retval = timerlat_aa_init(tool->aa, params->dump_tasks, params->stack_format);
         if (retval) {
                 err_msg("Failed to enable the auto analysis instance\n");
                 return retval;
···
 void timerlat_free(struct osnoise_tool *tool)
 {
         struct timerlat_params *params = to_timerlat_params(tool->params);
-        int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
         int i;
 
         timerlat_aa_destroy();
         if (dma_latency_fd >= 0)
                 close(dma_latency_fd);
         if (params->deepest_idle_state >= -1) {
-                for_each_monitored_cpu(i, nr_cpus, &params->common) {
+                for_each_monitored_cpu(i, &params->common) {
                         restore_cpu_idle_disable_state(i);
                 }
         }
···
 
         if ((strcmp(argv[1], "-h") == 0) || (strcmp(argv[1], "--help") == 0)) {
                 timerlat_usage(0);
-        } else if (strncmp(argv[1], "-", 1) == 0) {
+        } else if (str_has_prefix(argv[1], "-")) {
                 /* the user skipped the tool, call the default one */
                 run_tool(&timerlat_top_ops, argc, argv);
                 exit(0);
+1
tools/tracing/rtla/src/timerlat.h
···
         int deepest_idle_state;
         enum timerlat_tracing_mode mode;
         const char *bpf_action_program;
+        enum stack_format stack_format;
 };
 
 #define to_timerlat_params(ptr) container_of(ptr, struct timerlat_params, common)
+35 -16
tools/tracing/rtla/src/timerlat_aa.c
···
  * The analysis context and system wide view
  */
 struct timerlat_aa_context {
-        int nr_cpus;
         int dump_tasks;
+        enum stack_format stack_format;
 
         /* per CPU data */
         struct timerlat_aa_data *taa_data;
···
         taa_data->thread_softirq_sum += duration;
 
         trace_seq_printf(taa_data->softirqs_seq, " %24s:%-3llu %.*s %9.2f us\n",
-                         softirq_name[vector], vector,
-                         24, spaces,
+                         vector < ARRAY_SIZE(softirq_name) ? softirq_name[vector] : "UNKNOWN",
+                         vector, 24, spaces,
                          ns_to_usf(duration));
         return 0;
 }
···
 {
         struct timerlat_aa_context *taa_ctx = timerlat_aa_get_ctx();
         struct timerlat_aa_data *taa_data = timerlat_aa_get_data(taa_ctx, record->cpu);
+        enum stack_format stack_format = taa_ctx->stack_format;
         unsigned long *caller;
         const char *function;
-        int val, i;
+        int val;
+        unsigned long long i;
 
         trace_seq_reset(taa_data->stack_seq);
 
         trace_seq_printf(taa_data->stack_seq, " Blocking thread stack trace\n");
         caller = tep_get_field_raw(s, event, "caller", record, &val, 1);
+
         if (caller) {
-                for (i = 0; ; i++) {
+                unsigned long long size;
+                unsigned long long max_entries;
+
+                if (tep_get_field_val(s, event, "size", record, &size, 1) == 0)
+                        max_entries = size < 64 ? size : 64;
+                else
+                        max_entries = 64;
+
+                for (i = 0; i < max_entries; i++) {
                         function = tep_find_function(taa_ctx->tool->trace.tep, caller[i]);
-                        if (!function)
-                                break;
-                        trace_seq_printf(taa_data->stack_seq, " %.*s -> %s\n",
-                                         14, spaces, function);
+                        if (!function) {
+                                if (stack_format == STACK_FORMAT_TRUNCATE)
+                                        break;
+                                else if (stack_format == STACK_FORMAT_SKIP)
+                                        continue;
+                                else if (stack_format == STACK_FORMAT_FULL)
+                                        trace_seq_printf(taa_data->stack_seq, " %.*s -> 0x%lx\n",
+                                                         14, spaces, caller[i]);
+                        } else {
+                                trace_seq_printf(taa_data->stack_seq, " %.*s -> %s\n",
+                                                 14, spaces, function);
+                        }
                 }
         }
+
         return 0;
 }
···
         irq_thresh = irq_thresh * 1000;
         thread_thresh = thread_thresh * 1000;
 
-        for (cpu = 0; cpu < taa_ctx->nr_cpus; cpu++) {
+        for (cpu = 0; cpu < nr_cpus; cpu++) {
                 taa_data = timerlat_aa_get_data(taa_ctx, cpu);
 
                 if (irq_thresh && taa_data->tlat_irq_latency >= irq_thresh) {
···
 
         printf("\n");
         printf("Printing CPU tasks:\n");
-        for (cpu = 0; cpu < taa_ctx->nr_cpus; cpu++) {
+        for (cpu = 0; cpu < nr_cpus; cpu++) {
                 taa_data = timerlat_aa_get_data(taa_ctx, cpu);
                 tep = taa_ctx->tool->trace.tep;
···
         if (!taa_ctx->taa_data)
                 return;
 
-        for (i = 0; i < taa_ctx->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 taa_data = timerlat_aa_get_data(taa_ctx, i);
 
                 if (taa_data->prev_irqs_seq) {
···
         struct timerlat_aa_data *taa_data;
         int i;
 
-        for (i = 0; i < taa_ctx->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
 
                 taa_data = timerlat_aa_get_data(taa_ctx, i);
···
  *
  * Returns 0 on success, -1 otherwise.
  */
-int timerlat_aa_init(struct osnoise_tool *tool, int dump_tasks)
+int timerlat_aa_init(struct osnoise_tool *tool, int dump_tasks, enum stack_format stack_format)
 {
-        int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
         struct timerlat_aa_context *taa_ctx;
         int retval;
···
 
         __timerlat_aa_ctx = taa_ctx;
 
-        taa_ctx->nr_cpus = nr_cpus;
         taa_ctx->tool = tool;
         taa_ctx->dump_tasks = dump_tasks;
+        taa_ctx->stack_format = stack_format;
 
         taa_ctx->taa_data = calloc(nr_cpus, sizeof(*taa_ctx->taa_data));
         if (!taa_ctx->taa_data)
+1 -1
tools/tracing/rtla/src/timerlat_aa.h
···
  * Copyright (C) 2023 Red Hat Inc, Daniel Bristot de Oliveira <bristot@kernel.org>
  */
 
-int timerlat_aa_init(struct osnoise_tool *tool, int dump_task);
+int timerlat_aa_init(struct osnoise_tool *tool, int dump_task, enum stack_format stack_format);
 void timerlat_aa_destroy(void);
 
 void timerlat_auto_analysis(int irq_thresh, int thread_thresh);
+8 -11
tools/tracing/rtla/src/timerlat_bpf.c
···
                      int key,
                      long long *value_irq,
                      long long *value_thread,
-                     long long *value_user,
-                     int cpus)
+                     long long *value_user)
 {
         int err;
 
         err = bpf_map__lookup_elem(map_irq, &key,
                                    sizeof(unsigned int), value_irq,
-                                   sizeof(long long) * cpus, 0);
+                                   sizeof(long long) * nr_cpus, 0);
         if (err)
                 return err;
         err = bpf_map__lookup_elem(map_thread, &key,
                                    sizeof(unsigned int), value_thread,
-                                   sizeof(long long) * cpus, 0);
+                                   sizeof(long long) * nr_cpus, 0);
         if (err)
                 return err;
         err = bpf_map__lookup_elem(map_user, &key,
                                    sizeof(unsigned int), value_user,
-                                   sizeof(long long) * cpus, 0);
+                                   sizeof(long long) * nr_cpus, 0);
         if (err)
                 return err;
         return 0;
···
 int timerlat_bpf_get_hist_value(int key,
                                 long long *value_irq,
                                 long long *value_thread,
-                                long long *value_user,
-                                int cpus)
+                                long long *value_user)
 {
         return get_value(bpf->maps.hist_irq,
                          bpf->maps.hist_thread,
                          bpf->maps.hist_user,
-                         key, value_irq, value_thread, value_user, cpus);
+                         key, value_irq, value_thread, value_user);
 }
 
 /*
···
 int timerlat_bpf_get_summary_value(enum summary_field key,
                                    long long *value_irq,
                                    long long *value_thread,
-                                   long long *value_user,
-                                   int cpus)
+                                   long long *value_user)
 {
         return get_value(bpf->maps.summary_irq,
                          bpf->maps.summary_thread,
                          bpf->maps.summary_user,
-                         key, value_irq, value_thread, value_user, cpus);
+                         key, value_irq, value_thread, value_user);
 }
 
 /*
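The "unify number of CPUs into one global variable" item from the pull message reduces to a small pattern, visible above in the switch from a `cpus` parameter to a shared `nr_cpus`. A minimal sketch of that pattern follows; the names and the initialization point are assumptions for illustration (rtla's actual declaration lives elsewhere in its sources):

```c
#include <unistd.h>

/*
 * Single source of truth for the CPU count (sketch).
 * Replaces repeated sysconf() calls and per-function parameters.
 */
int nr_cpus;

/* Called once at startup; every later user just reads nr_cpus. */
static void init_nr_cpus(void)
{
        nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
}
```

Centralizing the value also removes a class of bugs where callers could pass inconsistent counts into helpers such as `get_value()` above.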
+4 -8
tools/tracing/rtla/src/timerlat_bpf.h
···
 int timerlat_bpf_get_hist_value(int key,
                                 long long *value_irq,
                                 long long *value_thread,
-                                long long *value_user,
-                                int cpus);
+                                long long *value_user);
 int timerlat_bpf_get_summary_value(enum summary_field key,
                                    long long *value_irq,
                                    long long *value_thread,
-                                   long long *value_user,
-                                   int cpus);
+                                   long long *value_user);
 int timerlat_load_bpf_action_program(const char *program_path);
 static inline int have_libbpf_support(void) { return 1; }
 #else
···
 static inline int timerlat_bpf_get_hist_value(int key,
                                               long long *value_irq,
                                               long long *value_thread,
-                                              long long *value_user,
-                                              int cpus)
+                                              long long *value_user)
 {
         return -1;
 }
 static inline int timerlat_bpf_get_summary_value(enum summary_field key,
                                                  long long *value_irq,
                                                  long long *value_thread,
-                                                 long long *value_user,
-                                                 int cpus)
+                                                 long long *value_user)
 {
         return -1;
 }
+53 -63
tools/tracing/rtla/src/timerlat_hist.c
···
 #include "timerlat.h"
 #include "timerlat_aa.h"
 #include "timerlat_bpf.h"
+#include "common.h"
 
 struct timerlat_hist_cpu {
         int *irq;
···
         struct timerlat_hist_cpu *hist;
         int entries;
         int bucket_size;
-        int nr_cpus;
 };
 
 /*
···
         int cpu;
 
         /* one histogram for IRQ and one for thread, per CPU */
-        for (cpu = 0; cpu < data->nr_cpus; cpu++) {
+        for (cpu = 0; cpu < nr_cpus; cpu++) {
                 if (data->hist[cpu].irq)
                         free(data->hist[cpu].irq);
···
 /*
  * timerlat_alloc_histogram - alloc runtime data
  */
 static struct timerlat_hist_data
-        *timerlat_alloc_histogram(int nr_cpus, int entries, int bucket_size)
+        *timerlat_alloc_histogram(int entries, int bucket_size)
 {
         struct timerlat_hist_data *data;
         int cpu;
···
 
         data->entries = entries;
         data->bucket_size = bucket_size;
-        data->nr_cpus = nr_cpus;
 
         /* one set of histograms per CPU */
         data->hist = calloc(1, sizeof(*data->hist) * nr_cpus);
···
 {
         struct timerlat_hist_data *data = tool->data;
         int i, j, err;
-        long long value_irq[data->nr_cpus],
-                  value_thread[data->nr_cpus],
-                  value_user[data->nr_cpus];
+        long long value_irq[nr_cpus],
+                  value_thread[nr_cpus],
+                  value_user[nr_cpus];
 
         /* Pull histogram */
         for (i = 0; i < data->entries; i++) {
                 err = timerlat_bpf_get_hist_value(i, value_irq, value_thread,
-                                                  value_user, data->nr_cpus);
+                                                  value_user);
                 if (err)
                         return err;
-                for (j = 0; j < data->nr_cpus; j++) {
+                for (j = 0; j < nr_cpus; j++) {
                         data->hist[j].irq[i] = value_irq[j];
                         data->hist[j].thread[i] = value_thread[j];
                         data->hist[j].user[i] = value_user[j];
 
         /* Pull summary */
         err = timerlat_bpf_get_summary_value(SUMMARY_COUNT,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->hist[i].irq_count = value_irq[i];
                 data->hist[i].thread_count = value_thread[i];
                 data->hist[i].user_count = value_user[i];
         }
 
         err = timerlat_bpf_get_summary_value(SUMMARY_MIN,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->hist[i].min_irq = value_irq[i];
                 data->hist[i].min_thread = value_thread[i];
                 data->hist[i].min_user = value_user[i];
         }
 
         err = timerlat_bpf_get_summary_value(SUMMARY_MAX,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->hist[i].max_irq = value_irq[i];
                 data->hist[i].max_thread = value_thread[i];
                 data->hist[i].max_user = value_user[i];
         }
 
         err = timerlat_bpf_get_summary_value(SUMMARY_SUM,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->hist[i].sum_irq = value_irq[i];
                 data->hist[i].sum_thread = value_thread[i];
                 data->hist[i].sum_user = value_user[i];
         }
 
         err = timerlat_bpf_get_summary_value(SUMMARY_OVERFLOW,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->hist[i].irq[data->entries] = value_irq[i];
                 data->hist[i].thread[data->entries] = value_thread[i];
                 data->hist[i].user[data->entries] = value_user[i];
···
         if (!params->common.hist.no_index)
                 trace_seq_printf(s, "Index");
 
-        for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) {
+        for_each_monitored_cpu(cpu, &params->common) {
 
                 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
                         continue;
···
         if (!params->common.hist.no_index)
                 trace_seq_printf(trace->seq, "count:");
 
-        for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) {
+        for_each_monitored_cpu(cpu, &params->common) {
 
                 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
                         continue;
···
         if (!params->common.hist.no_index)
                 trace_seq_printf(trace->seq, "min: ");
 
-        for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) {
+        for_each_monitored_cpu(cpu, &params->common) {
 
                 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
                         continue;
···
         if (!params->common.hist.no_index)
                 trace_seq_printf(trace->seq, "avg: ");
 
-        for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) {
+        for_each_monitored_cpu(cpu, &params->common) {
 
                 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
                         continue;
···
         if (!params->common.hist.no_index)
                 trace_seq_printf(trace->seq, "max: ");
 
-        for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) {
+        for_each_monitored_cpu(cpu, &params->common) {
 
                 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
                         continue;
···
         sum.min_thread = ~0;
         sum.min_user = ~0;
 
-        for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) {
+        for_each_monitored_cpu(cpu, &params->common) {
 
                 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
                         continue;
···
                 trace_seq_printf(trace->seq, "%-6d",
                                  bucket * data->bucket_size);
 
-                for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) {
+                for_each_monitored_cpu(cpu, &params->common) {
 
                         if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
                                 continue;
···
         if (!params->common.hist.no_index)
                 trace_seq_printf(trace->seq, "over: ");
 
-        for_each_monitored_cpu(cpu, data->nr_cpus, &params->common) {
+        for_each_monitored_cpu(cpu, &params->common) {
 
                 if (!data->hist[cpu].irq_count && !data->hist[cpu].thread_count)
                         continue;
···
         " --on-threshold <action>: define action to be executed at latency threshold, multiple are allowed",
         " --on-end <action>: define action to be executed at measurement end, multiple are allowed",
         " --bpf-action <program>: load and execute BPF program when latency threshold is exceeded",
+        " --stack-format <format>: set the stack format (truncate, skip, full)",
         NULL,
 };
···
         int c;
         char *trace_output = NULL;
 
-        params = calloc(1, sizeof(*params));
-        if (!params)
-                exit(1);
+        params = calloc_fatal(1, sizeof(*params));
 
         actions_init(&params->common.threshold_actions);
         actions_init(&params->common.end_actions);
···
 
         /* default to BPF mode */
         params->mode = TRACING_MODE_BPF;
+
+        /* default to truncate stack format */
+        params->stack_format = STACK_FORMAT_TRUNCATE;
 
         while (1) {
                 static struct option long_options[] = {
···
                         {"on-threshold", required_argument, 0, '\5'},
                         {"on-end", required_argument, 0, '\6'},
                         {"bpf-action", required_argument, 0, '\7'},
+                        {"stack-format", required_argument, 0, '\10'},
                         {0, 0, 0, 0}
                 };
 
                 if (common_parse_options(argc, argv, &params->common))
                         continue;
 
-                c = getopt_long(argc, argv, "a:b:E:hi:knp:s:t::T:uU0123456:7:8:9\1\2:\3:",
-                                long_options, NULL);
+                c = getopt_auto(argc, argv, long_options);
 
                 /* detect the end of the options. */
                 if (c == -1)
···
                 case '6': /* trigger */
-                        if (params->common.events) {
-                                retval = trace_event_add_trigger(params->common.events, optarg);
-                                if (retval)
-                                        fatal("Error adding trigger %s", optarg);
-                        } else {
+                        if (params->common.events)
+                                trace_event_add_trigger(params->common.events, optarg);
+                        else
                                 fatal("--trigger requires a previous -e");
-                        }
                         break;
                 case '7': /* filter */
-                        if (params->common.events) {
-                                retval = trace_event_add_filter(params->common.events, optarg);
-                                if (retval)
-                                        fatal("Error adding filter %s", optarg);
-                        } else {
+                        if (params->common.events)
+                                trace_event_add_filter(params->common.events, optarg);
+                        else
                                 fatal("--filter requires a previous -e");
-                        }
                         break;
                 case '8':
                         params->dma_latency = get_llong_from_str(optarg);
···
                         break;
                 case '\7':
                         params->bpf_action_program = optarg;
+                        break;
+                case '\10':
+                        params->stack_format = parse_stack_format(optarg);
+                        if (params->stack_format == -1)
+                                fatal("Invalid --stack-format option");
                         break;
                 default:
                         fatal("Invalid option");
···
 *timerlat_init_hist(struct common_params *params)
 {
         struct osnoise_tool *tool;
-        int nr_cpus;
-
-        nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
 
         tool = osnoise_init_tool("timerlat_hist");
         if (!tool)
                 return NULL;
 
-        tool->data = timerlat_alloc_histogram(nr_cpus, params->hist.entries,
+        tool->data = timerlat_alloc_histogram(params->hist.entries,
                                               params->hist.bucket_size);
         if (!tool->data)
                 goto out_err;
···
 static int timerlat_hist_bpf_main_loop(struct osnoise_tool *tool)
 {
-        struct timerlat_params *params = to_timerlat_params(tool->params);
         int retval;
 
         while (!stop_tracing) {
···
 
                 if (!stop_tracing) {
                         /* Threshold overflow, perform actions on threshold */
-                        actions_perform(&params->common.threshold_actions);
+                        retval = common_threshold_handler(tool);
+                        if (retval)
+                                return retval;
 
-                        if (!params->common.threshold_actions.continue_flag)
-                                /* continue flag not set, break */
+                        if (!should_continue_tracing(tool->params))
                                 break;
 
-                        /* continue action reached, re-enable tracing */
-                        if (tool->record)
-                                trace_instance_start(&tool->record->trace);
-                        if (tool->aa)
-                                trace_instance_start(&tool->aa->trace);
-                        timerlat_bpf_restart_tracing();
+                        if (timerlat_bpf_restart_tracing()) {
+                                err_msg("Error restarting BPF trace\n");
+                                return -1;
+                        }
                 }
         }
         timerlat_bpf_detach();
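The `--stack-format` handling above calls `parse_stack_format()`, which is not part of this diff. A plausible sketch follows, built only from what the diff shows (three accepted values, `-1` returned for anything else so the caller can issue `fatal("Invalid --stack-format option")`); the enum ordering and the helper body are assumptions:

```c
#include <string.h>

/* Sketch of the enum consumed by the auto-analysis stack decoder. */
enum stack_format {
        STACK_FORMAT_TRUNCATE,  /* stop at the first unresolvable pointer */
        STACK_FORMAT_SKIP,      /* skip unresolvable pointers, keep going */
        STACK_FORMAT_FULL,      /* print unresolvable pointers as raw addresses */
};

/* Hypothetical parse_stack_format(); returns -1 on unknown input. */
static int parse_stack_format(const char *arg)
{
        if (!strcmp(arg, "truncate"))
                return STACK_FORMAT_TRUNCATE;
        if (!strcmp(arg, "skip"))
                return STACK_FORMAT_SKIP;
        if (!strcmp(arg, "full"))
                return STACK_FORMAT_FULL;
        return -1;
}
```

The three values map directly onto the `break`/`continue`/raw-address branches in the timerlat_aa.c stack decoder diff.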
+50 -64
tools/tracing/rtla/src/timerlat_top.c
···
 #include "timerlat.h"
 #include "timerlat_aa.h"
 #include "timerlat_bpf.h"
+#include "common.h"
 
 struct timerlat_top_cpu {
         unsigned long long irq_count;
···
 
 struct timerlat_top_data {
         struct timerlat_top_cpu *cpu_data;
-        int nr_cpus;
 };
 
 /*
···
 /*
  * timerlat_alloc_histogram - alloc runtime data
  */
-static struct timerlat_top_data *timerlat_alloc_top(int nr_cpus)
+static struct timerlat_top_data *timerlat_alloc_top(void)
 {
         struct timerlat_top_data *data;
         int cpu;
···
         data = calloc(1, sizeof(*data));
         if (!data)
                 return NULL;
-
-        data->nr_cpus = nr_cpus;
 
         /* one set of histograms per CPU */
         data->cpu_data = calloc(1, sizeof(*data->cpu_data) * nr_cpus);
···
 {
         struct timerlat_top_data *data = tool->data;
         int i, err;
-        long long value_irq[data->nr_cpus],
-                  value_thread[data->nr_cpus],
-                  value_user[data->nr_cpus];
+        long long value_irq[nr_cpus],
+                  value_thread[nr_cpus],
+                  value_user[nr_cpus];
 
         /* Pull summary */
         err = timerlat_bpf_get_summary_value(SUMMARY_CURRENT,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->cpu_data[i].cur_irq = value_irq[i];
                 data->cpu_data[i].cur_thread = value_thread[i];
                 data->cpu_data[i].cur_user = value_user[i];
         }
 
         err = timerlat_bpf_get_summary_value(SUMMARY_COUNT,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->cpu_data[i].irq_count = value_irq[i];
                 data->cpu_data[i].thread_count = value_thread[i];
                 data->cpu_data[i].user_count = value_user[i];
         }
 
         err = timerlat_bpf_get_summary_value(SUMMARY_MIN,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->cpu_data[i].min_irq = value_irq[i];
                 data->cpu_data[i].min_thread = value_thread[i];
                 data->cpu_data[i].min_user = value_user[i];
         }
 
         err = timerlat_bpf_get_summary_value(SUMMARY_MAX,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->cpu_data[i].max_irq = value_irq[i];
                 data->cpu_data[i].max_thread = value_thread[i];
                 data->cpu_data[i].max_user = value_user[i];
         }
 
         err = timerlat_bpf_get_summary_value(SUMMARY_SUM,
-                                             value_irq, value_thread, value_user,
-                                             data->nr_cpus);
+                                             value_irq, value_thread, value_user);
         if (err)
                 return err;
-        for (i = 0; i < data->nr_cpus; i++) {
+        for (i = 0; i < nr_cpus; i++) {
                 data->cpu_data[i].sum_irq = value_irq[i];
                 data->cpu_data[i].sum_thread = value_thread[i];
                 data->cpu_data[i].sum_user = value_user[i];
···
         struct timerlat_params *params = to_timerlat_params(top->params);
         struct trace_instance *trace = &top->trace;
         struct timerlat_top_cpu summary;
-        static int nr_cpus = -1;
         int i;
 
         if (params->common.aa_only)
                 return;
-
-        if (nr_cpus == -1)
-                nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
 
         if (!params->common.quiet)
                 clear_terminal(trace->seq);
···
 
         timerlat_top_header(params, top);
 
-        for_each_monitored_cpu(i, nr_cpus, &params->common) {
+        for_each_monitored_cpu(i, &params->common) {
                 timerlat_top_print(top, i);
                 timerlat_top_update_sum(top, i, &summary);
         }
···
         " --on-threshold <action>: define action to be executed at latency threshold, multiple are allowed",
         " --on-end: define action to be executed at measurement end, multiple are allowed",
         " --bpf-action <program>: load and execute BPF program when latency threshold is exceeded",
+        " --stack-format <format>: set the stack format (truncate, skip, full)",
         NULL,
 };
···
         int c;
         char *trace_output = NULL;
 
-        params = calloc(1, sizeof(*params));
-        if (!params)
-                exit(1);
+        params = calloc_fatal(1, sizeof(*params));
 
         actions_init(&params->common.threshold_actions);
         actions_init(&params->common.end_actions);
···
 
         /* default to BPF mode */
         params->mode = TRACING_MODE_BPF;
+
+        /* default to truncate stack format */
+        params->stack_format = STACK_FORMAT_TRUNCATE;
 
         while (1) {
                 static struct option long_options[] = {
···
                         {"on-threshold", required_argument, 0, '9'},
                         {"on-end", required_argument, 0, '\1'},
                         {"bpf-action", required_argument, 0, '\2'},
+                        {"stack-format", required_argument, 0, '\3'},
                         {0, 0, 0, 0}
                 };
 
                 if (common_parse_options(argc, argv, &params->common))
                         continue;
 
-                c = getopt_long(argc, argv, "a:hi:knp:qs:t::T:uU0:1:2:345:6:7:",
-                                long_options, NULL);
+                c = getopt_auto(argc, argv, long_options);
 
                 /* detect the end of the options. */
                 if (c == -1)
···
                         params->common.user_data = true;
                         break;
                 case '0': /* trigger */
-                        if (params->common.events) {
-                                retval = trace_event_add_trigger(params->common.events, optarg);
-                                if (retval)
-                                        fatal("Error adding trigger %s", optarg);
-                        } else {
+                        if (params->common.events)
+                                trace_event_add_trigger(params->common.events, optarg);
+                        else
                                 fatal("--trigger requires a previous -e");
-                        }
                         break;
                 case '1': /* filter */
-                        if (params->common.events) {
-                                retval = trace_event_add_filter(params->common.events, optarg);
-                                if (retval)
-                                        fatal("Error adding filter %s", optarg);
-                        } else {
+                        if (params->common.events)
+                                trace_event_add_filter(params->common.events, optarg);
+                        else
                                 fatal("--filter requires a previous -e");
-                        }
                         break;
                 case '2': /* dma-latency */
                         params->dma_latency = get_llong_from_str(optarg);
···
                         break;
                 case '\2':
                         params->bpf_action_program = optarg;
+                        break;
+                case '\3':
+                        params->stack_format = parse_stack_format(optarg);
+                        if (params->stack_format == -1)
+                                fatal("Invalid --stack-format option");
                         break;
                 default:
                         fatal("Invalid option");
···
 *timerlat_init_top(struct common_params *params)
 {
         struct osnoise_tool *top;
-        int nr_cpus;
-
-        nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
 
         top = osnoise_init_tool("timerlat_top");
         if (!top)
                 return NULL;
 
-        top->data = timerlat_alloc_top(nr_cpus);
+        top->data = timerlat_alloc_top();
         if (!top->data)
                 goto out_err;
···
 static int
 timerlat_top_bpf_main_loop(struct osnoise_tool *tool)
 {
-        struct timerlat_params *params = to_timerlat_params(tool->params);
+        const struct common_params *params = tool->params;
         int retval, wait_retval;
 
-        if (params->common.aa_only) {
+        if (params->aa_only) {
                 /* Auto-analysis only, just wait for stop tracing */
                 timerlat_bpf_wait(-1);
                 return 0;
···
 
         /* Pull and display data in a loop */
         while (!stop_tracing) {
-                wait_retval = timerlat_bpf_wait(params->common.quiet ? -1 :
-                                                params->common.sleep_time);
+                wait_retval = timerlat_bpf_wait(params->quiet ? -1 :
+                                                params->sleep_time);
 
                 retval = timerlat_top_bpf_pull_data(tool);
                 if (retval) {
···
                         return retval;
                 }
 
-                if (!params->common.quiet)
+                if (!params->quiet)
                         timerlat_print_stats(tool);
 
                 if (wait_retval != 0) {
                         /* Stopping requested by tracer */
-                        actions_perform(&params->common.threshold_actions);
+                        retval = common_threshold_handler(tool);
+                        if (retval)
+                                return retval;
 
-                        if (!params->common.threshold_actions.continue_flag)
-                                /* continue flag not set, break */
+                        if (!should_continue_tracing(tool->params))
                                 break;
 
-                        /* continue action reached, re-enable tracing */
-                        if (tool->record)
-                                trace_instance_start(&tool->record->trace);
-                        if (tool->aa)
-                                trace_instance_start(&tool->aa->trace);
-                        timerlat_bpf_restart_tracing();
+                        if (timerlat_bpf_restart_tracing()) {
+                                err_msg("Error restarting BPF trace\n");
+                                return -1;
+                        }
                 }
 
                 /* is there still any user-threads ? */
-                if (params->common.user_workload) {
-                        if (params->common.user.stopped_running) {
+                if (params->user_workload) {
+                        if (params->user.stopped_running) {
                                 debug_msg("timerlat user space threads stopped!\n");
                                 break;
                         }
+6 -7
tools/tracing/rtla/src/timerlat_u.c
···
 #include <sys/wait.h>
 #include <sys/prctl.h>
 
-#include "utils.h"
+#include "common.h"
 #include "timerlat_u.h"
 
 /*
···
 static int timerlat_u_main(int cpu, struct timerlat_u_params *params)
 {
         struct sched_param sp = { .sched_priority = 95 };
-        char buffer[1024];
+        char buffer[MAX_PATH];
         int timerlat_fd;
         cpu_set_t set;
         int retval;
···
 
         /* add should continue with a signal handler */
         while (true) {
-                retval = read(timerlat_fd, buffer, 1024);
+                retval = read(timerlat_fd, buffer, ARRAY_SIZE(buffer));
                 if (retval < 0)
                         break;
         }
···
 *
 * Return the number of processes that received the kill.
 */
-static int timerlat_u_send_kill(pid_t *procs, int nr_cpus)
+static int timerlat_u_send_kill(pid_t *procs)
 {
         int killed = 0;
         int i, retval;
···
 */
 void *timerlat_u_dispatcher(void *data)
 {
-        int nr_cpus = sysconf(_SC_NPROCESSORS_CONF);
         struct timerlat_u_params *params = data;
         char proc_name[128];
         int procs_count = 0;
···
 
         /* parent */
         if (pid == -1) {
-                timerlat_u_send_kill(procs, nr_cpus);
+                timerlat_u_send_kill(procs);
                 debug_msg("Failed to create child processes");
                 pthread_exit(&retval);
         }
···
                 sleep(1);
         }
 
-        timerlat_u_send_kill(procs, nr_cpus);
+        timerlat_u_send_kill(procs);
 
         while (procs_count) {
                 pid = waitpid(-1, &wstatus, 0);
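The magic-number cleanup above swaps raw `1024` literals for `MAX_PATH` and `ARRAY_SIZE()`. A minimal sketch of the idiom, with assumptions called out: the `MAX_PATH` value of 1024 is inferred from the buffers it replaces, and `ARRAY_SIZE()` is shown in its simplified form (the kernel's version adds a compile-time check that the argument is really an array):

```c
#include <stddef.h>

/* Assumed value: matches the 1024-byte buffers this constant replaced. */
#define MAX_PATH 1024

/* Element count of a statically sized array (simplified kernel idiom). */
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))
```

With `char buffer[MAX_PATH];`, a call such as `read(fd, buffer, ARRAY_SIZE(buffer))` stays correct even if the buffer size later changes, which is the point of removing the duplicated literal.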
+1
tools/tracing/rtla/src/timerlat_u.h
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 + #pragma once 2 3 /* 3 4 * Copyright (C) 2023 Red Hat Inc, Daniel Bristot de Oliveira <bristot@kernel.org> 4 5 */
+55 -45
tools/tracing/rtla/src/trace.c
··· 73 73 char buffer[4096]; 74 74 int out_fd, in_fd; 75 75 int retval = -1; 76 + ssize_t n_read; 77 + ssize_t n_written; 76 78 77 79 if (!inst || !filename) 78 80 return 0; ··· 92 90 goto out_close_in; 93 91 } 94 92 95 - do { 96 - retval = read(in_fd, buffer, sizeof(buffer)); 97 - if (retval <= 0) 93 + for (;;) { 94 + n_read = read(in_fd, buffer, sizeof(buffer)); 95 + if (n_read < 0) { 96 + if (errno == EINTR) 97 + continue; 98 + err_msg("Error reading trace file: %s\n", strerror(errno)); 98 99 goto out_close; 100 + } 101 + if (n_read == 0) 102 + break; 99 103 100 - retval = write(out_fd, buffer, retval); 101 - if (retval < 0) 102 - goto out_close; 103 - } while (retval > 0); 104 + n_written = 0; 105 + while (n_written < n_read) { 106 + const ssize_t w = write(out_fd, buffer + n_written, n_read - n_written); 107 + 108 + if (w < 0) { 109 + if (errno == EINTR) 110 + continue; 111 + err_msg("Error writing trace file: %s\n", strerror(errno)); 112 + goto out_close; 113 + } 114 + n_written += w; 115 + } 116 + } 104 117 105 118 retval = 0; 106 119 out_close: ··· 208 191 */ 209 192 int trace_instance_init(struct trace_instance *trace, char *tool_name) 210 193 { 211 - trace->seq = calloc(1, sizeof(*trace->seq)); 212 - if (!trace->seq) 213 - goto out_err; 194 + trace->seq = calloc_fatal(1, sizeof(*trace->seq)); 214 195 215 196 trace_seq_init(trace->seq); 216 197 ··· 289 274 { 290 275 struct trace_events *tevent; 291 276 292 - tevent = calloc(1, sizeof(*tevent)); 293 - if (!tevent) 294 - return NULL; 277 + tevent = calloc_fatal(1, sizeof(*tevent)); 295 278 296 - tevent->system = strdup(event_string); 297 - if (!tevent->system) { 298 - free(tevent); 299 - return NULL; 300 - } 279 + tevent->system = strdup_fatal(event_string); 301 280 302 281 tevent->event = strstr(tevent->system, ":"); 303 282 if (tevent->event) { ··· 305 296 /* 306 297 * trace_event_add_filter - record an event filter 307 298 */ 308 - int trace_event_add_filter(struct trace_events *event, char *filter) 299 + 
void trace_event_add_filter(struct trace_events *event, char *filter) 309 300 { 310 301 if (event->filter) 311 302 free(event->filter); 312 303 313 - event->filter = strdup(filter); 314 - if (!event->filter) 315 - return 1; 316 - 317 - return 0; 304 + event->filter = strdup_fatal(filter); 318 305 } 319 306 320 307 /* 321 308 * trace_event_add_trigger - record an event trigger action 322 309 */ 323 - int trace_event_add_trigger(struct trace_events *event, char *trigger) 310 + void trace_event_add_trigger(struct trace_events *event, char *trigger) 324 311 { 325 312 if (event->trigger) 326 313 free(event->trigger); 327 314 328 - event->trigger = strdup(trigger); 329 - if (!event->trigger) 330 - return 1; 331 - 332 - return 0; 315 + event->trigger = strdup_fatal(trigger); 333 316 } 334 317 335 318 /* ··· 330 329 static void trace_event_disable_filter(struct trace_instance *instance, 331 330 struct trace_events *tevent) 332 331 { 333 - char filter[1024]; 332 + char filter[MAX_PATH]; 334 333 int retval; 335 334 336 335 if (!tevent->filter) ··· 342 341 debug_msg("Disabling %s:%s filter %s\n", tevent->system, 343 342 tevent->event ? : "*", tevent->filter); 344 343 345 - snprintf(filter, 1024, "!%s\n", tevent->filter); 344 + snprintf(filter, ARRAY_SIZE(filter), "!%s\n", tevent->filter); 346 345 347 346 retval = tracefs_event_file_write(instance->inst, tevent->system, 348 347 tevent->event, "filter", filter); ··· 359 358 static void trace_event_save_hist(struct trace_instance *instance, 360 359 struct trace_events *tevent) 361 360 { 362 - int retval, index, out_fd; 361 + size_t index, hist_len; 363 362 mode_t mode = 0644; 364 - char path[1024]; 363 + char path[MAX_PATH]; 365 364 char *hist; 365 + int out_fd; 366 366 367 367 if (!tevent) 368 368 return; ··· 373 371 return; 374 372 375 373 /* is this a hist: trigger? 
*/ 376 - retval = strncmp(tevent->trigger, "hist:", strlen("hist:")); 377 - if (retval) 374 + if (!str_has_prefix(tevent->trigger, "hist:")) 378 375 return; 379 376 380 - snprintf(path, 1024, "%s_%s_hist.txt", tevent->system, tevent->event); 377 + snprintf(path, ARRAY_SIZE(path), "%s_%s_hist.txt", tevent->system, tevent->event); 381 378 382 379 printf(" Saving event %s:%s hist to %s\n", tevent->system, tevent->event, path); 383 380 ··· 393 392 } 394 393 395 394 index = 0; 395 + hist_len = strlen(hist); 396 396 do { 397 - index += write(out_fd, &hist[index], strlen(hist) - index); 398 - } while (index < strlen(hist)); 397 + const ssize_t written = write(out_fd, &hist[index], hist_len - index); 398 + 399 + if (written < 0) { 400 + if (errno == EINTR) 401 + continue; 402 + err_msg(" Error writing hist file: %s\n", strerror(errno)); 403 + break; 404 + } 405 + index += written; 406 + } while (index < hist_len); 399 407 400 408 free(hist); 401 409 out_close: ··· 417 407 static void trace_event_disable_trigger(struct trace_instance *instance, 418 408 struct trace_events *tevent) 419 409 { 420 - char trigger[1024]; 410 + char trigger[MAX_PATH]; 421 411 int retval; 422 412 423 413 if (!tevent->trigger) ··· 431 421 432 422 trace_event_save_hist(instance, tevent); 433 423 434 - snprintf(trigger, 1024, "!%s\n", tevent->trigger); 424 + snprintf(trigger, ARRAY_SIZE(trigger), "!%s\n", tevent->trigger); 435 425 436 426 retval = tracefs_event_file_write(instance->inst, tevent->system, 437 427 tevent->event, "trigger", trigger); ··· 470 460 static int trace_event_enable_filter(struct trace_instance *instance, 471 461 struct trace_events *tevent) 472 462 { 473 - char filter[1024]; 463 + char filter[MAX_PATH]; 474 464 int retval; 475 465 476 466 if (!tevent->filter) ··· 482 472 return 1; 483 473 } 484 474 485 - snprintf(filter, 1024, "%s\n", tevent->filter); 475 + snprintf(filter, ARRAY_SIZE(filter), "%s\n", tevent->filter); 486 476 487 477 debug_msg("Enabling %s:%s filter %s\n", 
tevent->system, 488 478 tevent->event ? : "*", tevent->filter); ··· 505 495 static int trace_event_enable_trigger(struct trace_instance *instance, 506 496 struct trace_events *tevent) 507 497 { 508 - char trigger[1024]; 498 + char trigger[MAX_PATH]; 509 499 int retval; 510 500 511 501 if (!tevent->trigger) ··· 517 507 return 1; 518 508 } 519 509 520 - snprintf(trigger, 1024, "%s\n", tevent->trigger); 510 + snprintf(trigger, ARRAY_SIZE(trigger), "%s\n", tevent->trigger); 521 511 522 512 debug_msg("Enabling %s:%s trigger %s\n", tevent->system, 523 513 tevent->event ? : "*", tevent->trigger);
+2 -2
tools/tracing/rtla/src/trace.h
··· 45 45 int trace_events_enable(struct trace_instance *instance, 46 46 struct trace_events *events); 47 47 48 - int trace_event_add_filter(struct trace_events *event, char *filter); 49 - int trace_event_add_trigger(struct trace_events *event, char *trigger); 48 + void trace_event_add_filter(struct trace_events *event, char *filter); 49 + void trace_event_add_trigger(struct trace_events *event, char *trigger); 50 50 int trace_set_buffer_size(struct trace_instance *trace, int size);
+88 -25
tools/tracing/rtla/src/utils.c
··· 19 19 #include <stdio.h> 20 20 #include <limits.h> 21 21 22 - #include "utils.h" 22 + #include "common.h" 23 23 24 24 #define MAX_MSG_LENGTH 1024 25 25 int config_debug; ··· 119 119 { 120 120 const char *p; 121 121 int end_cpu; 122 - int nr_cpus; 123 122 int cpu; 124 123 int i; 125 124 126 125 CPU_ZERO(set); 127 - 128 - nr_cpus = sysconf(_SC_NPROCESSORS_CONF); 129 126 130 127 for (p = cpu_list; *p; ) { 131 128 cpu = atoi(p); ··· 162 165 } 163 166 164 167 /* 168 + * parse_stack_format - parse the stack format 169 + * 170 + * Return: the stack format on success, -1 otherwise. 171 + */ 172 + int parse_stack_format(char *arg) 173 + { 174 + if (!strcmp(arg, "truncate")) 175 + return STACK_FORMAT_TRUNCATE; 176 + if (!strcmp(arg, "skip")) 177 + return STACK_FORMAT_SKIP; 178 + if (!strcmp(arg, "full")) 179 + return STACK_FORMAT_FULL; 180 + 181 + debug_msg("Error parsing the stack format %s\n", arg); 182 + return -1; 183 + } 184 + 185 + /* 165 186 * parse_duration - parse duration with s/m/h/d suffix converting it to seconds 166 187 */ 167 188 long parse_seconds_duration(char *val) ··· 214 199 } 215 200 216 201 /* 202 + * match_time_unit - check if str starts with unit followed by end-of-string or ':' 203 + * 204 + * This allows the time unit parser to work both in standalone duration strings 205 + * like "100ms" and in colon-delimited SCHED_DEADLINE specifications like 206 + * "d:10ms:100ms", while still rejecting malformed input like "100msx". 
207 + */ 208 + static bool match_time_unit(const char *str, const char *unit) 209 + { 210 + size_t len = strlen(unit); 211 + 212 + return strncmp(str, unit, len) == 0 && 213 + (str[len] == '\0' || str[len] == ':'); 214 + } 215 + 216 + /* 217 217 * parse_ns_duration - parse duration with ns/us/ms/s converting it to nanoseconds 218 218 */ 219 219 long parse_ns_duration(char *val) ··· 239 209 t = strtol(val, &end, 10); 240 210 241 211 if (end) { 242 - if (!strncmp(end, "ns", 2)) { 212 + if (match_time_unit(end, "ns")) { 243 213 return t; 244 - } else if (!strncmp(end, "us", 2)) { 214 + } else if (match_time_unit(end, "us")) { 245 215 t *= 1000; 246 216 return t; 247 - } else if (!strncmp(end, "ms", 2)) { 217 + } else if (match_time_unit(end, "ms")) { 248 218 t *= 1000 * 1000; 249 219 return t; 250 - } else if (!strncmp(end, "s", 1)) { 220 + } else if (match_time_unit(end, "s")) { 251 221 t *= 1000 * 1000 * 1000; 252 222 return t; 253 223 } ··· 324 294 return 0; 325 295 326 296 /* check if the string is a pid */ 327 - for (t_name = proc_entry->d_name; t_name; t_name++) { 297 + for (t_name = proc_entry->d_name; *t_name; t_name++) { 328 298 if (!isdigit(*t_name)) 329 299 break; 330 300 } ··· 346 316 return 0; 347 317 348 318 buffer[MAX_PATH-1] = '\0'; 349 - retval = strncmp(comm_prefix, buffer, strlen(comm_prefix)); 350 - if (retval) 319 + if (!str_has_prefix(buffer, comm_prefix)) 351 320 return 0; 352 321 353 322 /* comm already have \n */ ··· 390 361 391 362 if (strtoi(proc_entry->d_name, &pid)) { 392 363 err_msg("'%s' is not a valid pid", proc_entry->d_name); 393 - goto out_err; 364 + retval = 1; 365 + goto out; 394 366 } 395 367 /* procfs_is_workload_pid confirmed it is a pid */ 396 368 retval = __set_sched_attr(pid, attr); 397 369 if (retval) { 398 370 err_msg("Error setting sched attributes for pid:%s\n", proc_entry->d_name); 399 - goto out_err; 371 + goto out; 400 372 } 401 373 402 374 debug_msg("Set sched attributes for pid:%s\n", proc_entry->d_name); 403 375 } 
404 - return 0; 405 376 406 - out_err: 377 + retval = 0; 378 + out: 407 379 closedir(procfs); 408 - return 1; 380 + return retval; 409 381 } 410 382 411 383 #define INVALID_VAL (~0L) ··· 589 559 unsigned int nr_states; 590 560 unsigned int state; 591 561 int disabled; 592 - int nr_cpus; 593 562 594 563 nr_states = cpuidle_state_count(cpu); 595 564 ··· 596 567 return 0; 597 568 598 569 if (saved_cpu_idle_disable_state == NULL) { 599 - nr_cpus = sysconf(_SC_NPROCESSORS_CONF); 600 570 saved_cpu_idle_disable_state = calloc(nr_cpus, sizeof(unsigned int *)); 601 571 if (!saved_cpu_idle_disable_state) 602 572 return -1; ··· 672 644 void free_cpu_idle_disable_states(void) 673 645 { 674 646 int cpu; 675 - int nr_cpus; 676 647 677 648 if (!saved_cpu_idle_disable_state) 678 649 return; 679 - 680 - nr_cpus = sysconf(_SC_NPROCESSORS_CONF); 681 650 682 651 for (cpu = 0; cpu < nr_cpus; cpu++) { 683 652 free(saved_cpu_idle_disable_state[cpu]); ··· 834 809 char cgroup_procs[MAX_PATH]; 835 810 int retval; 836 811 int cg_fd; 812 + size_t cg_path_len; 837 813 838 814 retval = find_mount("cgroup2", cgroup_path, sizeof(cgroup_path)); 839 815 if (!retval) { ··· 842 816 return -1; 843 817 } 844 818 819 + cg_path_len = strlen(cgroup_path); 820 + 845 821 if (!cgroup) { 846 - retval = get_self_cgroup(&cgroup_path[strlen(cgroup_path)], 847 - sizeof(cgroup_path) - strlen(cgroup_path)); 822 + retval = get_self_cgroup(&cgroup_path[cg_path_len], 823 + sizeof(cgroup_path) - cg_path_len); 848 824 if (!retval) { 849 825 err_msg("Did not find self cgroup\n"); 850 826 return -1; 851 827 } 852 828 } else { 853 - snprintf(&cgroup_path[strlen(cgroup_path)], 854 - sizeof(cgroup_path) - strlen(cgroup_path), "%s/", cgroup); 829 + snprintf(&cgroup_path[cg_path_len], 830 + sizeof(cgroup_path) - cg_path_len, "%s/", cgroup); 855 831 } 856 832 857 833 snprintf(cgroup_procs, MAX_PATH, "%s/cgroup.procs", cgroup_path); ··· 1057 1029 1058 1030 *res = (int) lres; 1059 1031 return 0; 1032 + } 1033 + 1034 + static 
inline void fatal_alloc(void) 1035 + { 1036 + fatal("Error allocating memory\n"); 1037 + } 1038 + 1039 + void *calloc_fatal(size_t n, size_t size) 1040 + { 1041 + void *p = calloc(n, size); 1042 + 1043 + if (!p) 1044 + fatal_alloc(); 1045 + 1046 + return p; 1047 + } 1048 + 1049 + void *reallocarray_fatal(void *p, size_t n, size_t size) 1050 + { 1051 + p = reallocarray(p, n, size); 1052 + 1053 + if (!p) 1054 + fatal_alloc(); 1055 + 1056 + return p; 1057 + } 1058 + 1059 + char *strdup_fatal(const char *s) 1060 + { 1061 + char *p = strdup(s); 1062 + 1063 + if (!p) 1064 + fatal_alloc(); 1065 + 1066 + return p; 1060 1067 }
+33
tools/tracing/rtla/src/utils.h
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 3 3 #include <stdint.h> 4 + #include <string.h> 4 5 #include <time.h> 5 6 #include <sched.h> 6 7 #include <stdbool.h> ··· 14 13 #define MAX_PATH 1024 15 14 #define MAX_NICE 20 16 15 #define MIN_NICE -19 16 + 17 + #ifndef ARRAY_SIZE 18 + #define ARRAY_SIZE(x) (sizeof(x) / sizeof(*(x))) 19 + #endif 20 + 21 + /* Calculate string length at compile time (excluding null terminator) */ 22 + #define STRING_LENGTH(s) (ARRAY_SIZE(s) - sizeof(*(s))) 23 + 24 + /* Compare string with static string, length determined at compile time */ 25 + #define strncmp_static(s1, s2) strncmp(s1, s2, ARRAY_SIZE(s2)) 26 + 27 + /** 28 + * str_has_prefix - Test if a string has a given prefix 29 + * @str: The string to test 30 + * @prefix: The string to see if @str starts with 31 + * 32 + * Returns: true if @str starts with @prefix, false otherwise 33 + */ 34 + static inline bool str_has_prefix(const char *str, const char *prefix) 35 + { 36 + return strncmp(str, prefix, strlen(prefix)) == 0; 37 + } 17 38 18 39 #define container_of(ptr, type, member)({ \ 19 40 const typeof(((type *)0)->member) *__mptr = (ptr); \ ··· 85 62 }; 86 63 #endif /* SCHED_ATTR_SIZE_VER0 */ 87 64 65 + enum stack_format { 66 + STACK_FORMAT_TRUNCATE, 67 + STACK_FORMAT_SKIP, 68 + STACK_FORMAT_FULL 69 + }; 70 + 88 71 int parse_prio(char *arg, struct sched_attr *sched_param); 89 72 int parse_cpu_set(char *cpu_list, cpu_set_t *set); 73 + int parse_stack_format(char *arg); 90 74 int __set_sched_attr(int pid, struct sched_attr *attr); 91 75 int set_comm_sched_attr(const char *comm_prefix, struct sched_attr *attr); 92 76 int set_comm_cgroup(const char *comm_prefix, const char *cgroup); 93 77 int set_pid_cgroup(pid_t pid, const char *cgroup); 94 78 int set_cpu_dma_latency(int32_t latency); 79 + void *calloc_fatal(size_t n, size_t size); 80 + void *reallocarray_fatal(void *p, size_t n, size_t size); 81 + char *strdup_fatal(const char *s); 95 82 #ifdef HAVE_LIBCPUPOWER_SUPPORT 96 83 int 
save_cpu_idle_disable_state(unsigned int cpu); 97 84 int restore_cpu_idle_disable_state(unsigned int cpu);
+2
tools/tracing/rtla/tests/unit/Build
··· 1 + unit_tests-y += unit_tests.o 2 + unit_tests-y += ../../src/utils.o
+17
tools/tracing/rtla/tests/unit/Makefile.unit
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + 3 + UNIT_TESTS := $(OUTPUT)unit_tests 4 + UNIT_TESTS_IN := $(UNIT_TESTS)-in.o 5 + 6 + $(UNIT_TESTS): $(UNIT_TESTS_IN) 7 + $(QUIET_LINK)$(CC) $(LDFLAGS) -o $@ $^ -lcheck 8 + 9 + $(UNIT_TESTS_IN): 10 + make $(build)=unit_tests 11 + 12 + unit-tests: FORCE 13 + $(Q)if [ "$(feature-libcheck)" = "1" ]; then \ 14 + $(MAKE) $(UNIT_TESTS) && $(UNIT_TESTS); \ 15 + else \ 16 + echo "libcheck is missing, skipping unit tests. Please install check-devel/check"; \ 17 + fi
+119
tools/tracing/rtla/tests/unit/unit_tests.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #define _GNU_SOURCE 4 + #include <check.h> 5 + #include <stdio.h> 6 + #include <stdlib.h> 7 + #include <sched.h> 8 + #include <limits.h> 9 + #include <unistd.h> 10 + #include <sys/sysinfo.h> 11 + 12 + #include "../../src/utils.h" 13 + int nr_cpus; 14 + 15 + START_TEST(test_strtoi) 16 + { 17 + int result; 18 + char buf[64]; 19 + 20 + ck_assert_int_eq(strtoi("123", &result), 0); 21 + ck_assert_int_eq(result, 123); 22 + ck_assert_int_eq(strtoi(" -456", &result), 0); 23 + ck_assert_int_eq(result, -456); 24 + 25 + snprintf(buf, sizeof(buf), "%d", INT_MAX); 26 + ck_assert_int_eq(strtoi(buf, &result), 0); 27 + snprintf(buf, sizeof(buf), "%ld", (long)INT_MAX + 1); 28 + ck_assert_int_eq(strtoi(buf, &result), -1); 29 + 30 + ck_assert_int_eq(strtoi("", &result), -1); 31 + ck_assert_int_eq(strtoi("123abc", &result), -1); 32 + ck_assert_int_eq(strtoi("123 ", &result), -1); 33 + } 34 + END_TEST 35 + 36 + START_TEST(test_parse_cpu_set) 37 + { 38 + cpu_set_t set; 39 + 40 + nr_cpus = 8; 41 + ck_assert_int_eq(parse_cpu_set("0", &set), 0); 42 + ck_assert(CPU_ISSET(0, &set)); 43 + ck_assert(!CPU_ISSET(1, &set)); 44 + 45 + ck_assert_int_eq(parse_cpu_set("0,2", &set), 0); 46 + ck_assert(CPU_ISSET(0, &set)); 47 + ck_assert(CPU_ISSET(2, &set)); 48 + 49 + ck_assert_int_eq(parse_cpu_set("0-3", &set), 0); 50 + ck_assert(CPU_ISSET(0, &set)); 51 + ck_assert(CPU_ISSET(1, &set)); 52 + ck_assert(CPU_ISSET(2, &set)); 53 + ck_assert(CPU_ISSET(3, &set)); 54 + 55 + ck_assert_int_eq(parse_cpu_set("1-3,5", &set), 0); 56 + ck_assert(!CPU_ISSET(0, &set)); 57 + ck_assert(CPU_ISSET(1, &set)); 58 + ck_assert(CPU_ISSET(2, &set)); 59 + ck_assert(CPU_ISSET(3, &set)); 60 + ck_assert(!CPU_ISSET(4, &set)); 61 + ck_assert(CPU_ISSET(5, &set)); 62 + 63 + ck_assert_int_eq(parse_cpu_set("-1", &set), 1); 64 + ck_assert_int_eq(parse_cpu_set("abc", &set), 1); 65 + ck_assert_int_eq(parse_cpu_set("9999", &set), 1); 66 + } 67 + END_TEST 68 + 69 + 
START_TEST(test_parse_prio) 70 + { 71 + struct sched_attr attr; 72 + 73 + ck_assert_int_eq(parse_prio("f:50", &attr), 0); 74 + ck_assert_uint_eq(attr.sched_policy, SCHED_FIFO); 75 + ck_assert_uint_eq(attr.sched_priority, 50U); 76 + 77 + ck_assert_int_eq(parse_prio("r:30", &attr), 0); 78 + ck_assert_uint_eq(attr.sched_policy, SCHED_RR); 79 + 80 + ck_assert_int_eq(parse_prio("o:0", &attr), 0); 81 + ck_assert_uint_eq(attr.sched_policy, SCHED_OTHER); 82 + ck_assert_int_eq(attr.sched_nice, 0); 83 + 84 + ck_assert_int_eq(parse_prio("d:10ms:100ms", &attr), 0); 85 + ck_assert_uint_eq(attr.sched_policy, 6U); 86 + 87 + ck_assert_int_eq(parse_prio("f:999", &attr), -1); 88 + ck_assert_int_eq(parse_prio("o:-20", &attr), -1); 89 + ck_assert_int_eq(parse_prio("d:100ms:10ms", &attr), -1); 90 + ck_assert_int_eq(parse_prio("x:50", &attr), -1); 91 + } 92 + END_TEST 93 + 94 + Suite *utils_suite(void) 95 + { 96 + Suite *s = suite_create("utils"); 97 + TCase *tc = tcase_create("core"); 98 + 99 + tcase_add_test(tc, test_strtoi); 100 + tcase_add_test(tc, test_parse_cpu_set); 101 + tcase_add_test(tc, test_parse_prio); 102 + 103 + suite_add_tcase(s, tc); 104 + return s; 105 + } 106 + 107 + int main(void) 108 + { 109 + int num_failed; 110 + SRunner *sr; 111 + 112 + sr = srunner_create(utils_suite()); 113 + srunner_run_all(sr, CK_NORMAL); 114 + num_failed = srunner_ntests_failed(sr); 115 + 116 + srunner_free(sr); 117 + 118 + return (num_failed == 0) ? EXIT_SUCCESS : EXIT_FAILURE; 119 + }