Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'trace-rv-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull runtime verification updates from Steven Rostedt:

- Refactor da_monitor header to share handlers across monitor types

No functional changes, only less code duplication.

- Add Hybrid Automata model class

Add a new model class that extends deterministic automata by adding
constraints on transitions and states. Those constraints can take
wall-clock time into account and thus allow RV monitors to make
assertions about real time. Add documentation and code generation
scripts.

- Add stall monitor as hybrid automaton example

Add a monitor that triggers a violation when a task is stalling, as an
example of an automaton working with real-time variables.

- Convert the opid monitor to a hybrid automaton

The opid monitor can be heavily simplified if written as a hybrid
automaton: instead of tracking preempt and interrupt enable/disable
events, it can simply check constraints on the preemption/interrupt
states when events like wakeup and need_resched occur.

- Add support for per-object monitors in DA/HA

Allow writing deterministic and hybrid automata monitors for generic
objects (e.g. any struct) by using a hash table where objects are
stored. This makes it possible to track more than just tasks in RV.
For instance, it will be used to track deadline entities in deadline
monitors.

- Add deadline tracepoints and move some deadline utilities

Prepare the ground for deadline monitors by defining events and
exporting helpers.

- Add nomiss deadline monitor

Add the first example of a deadline monitor, asserting that all
entities complete before their deadline.

- Improve rvgen error handling

Introduce an AutomataError exception class and handle expected
exceptions gracefully, while still showing a backtrace for unexpected
ones.

- Improve python code quality in rvgen

Refactor the rvgen generation scripts to align with Python best
practices: use f-strings instead of %, use len() instead of
__len__(), remove semicolons, use context managers for file
operations, fix whitespace violations, extract magic strings into
constants, and remove unused imports and methods.
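
      The kinds of cleanups listed above can be sketched in a few lines;
      this is purely illustrative and not the actual rvgen code (the
      function names here are hypothetical):

      ```python
      # Old style: %-formatting and a direct __len__() call.
      def describe_old(name, states):
          return "model %s has %d states" % (name, states.__len__())

      # New style: f-string and len().
      def describe_new(name, states):
          return f"model {name} has {len(states)} states"

      # Context manager guarantees the file is closed even on error,
      # replacing manual open()/close() pairs.
      def read_header(path):
          with open(path) as f:
              return f.readline().strip()
      ```

      Both `describe_*` variants produce identical output; the new style is
      simply shorter and more idiomatic.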

- Fix small bugs in rvgen

The generator scripts had some corner-case bugs: fix a logical error
in validating what a correct dot file looks like, fix an isinstance()
check, enforce that a dot file has an initial state, and fix type
annotations and typos in comments.

- rvgen refactoring

Refactor automata.py to use iterator-based parsing and handle
required arguments directly in argparse.

- Allow epoll in rtapp-sleep monitor

The epoll_wait call is now rt-friendly, so it should be allowed in the
sleep monitor as a valid sleep method.

* tag 'trace-rv-v7.1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (32 commits)
rv: Allow epoll in rtapp-sleep monitor
rv/rvgen: fix _fill_states() return type annotation
rv/rvgen: fix unbound loop variable warning
rv/rvgen: enforce presence of initial state
rv/rvgen: extract node marker string to class constant
rv/rvgen: fix isinstance check in Variable.expand()
rv/rvgen: make monitor arguments required in rvgen
rv/rvgen: remove unused __get_main_name method
rv/rvgen: remove unused sys import from dot2c
rv/rvgen: refactor automata.py to use iterator-based parsing
rv/rvgen: use class constant for init marker
rv/rvgen: fix DOT file validation logic error
rv/rvgen: fix PEP 8 whitespace violations
rv/rvgen: fix typos in automata and generator docstring and comments
rv/rvgen: use context managers for file operations
rv/rvgen: remove unnecessary semicolons
rv/rvgen: replace __len__() calls with len()
rv/rvgen: replace % string formatting with f-strings
rv/rvgen: remove bare except clauses in generator
rv/rvgen: introduce AutomataError exception class
...

+3891 -694
+1
Documentation/tools/rv/index.rst
@@ -16,3 +16,4 @@
    rv-mon-wip
    rv-mon-wwnr
    rv-mon-sched
+   rv-mon-stall
+44
Documentation/tools/rv/rv-mon-stall.rst
@@ -0,0 +1,44 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+============
+rv-mon-stall
+============
+--------------------
+Stalled task monitor
+--------------------
+
+:Manual section: 1
+
+SYNOPSIS
+========
+
+**rv mon stall** [*OPTIONS*]
+
+DESCRIPTION
+===========
+
+The stalled task (**stall**) monitor is a sample per-task timed monitor that
+checks if tasks are scheduled within a defined threshold after they are ready.
+
+See kernel documentation for further information about this monitor:
+<https://docs.kernel.org/trace/rv/monitor_stall.html>
+
+OPTIONS
+=======
+
+.. include:: common_ikm.rst
+
+SEE ALSO
+========
+
+**rv**\(1), **rv-mon**\(1)
+
+Linux kernel *RV* documentation:
+<https://www.kernel.org/doc/html/latest/trace/rv/index.html>
+
+AUTHOR
+======
+
+Written by Gabriele Monaco <gmonaco@redhat.com>
+
+.. include:: common_appendix.rst
+1 -1
Documentation/trace/rv/deterministic_automata.rst
@@ -11,7 +11,7 @@
 - *E* is the finite set of events;
 - x\ :subscript:`0` is the initial state;
 - X\ :subscript:`m` (subset of *X*) is the set of marked (or final) states.
-- *f* : *X* x *E* -> *X* $ is the transition function. It defines the state
+- *f* : *X* x *E* -> *X* is the transition function. It defines the state
   transition in the occurrence of an event from *E* in the state *X*. In the
   special case of deterministic automata, the occurrence of the event in *E*
   in a state in *X* has a deterministic next state from *X*.
+341
Documentation/trace/rv/hybrid_automata.rst
@@ -0,0 +1,341 @@
+Hybrid Automata
+===============
+
+Hybrid automata are an extension of deterministic automata; there are several
+definitions of hybrid automata in the literature. The adaptation implemented
+here is formally denoted by G and defined as a 7-tuple:
+
+  *G* = { *X*, *E*, *V*, *f*, x\ :subscript:`0`, X\ :subscript:`m`, *i* }
+
+- *X* is the set of states;
+- *E* is the finite set of events;
+- *V* is the finite set of environment variables;
+- x\ :subscript:`0` is the initial state;
+- X\ :subscript:`m` (subset of *X*) is the set of marked (or final) states.
+- *f* : *X* x *E* x *C(V)* -> *X* is the transition function.
+  It defines the state transition in the occurrence of an event from *E* in the
+  state *X*. Unlike deterministic automata, the transition function also
+  includes guards from the set of all possible constraints (defined as *C(V)*).
+  Guards can be true or false with the valuation of *V* when the event occurs,
+  and the transition is possible only when constraints are true. Similarly to
+  deterministic automata, the occurrence of the event in *E* in a state in *X*
+  has a deterministic next state from *X*, if the guard is true.
+- *i* : *X* -> *C'(V)* is the invariant assignment function; this is a
+  constraint assigned to each state in *X*: every state in *X* must be left
+  before the invariant turns to false. We can omit the representation of
+  invariants whose value is true regardless of the valuation of *V*.
+
+The set of all possible constraints *C(V)* is defined according to the
+following grammar:
+
+  g = v < c | v > c | v <= c | v >= c | v == c | v != c | g && g | true
+
+With v a variable in *V* and c a numerical value.
+
+We define the special case of hybrid automata whose variables grow with uniform
+rates as timed automata. In this case, the variables are called clocks.
+As the name implies, timed automata can be used to describe real time.
+Additionally, clocks support another type of guard which always evaluates to true:
+
+  reset(v)
+
+The reset constraint is used to set the value of a clock to 0.
+
+The set of invariant constraints *C'(V)* is a subset of *C(V)* including only
+constraints of the form:
+
+  g = v < c | true
+
+This simplifies the implementation as a clock expiration is a necessary and
+sufficient condition for the violation of invariants while still allowing more
+complex constraints to be specified as guards.
+
+It is important to note that any hybrid automaton is a valid deterministic
+automaton with additional guards and invariants. Those can only further
+constrain what transitions are valid but it is not possible to define
+transition functions starting from the same state in *X* and the same event in
+*E* but ending up in different states in *X* based on the valuation of *V*.
+
+Examples
+--------
+
+Wip as hybrid automaton
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The 'wip' (wakeup in preemptive) example introduced as a deterministic automaton
+can also be described as:
+
+- *X* = { ``any_thread_running`` }
+- *E* = { ``sched_waking`` }
+- *V* = { ``preemptive`` }
+- x\ :subscript:`0` = ``any_thread_running``
+- X\ :subscript:`m` = {``any_thread_running``}
+- *f* =
+  - *f*\ (``any_thread_running``, ``sched_waking``, ``preemptive==0``) = ``any_thread_running``
+- *i* =
+  - *i*\ (``any_thread_running``) = ``true``
+
+Which can be represented graphically as::
+
+          |
+          |
+          v
+        #====================#  sched_waking;preemptive==0
+        H                    H ------------------------------+
+        H any_thread_running H                               |
+        H                    H <-----------------------------+
+        #====================#
+
+In this example, by using the preemptive state of the system as an environment
+variable, we can assert this constraint on ``sched_waking`` without requiring
+preemption events (as we would in a deterministic automaton), which can be
+useful in case those events are not available or not reliable on the system.
+
+Since all the invariants in *i* are true, we can omit them from the representation.
+
+Stall model with guards (iteration 1)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+As a sample timed automaton we can define 'stall' as:
+
+- *X* = { ``dequeued``, ``enqueued``, ``running``}
+- *E* = { ``enqueue``, ``dequeue``, ``switch_in``}
+- *V* = { ``clk`` }
+- x\ :subscript:`0` = ``dequeue``
+- X\ :subscript:`m` = {``dequeue``}
+- *f* =
+  - *f*\ (``enqueued``, ``switch_in``, ``clk < threshold``) = ``running``
+  - *f*\ (``running``, ``dequeue``) = ``dequeued``
+  - *f*\ (``dequeued``, ``enqueue``, ``reset(clk)``) = ``enqueued``
+- *i* = *omitted as all true*
+
+Graphically represented as::
+
+        |
+        |
+        v
+      #============================#
+      H          dequeued          H <+
+      #============================#  |
+        |                             |
+        | enqueue; reset(clk)         |
+        v                             |
+      +----------------------------+  |
+      |          enqueued          |  | dequeue
+      +----------------------------+  |
+        |                             |
+        | switch_in; clk < threshold  |
+        v                             |
+      +----------------------------+  |
+      |          running           | -+
+      +----------------------------+
+
+This model imposes that the time between when a task is enqueued (it becomes
+runnable) and when the task gets to run must be lower than a certain threshold.
+A failure in this model means that the task is starving.
+One problem in using guards on the edges in this case is that the model will
+not report a failure until the ``switch_in`` event occurs. This means that,
+according to the model, it is valid for the task never to run.
+
+Stall model with invariants (iteration 2)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The first iteration isn't exactly what was intended; we can change the model as:
+
+- *X* = { ``dequeued``, ``enqueued``, ``running``}
+- *E* = { ``enqueue``, ``dequeue``, ``switch_in``}
+- *V* = { ``clk`` }
+- x\ :subscript:`0` = ``dequeue``
+- X\ :subscript:`m` = {``dequeue``}
+- *f* =
+  - *f*\ (``enqueued``, ``switch_in``) = ``running``
+  - *f*\ (``running``, ``dequeue``) = ``dequeued``
+  - *f*\ (``dequeued``, ``enqueue``, ``reset(clk)``) = ``enqueued``
+- *i* =
+  - *i*\ (``enqueued``) = ``clk < threshold``
+
+Graphically::
+
+        |
+        |
+        v
+      #=========================#
+      H         dequeued        H <+
+      #=========================#  |
+        |                          |
+        | enqueue; reset(clk)      |
+        v                          |
+      +-------------------------+  |
+      |        enqueued         |  |
+      |     clk < threshold     |  | dequeue
+      +-------------------------+  |
+        |                          |
+        | switch_in                |
+        v                          |
+      +-------------------------+  |
+      |         running         | -+
+      +-------------------------+
+
+In this case, we moved the guard as an invariant to the ``enqueued`` state;
+this means we not only forbid the occurrence of ``switch_in`` when ``clk`` is
+past the threshold but also mark as invalid in case we are *still* in
+``enqueued`` after the threshold. This model is effectively in an invalid state
+as soon as a task is starving, rather than when the starving task finally runs.
+
+Hybrid Automaton in C
+---------------------
+
+The definition of hybrid automata in C is heavily based on the deterministic
+automata one. Specifically, we add the set of environment variables and the
+constraints (both guards on transitions and invariants on states) as follows.
+This is a combination of both iterations of the stall example::
+
+  /* enum representation of X (set of states) to be used as index */
+  enum states {
+          dequeued,
+          enqueued,
+          running,
+          state_max,
+  };
+
+  #define INVALID_STATE state_max
+
+  /* enum representation of E (set of events) to be used as index */
+  enum events {
+          dequeue,
+          enqueue,
+          switch_in,
+          event_max,
+  };
+
+  /* enum representation of V (set of environment variables) to be used as index */
+  enum envs {
+          clk,
+          env_max,
+          env_max_stored = env_max,
+  };
+
+  struct automaton {
+          char *state_names[state_max];                 // X: the set of states
+          char *event_names[event_max];                 // E: the finite set of events
+          char *env_names[env_max];                     // V: the finite set of env vars
+          unsigned char function[state_max][event_max]; // f: transition function
+          unsigned char initial_state;                  // x_0: the initial state
+          bool final_states[state_max];                 // X_m: the set of marked states
+  };
+
+  struct automaton aut = {
+          .state_names = {
+                  "dequeued",
+                  "enqueued",
+                  "running",
+          },
+          .event_names = {
+                  "dequeue",
+                  "enqueue",
+                  "switch_in",
+          },
+          .env_names = {
+                  "clk",
+          },
+          .function = {
+                  { INVALID_STATE, enqueued,      INVALID_STATE },
+                  { INVALID_STATE, INVALID_STATE, running },
+                  { dequeued,      INVALID_STATE, INVALID_STATE },
+          },
+          .initial_state = dequeued,
+          .final_states = { 1, 0, 0 },
+  };
+
+  static bool verify_constraint(enum states curr_state, enum events event,
+                                enum states next_state)
+  {
+          bool res = true;
+
+          /* Validate guards as part of f */
+          if (curr_state == enqueued && event == switch_in)
+                  res = get_env(clk) < threshold;
+          else if (curr_state == dequeued && event == enqueue)
+                  reset_env(clk);
+
+          /* Validate invariants in i */
+          if (next_state == curr_state || !res)
+                  return res;
+          if (next_state == enqueued)
+                  ha_start_timer_jiffy(ha_mon, clk, threshold_jiffies);
+          else if (curr_state == enqueued)
+                  res = !ha_cancel_timer(ha_mon);
+          return res;
+  }
+
+The function ``verify_constraint``, here reported as simplified, checks guards,
+performs resets and starts timers to validate invariants according to the
+specification; those cannot easily be represented in the automaton struct.
+Due to the complex nature of environment variables, the user needs to provide
+functions to get and reset environment variables that are not common clocks
+(e.g. clocks with ns or jiffy granularity).
+Since invariants are only defined as clock expirations (e.g. *clk <
+threshold*), reaching the expiration of a timer armed when entering the state
+is in fact a failure in the model and triggers a reaction. Leaving the state
+stops the timer.
+
+It is important to note that timers implemented with hrtimers introduce
+overhead; if the monitor has several instances (e.g. all tasks) this can become
+an issue. The impact can be decreased using the timer wheel (``HA_TIMER_TYPE``
+set to ``HA_TIMER_WHEEL``); this lowers the responsiveness of the timer without
+damaging the accuracy of the model, since the invariant condition is checked
+before disabling the timer in case the callback is late.
+Alternatively, if the monitor is guaranteed to *eventually* leave the state and
+the incurred delay to wait for the next event is acceptable, guards can be used
+in place of invariants, as seen in the stall example.
+
+Graphviz .dot format
+--------------------
+
+The Graphviz representation of hybrid automata is also an extension of the
+deterministic automata one. Specifically, guards can be provided in the event
+name separated by ``;``::
+
+  "state_start" -> "state_dest" [ label = "sched_waking;preemptible==0;reset(clk)" ];
+
+Invariants can be specified in the state label (not the node name!) separated by ``\n``::
+
+  "enqueued" [label = "enqueued\nclk < threshold_jiffies"];
+
+Constraints can be specified as valid C comparisons and allow spaces; the first
+element of the comparison must be the clock while the second is a numerical or
+parametrised value. Guards allow comparisons to be combined with boolean
+operations (``&&`` and ``||``); resets must be separated from other constraints.
+
+This is the full example of the last version of the 'stall' model in DOT::
+
+  digraph state_automaton {
+          {node [shape = circle] "enqueued"};
+          {node [shape = plaintext, style=invis, label=""] "__init_dequeued"};
+          {node [shape = doublecircle] "dequeued"};
+          {node [shape = circle] "running"};
+          "__init_dequeued" -> "dequeued";
+          "enqueued" [label = "enqueued\nclk < threshold_jiffies"];
+          "running" [label = "running"];
+          "dequeued" [label = "dequeued"];
+          "enqueued" -> "running" [ label = "switch_in" ];
+          "running" -> "dequeued" [ label = "dequeue" ];
+          "dequeued" -> "enqueued" [ label = "enqueue;reset(clk)" ];
+          { rank = min ;
+                  "__init_dequeued";
+                  "dequeued";
+          }
+  }
+
+References
+----------
+
+One book covering model checking and timed automata is::
+
+  Christel Baier and Joost-Pieter Katoen: Principles of Model Checking,
+  The MIT Press, 2008.
+
+Hybrid automata are described in detail in::
+
+  Thomas Henzinger: The theory of hybrid automata,
+  Proceedings 11th Annual IEEE Symposium on Logic in Computer Science, 1996.
+3
Documentation/trace/rv/index.rst
@@ -9,9 +9,12 @@
    runtime-verification.rst
    deterministic_automata.rst
    linear_temporal_logic.rst
+   hybrid_automata.rst
    monitor_synthesis.rst
    da_monitor_instrumentation.rst
    monitor_wip.rst
    monitor_wwnr.rst
    monitor_sched.rst
    monitor_rtapp.rst
+   monitor_stall.rst
+   monitor_deadline.rst
+84
Documentation/trace/rv/monitor_deadline.rst
@@ -0,0 +1,84 @@
+Deadline monitors
+=================
+
+- Name: deadline
+- Type: container for multiple monitors
+- Author: Gabriele Monaco <gmonaco@redhat.com>
+
+Description
+-----------
+
+The deadline monitor is a set of specifications to describe the deadline
+scheduler behaviour. It includes monitors per scheduling entity (deadline tasks
+and servers) that work independently to verify different specifications the
+deadline scheduler should follow.
+
+Specifications
+--------------
+
+Monitor nomiss
+~~~~~~~~~~~~~~
+
+The nomiss monitor ensures dl entities get to run *and* run to completion
+before their deadline, although deferrable servers may not run. An entity is
+considered done if ``throttled``, either because it yielded or used up its
+runtime, or when it voluntarily starts ``sleeping``.
+The monitor includes a user-configurable deadline threshold. If the total
+utilisation of deadline tasks is larger than 1, they are only guaranteed
+bounded tardiness. See Documentation/scheduler/sched-deadline.rst for more
+details. The threshold (module parameter ``nomiss.deadline_thresh``) can be
+configured to keep the monitor from failing, based on the acceptable tardiness
+in the system. Since ``dl_throttle`` is a valid outcome for the entity to be
+done, the minimum tardiness needs to be 1 tick to account for the throttle
+delay, unless the ``HRTICK_DL`` scheduler feature is active.
+
+Servers also have an intermediate ``idle`` state, occurring as soon as no
+runnable task is available from ready or running, where no timing constraint
+is applied. A server goes to sleep by stopping; there is no wakeup equivalent,
+as the order of a server starting and replenishing is not defined, hence a
+server can run from sleeping without being ready::
+
+                                        |
+         sched_wakeup                   v
+         dl_replenish;reset(clk) --  #=========================#
+           |                         H                         H  dl_replenish;reset(clk)
+           +----------->             H                         H <--------------------+
+                                     H                         H                      |
+  +- dl_server_stop ----             H          ready          H                      |
+  |   +----------------->            H   clk < DEADLINE_NS()   H  dl_throttle;        |
+  |   |                              H                         H  is_defer == 1       |
+  |   |   sched_switch_in -          H                         H -----------------+   |
+  |   |     |                        #=========================#                  |   |
+  |   |     |                            |            ^                           |   |
+  |   |     |              dl_server_idle             dl_replenish;reset(clk)     |   |
+  |   |     |                            v            |                           |   |
+  |   |     |                        +--------------+                             |   |
+  |   |     |   +------              |              |                             |   |
+  |   |     |   dl_server_idle       |              |  dl_throttle                |   |
+  |   |     |   |                    |     idle     | -----------------+          |   |
+  |   |     |   +----->              |              |                  |          |   |
+  |   |     |                        |              |                  |          |   |
+  |   |     |                        |              |                  |          |   |
+  +--+--+---+--- dl_server_stop --   +--------------+                  |          |   |
+  |  |  |   |                            |       ^                     |          |   |
+  |  |  |   |            sched_switch_in         dl_server_idle        |          |   |
+  |  |  |   |                            v       |                     |          |   |
+  |  |  |   |   +----------          +---------------------+           |          |   |
+  |  |  |   |   sched_switch_in      |                     |           |          |   |
+  |  |  |   |   sched_wakeup         |                     |           |          |   |
+  |  |  |   |   dl_replenish;        |       running       | -------+  |          |   |
+  |  |  |   |   reset(clk)           | clk < DEADLINE_NS() |        |  |          |   |
+  |  |  |   |   +--------->          |                     |  dl_throttle         |   |
+  |  |  |   +----------------->      |                     |        |  |          |   |
+  |  |  |                            +---------------------+        |  |          |   |
+  |  |  sched_wakeup               ^      sched_switch_suspend      |  |          |   |
+  v  v  dl_replenish;reset(clk)    |      dl_server_stop            |  |          |   |
+  +--------------+                 |      |      v                  v  v          v   |
+  |              | - sched_switch_in +    |           +---------------+              |
+  |              | <---------------------ticks+  dl_throttle +--      |               |
+  |   sleeping   |   sched_wakeup             |                       |   throttled   |
+  |              | -- dl_server_stop          dl_server_idle +->      |               |
+  |              |    dl_server_idle     sched_switch_suspend         +---------------+
+  +--------------+ <---------+                                          ^
+       |                                                                |
+       +------ dl_throttle;is_constr_dl == 1 || is_defer == 1 ----------+
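The nomiss condition with its tardiness threshold can be condensed into a tiny predicate. This is an illustrative sketch, not the monitor's code; the function name and the nanosecond unit are assumptions:

```python
# nomiss condition: an entity is "done" when it throttles or starts
# sleeping, and must be done no later than deadline + threshold
# (the acceptable tardiness, cf. nomiss.deadline_thresh).
def nomiss_ok(done_time_ns, deadline_ns, thresh_ns=0):
    return done_time_ns <= deadline_ns + thresh_ns

assert nomiss_ok(900, 1000)                    # done before the deadline
assert not nomiss_ok(1100, 1000)               # missed, no tolerance
assert nomiss_ok(1100, 1000, thresh_ns=200)    # tardiness within threshold
```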
+13 -47
Documentation/trace/rv/monitor_sched.rst
@@ -346,55 +346,21 @@
 
 The operations with preemption and irq disabled (opid) monitor ensures
 operations like ``wakeup`` and ``need_resched`` occur with interrupts and
-preemption disabled or during interrupt context, in such case preemption may
-not be disabled explicitly.
+preemption disabled.
 ``need_resched`` can be set by some RCU internals functions, in which case it
-doesn't match a task wakeup and might occur with only interrupts disabled::
+doesn't match a task wakeup and might occur with only interrupts disabled.
+The interrupt and preemption status are validated by the hybrid automaton
+constraints when processing the events::
 
-                        |  sched_need_resched
-                        |  sched_waking
-                        |  irq_entry
-                        |               +--------------------+
-                        v               v                    |
-               +------------------------------------------------------+
-  +----------- |                       disabled                       | <+
-  |            +------------------------------------------------------+  |
-  |                 |                      ^                              |
-  |                 |        preempt_disable      sched_need_resched      |
-  |  preempt_enable |                      |      +--------------------+  |
-  |                 v                      |      v                    |  |
-  |            +------------------------------------------------------+  |
-  |            |                     irq_disabled                      |  |
-  |            +------------------------------------------------------+  |
-  |                 |              |                       ^              |
-  |  irq_entry                irq_entry                    |              |
-  |  sched_need_resched            v           |           irq_disable    |
-  |  sched_waking         +--------------+     |           |              |
-  |       +-----          |              |     irq_enable  |              |
-  |       |               |    in_irq    |     |           |              |
-  |       +---->          |              |     |           |              |
-  |                       +--------------+     |           irq_disable
-  |                          |                 |           |              |
-  |  irq_enable              |    irq_enable   |           |              |
-  |                 v        v                 |           |              |
-  |            #======================================================#  |
-  |            H                      enabled                         H  |
-  |            #======================================================#  |
-  |                 |        ^        ^  preempt_enable  |                |
-  |  preempt_disable         preempt_enable  +--------------------+       |
-  |                 v        |                                            |
-  |            +------------------+                                       |
-  +----------> | preempt_disabled | -+                                    |
-               +------------------+  |
-                    |                |
-                    +-------------------------------------------------------+
-
-This monitor is designed to work on ``PREEMPT_RT`` kernels, the special case of
-events occurring in interrupt context is a shortcut to identify valid scenarios
-where the preemption tracepoints might not be visible, during interrupts
-preemption is always disabled. On non- ``PREEMPT_RT`` kernels, the interrupts
-might invoke a softirq to set ``need_resched`` and wake up a task. This is
-another special case that is currently not supported by the monitor.
+       |
+       |
+       v
+     #=========#  sched_need_resched;irq_off == 1
+     H         H  sched_waking;irq_off == 1 && preempt_off == 1
+     H   any   H ------------------------------------------------+
+     H         H                                                 |
+     H         H <-----------------------------------------------+
+     #=========#
 
 References
 ----------
+43
Documentation/trace/rv/monitor_stall.rst
@@ -0,0 +1,43 @@
+Monitor stall
+=============
+
+- Name: stall - stalled task monitor
+- Type: per-task hybrid automaton
+- Author: Gabriele Monaco <gmonaco@redhat.com>
+
+Description
+-----------
+
+The stalled task (stall) monitor is a sample per-task timed monitor that checks
+if tasks are scheduled within a defined threshold after they are ready::
+
+                             |
+                             |
+                             v
+                           #==========================#
+       +-----------------> H         dequeued         H
+       |                   #==========================#
+       |                     |
+       | sched_switch_wait   | sched_wakeup;reset(clk)
+       |                     v
+       |                   +--------------------------+ <+
+       |                   |         enqueued         |  | sched_wakeup
+       |                   | clk < threshold_jiffies  | -+
+       |                   +--------------------------+
+       |                     |                   ^
+       |      sched_switch_in                    sched_switch_preempt;reset(clk)
+       |                     v                   |
+       |                   +--------------------------+
+       +------------------ |          running         |
+                           +--------------------------+
+                             ^  sched_switch_in   |
+                             |  sched_wakeup      |
+                             +----------------------+
+
+The threshold can be configured as a parameter by either booting with the
+``stall.threshold_jiffies=<new value>`` argument or writing a new value to
+``/sys/module/stall/parameters/threshold_jiffies``.
+
+Specification
+-------------
+Graphviz Dot file in tools/verification/models/stall.dot
+115 -2
Documentation/trace/rv/monitor_synthesis.rst
··· 18 18 trace output as a reaction to event parsing and exceptions, as depicted 19 19 below:: 20 20 21 - Linux +----- RV Monitor ----------------------------------+ Formal 22 - Realm | | Realm 21 + Linux +---- RV Monitor ----------------------------------+ Formal 22 + Realm | | Realm 23 23 +-------------------+ +----------------+ +-----------------+ 24 24 | Linux kernel | | Monitor | | Reference | 25 25 | Tracing | -> | Instance(s) | <- | Model | ··· 45 45 46 46 * rv/da_monitor.h for deterministic automaton monitor. 47 47 * rv/ltl_monitor.h for linear temporal logic monitor. 48 + * rv/ha_monitor.h for hybrid automaton monitor. 48 49 49 50 rvgen 50 51 ----- ··· 252 251 the task, the monitor may need some time to start validating tasks which have 253 252 been running before the monitor is enabled. Therefore, it is recommended to 254 253 start the tasks of interest after enabling the monitor. 254 + 255 + rv/ha_monitor.h 256 + +++++++++++++++ 257 + 258 + The implementation of hybrid automaton monitors derives directly from the 259 + deterministic automaton one. Despite using a different header 260 + (``ha_monitor.h``) the functions to handle events are the same (e.g. 261 + ``da_handle_event``). 262 + 263 + Additionally, the `rvgen` tool populates skeletons for the 264 + ``ha_verify_constraint``, ``ha_get_env`` and ``ha_reset_env`` based on the 265 + monitor specification in the monitor source file. 
266 + 267 + ``ha_verify_constraint`` is typically ready as it is generated by `rvgen`: 268 + 269 + * standard constraints on edges are turned into the form:: 270 + 271 + res = ha_get_env(ha_mon, ENV) < VALUE; 272 + 273 + * reset constraints are turned into the form:: 274 + 275 + ha_reset_env(ha_mon, ENV); 276 + 277 + * constraints on the state are implemented using timers 278 + 279 + - armed before entering the state 280 + 281 + - cancelled while entering any other state 282 + 283 + - untouched if the state does not change as a result of the event 284 + 285 + - checked if the timer expired but the callback did not run 286 + 287 + - available implementation are `HA_TIMER_HRTIMER` and `HA_TIMER_WHEEL` 288 + 289 + - hrtimers are more precise but may have higher overhead 290 + 291 + - select by defining `HA_TIMER_TYPE` before including the header:: 292 + 293 + #define HA_TIMER_TYPE HA_TIMER_HRTIMER 294 + 295 + Constraint values can be specified in different forms: 296 + 297 + * literal value (with optional unit). E.g.:: 298 + 299 + preemptive == 0 300 + clk < 100ns 301 + threshold <= 10j 302 + 303 + * constant value (uppercase string). E.g.:: 304 + 305 + clk < MAX_NS 306 + 307 + * parameter (lowercase string). E.g.:: 308 + 309 + clk <= threshold_jiffies 310 + 311 + * macro (uppercase string with parentheses). E.g.:: 312 + 313 + clk < MAX_NS() 314 + 315 + * function (lowercase string with parentheses). E.g.:: 316 + 317 + clk <= threshold_jiffies() 318 + 319 + In all cases, `rvgen` will try to understand the type of the environment 320 + variable from the name or unit. For instance, constants or parameters 321 + terminating with ``_NS`` or ``_jiffies`` are intended as clocks with ns and jiffy 322 + granularity, respectively. Literals with measure unit `j` are jiffies and if a 323 + time unit is specified (`ns` to `s`), `rvgen` will convert the value to `ns`. 
324 + 325 + Constants need to be defined by the user (but unlike the name, they don't 326 + necessarily need to be defined as constants). Parameters get converted to 327 + module parameters and the user needs to provide a default value. 328 + Also function and macros are defined by the user, by default they get as an 329 + argument the ``ha_monitor``, a common usage would be to get the required value 330 + from the target, e.g. the task in per-task monitors, using the helper 331 + ``ha_get_target(ha_mon)``. 332 + 333 + If `rvgen` determines that the variable is a clock, it provides the getter and 334 + resetter based on the unit. Otherwise, the user needs to provide an appropriate 335 + definition. 336 + Typically non-clock environment variables are not reset. In such case only the 337 + getter skeleton will be present in the file generated by `rvgen`. 338 + For instance, the getter for preemptive can be filled as:: 339 + 340 + static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs env) 341 + { 342 + if (env == preemptible) 343 + return preempt_count() == 0; 344 + return ENV_INVALID_VALUE; 345 + } 346 + 347 + The function is supplied the ``ha_mon`` parameter in case some storage is 348 + required (as it is for clocks), but environment variables without reset do not 349 + require a storage and can ignore that argument. 350 + The number of environment variables requiring a storage is limited by 351 + ``MAX_HA_ENV_LEN``, however such limitation doesn't stand for other variables. 352 + 353 + Finally, constraints on states are only valid for clocks and only if the 354 + constraint is of the form `clk < N`. This is because such constraints are 355 + implemented with the expiration of a timer. 356 + Typically the clock variables are reset just before arming the timer, but this 357 + doesn't have to be the case and the available functions take care of it. 358 + It is a responsibility of per-task monitors to make sure no timer is left 359 + running when the task exits. 
360 + 361 + By default the generator implements timers with hrtimers (setting 362 + ``HA_TIMER_TYPE`` to ``HA_TIMER_HRTIMER``); this gives better responsiveness 363 + but higher overhead. The timer wheel (``HA_TIMER_WHEEL``) is a good alternative 364 + for monitors with several instances (e.g. per-task): it achieves lower 365 + overhead at the cost of increased latency and reduced timing precision. 255 366 256 367 Final remarks 257 368 -------------
+39
include/linux/rv.h
··· 13 13 #define RV_MON_GLOBAL 0 14 14 #define RV_MON_PER_CPU 1 15 15 #define RV_MON_PER_TASK 2 16 + #define RV_MON_PER_OBJ 3 16 17 17 18 #ifdef CONFIG_RV 18 19 #include <linux/array_size.h> ··· 82 81 83 82 #endif /* CONFIG_RV_LTL_MONITOR */ 84 83 84 + #ifdef CONFIG_RV_HA_MONITOR 85 + /* 86 + * In the future, hybrid automata may rely on multiple 87 + * environment variables, e.g. different clocks started at 88 + * different times or running at different speeds. 89 + * For now we support only 1 variable. 90 + */ 91 + #define MAX_HA_ENV_LEN 1 92 + 93 + /* 94 + * Monitors can pick the preferred timer implementation: 95 + * No timer: if monitors don't have state invariants. 96 + * Timer wheel: lightweight invariants check but far less precise. 97 + * Hrtimer: accurate invariants check with higher overhead. 98 + */ 99 + #define HA_TIMER_NONE 0 100 + #define HA_TIMER_WHEEL 1 101 + #define HA_TIMER_HRTIMER 2 102 + 103 + /* 104 + * Hybrid automaton per-object variables. 105 + */ 106 + struct ha_monitor { 107 + struct da_monitor da_mon; 108 + u64 env_store[MAX_HA_ENV_LEN]; 109 + union { 110 + struct hrtimer hrtimer; 111 + struct timer_list timer; 112 + }; 113 + }; 114 + 115 + #else 116 + 117 + struct ha_monitor { }; 118 + 119 + #endif /* CONFIG_RV_HA_MONITOR */ 120 + 85 121 #define RV_PER_TASK_MONITOR_INIT (CONFIG_RV_PER_TASK_MONITORS) 86 122 87 123 union rv_task_monitor { 88 124 struct da_monitor da_mon; 89 125 struct ltl_monitor ltl_mon; 126 + struct ha_monitor ha_mon; 90 127 }; 91 128 92 129 #ifdef CONFIG_RV_REACTORS
+27
include/linux/sched/deadline.h
··· 37 37 extern u64 dl_cookie; 38 38 extern bool dl_bw_visited(int cpu, u64 cookie); 39 39 40 + static inline bool dl_server(struct sched_dl_entity *dl_se) 41 + { 42 + return dl_se->dl_server; 43 + } 44 + 45 + static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se) 46 + { 47 + BUG_ON(dl_server(dl_se)); 48 + return container_of(dl_se, struct task_struct, dl); 49 + } 50 + 51 + /* 52 + * Regarding the deadline, a task with implicit deadline has a relative 53 + * deadline == relative period. A task with constrained deadline has a 54 + * relative deadline <= relative period. 55 + * 56 + * We support constrained deadline tasks. However, there are some restrictions 57 + * applied only for tasks which do not have an implicit deadline. See 58 + * update_dl_entity() to know more about such restrictions. 59 + * 60 + * The dl_is_implicit() returns true if the task has an implicit deadline. 61 + */ 62 + static inline bool dl_is_implicit(struct sched_dl_entity *dl_se) 63 + { 64 + return dl_se->dl_deadline == dl_se->dl_period; 65 + } 66 + 40 67 #endif /* _LINUX_SCHED_DEADLINE_H */
+489 -173
include/rv/da_monitor.h
··· 3 3 * Copyright (C) 2019-2022 Red Hat, Inc. Daniel Bristot de Oliveira <bristot@kernel.org> 4 4 * 5 5 * Deterministic automata (DA) monitor functions, to be used together 6 - * with automata models in C generated by the dot2k tool. 6 + * with automata models in C generated by the rvgen tool. 7 7 * 8 - * The dot2k tool is available at tools/verification/dot2k/ 8 + * The rvgen tool is available at tools/verification/rvgen/ 9 9 * 10 10 * For further information, see: 11 11 * Documentation/trace/rv/monitor_synthesis.rst ··· 19 19 #include <linux/stringify.h> 20 20 #include <linux/bug.h> 21 21 #include <linux/sched.h> 22 + #include <linux/slab.h> 23 + #include <linux/hashtable.h> 22 24 23 25 /* 24 26 * Per-cpu variables require a unique name although static in some ··· 29 27 #define DA_MON_NAME CONCATENATE(da_mon_, MONITOR_NAME) 30 28 31 29 static struct rv_monitor rv_this; 30 + 31 + /* 32 + * Hook to allow the implementation of hybrid automata: define it with a 33 + * function that takes curr_state, event and next_state and returns true if the 34 + * environment constraints (e.g. timing) are satisfied, false otherwise. 35 + */ 36 + #ifndef da_monitor_event_hook 37 + #define da_monitor_event_hook(...) true 38 + #endif 39 + 40 + /* 41 + * Hook to allow the implementation of hybrid automata: define it with a 42 + * function that takes the da_monitor and performs further initialisation 43 + * (e.g. set up timers). 44 + */ 45 + #ifndef da_monitor_init_hook 46 + #define da_monitor_init_hook(da_mon) 47 + #endif 48 + 49 + /* 50 + * Hook to allow the implementation of hybrid automata: define it with a 51 + * function that takes the da_monitor and performs further reset (e.g. reset 52 + * all clocks). 53 + */ 54 + #ifndef da_monitor_reset_hook 55 + #define da_monitor_reset_hook(da_mon) 56 + #endif 57 + 58 + /* 59 + * Type for the target id, defaults to int but can be overridden. 
60 + * A long type can work as hash table key (PER_OBJ) but will be downgraded to 61 + * int in the event tracepoint. 62 + * Unused for implicit monitors. 63 + */ 64 + #ifndef da_id_type 65 + #define da_id_type int 66 + #endif 32 67 33 68 static void react(enum states curr_state, enum events event) 34 69 { ··· 81 42 */ 82 43 static inline void da_monitor_reset(struct da_monitor *da_mon) 83 44 { 45 + da_monitor_reset_hook(da_mon); 84 46 da_mon->monitoring = 0; 85 47 da_mon->curr_state = model_get_initial_state(); 86 48 } ··· 96 56 { 97 57 da_mon->curr_state = model_get_initial_state(); 98 58 da_mon->monitoring = 1; 59 + da_monitor_init_hook(da_mon); 99 60 } 100 61 101 62 /* ··· 138 97 return 1; 139 98 } 140 99 141 - #if RV_MON_TYPE == RV_MON_GLOBAL || RV_MON_TYPE == RV_MON_PER_CPU 142 - /* 143 - * Event handler for implicit monitors. Implicit monitor is the one which the 144 - * handler does not need to specify which da_monitor to manipulate. Examples 145 - * of implicit monitor are the per_cpu or the global ones. 146 - * 147 - * Retry in case there is a race between getting and setting the next state, 148 - * warn and reset the monitor if it runs out of retries. The monitor should be 149 - * able to handle various orders. 
150 - */ 151 - 152 - static inline bool da_event(struct da_monitor *da_mon, enum events event) 153 - { 154 - enum states curr_state, next_state; 155 - 156 - curr_state = READ_ONCE(da_mon->curr_state); 157 - for (int i = 0; i < MAX_DA_RETRY_RACING_EVENTS; i++) { 158 - next_state = model_get_next_state(curr_state, event); 159 - if (next_state == INVALID_STATE) { 160 - react(curr_state, event); 161 - CONCATENATE(trace_error_, MONITOR_NAME)( 162 - model_get_state_name(curr_state), 163 - model_get_event_name(event)); 164 - return false; 165 - } 166 - if (likely(try_cmpxchg(&da_mon->curr_state, &curr_state, next_state))) { 167 - CONCATENATE(trace_event_, MONITOR_NAME)( 168 - model_get_state_name(curr_state), 169 - model_get_event_name(event), 170 - model_get_state_name(next_state), 171 - model_is_final_state(next_state)); 172 - return true; 173 - } 174 - } 175 - 176 - trace_rv_retries_error(__stringify(MONITOR_NAME), model_get_event_name(event)); 177 - pr_warn("rv: " __stringify(MAX_DA_RETRY_RACING_EVENTS) 178 - " retries reached for event %s, resetting monitor %s", 179 - model_get_event_name(event), __stringify(MONITOR_NAME)); 180 - return false; 181 - } 182 - 183 - #elif RV_MON_TYPE == RV_MON_PER_TASK 184 - /* 185 - * Event handler for per_task monitors. 186 - * 187 - * Retry in case there is a race between getting and setting the next state, 188 - * warn and reset the monitor if it runs out of retries. The monitor should be 189 - * able to handle various orders. 
190 - */ 191 - 192 - static inline bool da_event(struct da_monitor *da_mon, struct task_struct *tsk, 193 - enum events event) 194 - { 195 - enum states curr_state, next_state; 196 - 197 - curr_state = READ_ONCE(da_mon->curr_state); 198 - for (int i = 0; i < MAX_DA_RETRY_RACING_EVENTS; i++) { 199 - next_state = model_get_next_state(curr_state, event); 200 - if (next_state == INVALID_STATE) { 201 - react(curr_state, event); 202 - CONCATENATE(trace_error_, MONITOR_NAME)(tsk->pid, 203 - model_get_state_name(curr_state), 204 - model_get_event_name(event)); 205 - return false; 206 - } 207 - if (likely(try_cmpxchg(&da_mon->curr_state, &curr_state, next_state))) { 208 - CONCATENATE(trace_event_, MONITOR_NAME)(tsk->pid, 209 - model_get_state_name(curr_state), 210 - model_get_event_name(event), 211 - model_get_state_name(next_state), 212 - model_is_final_state(next_state)); 213 - return true; 214 - } 215 - } 216 - 217 - trace_rv_retries_error(__stringify(MONITOR_NAME), model_get_event_name(event)); 218 - pr_warn("rv: " __stringify(MAX_DA_RETRY_RACING_EVENTS) 219 - " retries reached for event %s, resetting monitor %s", 220 - model_get_event_name(event), __stringify(MONITOR_NAME)); 221 - return false; 222 - } 223 - #endif /* RV_MON_TYPE */ 224 - 225 100 #if RV_MON_TYPE == RV_MON_GLOBAL 226 101 /* 227 102 * Functions to define, init and get a global monitor. 
··· 176 219 /* 177 220 * da_monitor_destroy - destroy the monitor 178 221 */ 179 - static inline void da_monitor_destroy(void) { } 222 + static inline void da_monitor_destroy(void) 223 + { 224 + da_monitor_reset_all(); 225 + } 180 226 181 227 #elif RV_MON_TYPE == RV_MON_PER_CPU 182 228 /* ··· 225 265 /* 226 266 * da_monitor_destroy - destroy the monitor 227 267 */ 228 - static inline void da_monitor_destroy(void) { } 268 + static inline void da_monitor_destroy(void) 269 + { 270 + da_monitor_reset_all(); 271 + } 229 272 230 273 #elif RV_MON_TYPE == RV_MON_PER_TASK 231 274 /* ··· 247 284 static inline struct da_monitor *da_get_monitor(struct task_struct *tsk) 248 285 { 249 286 return &tsk->rv[task_mon_slot].da_mon; 287 + } 288 + 289 + /* 290 + * da_get_target - return the task associated to the monitor 291 + */ 292 + static inline struct task_struct *da_get_target(struct da_monitor *da_mon) 293 + { 294 + return container_of(da_mon, struct task_struct, rv[task_mon_slot].da_mon); 295 + } 296 + 297 + /* 298 + * da_get_id - return the id associated to the monitor 299 + * 300 + * For per-task monitors, the id is the task's PID. 301 + */ 302 + static inline da_id_type da_get_id(struct da_monitor *da_mon) 303 + { 304 + return da_get_target(da_mon)->pid; 250 305 } 251 306 252 307 static void da_monitor_reset_all(void) ··· 311 330 } 312 331 rv_put_task_monitor_slot(task_mon_slot); 313 332 task_mon_slot = RV_PER_TASK_MONITOR_INIT; 333 + 334 + da_monitor_reset_all(); 335 + } 336 + 337 + #elif RV_MON_TYPE == RV_MON_PER_OBJ 338 + /* 339 + * Functions to define, init and get a per-object monitor. 
340 + */ 341 + 342 + struct da_monitor_storage { 343 + da_id_type id; 344 + monitor_target target; 345 + union rv_task_monitor rv; 346 + struct hlist_node node; 347 + struct rcu_head rcu; 348 + }; 349 + 350 + #ifndef DA_MONITOR_HT_BITS 351 + #define DA_MONITOR_HT_BITS 10 352 + #endif 353 + static DEFINE_HASHTABLE(da_monitor_ht, DA_MONITOR_HT_BITS); 354 + 355 + /* 356 + * da_create_empty_storage - pre-allocate an empty storage 357 + */ 358 + static inline struct da_monitor_storage *da_create_empty_storage(da_id_type id) 359 + { 360 + struct da_monitor_storage *mon_storage; 361 + 362 + mon_storage = kmalloc_nolock(sizeof(struct da_monitor_storage), 363 + __GFP_ZERO, NUMA_NO_NODE); 364 + if (!mon_storage) 365 + return NULL; 366 + 367 + hash_add_rcu(da_monitor_ht, &mon_storage->node, id); 368 + mon_storage->id = id; 369 + return mon_storage; 370 + } 371 + 372 + /* 373 + * da_create_storage - create the per-object storage 374 + * 375 + * The caller is responsible for synchronising writers, either with locks or 376 + * implicitly. For instance, if da_create_storage is only called from a single 377 + * event per target (e.g. sched_switch), it is safe to call it without locks. 
378 + */ 379 + static inline struct da_monitor *da_create_storage(da_id_type id, 380 + monitor_target target, 381 + struct da_monitor *da_mon) 382 + { 383 + struct da_monitor_storage *mon_storage; 384 + 385 + if (da_mon) 386 + return da_mon; 387 + 388 + mon_storage = da_create_empty_storage(id); 389 + if (!mon_storage) 390 + return NULL; 391 + 392 + mon_storage->target = target; 393 + return &mon_storage->rv.da_mon; 394 + } 395 + 396 + /* 397 + * __da_get_mon_storage - get the monitor storage from the hash table 398 + */ 399 + static inline struct da_monitor_storage *__da_get_mon_storage(da_id_type id) 400 + { 401 + struct da_monitor_storage *mon_storage; 402 + 403 + lockdep_assert_in_rcu_read_lock(); 404 + hash_for_each_possible_rcu(da_monitor_ht, mon_storage, node, id) { 405 + if (mon_storage->id == id) 406 + return mon_storage; 407 + } 408 + 409 + return NULL; 410 + } 411 + 412 + /* 413 + * da_get_monitor - return the monitor for target 414 + */ 415 + static struct da_monitor *da_get_monitor(da_id_type id, monitor_target target) 416 + { 417 + struct da_monitor_storage *mon_storage; 418 + 419 + mon_storage = __da_get_mon_storage(id); 420 + return mon_storage ? 
&mon_storage->rv.da_mon : NULL; 421 + } 422 + 423 + /* 424 + * da_get_target - return the object associated to the monitor 425 + */ 426 + static inline monitor_target da_get_target(struct da_monitor *da_mon) 427 + { 428 + return container_of(da_mon, struct da_monitor_storage, rv.da_mon)->target; 429 + } 430 + 431 + /* 432 + * da_get_id - return the id associated to the monitor 433 + */ 434 + static inline da_id_type da_get_id(struct da_monitor *da_mon) 435 + { 436 + return container_of(da_mon, struct da_monitor_storage, rv.da_mon)->id; 437 + } 438 + 439 + /* 440 + * da_create_or_get - create the per-object storage if not already there 441 + * 442 + * This needs a lookup, so it should be guarded by RCU; the condition is 443 + * checked directly in da_create_storage(). 444 + */ 445 + static inline void da_create_or_get(da_id_type id, monitor_target target) 446 + { 447 + guard(rcu)(); 448 + da_create_storage(id, target, da_get_monitor(id, target)); 449 + } 450 + 451 + /* 452 + * da_fill_empty_storage - store the target in a pre-allocated storage 453 + * 454 + * Can be used as a substitute for da_create_storage when starting a monitor in 455 + * an environment where allocation is unsafe. 
456 + */ 457 + static inline struct da_monitor *da_fill_empty_storage(da_id_type id, 458 + monitor_target target, 459 + struct da_monitor *da_mon) 460 + { 461 + if (unlikely(da_mon && !da_get_target(da_mon))) 462 + container_of(da_mon, struct da_monitor_storage, rv.da_mon)->target = target; 463 + return da_mon; 464 + } 465 + 466 + /* 467 + * da_get_target_by_id - return the object associated to the id 468 + */ 469 + static inline monitor_target da_get_target_by_id(da_id_type id) 470 + { 471 + struct da_monitor_storage *mon_storage; 472 + 473 + guard(rcu)(); 474 + mon_storage = __da_get_mon_storage(id); 475 + 476 + if (unlikely(!mon_storage)) 477 + return NULL; 478 + return mon_storage->target; 479 + } 480 + 481 + /* 482 + * da_destroy_storage - destroy the per-object storage 483 + * 484 + * The caller is responsible for synchronising writers, either with locks or 485 + * implicitly. For instance, if da_destroy_storage is called at sched_exit and 486 + * da_create_storage can never occur after that, it's safe to call this without 487 + * locks. 488 + * This function includes an RCU read-side critical section to synchronise 489 + * against da_monitor_destroy(). 
490 + */ 491 + static inline void da_destroy_storage(da_id_type id) 492 + { 493 + struct da_monitor_storage *mon_storage; 494 + 495 + guard(rcu)(); 496 + mon_storage = __da_get_mon_storage(id); 497 + 498 + if (!mon_storage) 499 + return; 500 + da_monitor_reset_hook(&mon_storage->rv.da_mon); 501 + hash_del_rcu(&mon_storage->node); 502 + kfree_rcu(mon_storage, rcu); 503 + } 504 + 505 + static void da_monitor_reset_all(void) 506 + { 507 + struct da_monitor_storage *mon_storage; 508 + int bkt; 509 + 510 + rcu_read_lock(); 511 + hash_for_each_rcu(da_monitor_ht, bkt, mon_storage, node) 512 + da_monitor_reset(&mon_storage->rv.da_mon); 513 + rcu_read_unlock(); 514 + } 515 + 516 + static inline int da_monitor_init(void) 517 + { 518 + hash_init(da_monitor_ht); 519 + return 0; 520 + } 521 + 522 + static inline void da_monitor_destroy(void) 523 + { 524 + struct da_monitor_storage *mon_storage; 525 + struct hlist_node *tmp; 526 + int bkt; 527 + 528 + /* 529 + * This function is called after all probes are disabled, we need only 530 + * worry about concurrency against old events. 531 + */ 532 + synchronize_rcu(); 533 + hash_for_each_safe(da_monitor_ht, bkt, tmp, mon_storage, node) { 534 + da_monitor_reset_hook(&mon_storage->rv.da_mon); 535 + hash_del_rcu(&mon_storage->node); 536 + kfree(mon_storage); 537 + } 538 + } 539 + 540 + /* 541 + * Allow the per-object monitors to run allocation manually, necessary if the 542 + * start condition is in a context problematic for allocation (e.g. scheduling). 543 + * In such case, if the storage was pre-allocated without a target, set it now. 544 + */ 545 + #ifdef DA_SKIP_AUTO_ALLOC 546 + #define da_prepare_storage da_fill_empty_storage 547 + #else 548 + #define da_prepare_storage da_create_storage 549 + #endif /* DA_SKIP_AUTO_ALLOC */ 550 + 551 + #endif /* RV_MON_TYPE */ 552 + 553 + #if RV_MON_TYPE == RV_MON_GLOBAL || RV_MON_TYPE == RV_MON_PER_CPU 554 + /* 555 + * Trace events for implicit monitors. 
Implicit monitor is the one which the 556 + * handler does not need to specify which da_monitor to manipulate. Examples 557 + * of implicit monitor are the per_cpu or the global ones. 558 + */ 559 + 560 + static inline void da_trace_event(struct da_monitor *da_mon, 561 + char *curr_state, char *event, 562 + char *next_state, bool is_final, 563 + da_id_type id) 564 + { 565 + CONCATENATE(trace_event_, MONITOR_NAME)(curr_state, event, next_state, 566 + is_final); 567 + } 568 + 569 + static inline void da_trace_error(struct da_monitor *da_mon, 570 + char *curr_state, char *event, 571 + da_id_type id) 572 + { 573 + CONCATENATE(trace_error_, MONITOR_NAME)(curr_state, event); 574 + } 575 + 576 + /* 577 + * da_get_id - unused for implicit monitors 578 + */ 579 + static inline da_id_type da_get_id(struct da_monitor *da_mon) 580 + { 581 + return 0; 582 + } 583 + 584 + #elif RV_MON_TYPE == RV_MON_PER_TASK || RV_MON_TYPE == RV_MON_PER_OBJ 585 + /* 586 + * Trace events for per_task/per_object monitors, report the target id. 587 + */ 588 + 589 + static inline void da_trace_event(struct da_monitor *da_mon, 590 + char *curr_state, char *event, 591 + char *next_state, bool is_final, 592 + da_id_type id) 593 + { 594 + CONCATENATE(trace_event_, MONITOR_NAME)(id, curr_state, event, 595 + next_state, is_final); 596 + } 597 + 598 + static inline void da_trace_error(struct da_monitor *da_mon, 599 + char *curr_state, char *event, 600 + da_id_type id) 601 + { 602 + CONCATENATE(trace_error_, MONITOR_NAME)(id, curr_state, event); 314 603 } 315 604 #endif /* RV_MON_TYPE */ 605 + 606 + /* 607 + * da_event - handle an event for the da_mon 608 + * 609 + * This function is valid for both implicit and id monitors. 610 + * Retry in case there is a race between getting and setting the next state, 611 + * warn and reset the monitor if it runs out of retries. The monitor should be 612 + * able to handle various orders. 
613 + */ 614 + static inline bool da_event(struct da_monitor *da_mon, enum events event, da_id_type id) 615 + { 616 + enum states curr_state, next_state; 617 + 618 + curr_state = READ_ONCE(da_mon->curr_state); 619 + for (int i = 0; i < MAX_DA_RETRY_RACING_EVENTS; i++) { 620 + next_state = model_get_next_state(curr_state, event); 621 + if (next_state == INVALID_STATE) { 622 + react(curr_state, event); 623 + da_trace_error(da_mon, model_get_state_name(curr_state), 624 + model_get_event_name(event), id); 625 + return false; 626 + } 627 + if (likely(try_cmpxchg(&da_mon->curr_state, &curr_state, next_state))) { 628 + if (!da_monitor_event_hook(da_mon, curr_state, event, next_state, id)) 629 + return false; 630 + da_trace_event(da_mon, model_get_state_name(curr_state), 631 + model_get_event_name(event), 632 + model_get_state_name(next_state), 633 + model_is_final_state(next_state), id); 634 + return true; 635 + } 636 + } 637 + 638 + trace_rv_retries_error(__stringify(MONITOR_NAME), model_get_event_name(event)); 639 + pr_warn("rv: " __stringify(MAX_DA_RETRY_RACING_EVENTS) 640 + " retries reached for event %s, resetting monitor %s", 641 + model_get_event_name(event), __stringify(MONITOR_NAME)); 642 + return false; 643 + } 644 + 645 + static inline void __da_handle_event_common(struct da_monitor *da_mon, 646 + enum events event, da_id_type id) 647 + { 648 + if (!da_event(da_mon, event, id)) 649 + da_monitor_reset(da_mon); 650 + } 651 + 652 + static inline void __da_handle_event(struct da_monitor *da_mon, 653 + enum events event, da_id_type id) 654 + { 655 + if (da_monitor_handling_event(da_mon)) 656 + __da_handle_event_common(da_mon, event, id); 657 + } 658 + 659 + static inline bool __da_handle_start_event(struct da_monitor *da_mon, 660 + enum events event, da_id_type id) 661 + { 662 + if (!da_monitor_enabled()) 663 + return 0; 664 + if (unlikely(!da_monitoring(da_mon))) { 665 + da_monitor_start(da_mon); 666 + return 0; 667 + } 668 + 669 + __da_handle_event_common(da_mon, 
event, id); 670 + 671 + return 1; 672 + } 673 + 674 + static inline bool __da_handle_start_run_event(struct da_monitor *da_mon, 675 + enum events event, da_id_type id) 676 + { 677 + if (!da_monitor_enabled()) 678 + return 0; 679 + if (unlikely(!da_monitoring(da_mon))) 680 + da_monitor_start(da_mon); 681 + 682 + __da_handle_event_common(da_mon, event, id); 683 + 684 + return 1; 685 + } 316 686 317 687 #if RV_MON_TYPE == RV_MON_GLOBAL || RV_MON_TYPE == RV_MON_PER_CPU 318 688 /* ··· 671 339 * the monitor. 672 340 */ 673 341 674 - static inline void __da_handle_event(struct da_monitor *da_mon, 675 - enum events event) 676 - { 677 - bool retval; 678 - 679 - retval = da_event(da_mon, event); 680 - if (!retval) 681 - da_monitor_reset(da_mon); 682 - } 683 - 684 342 /* 685 343 * da_handle_event - handle an event 686 344 */ 687 345 static inline void da_handle_event(enum events event) 688 346 { 689 - struct da_monitor *da_mon = da_get_monitor(); 690 - bool retval; 691 - 692 - retval = da_monitor_handling_event(da_mon); 693 - if (!retval) 694 - return; 695 - 696 - __da_handle_event(da_mon, event); 347 + __da_handle_event(da_get_monitor(), event, 0); 697 348 } 698 349 699 350 /* ··· 691 376 */ 692 377 static inline bool da_handle_start_event(enum events event) 693 378 { 694 - struct da_monitor *da_mon; 695 - 696 - if (!da_monitor_enabled()) 697 - return 0; 698 - 699 - da_mon = da_get_monitor(); 700 - 701 - if (unlikely(!da_monitoring(da_mon))) { 702 - da_monitor_start(da_mon); 703 - return 0; 704 - } 705 - 706 - __da_handle_event(da_mon, event); 707 - 708 - return 1; 379 + return __da_handle_start_event(da_get_monitor(), event, 0); 709 380 } 710 381 711 382 /* ··· 702 401 */ 703 402 static inline bool da_handle_start_run_event(enum events event) 704 403 { 705 - struct da_monitor *da_mon; 706 - 707 - if (!da_monitor_enabled()) 708 - return 0; 709 - 710 - da_mon = da_get_monitor(); 711 - 712 - if (unlikely(!da_monitoring(da_mon))) 713 - da_monitor_start(da_mon); 714 - 715 - 
__da_handle_event(da_mon, event); 716 - 717 - return 1; 404 + return __da_handle_start_run_event(da_get_monitor(), event, 0); 718 405 } 719 406 720 407 #elif RV_MON_TYPE == RV_MON_PER_TASK ··· 710 421 * Handle event for per task. 711 422 */ 712 423 713 - static inline void __da_handle_event(struct da_monitor *da_mon, 714 - struct task_struct *tsk, enum events event) 715 - { 716 - bool retval; 717 - 718 - retval = da_event(da_mon, tsk, event); 719 - if (!retval) 720 - da_monitor_reset(da_mon); 721 - } 722 - 723 424 /* 724 425 * da_handle_event - handle an event 725 426 */ 726 427 static inline void da_handle_event(struct task_struct *tsk, enum events event) 727 428 { 728 - struct da_monitor *da_mon = da_get_monitor(tsk); 729 - bool retval; 730 - 731 - retval = da_monitor_handling_event(da_mon); 732 - if (!retval) 733 - return; 734 - 735 - __da_handle_event(da_mon, tsk, event); 429 + __da_handle_event(da_get_monitor(tsk), event, tsk->pid); 736 430 } 737 431 738 432 /* ··· 731 459 static inline bool da_handle_start_event(struct task_struct *tsk, 732 460 enum events event) 733 461 { 734 - struct da_monitor *da_mon; 735 - 736 - if (!da_monitor_enabled()) 737 - return 0; 738 - 739 - da_mon = da_get_monitor(tsk); 740 - 741 - if (unlikely(!da_monitoring(da_mon))) { 742 - da_monitor_start(da_mon); 743 - return 0; 744 - } 745 - 746 - __da_handle_event(da_mon, tsk, event); 747 - 748 - return 1; 462 + return __da_handle_start_event(da_get_monitor(tsk), event, tsk->pid); 749 463 } 750 464 751 465 /* ··· 743 485 static inline bool da_handle_start_run_event(struct task_struct *tsk, 744 486 enum events event) 745 487 { 488 + return __da_handle_start_run_event(da_get_monitor(tsk), event, tsk->pid); 489 + } 490 + 491 + #elif RV_MON_TYPE == RV_MON_PER_OBJ 492 + /* 493 + * Handle event for per object. 
494 + */ 495 + 496 + /* 497 + * da_handle_event - handle an event 498 + */ 499 + static inline void da_handle_event(da_id_type id, monitor_target target, enum events event) 500 + { 746 501 struct da_monitor *da_mon; 747 502 748 - if (!da_monitor_enabled()) 503 + guard(rcu)(); 504 + da_mon = da_get_monitor(id, target); 505 + if (likely(da_mon)) 506 + __da_handle_event(da_mon, event, id); 507 + } 508 + 509 + /* 510 + * da_handle_start_event - start monitoring or handle event 511 + * 512 + * This function is used to notify the monitor that the system is returning 513 + * to the initial state, so the monitor can start monitoring in the next event. 514 + * Thus: 515 + * 516 + * If the monitor already started, handle the event. 517 + * If the monitor did not start yet, start the monitor but skip the event. 518 + */ 519 + static inline bool da_handle_start_event(da_id_type id, monitor_target target, 520 + enum events event) 521 + { 522 + struct da_monitor *da_mon; 523 + 524 + guard(rcu)(); 525 + da_mon = da_get_monitor(id, target); 526 + da_mon = da_prepare_storage(id, target, da_mon); 527 + if (unlikely(!da_mon)) 749 528 return 0; 529 + return __da_handle_start_event(da_mon, event, id); 530 + } 750 531 751 - da_mon = da_get_monitor(tsk); 532 + /* 533 + * da_handle_start_run_event - start monitoring and handle event 534 + * 535 + * This function is used to notify the monitor that the system is in the 536 + * initial state, so the monitor can start monitoring and handling event. 
537 + */ 538 + static inline bool da_handle_start_run_event(da_id_type id, monitor_target target, 539 + enum events event) 540 + { 541 + struct da_monitor *da_mon; 752 542 753 - if (unlikely(!da_monitoring(da_mon))) 754 - da_monitor_start(da_mon); 543 + guard(rcu)(); 544 + da_mon = da_get_monitor(id, target); 545 + da_mon = da_prepare_storage(id, target, da_mon); 546 + if (unlikely(!da_mon)) 547 + return 0; 548 + return __da_handle_start_run_event(da_mon, event, id); 549 + } 755 550 756 - __da_handle_event(da_mon, tsk, event); 551 + static inline void da_reset(da_id_type id, monitor_target target) 552 + { 553 + struct da_monitor *da_mon; 757 554 758 - return 1; 555 + guard(rcu)(); 556 + da_mon = da_get_monitor(id, target); 557 + if (likely(da_mon)) 558 + da_monitor_reset(da_mon); 759 559 } 760 560 #endif /* RV_MON_TYPE */ 761 561
+478
include/rv/ha_monitor.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2025-2028 Red Hat, Inc. Gabriele Monaco <gmonaco@redhat.com> 4 + * 5 + * Hybrid automata (HA) monitor functions, to be used together 6 + * with automata models in C generated by the rvgen tool. 7 + * 8 + * This type of monitor extends the Deterministic automata (DA) class by 9 + * adding a set of environment variables (e.g. clocks) that can be used to 10 + * constrain the valid transitions. 11 + * 12 + * The rvgen tool is available at tools/verification/rvgen/ 13 + * 14 + * For further information, see: 15 + * Documentation/trace/rv/monitor_synthesis.rst 16 + */ 17 + 18 + #ifndef _RV_HA_MONITOR_H 19 + #define _RV_HA_MONITOR_H 20 + 21 + #include <rv/automata.h> 22 + 23 + #ifndef da_id_type 24 + #define da_id_type int 25 + #endif 26 + 27 + static inline void ha_monitor_init_env(struct da_monitor *da_mon); 28 + static inline void ha_monitor_reset_env(struct da_monitor *da_mon); 29 + static inline void ha_setup_timer(struct ha_monitor *ha_mon); 30 + static inline bool ha_cancel_timer(struct ha_monitor *ha_mon); 31 + static bool ha_monitor_handle_constraint(struct da_monitor *da_mon, 32 + enum states curr_state, 33 + enum events event, 34 + enum states next_state, 35 + da_id_type id); 36 + #define da_monitor_event_hook ha_monitor_handle_constraint 37 + #define da_monitor_init_hook ha_monitor_init_env 38 + #define da_monitor_reset_hook ha_monitor_reset_env 39 + 40 + #include <rv/da_monitor.h> 41 + #include <linux/seq_buf.h> 42 + 43 + /* This simplifies things since da_mon and ha_mon coexist in the same union */ 44 + _Static_assert(offsetof(struct ha_monitor, da_mon) == 0, 45 + "da_mon must be the first element in an ha_mon!"); 46 + #define to_ha_monitor(da) container_of(da, struct ha_monitor, da_mon) 47 + 48 + #define ENV_MAX CONCATENATE(env_max_, MONITOR_NAME) 49 + #define ENV_MAX_STORED CONCATENATE(env_max_stored_, MONITOR_NAME) 50 + #define envs CONCATENATE(envs_, MONITOR_NAME) 51 + 52 + /* 
Environment storage before being reset */ 53 + #define ENV_INVALID_VALUE U64_MAX 54 + /* Error with no event occurs only on timeouts */ 55 + #define EVENT_NONE EVENT_MAX 56 + #define EVENT_NONE_LBL "none" 57 + #define ENV_BUFFER_SIZE 64 58 + 59 + #ifdef CONFIG_RV_REACTORS 60 + 61 + /* 62 + * ha_react - trigger the reaction after a failed environment constraint 63 + * 64 + * The transition from curr_state with event is otherwise valid, but the 65 + * environment constraint is false. This function can also be called with no 66 + * event from a timer (state constraints only). 67 + */ 68 + static void ha_react(enum states curr_state, enum events event, char *env) 69 + { 70 + rv_react(&rv_this, 71 + "rv: monitor %s does not allow event %s on state %s with env %s\n", 72 + __stringify(MONITOR_NAME), 73 + event == EVENT_NONE ? EVENT_NONE_LBL : model_get_event_name(event), 74 + model_get_state_name(curr_state), env); 75 + } 76 + 77 + #else /* CONFIG_RV_REACTORS */ 78 + 79 + static void ha_react(enum states curr_state, enum events event, char *env) { } 80 + #endif 81 + 82 + /* 83 + * model_get_env_name - return the (string) name of the given environment variable 84 + */ 85 + static char *model_get_env_name(enum envs env) 86 + { 87 + if ((env < 0) || (env >= ENV_MAX)) 88 + return "INVALID"; 89 + 90 + return RV_AUTOMATON_NAME.env_names[env]; 91 + } 92 + 93 + /* 94 + * Monitors requiring a timer implementation need to request it explicitly. 
95 + */ 96 + #ifndef HA_TIMER_TYPE 97 + #define HA_TIMER_TYPE HA_TIMER_NONE 98 + #endif 99 + 100 + #if HA_TIMER_TYPE == HA_TIMER_WHEEL 101 + static void ha_monitor_timer_callback(struct timer_list *timer); 102 + #elif HA_TIMER_TYPE == HA_TIMER_HRTIMER 103 + static enum hrtimer_restart ha_monitor_timer_callback(struct hrtimer *hrtimer); 104 + #endif 105 + 106 + /* 107 + * ktime_get_ns is expensive; since we usually don't require precise accounting 108 + * of changes within the same event, cache the current time at the beginning of 109 + * the constraint handler and use the cache for subsequent calls. 110 + * Monitors without ns clocks automatically skip this. 111 + */ 112 + #ifdef HA_CLK_NS 113 + #define ha_get_ns() ktime_get_ns() 114 + #else 115 + #define ha_get_ns() 0 116 + #endif /* HA_CLK_NS */ 117 + 118 + /* Should be supplied by the monitor */ 119 + static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs env, u64 time_ns); 120 + static bool ha_verify_constraint(struct ha_monitor *ha_mon, 121 + enum states curr_state, 122 + enum events event, 123 + enum states next_state, 124 + u64 time_ns); 125 + 126 + /* 127 + * ha_monitor_reset_all_stored - reset all environment variables in the monitor 128 + */ 129 + static inline void ha_monitor_reset_all_stored(struct ha_monitor *ha_mon) 130 + { 131 + for (int i = 0; i < ENV_MAX_STORED; i++) 132 + WRITE_ONCE(ha_mon->env_store[i], ENV_INVALID_VALUE); 133 + } 134 + 135 + /* 136 + * ha_monitor_init_env - set up timer and reset all environment 137 + * 138 + * Called from a hook in the DA start functions, which supplies the da_mon 139 + * corresponding to the current ha_mon. 140 + * Not all hybrid automata require the timer; still set it for simplicity. 
141 + */ 142 + static inline void ha_monitor_init_env(struct da_monitor *da_mon) 143 + { 144 + struct ha_monitor *ha_mon = to_ha_monitor(da_mon); 145 + 146 + ha_monitor_reset_all_stored(ha_mon); 147 + ha_setup_timer(ha_mon); 148 + } 149 + 150 + /* 151 + * ha_monitor_reset_env - stop timer and reset all environment 152 + * 153 + * Called from a hook in the DA reset functions, it supplies the da_mon 154 + * corresponding to the current ha_mon. 155 + * Not all hybrid automata require the timer; still, clear it for simplicity. 156 + */ 157 + static inline void ha_monitor_reset_env(struct da_monitor *da_mon) 158 + { 159 + struct ha_monitor *ha_mon = to_ha_monitor(da_mon); 160 + 161 + /* Initialisation resets the monitor before initialising the timer */ 162 + if (likely(da_monitoring(da_mon))) 163 + ha_cancel_timer(ha_mon); 164 + } 165 + 166 + /* 167 + * ha_monitor_env_invalid - return true if env has not been initialised 168 + */ 169 + static inline bool ha_monitor_env_invalid(struct ha_monitor *ha_mon, enum envs env) 170 + { 171 + return READ_ONCE(ha_mon->env_store[env]) == ENV_INVALID_VALUE; 172 + } 173 + 174 + static inline void ha_get_env_string(struct seq_buf *s, 175 + struct ha_monitor *ha_mon, u64 time_ns) 176 + { 177 + const char *format_str = "%s=%llu"; 178 + 179 + for (int i = 0; i < ENV_MAX; i++) { 180 + seq_buf_printf(s, format_str, model_get_env_name(i), 181 + ha_get_env(ha_mon, i, time_ns)); 182 + format_str = ",%s=%llu"; 183 + } 184 + } 185 + 186 + #if RV_MON_TYPE == RV_MON_GLOBAL || RV_MON_TYPE == RV_MON_PER_CPU 187 + static inline void ha_trace_error_env(struct ha_monitor *ha_mon, 188 + char *curr_state, char *event, char *env, 189 + da_id_type id) 190 + { 191 + CONCATENATE(trace_error_env_, MONITOR_NAME)(curr_state, event, env); 192 + } 193 + #elif RV_MON_TYPE == RV_MON_PER_TASK || RV_MON_TYPE == RV_MON_PER_OBJ 194 + 195 + #define ha_get_target(ha_mon) da_get_target(&ha_mon->da_mon) 196 + 197 + static inline void ha_trace_error_env(struct ha_monitor 
*ha_mon, 198 + char *curr_state, char *event, char *env, 199 + da_id_type id) 200 + { 201 + CONCATENATE(trace_error_env_, MONITOR_NAME)(id, curr_state, event, env); 202 + } 203 + #endif /* RV_MON_TYPE */ 204 + 205 + /* 206 + * ha_get_monitor - return the current monitor 207 + */ 208 + #define ha_get_monitor(...) to_ha_monitor(da_get_monitor(__VA_ARGS__)) 209 + 210 + /* 211 + * ha_monitor_handle_constraint - handle the constraint on the current transition 212 + * 213 + * If the monitor implementation defines a constraint in the transition from 214 + * curr_state to event, react and trace appropriately as well as return false. 215 + * This function is called from the hook in the DA event handle function and 216 + * triggers a failure in the monitor. 217 + */ 218 + static bool ha_monitor_handle_constraint(struct da_monitor *da_mon, 219 + enum states curr_state, 220 + enum events event, 221 + enum states next_state, 222 + da_id_type id) 223 + { 224 + struct ha_monitor *ha_mon = to_ha_monitor(da_mon); 225 + u64 time_ns = ha_get_ns(); 226 + DECLARE_SEQ_BUF(env_string, ENV_BUFFER_SIZE); 227 + 228 + if (ha_verify_constraint(ha_mon, curr_state, event, next_state, time_ns)) 229 + return true; 230 + 231 + ha_get_env_string(&env_string, ha_mon, time_ns); 232 + ha_react(curr_state, event, env_string.buffer); 233 + ha_trace_error_env(ha_mon, 234 + model_get_state_name(curr_state), 235 + model_get_event_name(event), 236 + env_string.buffer, id); 237 + return false; 238 + } 239 + 240 + static inline void __ha_monitor_timer_callback(struct ha_monitor *ha_mon) 241 + { 242 + enum states curr_state = READ_ONCE(ha_mon->da_mon.curr_state); 243 + DECLARE_SEQ_BUF(env_string, ENV_BUFFER_SIZE); 244 + u64 time_ns = ha_get_ns(); 245 + 246 + ha_get_env_string(&env_string, ha_mon, time_ns); 247 + ha_react(curr_state, EVENT_NONE, env_string.buffer); 248 + ha_trace_error_env(ha_mon, model_get_state_name(curr_state), 249 + EVENT_NONE_LBL, env_string.buffer, 250 + da_get_id(&ha_mon->da_mon)); 251 + 
252 + da_monitor_reset(&ha_mon->da_mon); 253 + } 254 + 255 + /* 256 + * The clock variables have 2 different representations in the env_store: 257 + * - The guard representation is the timestamp of the last reset 258 + * - The invariant representation is the timestamp when the invariant expires 259 + * As the representations are incompatible, care must be taken when switching 260 + * between them: the invariant representation can only be entered when starting a 261 + * timer while the previous representation was guard (e.g. no other invariant 262 + * started since the last reset operation). 263 + * Likewise, switching from invariant to guard representation without a reset 264 + * can be done only by subtracting the exact value used to start the invariant. 265 + * 266 + * Reading the environment variable (ha_get_clk) also reflects this difference: 267 + * any reads in states that have an invariant return the (possibly negative) 268 + * time since expiration; other reads return the time since last reset. 
269 + */ 270 + 271 + /* 272 + * Helper functions for env variables describing clocks with ns granularity 273 + */ 274 + static inline u64 ha_get_clk_ns(struct ha_monitor *ha_mon, enum envs env, u64 time_ns) 275 + { 276 + return time_ns - READ_ONCE(ha_mon->env_store[env]); 277 + } 278 + static inline void ha_reset_clk_ns(struct ha_monitor *ha_mon, enum envs env, u64 time_ns) 279 + { 280 + WRITE_ONCE(ha_mon->env_store[env], time_ns); 281 + } 282 + static inline void ha_set_invariant_ns(struct ha_monitor *ha_mon, enum envs env, 283 + u64 value, u64 time_ns) 284 + { 285 + WRITE_ONCE(ha_mon->env_store[env], time_ns + value); 286 + } 287 + static inline bool ha_check_invariant_ns(struct ha_monitor *ha_mon, 288 + enum envs env, u64 time_ns) 289 + { 290 + return READ_ONCE(ha_mon->env_store[env]) >= time_ns; 291 + } 292 + /* 293 + * ha_invariant_passed_ns - prepare the invariant and return the time since reset 294 + */ 295 + static inline u64 ha_invariant_passed_ns(struct ha_monitor *ha_mon, enum envs env, 296 + u64 expire, u64 time_ns) 297 + { 298 + u64 passed = 0; 299 + 300 + if (env < 0 || env >= ENV_MAX_STORED) 301 + return 0; 302 + if (ha_monitor_env_invalid(ha_mon, env)) 303 + return 0; 304 + passed = ha_get_env(ha_mon, env, time_ns); 305 + ha_set_invariant_ns(ha_mon, env, expire - passed, time_ns); 306 + return passed; 307 + } 308 + 309 + /* 310 + * Helper functions for env variables describing clocks with jiffy granularity 311 + */ 312 + static inline u64 ha_get_clk_jiffy(struct ha_monitor *ha_mon, enum envs env) 313 + { 314 + return get_jiffies_64() - READ_ONCE(ha_mon->env_store[env]); 315 + } 316 + static inline void ha_reset_clk_jiffy(struct ha_monitor *ha_mon, enum envs env) 317 + { 318 + WRITE_ONCE(ha_mon->env_store[env], get_jiffies_64()); 319 + } 320 + static inline void ha_set_invariant_jiffy(struct ha_monitor *ha_mon, 321 + enum envs env, u64 value) 322 + { 323 + WRITE_ONCE(ha_mon->env_store[env], get_jiffies_64() + value); 324 + } 325 + static inline bool 
ha_check_invariant_jiffy(struct ha_monitor *ha_mon, 326 + enum envs env, u64 time_ns) 327 + { 328 + return time_after64(READ_ONCE(ha_mon->env_store[env]), get_jiffies_64()); 329 + 330 + } 331 + /* 332 + * ha_invariant_passed_jiffy - prepare the invariant and return the time since reset 333 + */ 334 + static inline u64 ha_invariant_passed_jiffy(struct ha_monitor *ha_mon, enum envs env, 335 + u64 expire, u64 time_ns) 336 + { 337 + u64 passed = 0; 338 + 339 + if (env < 0 || env >= ENV_MAX_STORED) 340 + return 0; 341 + if (ha_monitor_env_invalid(ha_mon, env)) 342 + return 0; 343 + passed = ha_get_env(ha_mon, env, time_ns); 344 + ha_set_invariant_jiffy(ha_mon, env, expire - passed); 345 + return passed; 346 + } 347 + 348 + /* 349 + * Retrieve the last reset time (guard representation) from the invariant 350 + * representation (expiration). 351 + * It is the caller's responsibility to make sure the storage was actually in the 352 + * invariant representation (e.g. the current state has an invariant). 353 + * The provided value must be the same one used when starting the invariant. 354 + * 355 + * This function's access to the storage is NOT atomic, given how rarely 356 + * it is used. If a monitor allows writes concurrent with this, other 357 + * things are likely broken and the model needs rethinking or additional locking. 358 + */ 359 + static inline void ha_inv_to_guard(struct ha_monitor *ha_mon, enum envs env, 360 + u64 value, u64 time_ns) 361 + { 362 + WRITE_ONCE(ha_mon->env_store[env], READ_ONCE(ha_mon->env_store[env]) - value); 363 + } 364 + 365 + #if HA_TIMER_TYPE == HA_TIMER_WHEEL 366 + /* 367 + * Helper functions to handle the monitor timer. 368 + * Not all monitors require a timer; in that case the timer will be set up but 369 + * never armed. 370 + * Timers start since the last reset of the supplied env or from now if env is 371 + * not an environment variable. If env was not initialised, no timer starts. 
372 + * Timers can expire on any CPU unless the monitor is per-cpu, 373 + * in which case we assume every event occurs on the local CPU. 374 + */ 375 + static void ha_monitor_timer_callback(struct timer_list *timer) 376 + { 377 + struct ha_monitor *ha_mon = container_of(timer, struct ha_monitor, timer); 378 + 379 + __ha_monitor_timer_callback(ha_mon); 380 + } 381 + static inline void ha_setup_timer(struct ha_monitor *ha_mon) 382 + { 383 + int mode = 0; 384 + 385 + if (RV_MON_TYPE == RV_MON_PER_CPU) 386 + mode |= TIMER_PINNED; 387 + timer_setup(&ha_mon->timer, ha_monitor_timer_callback, mode); 388 + } 389 + static inline void ha_start_timer_jiffy(struct ha_monitor *ha_mon, enum envs env, 390 + u64 expire, u64 time_ns) 391 + { 392 + u64 passed = ha_invariant_passed_jiffy(ha_mon, env, expire, time_ns); 393 + 394 + mod_timer(&ha_mon->timer, get_jiffies_64() + expire - passed); 395 + } 396 + static inline void ha_start_timer_ns(struct ha_monitor *ha_mon, enum envs env, 397 + u64 expire, u64 time_ns) 398 + { 399 + u64 passed = ha_invariant_passed_ns(ha_mon, env, expire, time_ns); 400 + 401 + ha_start_timer_jiffy(ha_mon, ENV_MAX_STORED, 402 + nsecs_to_jiffies(expire - passed + TICK_NSEC - 1), time_ns); 403 + } 404 + /* 405 + * ha_cancel_timer - Cancel the timer 406 + * 407 + * Returns: 408 + * * 1 when the timer was active 409 + * * 0 when the timer was not active or running a callback 410 + */ 411 + static inline bool ha_cancel_timer(struct ha_monitor *ha_mon) 412 + { 413 + return timer_delete(&ha_mon->timer); 414 + } 415 + #elif HA_TIMER_TYPE == HA_TIMER_HRTIMER 416 + /* 417 + * Helper functions to handle the monitor timer. 418 + * Not all monitors require a timer; in that case the timer will be set up but 419 + * never armed. 420 + * Timers start since the last reset of the supplied env or from now if env is 421 + * not an environment variable. If env was not initialised, no timer starts. 
422 + * Timers can expire on any CPU unless the monitor is per-cpu, 423 + * in which case we assume every event occurs on the local CPU. 424 + */ 425 + static enum hrtimer_restart ha_monitor_timer_callback(struct hrtimer *hrtimer) 426 + { 427 + struct ha_monitor *ha_mon = container_of(hrtimer, struct ha_monitor, hrtimer); 428 + 429 + __ha_monitor_timer_callback(ha_mon); 430 + return HRTIMER_NORESTART; 431 + } 432 + static inline void ha_setup_timer(struct ha_monitor *ha_mon) 433 + { 434 + hrtimer_setup(&ha_mon->hrtimer, ha_monitor_timer_callback, 435 + CLOCK_MONOTONIC, HRTIMER_MODE_REL_HARD); 436 + } 437 + static inline void ha_start_timer_ns(struct ha_monitor *ha_mon, enum envs env, 438 + u64 expire, u64 time_ns) 439 + { 440 + int mode = HRTIMER_MODE_REL_HARD; 441 + u64 passed = ha_invariant_passed_ns(ha_mon, env, expire, time_ns); 442 + 443 + if (RV_MON_TYPE == RV_MON_PER_CPU) 444 + mode |= HRTIMER_MODE_PINNED; 445 + hrtimer_start(&ha_mon->hrtimer, ns_to_ktime(expire - passed), mode); 446 + } 447 + static inline void ha_start_timer_jiffy(struct ha_monitor *ha_mon, enum envs env, 448 + u64 expire, u64 time_ns) 449 + { 450 + u64 passed = ha_invariant_passed_jiffy(ha_mon, env, expire, time_ns); 451 + 452 + ha_start_timer_ns(ha_mon, ENV_MAX_STORED, 453 + jiffies_to_nsecs(expire - passed), time_ns); 454 + } 455 + /* 456 + * ha_cancel_timer - Cancel the timer 457 + * 458 + * Returns: 459 + * * 1 when the timer was active 460 + * * 0 when the timer was not active or running a callback 461 + */ 462 + static inline bool ha_cancel_timer(struct ha_monitor *ha_mon) 463 + { 464 + return hrtimer_try_to_cancel(&ha_mon->hrtimer) == 1; 465 + } 466 + #else /* HA_TIMER_NONE */ 467 + /* 468 + * Start function is intentionally not defined; monitors using timers must 469 + * set HA_TIMER_TYPE to either HA_TIMER_WHEEL or HA_TIMER_HRTIMER. 
470 + */ 471 + static inline void ha_setup_timer(struct ha_monitor *ha_mon) { } 472 + static inline bool ha_cancel_timer(struct ha_monitor *ha_mon) 473 + { 474 + return false; 475 + } 476 + #endif 477 + 478 + #endif
+26
include/trace/events/sched.h
··· 896 896 TP_PROTO(struct task_struct *tsk, int cpu, int tif), 897 897 TP_ARGS(tsk, cpu, tif)); 898 898 899 + #define DL_OTHER 0 900 + #define DL_TASK 1 901 + #define DL_SERVER_FAIR 2 902 + #define DL_SERVER_EXT 3 903 + 904 + DECLARE_TRACE(sched_dl_throttle, 905 + TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type), 906 + TP_ARGS(dl_se, cpu, type)); 907 + 908 + DECLARE_TRACE(sched_dl_replenish, 909 + TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type), 910 + TP_ARGS(dl_se, cpu, type)); 911 + 912 + /* Call to update_curr_dl_se not involving throttle or replenish */ 913 + DECLARE_TRACE(sched_dl_update, 914 + TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type), 915 + TP_ARGS(dl_se, cpu, type)); 916 + 917 + DECLARE_TRACE(sched_dl_server_start, 918 + TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type), 919 + TP_ARGS(dl_se, cpu, type)); 920 + 921 + DECLARE_TRACE(sched_dl_server_stop, 922 + TP_PROTO(struct sched_dl_entity *dl_se, int cpu, u8 type), 923 + TP_ARGS(dl_se, cpu, type)); 924 + 899 925 #endif /* _TRACE_SCHED_H */ 900 926 901 927 /* This part must be outside protection */
+5
kernel/sched/core.c
··· 122 122 EXPORT_TRACEPOINT_SYMBOL_GPL(sched_entry_tp); 123 123 EXPORT_TRACEPOINT_SYMBOL_GPL(sched_exit_tp); 124 124 EXPORT_TRACEPOINT_SYMBOL_GPL(sched_set_need_resched_tp); 125 + EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_throttle_tp); 126 + EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_replenish_tp); 127 + EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_update_tp); 128 + EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_server_start_tp); 129 + EXPORT_TRACEPOINT_SYMBOL_GPL(sched_dl_server_stop_tp); 125 130 126 131 DEFINE_PER_CPU_SHARED_ALIGNED(struct rq, runqueues); 127 132 DEFINE_PER_CPU(struct rnd_state, sched_rnd_state);
+24 -27
kernel/sched/deadline.c
··· 18 18 19 19 #include <linux/cpuset.h> 20 20 #include <linux/sched/clock.h> 21 + #include <linux/sched/deadline.h> 21 22 #include <uapi/linux/sched/types.h> 22 23 #include "sched.h" 23 24 #include "pelt.h" ··· 57 56 } 58 57 late_initcall(sched_dl_sysctl_init); 59 58 #endif /* CONFIG_SYSCTL */ 60 - 61 - static bool dl_server(struct sched_dl_entity *dl_se) 62 - { 63 - return dl_se->dl_server; 64 - } 65 - 66 - static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se) 67 - { 68 - BUG_ON(dl_server(dl_se)); 69 - return container_of(dl_se, struct task_struct, dl); 70 - } 71 59 72 60 static inline struct rq *rq_of_dl_rq(struct dl_rq *dl_rq) 73 61 { ··· 104 114 return false; 105 115 } 106 116 #endif /* !CONFIG_RT_MUTEXES */ 117 + 118 + static inline u8 dl_get_type(struct sched_dl_entity *dl_se, struct rq *rq) 119 + { 120 + if (!dl_server(dl_se)) 121 + return DL_TASK; 122 + if (dl_se == &rq->fair_server) 123 + return DL_SERVER_FAIR; 124 + #ifdef CONFIG_SCHED_CLASS_EXT 125 + if (dl_se == &rq->ext_server) 126 + return DL_SERVER_EXT; 127 + #endif 128 + return DL_OTHER; 129 + } 107 130 108 131 static inline struct dl_bw *dl_bw_of(int i) 109 132 { ··· 736 733 dl_se->dl_throttled = 1; 737 734 dl_se->dl_defer_armed = 1; 738 735 } 736 + trace_sched_dl_replenish_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq)); 739 737 } 740 738 741 739 /* ··· 851 847 dl_se->dl_yielded = 0; 852 848 if (dl_se->dl_throttled) 853 849 dl_se->dl_throttled = 0; 850 + 851 + trace_sched_dl_replenish_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq)); 854 852 855 853 /* 856 854 * If this is the replenishment of a deferred reservation, ··· 978 972 WARN_ON(dl_time_before(dl_se->deadline, rq_clock(rq))); 979 973 980 974 dl_se->runtime = (dl_se->dl_density * laxity) >> BW_SHIFT; 981 - } 982 - 983 - /* 984 - * Regarding the deadline, a task with implicit deadline has a relative 985 - * deadline == relative period. A task with constrained deadline has a 986 - * relative deadline <= relative period. 
987 - * 988 - * We support constrained deadline tasks. However, there are some restrictions 989 - * applied only for tasks which do not have an implicit deadline. See 990 - * update_dl_entity() to know more about such restrictions. 991 - * 992 - * The dl_is_implicit() returns true if the task has an implicit deadline. 993 - */ 994 - static inline bool dl_is_implicit(struct sched_dl_entity *dl_se) 995 - { 996 - return dl_se->dl_deadline == dl_se->dl_period; 997 975 } 998 976 999 977 /* ··· 1335 1345 dl_time_before(rq_clock(rq), dl_next_period(dl_se))) { 1336 1346 if (unlikely(is_dl_boosted(dl_se) || !start_dl_timer(dl_se))) 1337 1347 return; 1348 + trace_sched_dl_throttle_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq)); 1338 1349 dl_se->dl_throttled = 1; 1339 1350 if (dl_se->runtime > 0) 1340 1351 dl_se->runtime = 0; ··· 1499 1508 1500 1509 throttle: 1501 1510 if (dl_runtime_exceeded(dl_se) || dl_se->dl_yielded) { 1511 + trace_sched_dl_throttle_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq)); 1502 1512 dl_se->dl_throttled = 1; 1503 1513 1504 1514 /* If requested, inform the user about runtime overruns. 
*/ ··· 1524 1532 1525 1533 if (!is_leftmost(dl_se, &rq->dl)) 1526 1534 resched_curr(rq); 1535 + } else { 1536 + trace_sched_dl_update_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq)); 1527 1537 } 1528 1538 1529 1539 /* ··· 1804 1810 if (WARN_ON_ONCE(!cpu_online(cpu_of(rq)))) 1805 1811 return; 1806 1812 1813 + trace_sched_dl_server_start_tp(dl_se, cpu_of(rq), dl_get_type(dl_se, rq)); 1807 1814 dl_se->dl_server_active = 1; 1808 1815 enqueue_dl_entity(dl_se, ENQUEUE_WAKEUP); 1809 1816 if (!dl_task(dl_se->rq->curr) || dl_entity_preempt(dl_se, &rq->curr->dl)) ··· 1816 1821 if (!dl_server(dl_se) || !dl_server_active(dl_se)) 1817 1822 return; 1818 1823 1824 + trace_sched_dl_server_stop_tp(dl_se, cpu_of(dl_se->rq), 1825 + dl_get_type(dl_se, dl_se->rq)); 1819 1826 dequeue_dl_entity(dl_se, DEQUEUE_SLEEP); 1820 1827 hrtimer_try_to_cancel(&dl_se->dl_timer); 1821 1828 dl_se->dl_defer_armed = 0;
+18
kernel/trace/rv/Kconfig
··· 23 23 config RV_LTL_MONITOR 24 24 bool 25 25 26 + config RV_HA_MONITOR 27 + bool 28 + 29 + config HA_MON_EVENTS_IMPLICIT 30 + select DA_MON_EVENTS_IMPLICIT 31 + select RV_HA_MONITOR 32 + bool 33 + 34 + config HA_MON_EVENTS_ID 35 + select DA_MON_EVENTS_ID 36 + select RV_HA_MONITOR 37 + bool 38 + 26 39 menuconfig RV 27 40 bool "Runtime Verification" 28 41 select TRACING ··· 77 64 source "kernel/trace/rv/monitors/pagefault/Kconfig" 78 65 source "kernel/trace/rv/monitors/sleep/Kconfig" 79 66 # Add new rtapp monitors here 67 + 68 + source "kernel/trace/rv/monitors/stall/Kconfig" 69 + source "kernel/trace/rv/monitors/deadline/Kconfig" 70 + source "kernel/trace/rv/monitors/nomiss/Kconfig" 71 + # Add new deadline monitors here 80 72 81 73 # Add new monitors here 82 74
+3
kernel/trace/rv/Makefile
··· 17 17 obj-$(CONFIG_RV_MON_NRP) += monitors/nrp/nrp.o 18 18 obj-$(CONFIG_RV_MON_SSSW) += monitors/sssw/sssw.o 19 19 obj-$(CONFIG_RV_MON_OPID) += monitors/opid/opid.o 20 + obj-$(CONFIG_RV_MON_STALL) += monitors/stall/stall.o 21 + obj-$(CONFIG_RV_MON_DEADLINE) += monitors/deadline/deadline.o 22 + obj-$(CONFIG_RV_MON_NOMISS) += monitors/nomiss/nomiss.o 20 23 # Add new monitors here 21 24 obj-$(CONFIG_RV_REACTORS) += rv_reactors.o 22 25 obj-$(CONFIG_RV_REACT_PRINTK) += reactor_printk.o
+10
kernel/trace/rv/monitors/deadline/Kconfig
··· 1 + config RV_MON_DEADLINE 2 + depends on RV 3 + bool "deadline monitor" 4 + help 5 + Collection of monitors to check that the deadline scheduler and servers 6 + behave according to specifications. Enable this to enable all 7 + scheduler specifications supported by the current kernel. 8 + 9 + For further information, see: 10 + Documentation/trace/rv/monitor_deadline.rst
+44
kernel/trace/rv/monitors/deadline/deadline.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/kernel.h> 3 + #include <linux/module.h> 4 + #include <linux/init.h> 5 + #include <linux/rv.h> 6 + #include <linux/kallsyms.h> 7 + 8 + #define MODULE_NAME "deadline" 9 + 10 + #include "deadline.h" 11 + 12 + struct rv_monitor rv_deadline = { 13 + .name = "deadline", 14 + .description = "container for several deadline scheduler specifications.", 15 + .enable = NULL, 16 + .disable = NULL, 17 + .reset = NULL, 18 + .enabled = 0, 19 + }; 20 + 21 + /* Used by other monitors */ 22 + struct sched_class *rv_ext_sched_class; 23 + 24 + static int __init register_deadline(void) 25 + { 26 + if (IS_ENABLED(CONFIG_SCHED_CLASS_EXT)) { 27 + rv_ext_sched_class = (void *)kallsyms_lookup_name("ext_sched_class"); 28 + if (!rv_ext_sched_class) 29 + pr_warn("rv: Missing ext_sched_class, monitors may not work.\n"); 30 + } 31 + return rv_register_monitor(&rv_deadline, NULL); 32 + } 33 + 34 + static void __exit unregister_deadline(void) 35 + { 36 + rv_unregister_monitor(&rv_deadline); 37 + } 38 + 39 + module_init(register_deadline); 40 + module_exit(unregister_deadline); 41 + 42 + MODULE_LICENSE("GPL"); 43 + MODULE_AUTHOR("Gabriele Monaco <gmonaco@redhat.com>"); 44 + MODULE_DESCRIPTION("deadline: container for several deadline scheduler specifications.");
+202
kernel/trace/rv/monitors/deadline/deadline.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #include <linux/kernel.h> 4 + #include <linux/uaccess.h> 5 + #include <linux/sched/deadline.h> 6 + #include <asm/syscall.h> 7 + #include <uapi/linux/sched/types.h> 8 + #include <trace/events/sched.h> 9 + 10 + /* 11 + * Dummy values if not available 12 + */ 13 + #ifndef __NR_sched_setscheduler 14 + #define __NR_sched_setscheduler -__COUNTER__ 15 + #endif 16 + #ifndef __NR_sched_setattr 17 + #define __NR_sched_setattr -__COUNTER__ 18 + #endif 19 + 20 + extern struct rv_monitor rv_deadline; 21 + /* Initialised when registering the deadline container */ 22 + extern struct sched_class *rv_ext_sched_class; 23 + 24 + /* 25 + * If both have dummy values, the syscalls are not supported and we don't even 26 + * need to register the handler. 27 + */ 28 + static inline bool should_skip_syscall_handle(void) 29 + { 30 + return __NR_sched_setattr < 0 && __NR_sched_setscheduler < 0; 31 + } 32 + 33 + /* 34 + * is_supported_type - return true if @type is supported by the deadline monitors 35 + */ 36 + static inline bool is_supported_type(u8 type) 37 + { 38 + return type == DL_TASK || type == DL_SERVER_FAIR || type == DL_SERVER_EXT; 39 + } 40 + 41 + /* 42 + * is_server_type - return true if @type is a supported server 43 + */ 44 + static inline bool is_server_type(u8 type) 45 + { 46 + return is_supported_type(type) && type != DL_TASK; 47 + } 48 + 49 + /* 50 + * Use negative numbers for the servers. 51 + * Currently there is only one fair server per CPU; this may change in the future. 52 + */ 53 + #define fair_server_id(cpu) (-cpu) 54 + #define ext_server_id(cpu) (-cpu - num_possible_cpus()) 55 + #define NO_SERVER_ID (-2 * num_possible_cpus()) 56 + /* 57 + * Get a unique id used for dl entities 58 + * 59 + * The cpu is not required for tasks as the pid is used there; if this function 60 + * is called on a dl_se that is known to correspond to a task, DL_TASK can be 61 + * used in place of cpu. 
62 + * We need the cpu for servers as it is provided in the tracepoint and we 63 + * cannot easily retrieve it from the dl_se (requires the struct rq definition). 64 + */ 65 + static inline int get_entity_id(struct sched_dl_entity *dl_se, int cpu, u8 type) 66 + { 67 + if (dl_server(dl_se) && type != DL_TASK) { 68 + if (type == DL_SERVER_FAIR) 69 + return fair_server_id(cpu); 70 + if (type == DL_SERVER_EXT) 71 + return ext_server_id(cpu); 72 + return NO_SERVER_ID; 73 + } 74 + return dl_task_of(dl_se)->pid; 75 + } 76 + 77 + static inline bool task_is_scx_enabled(struct task_struct *tsk) 78 + { 79 + return IS_ENABLED(CONFIG_SCHED_CLASS_EXT) && 80 + tsk->sched_class == rv_ext_sched_class; 81 + } 82 + 83 + /* Expand id and target as arguments for da functions */ 84 + #define EXPAND_ID(dl_se, cpu, type) get_entity_id(dl_se, cpu, type), dl_se 85 + #define EXPAND_ID_TASK(tsk) get_entity_id(&tsk->dl, task_cpu(tsk), DL_TASK), &tsk->dl 86 + 87 + static inline u8 get_server_type(struct task_struct *tsk) 88 + { 89 + if (tsk->policy == SCHED_NORMAL || tsk->policy == SCHED_EXT || 90 + tsk->policy == SCHED_BATCH || tsk->policy == SCHED_IDLE) 91 + return task_is_scx_enabled(tsk) ? 
DL_SERVER_EXT : DL_SERVER_FAIR; 92 + return DL_OTHER; 93 + } 94 + 95 + static inline int extract_params(struct pt_regs *regs, long id, pid_t *pid_out) 96 + { 97 + size_t size = offsetofend(struct sched_attr, sched_flags); 98 + struct sched_attr __user *uattr, attr; 99 + int new_policy = -1, ret; 100 + unsigned long args[6]; 101 + 102 + switch (id) { 103 + case __NR_sched_setscheduler: 104 + syscall_get_arguments(current, regs, args); 105 + *pid_out = args[0]; 106 + new_policy = args[1]; 107 + break; 108 + case __NR_sched_setattr: 109 + syscall_get_arguments(current, regs, args); 110 + *pid_out = args[0]; 111 + uattr = (struct sched_attr __user *)args[1]; 112 + /* 113 + * Just copy up to sched_flags; we are not interested in anything after that 114 + */ 115 + ret = copy_struct_from_user(&attr, size, uattr, size); 116 + if (ret) 117 + return ret; 118 + if (attr.sched_flags & SCHED_FLAG_KEEP_POLICY) 119 + return -EINVAL; 120 + new_policy = attr.sched_policy; 121 + break; 122 + default: 123 + return -EINVAL; 124 + } 125 + 126 + return new_policy & ~SCHED_RESET_ON_FORK; 127 + } 128 + 129 + /* Helper functions requiring DA/HA utilities */ 130 + #ifdef RV_MON_TYPE 131 + 132 + /* 133 + * get_server - get the server of the given type associated with a task 134 + * 135 + * If the task is a boosted task, the server is available in the task_struct; 136 + * otherwise grab the dl entity saved for the CPU where the task is enqueued. 137 + * This function assumes the task is enqueued somewhere. 138 + */ 139 + static inline struct sched_dl_entity *get_server(struct task_struct *tsk, u8 type) 140 + { 141 + if (tsk->dl_server && get_server_type(tsk) == type) 142 + return tsk->dl_server; 143 + if (type == DL_SERVER_FAIR) 144 + return da_get_target_by_id(fair_server_id(task_cpu(tsk))); 145 + if (type == DL_SERVER_EXT) 146 + return da_get_target_by_id(ext_server_id(task_cpu(tsk))); 147 + return NULL; 148 + } 149 + 150 + /* 151 + * Initialise monitors for all tasks and pre-allocate the storage for servers. 
152 + * This is necessary since we don't have access to the servers here and 153 + * allocation can cause deadlocks from their tracepoints. We can only fill 154 + * pre-initialised storage from there. 155 + */ 156 + static inline int init_storage(bool skip_tasks) 157 + { 158 + struct task_struct *g, *p; 159 + int cpu; 160 + 161 + for_each_possible_cpu(cpu) { 162 + if (!da_create_empty_storage(fair_server_id(cpu))) 163 + goto fail; 164 + if (IS_ENABLED(CONFIG_SCHED_CLASS_EXT) && 165 + !da_create_empty_storage(ext_server_id(cpu))) 166 + goto fail; 167 + } 168 + 169 + if (skip_tasks) 170 + return 0; 171 + 172 + read_lock(&tasklist_lock); 173 + for_each_process_thread(g, p) { 174 + if (p->policy == SCHED_DEADLINE) { 175 + if (!da_create_storage(EXPAND_ID_TASK(p), NULL)) { 176 + read_unlock(&tasklist_lock); 177 + goto fail; 178 + } 179 + } 180 + } 181 + read_unlock(&tasklist_lock); 182 + return 0; 183 + 184 + fail: 185 + da_monitor_destroy(); 186 + return -ENOMEM; 187 + } 188 + 189 + static void __maybe_unused handle_newtask(void *data, struct task_struct *task, u64 flags) 190 + { 191 + /* Might be superfluous as tasks are not started with this policy. */ 192 + if (task->policy == SCHED_DEADLINE) 193 + da_create_storage(EXPAND_ID_TASK(task), NULL); 194 + } 195 + 196 + static void __maybe_unused handle_exit(void *data, struct task_struct *p, bool group_dead) 197 + { 198 + if (p->policy == SCHED_DEADLINE) 199 + da_destroy_storage(get_entity_id(&p->dl, DL_TASK, DL_TASK)); 200 + } 201 + 202 + #endif
+15
kernel/trace/rv/monitors/nomiss/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + # 3 + config RV_MON_NOMISS 4 + depends on RV 5 + depends on HAVE_SYSCALL_TRACEPOINTS 6 + depends on RV_MON_DEADLINE 7 + default y 8 + select HA_MON_EVENTS_ID 9 + bool "nomiss monitor" 10 + help 11 + Monitor to ensure dl entities run to completion before their deadline. 12 + This monitor is part of the deadline monitors collection. 13 + 14 + For further information, see: 15 + Documentation/trace/rv/monitor_deadline.rst
+293
kernel/trace/rv/monitors/nomiss/nomiss.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + #include <linux/ftrace.h> 3 + #include <linux/tracepoint.h> 4 + #include <linux/kernel.h> 5 + #include <linux/module.h> 6 + #include <linux/init.h> 7 + #include <linux/rv.h> 8 + #include <rv/instrumentation.h> 9 + 10 + #define MODULE_NAME "nomiss" 11 + 12 + #include <uapi/linux/sched/types.h> 13 + #include <trace/events/syscalls.h> 14 + #include <trace/events/sched.h> 15 + #include <trace/events/task.h> 16 + #include <rv_trace.h> 17 + 18 + #define RV_MON_TYPE RV_MON_PER_OBJ 19 + #define HA_TIMER_TYPE HA_TIMER_WHEEL 20 + /* The start condition is on sched_switch; it's dangerous to allocate there */ 21 + #define DA_SKIP_AUTO_ALLOC 22 + typedef struct sched_dl_entity *monitor_target; 23 + #include "nomiss.h" 24 + #include <rv/ha_monitor.h> 25 + #include <monitors/deadline/deadline.h> 26 + 27 + /* 28 + * User-configurable deadline threshold. If the total utilisation of deadline 29 + * tasks is larger than 1, they are only guaranteed bounded tardiness. See 30 + * Documentation/scheduler/sched-deadline.rst for more details. 31 + * The minimum tardiness without sched_feat(HRTICK_DL) is 1 tick, to account 32 + * for the throttle enforced on the next tick. 
33 + */ 34 + static u64 deadline_thresh = TICK_NSEC; 35 + module_param(deadline_thresh, ullong, 0644); 36 + #define DEADLINE_NS(ha_mon) (ha_get_target(ha_mon)->dl_deadline + deadline_thresh) 37 + 38 + static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs_nomiss env, u64 time_ns) 39 + { 40 + if (env == clk_nomiss) 41 + return ha_get_clk_ns(ha_mon, env, time_ns); 42 + else if (env == is_constr_dl_nomiss) 43 + return !dl_is_implicit(ha_get_target(ha_mon)); 44 + else if (env == is_defer_nomiss) 45 + return ha_get_target(ha_mon)->dl_defer; 46 + return ENV_INVALID_VALUE; 47 + } 48 + 49 + static void ha_reset_env(struct ha_monitor *ha_mon, enum envs_nomiss env, u64 time_ns) 50 + { 51 + if (env == clk_nomiss) 52 + ha_reset_clk_ns(ha_mon, env, time_ns); 53 + } 54 + 55 + static inline bool ha_verify_invariants(struct ha_monitor *ha_mon, 56 + enum states curr_state, enum events event, 57 + enum states next_state, u64 time_ns) 58 + { 59 + if (curr_state == ready_nomiss) 60 + return ha_check_invariant_ns(ha_mon, clk_nomiss, time_ns); 61 + else if (curr_state == running_nomiss) 62 + return ha_check_invariant_ns(ha_mon, clk_nomiss, time_ns); 63 + return true; 64 + } 65 + 66 + static inline void ha_convert_inv_guard(struct ha_monitor *ha_mon, 67 + enum states curr_state, enum events event, 68 + enum states next_state, u64 time_ns) 69 + { 70 + if (curr_state == next_state) 71 + return; 72 + if (curr_state == ready_nomiss) 73 + ha_inv_to_guard(ha_mon, clk_nomiss, DEADLINE_NS(ha_mon), time_ns); 74 + else if (curr_state == running_nomiss) 75 + ha_inv_to_guard(ha_mon, clk_nomiss, DEADLINE_NS(ha_mon), time_ns); 76 + } 77 + 78 + static inline bool ha_verify_guards(struct ha_monitor *ha_mon, 79 + enum states curr_state, enum events event, 80 + enum states next_state, u64 time_ns) 81 + { 82 + bool res = true; 83 + 84 + if (curr_state == ready_nomiss && event == dl_replenish_nomiss) 85 + ha_reset_env(ha_mon, clk_nomiss, time_ns); 86 + else if (curr_state == ready_nomiss && event == 
dl_throttle_nomiss) 87 + res = ha_get_env(ha_mon, is_defer_nomiss, time_ns) == 1ull; 88 + else if (curr_state == idle_nomiss && event == dl_replenish_nomiss) 89 + ha_reset_env(ha_mon, clk_nomiss, time_ns); 90 + else if (curr_state == running_nomiss && event == dl_replenish_nomiss) 91 + ha_reset_env(ha_mon, clk_nomiss, time_ns); 92 + else if (curr_state == sleeping_nomiss && event == dl_replenish_nomiss) 93 + ha_reset_env(ha_mon, clk_nomiss, time_ns); 94 + else if (curr_state == sleeping_nomiss && event == dl_throttle_nomiss) 95 + res = ha_get_env(ha_mon, is_constr_dl_nomiss, time_ns) == 1ull || 96 + ha_get_env(ha_mon, is_defer_nomiss, time_ns) == 1ull; 97 + else if (curr_state == throttled_nomiss && event == dl_replenish_nomiss) 98 + ha_reset_env(ha_mon, clk_nomiss, time_ns); 99 + return res; 100 + } 101 + 102 + static inline void ha_setup_invariants(struct ha_monitor *ha_mon, 103 + enum states curr_state, enum events event, 104 + enum states next_state, u64 time_ns) 105 + { 106 + if (next_state == curr_state && event != dl_replenish_nomiss) 107 + return; 108 + if (next_state == ready_nomiss) 109 + ha_start_timer_ns(ha_mon, clk_nomiss, DEADLINE_NS(ha_mon), time_ns); 110 + else if (next_state == running_nomiss) 111 + ha_start_timer_ns(ha_mon, clk_nomiss, DEADLINE_NS(ha_mon), time_ns); 112 + else if (curr_state == ready_nomiss) 113 + ha_cancel_timer(ha_mon); 114 + else if (curr_state == running_nomiss) 115 + ha_cancel_timer(ha_mon); 116 + } 117 + 118 + static bool ha_verify_constraint(struct ha_monitor *ha_mon, 119 + enum states curr_state, enum events event, 120 + enum states next_state, u64 time_ns) 121 + { 122 + if (!ha_verify_invariants(ha_mon, curr_state, event, next_state, time_ns)) 123 + return false; 124 + 125 + ha_convert_inv_guard(ha_mon, curr_state, event, next_state, time_ns); 126 + 127 + if (!ha_verify_guards(ha_mon, curr_state, event, next_state, time_ns)) 128 + return false; 129 + 130 + ha_setup_invariants(ha_mon, curr_state, event, next_state, 
time_ns); 131 + 132 + return true; 133 + } 134 + 135 + static void handle_dl_replenish(void *data, struct sched_dl_entity *dl_se, 136 + int cpu, u8 type) 137 + { 138 + if (is_supported_type(type)) 139 + da_handle_event(EXPAND_ID(dl_se, cpu, type), dl_replenish_nomiss); 140 + } 141 + 142 + static void handle_dl_throttle(void *data, struct sched_dl_entity *dl_se, 143 + int cpu, u8 type) 144 + { 145 + if (is_supported_type(type)) 146 + da_handle_event(EXPAND_ID(dl_se, cpu, type), dl_throttle_nomiss); 147 + } 148 + 149 + static void handle_dl_server_stop(void *data, struct sched_dl_entity *dl_se, 150 + int cpu, u8 type) 151 + { 152 + /* 153 + * This isn't the standard use of da_handle_start_run_event since this 154 + * event does not occur only from the initial state. 155 + * It is fine to use here because it always brings the monitor to a known 156 + * state, and the fact that we "pretend" the transition starts from the 157 + * initial state has no side effect. 158 + */ 159 + if (is_supported_type(type)) 160 + da_handle_start_run_event(EXPAND_ID(dl_se, cpu, type), dl_server_stop_nomiss); 161 + } 162 + 163 + static inline void handle_server_switch(struct task_struct *next, int cpu, u8 type) 164 + { 165 + struct sched_dl_entity *dl_se = get_server(next, type); 166 + 167 + if (dl_se && is_idle_task(next)) 168 + da_handle_event(EXPAND_ID(dl_se, cpu, type), dl_server_idle_nomiss); 169 + } 170 + 171 + static void handle_sched_switch(void *data, bool preempt, 172 + struct task_struct *prev, 173 + struct task_struct *next, 174 + unsigned int prev_state) 175 + { 176 + int cpu = task_cpu(next); 177 + 178 + if (prev_state != TASK_RUNNING && !preempt && prev->policy == SCHED_DEADLINE) 179 + da_handle_event(EXPAND_ID_TASK(prev), sched_switch_suspend_nomiss); 180 + if (next->policy == SCHED_DEADLINE) 181 + da_handle_start_run_event(EXPAND_ID_TASK(next), sched_switch_in_nomiss); 182 + 183 + /* 184 + * The server is available in next only if the next task is boosted, 185 + * otherwise we need to
retrieve it. 186 + * Here the server continues in the state running/armed until actually 187 + * stopped, this works since we continue expecting a throttle. 188 + */ 189 + if (next->dl_server) 190 + da_handle_start_event(EXPAND_ID(next->dl_server, cpu, 191 + get_server_type(next)), 192 + sched_switch_in_nomiss); 193 + else { 194 + handle_server_switch(next, cpu, DL_SERVER_FAIR); 195 + if (IS_ENABLED(CONFIG_SCHED_CLASS_EXT)) 196 + handle_server_switch(next, cpu, DL_SERVER_EXT); 197 + } 198 + } 199 + 200 + static void handle_sys_enter(void *data, struct pt_regs *regs, long id) 201 + { 202 + struct task_struct *p; 203 + int new_policy = -1; 204 + pid_t pid = 0; 205 + 206 + new_policy = extract_params(regs, id, &pid); 207 + if (new_policy < 0) 208 + return; 209 + guard(rcu)(); 210 + p = pid ? find_task_by_vpid(pid) : current; 211 + if (unlikely(!p) || new_policy == p->policy) 212 + return; 213 + 214 + if (p->policy == SCHED_DEADLINE) 215 + da_reset(EXPAND_ID_TASK(p)); 216 + else if (new_policy == SCHED_DEADLINE) 217 + da_create_or_get(EXPAND_ID_TASK(p)); 218 + } 219 + 220 + static void handle_sched_wakeup(void *data, struct task_struct *tsk) 221 + { 222 + if (tsk->policy == SCHED_DEADLINE) 223 + da_handle_event(EXPAND_ID_TASK(tsk), sched_wakeup_nomiss); 224 + } 225 + 226 + static int enable_nomiss(void) 227 + { 228 + int retval; 229 + 230 + retval = da_monitor_init(); 231 + if (retval) 232 + return retval; 233 + 234 + retval = init_storage(false); 235 + if (retval) 236 + return retval; 237 + rv_attach_trace_probe("nomiss", sched_dl_replenish_tp, handle_dl_replenish); 238 + rv_attach_trace_probe("nomiss", sched_dl_throttle_tp, handle_dl_throttle); 239 + rv_attach_trace_probe("nomiss", sched_dl_server_stop_tp, handle_dl_server_stop); 240 + rv_attach_trace_probe("nomiss", sched_switch, handle_sched_switch); 241 + rv_attach_trace_probe("nomiss", sched_wakeup, handle_sched_wakeup); 242 + if (!should_skip_syscall_handle()) 243 + rv_attach_trace_probe("nomiss", sys_enter, 
handle_sys_enter); 244 + rv_attach_trace_probe("nomiss", task_newtask, handle_newtask); 245 + rv_attach_trace_probe("nomiss", sched_process_exit, handle_exit); 246 + 247 + return 0; 248 + } 249 + 250 + static void disable_nomiss(void) 251 + { 252 + rv_this.enabled = 0; 253 + 254 + /* Those are RCU writers, detach earlier hoping to close a bit faster */ 255 + rv_detach_trace_probe("nomiss", task_newtask, handle_newtask); 256 + rv_detach_trace_probe("nomiss", sched_process_exit, handle_exit); 257 + if (!should_skip_syscall_handle()) 258 + rv_detach_trace_probe("nomiss", sys_enter, handle_sys_enter); 259 + 260 + rv_detach_trace_probe("nomiss", sched_dl_replenish_tp, handle_dl_replenish); 261 + rv_detach_trace_probe("nomiss", sched_dl_throttle_tp, handle_dl_throttle); 262 + rv_detach_trace_probe("nomiss", sched_dl_server_stop_tp, handle_dl_server_stop); 263 + rv_detach_trace_probe("nomiss", sched_switch, handle_sched_switch); 264 + rv_detach_trace_probe("nomiss", sched_wakeup, handle_sched_wakeup); 265 + 266 + da_monitor_destroy(); 267 + } 268 + 269 + static struct rv_monitor rv_this = { 270 + .name = "nomiss", 271 + .description = "dl entities run to completion before their deadline.", 272 + .enable = enable_nomiss, 273 + .disable = disable_nomiss, 274 + .reset = da_monitor_reset_all, 275 + .enabled = 0, 276 + }; 277 + 278 + static int __init register_nomiss(void) 279 + { 280 + return rv_register_monitor(&rv_this, &rv_deadline); 281 + } 282 + 283 + static void __exit unregister_nomiss(void) 284 + { 285 + rv_unregister_monitor(&rv_this); 286 + } 287 + 288 + module_init(register_nomiss); 289 + module_exit(unregister_nomiss); 290 + 291 + MODULE_LICENSE("GPL"); 292 + MODULE_AUTHOR("Gabriele Monaco <gmonaco@redhat.com>"); 293 + MODULE_DESCRIPTION("nomiss: dl entities run to completion before their deadline.");
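The core of the nomiss monitor is the `clk` environment variable: a replenish resets it, and while the entity is ready or running the invariant requires that no more than `dl_deadline + deadline_thresh` nanoseconds have elapsed. A minimal user-space sketch of that clock check (names like `toy_monitor` are illustrative, not the kernel's `ha_monitor` API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical user-space model of the nomiss clock invariant: a
 * replenish resets the clock, and any event observed while the entity
 * is READY/RUNNING must see elapsed time within deadline + threshold. */
struct toy_monitor {
	uint64_t clk_start_ns;   /* time of the last replenish */
	uint64_t dl_deadline_ns; /* relative deadline of the entity */
	uint64_t thresh_ns;      /* tolerated tardiness (TICK_NSEC in the patch) */
};

static void toy_replenish(struct toy_monitor *m, uint64_t now_ns)
{
	m->clk_start_ns = now_ns; /* models ha_reset_env() on clk */
}

static bool toy_check_invariant(const struct toy_monitor *m, uint64_t now_ns)
{
	/* models ha_check_invariant_ns() against DEADLINE_NS() */
	return now_ns - m->clk_start_ns <= m->dl_deadline_ns + m->thresh_ns;
}
```

The kernel version additionally converts the invariant into a guard on state exit and arms a timer so a violation fires even if no further event arrives; the comparison itself is the same.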
+123
kernel/trace/rv/monitors/nomiss/nomiss.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Automatically generated C representation of nomiss automaton 4 + * For further information about this format, see kernel documentation: 5 + * Documentation/trace/rv/deterministic_automata.rst 6 + */ 7 + 8 + #define MONITOR_NAME nomiss 9 + 10 + enum states_nomiss { 11 + ready_nomiss, 12 + idle_nomiss, 13 + running_nomiss, 14 + sleeping_nomiss, 15 + throttled_nomiss, 16 + state_max_nomiss, 17 + }; 18 + 19 + #define INVALID_STATE state_max_nomiss 20 + 21 + enum events_nomiss { 22 + dl_replenish_nomiss, 23 + dl_server_idle_nomiss, 24 + dl_server_stop_nomiss, 25 + dl_throttle_nomiss, 26 + sched_switch_in_nomiss, 27 + sched_switch_suspend_nomiss, 28 + sched_wakeup_nomiss, 29 + event_max_nomiss, 30 + }; 31 + 32 + enum envs_nomiss { 33 + clk_nomiss, 34 + is_constr_dl_nomiss, 35 + is_defer_nomiss, 36 + env_max_nomiss, 37 + env_max_stored_nomiss = is_constr_dl_nomiss, 38 + }; 39 + 40 + _Static_assert(env_max_stored_nomiss <= MAX_HA_ENV_LEN, "Not enough slots"); 41 + #define HA_CLK_NS 42 + 43 + struct automaton_nomiss { 44 + char *state_names[state_max_nomiss]; 45 + char *event_names[event_max_nomiss]; 46 + char *env_names[env_max_nomiss]; 47 + unsigned char function[state_max_nomiss][event_max_nomiss]; 48 + unsigned char initial_state; 49 + bool final_states[state_max_nomiss]; 50 + }; 51 + 52 + static const struct automaton_nomiss automaton_nomiss = { 53 + .state_names = { 54 + "ready", 55 + "idle", 56 + "running", 57 + "sleeping", 58 + "throttled", 59 + }, 60 + .event_names = { 61 + "dl_replenish", 62 + "dl_server_idle", 63 + "dl_server_stop", 64 + "dl_throttle", 65 + "sched_switch_in", 66 + "sched_switch_suspend", 67 + "sched_wakeup", 68 + }, 69 + .env_names = { 70 + "clk", 71 + "is_constr_dl", 72 + "is_defer", 73 + }, 74 + .function = { 75 + { 76 + ready_nomiss, 77 + idle_nomiss, 78 + sleeping_nomiss, 79 + throttled_nomiss, 80 + running_nomiss, 81 + INVALID_STATE, 82 + ready_nomiss, 83 + }, 84 + { 85 + 
ready_nomiss, 86 + idle_nomiss, 87 + sleeping_nomiss, 88 + throttled_nomiss, 89 + running_nomiss, 90 + INVALID_STATE, 91 + INVALID_STATE, 92 + }, 93 + { 94 + running_nomiss, 95 + idle_nomiss, 96 + sleeping_nomiss, 97 + throttled_nomiss, 98 + running_nomiss, 99 + sleeping_nomiss, 100 + running_nomiss, 101 + }, 102 + { 103 + ready_nomiss, 104 + sleeping_nomiss, 105 + sleeping_nomiss, 106 + throttled_nomiss, 107 + running_nomiss, 108 + INVALID_STATE, 109 + ready_nomiss, 110 + }, 111 + { 112 + ready_nomiss, 113 + throttled_nomiss, 114 + INVALID_STATE, 115 + throttled_nomiss, 116 + INVALID_STATE, 117 + throttled_nomiss, 118 + throttled_nomiss, 119 + }, 120 + }, 121 + .initial_state = ready_nomiss, 122 + .final_states = { 1, 0, 0, 0, 0 }, 123 + };
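The generated `function` matrix above is consulted as a plain two-dimensional lookup: row is the current state, column is the event, and `state_max` doubles as `INVALID_STATE` for illegal transitions. A stand-alone sketch of that lookup with a made-up two-state automaton (the enum names are illustrative, not the generated nomiss identifiers):

```c
#include <assert.h>

/* Toy transition table in the same shape rvgen emits:
 * function[state][event] is the next state, S_MAX means the event is
 * illegal in that state. */
enum { S_READY, S_RUNNING, S_MAX };
enum { E_SWITCH_IN, E_SWITCH_OUT, E_MAX };

static const unsigned char function[S_MAX][E_MAX] = {
	[S_READY]   = { S_RUNNING, S_MAX },
	[S_RUNNING] = { S_MAX,     S_READY },
};

static int next_state(int state, int event)
{
	return function[state][event]; /* caller reports a violation on S_MAX */
}
```

A hybrid automaton runs `ha_verify_constraint()` on top of this lookup, so a transition can be legal in the table yet still rejected by a clock guard.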
+19
kernel/trace/rv/monitors/nomiss/nomiss_trace.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + /* 4 + * Snippet to be included in rv_trace.h 5 + */ 6 + 7 + #ifdef CONFIG_RV_MON_NOMISS 8 + DEFINE_EVENT(event_da_monitor_id, event_nomiss, 9 + TP_PROTO(int id, char *state, char *event, char *next_state, bool final_state), 10 + TP_ARGS(id, state, event, next_state, final_state)); 11 + 12 + DEFINE_EVENT(error_da_monitor_id, error_nomiss, 13 + TP_PROTO(int id, char *state, char *event), 14 + TP_ARGS(id, state, event)); 15 + 16 + DEFINE_EVENT(error_env_da_monitor_id, error_env_nomiss, 17 + TP_PROTO(int id, char *state, char *event, char *env), 18 + TP_ARGS(id, state, event, env)); 19 + #endif /* CONFIG_RV_MON_NOMISS */
+3 -8
kernel/trace/rv/monitors/opid/Kconfig
··· 2 2 # 3 3 config RV_MON_OPID 4 4 depends on RV 5 - depends on TRACE_IRQFLAGS 6 - depends on TRACE_PREEMPT_TOGGLE 7 5 depends on RV_MON_SCHED 8 - default y if PREEMPT_RT 9 - select DA_MON_EVENTS_IMPLICIT 6 + default y 7 + select HA_MON_EVENTS_IMPLICIT 10 8 bool "opid monitor" 11 9 help 12 10 Monitor to ensure operations like wakeup and need resched occur with 13 - interrupts and preemption disabled or during IRQs, where preemption 14 - may not be disabled explicitly. 15 - 16 - This monitor is unstable on !PREEMPT_RT, say N unless you are testing it. 11 + interrupts and preemption disabled. 17 12 18 13 For further information, see: 19 14 Documentation/trace/rv/monitor_sched.rst
+34 -77
kernel/trace/rv/monitors/opid/opid.c
··· 10 10 #define MODULE_NAME "opid" 11 11 12 12 #include <trace/events/sched.h> 13 - #include <trace/events/irq.h> 14 - #include <trace/events/preemptirq.h> 15 13 #include <rv_trace.h> 16 14 #include <monitors/sched/sched.h> 17 15 18 16 #define RV_MON_TYPE RV_MON_PER_CPU 19 17 #include "opid.h" 20 - #include <rv/da_monitor.h> 18 + #include <rv/ha_monitor.h> 21 19 22 - #ifdef CONFIG_X86_LOCAL_APIC 23 - #include <asm/trace/irq_vectors.h> 24 - 25 - static void handle_vector_irq_entry(void *data, int vector) 20 + static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs_opid env, u64 time_ns) 26 21 { 27 - da_handle_event(irq_entry_opid); 28 - } 29 - 30 - static void attach_vector_irq(void) 31 - { 32 - rv_attach_trace_probe("opid", local_timer_entry, handle_vector_irq_entry); 33 - if (IS_ENABLED(CONFIG_IRQ_WORK)) 34 - rv_attach_trace_probe("opid", irq_work_entry, handle_vector_irq_entry); 35 - if (IS_ENABLED(CONFIG_SMP)) { 36 - rv_attach_trace_probe("opid", reschedule_entry, handle_vector_irq_entry); 37 - rv_attach_trace_probe("opid", call_function_entry, handle_vector_irq_entry); 38 - rv_attach_trace_probe("opid", call_function_single_entry, handle_vector_irq_entry); 22 + if (env == irq_off_opid) 23 + return irqs_disabled(); 24 + else if (env == preempt_off_opid) { 25 + /* 26 + * If CONFIG_PREEMPTION is enabled, then the tracepoint itself disables 27 + * preemption (adding one to the preempt_count). Since we are 28 + * interested in the preempt_count at the time the tracepoint was 29 + * hit, we consider 1 as still enabled. 
30 + */ 31 + if (IS_ENABLED(CONFIG_PREEMPTION)) 32 + return (preempt_count() & PREEMPT_MASK) > 1; 33 + return true; 39 34 } 35 + return ENV_INVALID_VALUE; 40 36 } 41 37 42 - static void detach_vector_irq(void) 38 + static inline bool ha_verify_guards(struct ha_monitor *ha_mon, 39 + enum states curr_state, enum events event, 40 + enum states next_state, u64 time_ns) 43 41 { 44 - rv_detach_trace_probe("opid", local_timer_entry, handle_vector_irq_entry); 45 - if (IS_ENABLED(CONFIG_IRQ_WORK)) 46 - rv_detach_trace_probe("opid", irq_work_entry, handle_vector_irq_entry); 47 - if (IS_ENABLED(CONFIG_SMP)) { 48 - rv_detach_trace_probe("opid", reschedule_entry, handle_vector_irq_entry); 49 - rv_detach_trace_probe("opid", call_function_entry, handle_vector_irq_entry); 50 - rv_detach_trace_probe("opid", call_function_single_entry, handle_vector_irq_entry); 51 - } 42 + bool res = true; 43 + 44 + if (curr_state == any_opid && event == sched_need_resched_opid) 45 + res = ha_get_env(ha_mon, irq_off_opid, time_ns) == 1ull; 46 + else if (curr_state == any_opid && event == sched_waking_opid) 47 + res = ha_get_env(ha_mon, irq_off_opid, time_ns) == 1ull && 48 + ha_get_env(ha_mon, preempt_off_opid, time_ns) == 1ull; 49 + return res; 52 50 } 53 51 54 - #else 55 - /* We assume irq_entry tracepoints are sufficient on other architectures */ 56 - static void attach_vector_irq(void) { } 57 - static void detach_vector_irq(void) { } 58 - #endif 59 - 60 - static void handle_irq_disable(void *data, unsigned long ip, unsigned long parent_ip) 52 + static bool ha_verify_constraint(struct ha_monitor *ha_mon, 53 + enum states curr_state, enum events event, 54 + enum states next_state, u64 time_ns) 61 55 { 62 - da_handle_event(irq_disable_opid); 63 - } 56 + if (!ha_verify_guards(ha_mon, curr_state, event, next_state, time_ns)) 57 + return false; 64 58 65 - static void handle_irq_enable(void *data, unsigned long ip, unsigned long parent_ip) 66 - { 67 - da_handle_event(irq_enable_opid); 68 - } 69 - 70 - 
static void handle_irq_entry(void *data, int irq, struct irqaction *action) 71 - { 72 - da_handle_event(irq_entry_opid); 73 - } 74 - 75 - static void handle_preempt_disable(void *data, unsigned long ip, unsigned long parent_ip) 76 - { 77 - da_handle_event(preempt_disable_opid); 78 - } 79 - 80 - static void handle_preempt_enable(void *data, unsigned long ip, unsigned long parent_ip) 81 - { 82 - da_handle_event(preempt_enable_opid); 59 + return true; 83 60 } 84 61 85 62 static void handle_sched_need_resched(void *data, struct task_struct *tsk, int cpu, int tif) 86 63 { 87 - /* The monitor's intitial state is not in_irq */ 88 - if (this_cpu_read(hardirq_context)) 89 - da_handle_event(sched_need_resched_opid); 90 - else 91 - da_handle_start_event(sched_need_resched_opid); 64 + da_handle_start_run_event(sched_need_resched_opid); 92 65 } 93 66 94 67 static void handle_sched_waking(void *data, struct task_struct *p) 95 68 { 96 - /* The monitor's intitial state is not in_irq */ 97 - if (this_cpu_read(hardirq_context)) 98 - da_handle_event(sched_waking_opid); 99 - else 100 - da_handle_start_event(sched_waking_opid); 69 + da_handle_start_run_event(sched_waking_opid); 101 70 } 102 71 103 72 static int enable_opid(void) ··· 77 108 if (retval) 78 109 return retval; 79 110 80 - rv_attach_trace_probe("opid", irq_disable, handle_irq_disable); 81 - rv_attach_trace_probe("opid", irq_enable, handle_irq_enable); 82 - rv_attach_trace_probe("opid", irq_handler_entry, handle_irq_entry); 83 - rv_attach_trace_probe("opid", preempt_disable, handle_preempt_disable); 84 - rv_attach_trace_probe("opid", preempt_enable, handle_preempt_enable); 85 111 rv_attach_trace_probe("opid", sched_set_need_resched_tp, handle_sched_need_resched); 86 112 rv_attach_trace_probe("opid", sched_waking, handle_sched_waking); 87 - attach_vector_irq(); 88 113 89 114 return 0; 90 115 } ··· 87 124 { 88 125 rv_this.enabled = 0; 89 126 90 - rv_detach_trace_probe("opid", irq_disable, handle_irq_disable); 91 - 
rv_detach_trace_probe("opid", irq_enable, handle_irq_enable); 92 - rv_detach_trace_probe("opid", irq_handler_entry, handle_irq_entry); 93 - rv_detach_trace_probe("opid", preempt_disable, handle_preempt_disable); 94 - rv_detach_trace_probe("opid", preempt_enable, handle_preempt_enable); 95 127 rv_detach_trace_probe("opid", sched_set_need_resched_tp, handle_sched_need_resched); 96 128 rv_detach_trace_probe("opid", sched_waking, handle_sched_waking); 97 - detach_vector_irq(); 98 129 99 130 da_monitor_destroy(); 100 131 }
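The subtle part of the converted opid monitor is the `preempt_off` environment: with CONFIG_PREEMPTION the tracepoint itself holds one level of preempt_count, so the guard treats a count of exactly 1 as "preemption was enabled when the event fired". A hedged user-space restatement of that check (the mask value mirrors the usual kernel layout; `preempt_off_at_tracepoint` is an illustrative name):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the opid preempt_off guard. TOY_PREEMPT_MASK stands in for
 * the kernel's PREEMPT_MASK (low byte of preempt_count). With
 * CONFIG_PREEMPTION, the tracepoint adds one to the count, so
 * "preemption disabled at the event" means strictly greater than 1. */
#define TOY_PREEMPT_MASK 0x000000ff

static bool preempt_off_at_tracepoint(unsigned int preempt_count,
				      bool config_preemption)
{
	if (config_preemption)
		return (preempt_count & TOY_PREEMPT_MASK) > 1;
	/* Without CONFIG_PREEMPTION the count is not meaningful here;
	 * the patch returns true unconditionally. */
	return true;
}
```

This is why the rewrite can drop all the preempt/irq enable-disable probes: the state is sampled on demand instead of being tracked event by event.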
+20 -68
kernel/trace/rv/monitors/opid/opid.h
··· 8 8 #define MONITOR_NAME opid 9 9 10 10 enum states_opid { 11 - disabled_opid, 12 - enabled_opid, 13 - in_irq_opid, 14 - irq_disabled_opid, 15 - preempt_disabled_opid, 11 + any_opid, 16 12 state_max_opid, 17 13 }; 18 14 19 15 #define INVALID_STATE state_max_opid 20 16 21 17 enum events_opid { 22 - irq_disable_opid, 23 - irq_enable_opid, 24 - irq_entry_opid, 25 - preempt_disable_opid, 26 - preempt_enable_opid, 27 18 sched_need_resched_opid, 28 19 sched_waking_opid, 29 20 event_max_opid, 30 21 }; 31 22 23 + enum envs_opid { 24 + irq_off_opid, 25 + preempt_off_opid, 26 + env_max_opid, 27 + env_max_stored_opid = irq_off_opid, 28 + }; 29 + 30 + _Static_assert(env_max_stored_opid <= MAX_HA_ENV_LEN, "Not enough slots"); 31 + 32 32 struct automaton_opid { 33 33 char *state_names[state_max_opid]; 34 34 char *event_names[event_max_opid]; 35 + char *env_names[env_max_opid]; 35 36 unsigned char function[state_max_opid][event_max_opid]; 36 37 unsigned char initial_state; 37 38 bool final_states[state_max_opid]; ··· 40 39 41 40 static const struct automaton_opid automaton_opid = { 42 41 .state_names = { 43 - "disabled", 44 - "enabled", 45 - "in_irq", 46 - "irq_disabled", 47 - "preempt_disabled", 42 + "any", 48 43 }, 49 44 .event_names = { 50 - "irq_disable", 51 - "irq_enable", 52 - "irq_entry", 53 - "preempt_disable", 54 - "preempt_enable", 55 45 "sched_need_resched", 56 46 "sched_waking", 57 47 }, 58 - .function = { 59 - { 60 - INVALID_STATE, 61 - preempt_disabled_opid, 62 - disabled_opid, 63 - INVALID_STATE, 64 - irq_disabled_opid, 65 - disabled_opid, 66 - disabled_opid, 67 - }, 68 - { 69 - irq_disabled_opid, 70 - INVALID_STATE, 71 - INVALID_STATE, 72 - preempt_disabled_opid, 73 - enabled_opid, 74 - INVALID_STATE, 75 - INVALID_STATE, 76 - }, 77 - { 78 - INVALID_STATE, 79 - enabled_opid, 80 - in_irq_opid, 81 - INVALID_STATE, 82 - INVALID_STATE, 83 - in_irq_opid, 84 - in_irq_opid, 85 - }, 86 - { 87 - INVALID_STATE, 88 - enabled_opid, 89 - in_irq_opid, 90 - disabled_opid, 91 
- INVALID_STATE, 92 - irq_disabled_opid, 93 - INVALID_STATE, 94 - }, 95 - { 96 - disabled_opid, 97 - INVALID_STATE, 98 - INVALID_STATE, 99 - INVALID_STATE, 100 - enabled_opid, 101 - INVALID_STATE, 102 - INVALID_STATE, 103 - }, 48 + .env_names = { 49 + "irq_off", 50 + "preempt_off", 104 51 }, 105 - .initial_state = disabled_opid, 106 - .final_states = { 0, 1, 0, 0, 0 }, 52 + .function = { 53 + { any_opid, any_opid }, 54 + }, 55 + .initial_state = any_opid, 56 + .final_states = { 1 }, 107 57 };
+4
kernel/trace/rv/monitors/opid/opid_trace.h
··· 12 12 DEFINE_EVENT(error_da_monitor, error_opid, 13 13 TP_PROTO(char *state, char *event), 14 14 TP_ARGS(state, event)); 15 + 16 + DEFINE_EVENT(error_env_da_monitor, error_env_opid, 17 + TP_PROTO(char *state, char *event, char *env), 18 + TP_ARGS(state, event, env)); 15 19 #endif /* CONFIG_RV_MON_OPID */
+8
kernel/trace/rv/monitors/sleep/sleep.c
··· 49 49 ltl_atom_set(mon, LTL_NANOSLEEP_TIMER_ABSTIME, false); 50 50 ltl_atom_set(mon, LTL_CLOCK_NANOSLEEP, false); 51 51 ltl_atom_set(mon, LTL_FUTEX_WAIT, false); 52 + ltl_atom_set(mon, LTL_EPOLL_WAIT, false); 52 53 ltl_atom_set(mon, LTL_FUTEX_LOCK_PI, false); 53 54 ltl_atom_set(mon, LTL_BLOCK_ON_RT_MUTEX, false); 54 55 } ··· 64 63 ltl_atom_set(mon, LTL_NANOSLEEP_CLOCK_TAI, false); 65 64 ltl_atom_set(mon, LTL_NANOSLEEP_TIMER_ABSTIME, false); 66 65 ltl_atom_set(mon, LTL_CLOCK_NANOSLEEP, false); 66 + ltl_atom_set(mon, LTL_EPOLL_WAIT, false); 67 67 68 68 if (strstarts(task->comm, "migration/")) 69 69 ltl_atom_set(mon, LTL_TASK_IS_MIGRATION, true); ··· 164 162 break; 165 163 } 166 164 break; 165 + #ifdef __NR_epoll_wait 166 + case __NR_epoll_wait: 167 + ltl_atom_update(current, LTL_EPOLL_WAIT, true); 168 + break; 169 + #endif 167 170 } 168 171 } 169 172 ··· 181 174 ltl_atom_set(mon, LTL_NANOSLEEP_CLOCK_MONOTONIC, false); 182 175 ltl_atom_set(mon, LTL_NANOSLEEP_CLOCK_TAI, false); 183 176 ltl_atom_set(mon, LTL_NANOSLEEP_TIMER_ABSTIME, false); 177 + ltl_atom_set(mon, LTL_EPOLL_WAIT, false); 184 178 ltl_atom_update(current, LTL_CLOCK_NANOSLEEP, false); 185 179 } 186 180
+52 -46
kernel/trace/rv/monitors/sleep/sleep.h
··· 15 15 LTL_ABORT_SLEEP, 16 16 LTL_BLOCK_ON_RT_MUTEX, 17 17 LTL_CLOCK_NANOSLEEP, 18 + LTL_EPOLL_WAIT, 18 19 LTL_FUTEX_LOCK_PI, 19 20 LTL_FUTEX_WAIT, 20 21 LTL_KERNEL_THREAD, ··· 41 40 "ab_sl", 42 41 "bl_on_rt_mu", 43 42 "cl_na", 43 + "ep_wa", 44 44 "fu_lo_pi", 45 45 "fu_wa", 46 46 "ker_th", ··· 77 75 78 76 static void ltl_start(struct task_struct *task, struct ltl_monitor *mon) 79 77 { 80 - bool task_is_migration = test_bit(LTL_TASK_IS_MIGRATION, mon->atoms); 81 - bool task_is_rcu = test_bit(LTL_TASK_IS_RCU, mon->atoms); 82 - bool val40 = task_is_rcu || task_is_migration; 83 - bool futex_lock_pi = test_bit(LTL_FUTEX_LOCK_PI, mon->atoms); 84 - bool val41 = futex_lock_pi || val40; 85 - bool block_on_rt_mutex = test_bit(LTL_BLOCK_ON_RT_MUTEX, mon->atoms); 86 - bool val5 = block_on_rt_mutex || val41; 87 - bool kthread_should_stop = test_bit(LTL_KTHREAD_SHOULD_STOP, mon->atoms); 88 - bool abort_sleep = test_bit(LTL_ABORT_SLEEP, mon->atoms); 89 - bool val32 = abort_sleep || kthread_should_stop; 90 78 bool woken_by_nmi = test_bit(LTL_WOKEN_BY_NMI, mon->atoms); 91 - bool val33 = woken_by_nmi || val32; 92 79 bool woken_by_hardirq = test_bit(LTL_WOKEN_BY_HARDIRQ, mon->atoms); 93 - bool val34 = woken_by_hardirq || val33; 94 80 bool woken_by_equal_or_higher_prio = test_bit(LTL_WOKEN_BY_EQUAL_OR_HIGHER_PRIO, 95 81 mon->atoms); 96 - bool val14 = woken_by_equal_or_higher_prio || val34; 97 82 bool wake = test_bit(LTL_WAKE, mon->atoms); 98 - bool val13 = !wake; 99 - bool kernel_thread = test_bit(LTL_KERNEL_THREAD, mon->atoms); 83 + bool task_is_rcu = test_bit(LTL_TASK_IS_RCU, mon->atoms); 84 + bool task_is_migration = test_bit(LTL_TASK_IS_MIGRATION, mon->atoms); 85 + bool sleep = test_bit(LTL_SLEEP, mon->atoms); 86 + bool rt = test_bit(LTL_RT, mon->atoms); 87 + bool nanosleep_timer_abstime = test_bit(LTL_NANOSLEEP_TIMER_ABSTIME, mon->atoms); 100 88 bool nanosleep_clock_tai = test_bit(LTL_NANOSLEEP_CLOCK_TAI, mon->atoms); 101 89 bool nanosleep_clock_monotonic = 
test_bit(LTL_NANOSLEEP_CLOCK_MONOTONIC, mon->atoms); 102 - bool val24 = nanosleep_clock_monotonic || nanosleep_clock_tai; 103 - bool nanosleep_timer_abstime = test_bit(LTL_NANOSLEEP_TIMER_ABSTIME, mon->atoms); 104 - bool val25 = nanosleep_timer_abstime && val24; 105 - bool clock_nanosleep = test_bit(LTL_CLOCK_NANOSLEEP, mon->atoms); 106 - bool val18 = clock_nanosleep && val25; 90 + bool kthread_should_stop = test_bit(LTL_KTHREAD_SHOULD_STOP, mon->atoms); 91 + bool kernel_thread = test_bit(LTL_KERNEL_THREAD, mon->atoms); 107 92 bool futex_wait = test_bit(LTL_FUTEX_WAIT, mon->atoms); 108 - bool val9 = futex_wait || val18; 93 + bool futex_lock_pi = test_bit(LTL_FUTEX_LOCK_PI, mon->atoms); 94 + bool epoll_wait = test_bit(LTL_EPOLL_WAIT, mon->atoms); 95 + bool clock_nanosleep = test_bit(LTL_CLOCK_NANOSLEEP, mon->atoms); 96 + bool block_on_rt_mutex = test_bit(LTL_BLOCK_ON_RT_MUTEX, mon->atoms); 97 + bool abort_sleep = test_bit(LTL_ABORT_SLEEP, mon->atoms); 98 + bool val42 = task_is_rcu || task_is_migration; 99 + bool val43 = futex_lock_pi || val42; 100 + bool val5 = block_on_rt_mutex || val43; 101 + bool val34 = abort_sleep || kthread_should_stop; 102 + bool val35 = woken_by_nmi || val34; 103 + bool val36 = woken_by_hardirq || val35; 104 + bool val14 = woken_by_equal_or_higher_prio || val36; 105 + bool val13 = !wake; 106 + bool val26 = nanosleep_clock_monotonic || nanosleep_clock_tai; 107 + bool val27 = nanosleep_timer_abstime && val26; 108 + bool val18 = clock_nanosleep && val27; 109 + bool val20 = val18 || epoll_wait; 110 + bool val9 = futex_wait || val20; 109 111 bool val11 = val9 || kernel_thread; 110 - bool sleep = test_bit(LTL_SLEEP, mon->atoms); 111 112 bool val2 = !sleep; 112 - bool rt = test_bit(LTL_RT, mon->atoms); 113 113 bool val1 = !rt; 114 114 bool val3 = val1 || val2; 115 115 ··· 128 124 static void 129 125 ltl_possible_next_states(struct ltl_monitor *mon, unsigned int state, unsigned long *next) 130 126 { 131 - bool task_is_migration = 
test_bit(LTL_TASK_IS_MIGRATION, mon->atoms); 132 - bool task_is_rcu = test_bit(LTL_TASK_IS_RCU, mon->atoms); 133 - bool val40 = task_is_rcu || task_is_migration; 134 - bool futex_lock_pi = test_bit(LTL_FUTEX_LOCK_PI, mon->atoms); 135 - bool val41 = futex_lock_pi || val40; 136 - bool block_on_rt_mutex = test_bit(LTL_BLOCK_ON_RT_MUTEX, mon->atoms); 137 - bool val5 = block_on_rt_mutex || val41; 138 - bool kthread_should_stop = test_bit(LTL_KTHREAD_SHOULD_STOP, mon->atoms); 139 - bool abort_sleep = test_bit(LTL_ABORT_SLEEP, mon->atoms); 140 - bool val32 = abort_sleep || kthread_should_stop; 141 127 bool woken_by_nmi = test_bit(LTL_WOKEN_BY_NMI, mon->atoms); 142 - bool val33 = woken_by_nmi || val32; 143 128 bool woken_by_hardirq = test_bit(LTL_WOKEN_BY_HARDIRQ, mon->atoms); 144 - bool val34 = woken_by_hardirq || val33; 145 129 bool woken_by_equal_or_higher_prio = test_bit(LTL_WOKEN_BY_EQUAL_OR_HIGHER_PRIO, 146 130 mon->atoms); 147 - bool val14 = woken_by_equal_or_higher_prio || val34; 148 131 bool wake = test_bit(LTL_WAKE, mon->atoms); 149 - bool val13 = !wake; 150 - bool kernel_thread = test_bit(LTL_KERNEL_THREAD, mon->atoms); 132 + bool task_is_rcu = test_bit(LTL_TASK_IS_RCU, mon->atoms); 133 + bool task_is_migration = test_bit(LTL_TASK_IS_MIGRATION, mon->atoms); 134 + bool sleep = test_bit(LTL_SLEEP, mon->atoms); 135 + bool rt = test_bit(LTL_RT, mon->atoms); 136 + bool nanosleep_timer_abstime = test_bit(LTL_NANOSLEEP_TIMER_ABSTIME, mon->atoms); 151 137 bool nanosleep_clock_tai = test_bit(LTL_NANOSLEEP_CLOCK_TAI, mon->atoms); 152 138 bool nanosleep_clock_monotonic = test_bit(LTL_NANOSLEEP_CLOCK_MONOTONIC, mon->atoms); 153 - bool val24 = nanosleep_clock_monotonic || nanosleep_clock_tai; 154 - bool nanosleep_timer_abstime = test_bit(LTL_NANOSLEEP_TIMER_ABSTIME, mon->atoms); 155 - bool val25 = nanosleep_timer_abstime && val24; 156 - bool clock_nanosleep = test_bit(LTL_CLOCK_NANOSLEEP, mon->atoms); 157 - bool val18 = clock_nanosleep && val25; 139 + bool 
kthread_should_stop = test_bit(LTL_KTHREAD_SHOULD_STOP, mon->atoms); 140 + bool kernel_thread = test_bit(LTL_KERNEL_THREAD, mon->atoms); 158 141 bool futex_wait = test_bit(LTL_FUTEX_WAIT, mon->atoms); 159 - bool val9 = futex_wait || val18; 142 + bool futex_lock_pi = test_bit(LTL_FUTEX_LOCK_PI, mon->atoms); 143 + bool epoll_wait = test_bit(LTL_EPOLL_WAIT, mon->atoms); 144 + bool clock_nanosleep = test_bit(LTL_CLOCK_NANOSLEEP, mon->atoms); 145 + bool block_on_rt_mutex = test_bit(LTL_BLOCK_ON_RT_MUTEX, mon->atoms); 146 + bool abort_sleep = test_bit(LTL_ABORT_SLEEP, mon->atoms); 147 + bool val42 = task_is_rcu || task_is_migration; 148 + bool val43 = futex_lock_pi || val42; 149 + bool val5 = block_on_rt_mutex || val43; 150 + bool val34 = abort_sleep || kthread_should_stop; 151 + bool val35 = woken_by_nmi || val34; 152 + bool val36 = woken_by_hardirq || val35; 153 + bool val14 = woken_by_equal_or_higher_prio || val36; 154 + bool val13 = !wake; 155 + bool val26 = nanosleep_clock_monotonic || nanosleep_clock_tai; 156 + bool val27 = nanosleep_timer_abstime && val26; 157 + bool val18 = clock_nanosleep && val27; 158 + bool val20 = val18 || epoll_wait; 159 + bool val9 = futex_wait || val20; 160 160 bool val11 = val9 || kernel_thread; 161 - bool sleep = test_bit(LTL_SLEEP, mon->atoms); 162 161 bool val2 = !sleep; 163 - bool rt = test_bit(LTL_RT, mon->atoms); 164 162 bool val1 = !rt; 165 163 bool val3 = val1 || val2; 166 164
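The regenerated `val*` chain above is just a Boolean circuit over the LTL atoms; the change threads the new `epoll_wait` atom in as `val20`. A hand-written restatement of the relevant slice of the start condition, with the atoms passed as plain booleans (the helper name and the reduction to these eight atoms are this sketch's own simplification, not the generated code):

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of part of the sleep monitor's acceptance condition: an RT
 * task going to sleep is fine if it is not RT, not sleeping, or sleeps
 * through a whitelisted path (futex_wait, absolute clock_nanosleep on
 * MONOTONIC/TAI, epoll_wait, or being a kernel thread). */
static bool rt_sleep_is_acceptable(bool rt, bool sleep, bool futex_wait,
				   bool clock_nanosleep, bool timer_abstime,
				   bool clk_mono_or_tai, bool epoll_wait,
				   bool kernel_thread)
{
	bool val18 = clock_nanosleep && timer_abstime && clk_mono_or_tai;
	bool val20 = val18 || epoll_wait;   /* the new atom slots in here */
	bool val9  = futex_wait || val20;
	bool val11 = val9 || kernel_thread;

	return !rt || !sleep || val11;
}
```

The generated header computes the same values from `mon->atoms` with `test_bit()`; renumbering `val24`/`val25` to `val26`/`val27` is a side effect of inserting the new atom.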
+13
kernel/trace/rv/monitors/stall/Kconfig
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + # 3 + config RV_MON_STALL 4 + depends on RV 5 + select HA_MON_EVENTS_ID 6 + bool "stall monitor" 7 + help 8 + Enable the stall sample monitor that illustrates the usage of hybrid 9 + automata monitors. It can be used to identify tasks stalled for 10 + longer than a threshold. 11 + 12 + For further information, see: 13 + Documentation/trace/rv/monitor_stall.rst
+150
kernel/trace/rv/monitors/stall/stall.c
```c
// SPDX-License-Identifier: GPL-2.0
#include <linux/ftrace.h>
#include <linux/tracepoint.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/rv.h>
#include <rv/instrumentation.h>

#define MODULE_NAME "stall"

#include <trace/events/sched.h>
#include <rv_trace.h>

#define RV_MON_TYPE RV_MON_PER_TASK
#define HA_TIMER_TYPE HA_TIMER_WHEEL
#include "stall.h"
#include <rv/ha_monitor.h>

static u64 threshold_jiffies = 1000;
module_param(threshold_jiffies, ullong, 0644);

static u64 ha_get_env(struct ha_monitor *ha_mon, enum envs_stall env, u64 time_ns)
{
	if (env == clk_stall)
		return ha_get_clk_jiffy(ha_mon, env);
	return ENV_INVALID_VALUE;
}

static void ha_reset_env(struct ha_monitor *ha_mon, enum envs_stall env, u64 time_ns)
{
	if (env == clk_stall)
		ha_reset_clk_jiffy(ha_mon, env);
}

static inline bool ha_verify_invariants(struct ha_monitor *ha_mon,
					enum states curr_state, enum events event,
					enum states next_state, u64 time_ns)
{
	if (curr_state == enqueued_stall)
		return ha_check_invariant_jiffy(ha_mon, clk_stall, time_ns);
	return true;
}

static inline bool ha_verify_guards(struct ha_monitor *ha_mon,
				    enum states curr_state, enum events event,
				    enum states next_state, u64 time_ns)
{
	bool res = true;

	if (curr_state == dequeued_stall && event == sched_wakeup_stall)
		ha_reset_env(ha_mon, clk_stall, time_ns);
	else if (curr_state == running_stall && event == sched_switch_preempt_stall)
		ha_reset_env(ha_mon, clk_stall, time_ns);
	return res;
}

static inline void ha_setup_invariants(struct ha_monitor *ha_mon,
				       enum states curr_state, enum events event,
				       enum states next_state, u64 time_ns)
{
	if (next_state == curr_state)
		return;
	if (next_state == enqueued_stall)
		ha_start_timer_jiffy(ha_mon, clk_stall, threshold_jiffies, time_ns);
	else if (curr_state == enqueued_stall)
		ha_cancel_timer(ha_mon);
}

static bool ha_verify_constraint(struct ha_monitor *ha_mon,
				 enum states curr_state, enum events event,
				 enum states next_state, u64 time_ns)
{
	if (!ha_verify_invariants(ha_mon, curr_state, event, next_state, time_ns))
		return false;

	if (!ha_verify_guards(ha_mon, curr_state, event, next_state, time_ns))
		return false;

	ha_setup_invariants(ha_mon, curr_state, event, next_state, time_ns);

	return true;
}

static void handle_sched_switch(void *data, bool preempt,
				struct task_struct *prev,
				struct task_struct *next,
				unsigned int prev_state)
{
	if (!preempt && prev_state != TASK_RUNNING)
		da_handle_start_event(prev, sched_switch_wait_stall);
	else
		da_handle_event(prev, sched_switch_preempt_stall);
	da_handle_event(next, sched_switch_in_stall);
}

static void handle_sched_wakeup(void *data, struct task_struct *p)
{
	da_handle_event(p, sched_wakeup_stall);
}

static int enable_stall(void)
{
	int retval;

	retval = da_monitor_init();
	if (retval)
		return retval;

	rv_attach_trace_probe("stall", sched_switch, handle_sched_switch);
	rv_attach_trace_probe("stall", sched_wakeup, handle_sched_wakeup);

	return 0;
}

static void disable_stall(void)
{
	rv_this.enabled = 0;

	rv_detach_trace_probe("stall", sched_switch, handle_sched_switch);
	rv_detach_trace_probe("stall", sched_wakeup, handle_sched_wakeup);

	da_monitor_destroy();
}

static struct rv_monitor rv_this = {
	.name = "stall",
	.description = "identify tasks stalled for longer than a threshold.",
	.enable = enable_stall,
	.disable = disable_stall,
	.reset = da_monitor_reset_all,
	.enabled = 0,
};

static int __init register_stall(void)
{
	return rv_register_monitor(&rv_this, NULL);
}

static void __exit unregister_stall(void)
{
	rv_unregister_monitor(&rv_this);
}

module_init(register_stall);
module_exit(unregister_stall);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Gabriele Monaco <gmonaco@redhat.com>");
MODULE_DESCRIPTION("stall: identify tasks stalled for longer than a threshold.");
```
kernel/trace/rv/monitors/stall/stall.h (+81)
```c
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Automatically generated C representation of stall automaton
 * For further information about this format, see kernel documentation:
 *   Documentation/trace/rv/deterministic_automata.rst
 */

#define MONITOR_NAME stall

enum states_stall {
	dequeued_stall,
	enqueued_stall,
	running_stall,
	state_max_stall,
};

#define INVALID_STATE state_max_stall

enum events_stall {
	sched_switch_in_stall,
	sched_switch_preempt_stall,
	sched_switch_wait_stall,
	sched_wakeup_stall,
	event_max_stall,
};

enum envs_stall {
	clk_stall,
	env_max_stall,
	env_max_stored_stall = env_max_stall,
};

_Static_assert(env_max_stored_stall <= MAX_HA_ENV_LEN, "Not enough slots");

struct automaton_stall {
	char *state_names[state_max_stall];
	char *event_names[event_max_stall];
	char *env_names[env_max_stall];
	unsigned char function[state_max_stall][event_max_stall];
	unsigned char initial_state;
	bool final_states[state_max_stall];
};

static const struct automaton_stall automaton_stall = {
	.state_names = {
		"dequeued",
		"enqueued",
		"running",
	},
	.event_names = {
		"sched_switch_in",
		"sched_switch_preempt",
		"sched_switch_wait",
		"sched_wakeup",
	},
	.env_names = {
		"clk",
	},
	.function = {
		{ INVALID_STATE, INVALID_STATE, INVALID_STATE, enqueued_stall },
		{ running_stall, INVALID_STATE, INVALID_STATE, enqueued_stall },
		{ running_stall, enqueued_stall, dequeued_stall, running_stall },
	},
	.initial_state = dequeued_stall,
	.final_states = { 1, 0, 0 },
};
```
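For readers unfamiliar with the generated format: `.function` above is a plain next-state table indexed by (state, event), with `INVALID_STATE` marking a monitor violation. A minimal Python sketch of the same walk (the table mirrors stall.h; `step()` is an illustrative helper, not kernel API):

```python
# Sketch of the next-state lookup encoded by automaton_stall.function.
# State/event names mirror stall.h; step() is hypothetical, for illustration only.
INVALID = "INVALID_STATE"
STATES = ["dequeued", "enqueued", "running"]
EVENTS = ["sched_switch_in", "sched_switch_preempt",
          "sched_switch_wait", "sched_wakeup"]
FUNCTION = [
    [INVALID,   INVALID,    INVALID,    "enqueued"],  # dequeued
    ["running", INVALID,    INVALID,    "enqueued"],  # enqueued
    ["running", "enqueued", "dequeued", "running"],   # running
]

def step(state: str, event: str) -> str:
    nxt = FUNCTION[STATES.index(state)][EVENTS.index(event)]
    if nxt == INVALID:
        raise ValueError(f"event {event} not expected in state {state}")
    return nxt

state = "dequeued"  # initial state
for ev in ("sched_wakeup", "sched_switch_in", "sched_switch_wait"):
    state = step(state, ev)
print(state)  # the walk ends back in the (final) dequeued state
```

The hybrid-automaton part (the `clk < threshold_jiffies` invariant on `enqueued`) is layered on top of this table by the constraint hooks in stall.c.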
kernel/trace/rv/monitors/stall/stall_trace.h (+19)
```c
/* SPDX-License-Identifier: GPL-2.0 */

/*
 * Snippet to be included in rv_trace.h
 */

#ifdef CONFIG_RV_MON_STALL
DEFINE_EVENT(event_da_monitor_id, event_stall,
	     TP_PROTO(int id, char *state, char *event, char *next_state, bool final_state),
	     TP_ARGS(id, state, event, next_state, final_state));

DEFINE_EVENT(error_da_monitor_id, error_stall,
	     TP_PROTO(int id, char *state, char *event),
	     TP_ARGS(id, state, event));

DEFINE_EVENT(error_env_da_monitor_id, error_env_stall,
	     TP_PROTO(int id, char *state, char *event, char *env),
	     TP_ARGS(id, state, event, env));
#endif /* CONFIG_RV_MON_STALL */
```
kernel/trace/rv/rv_trace.h (+66 -1)
```diff
 #include <monitors/scpd/scpd_trace.h>
 #include <monitors/snep/snep_trace.h>
 #include <monitors/sts/sts_trace.h>
-#include <monitors/opid/opid_trace.h>
 // Add new monitors based on CONFIG_DA_MON_EVENTS_IMPLICIT here
+
+#ifdef CONFIG_HA_MON_EVENTS_IMPLICIT
+/* For simplicity this class is marked as DA although relevant only for HA */
+DECLARE_EVENT_CLASS(error_env_da_monitor,
+
+	TP_PROTO(char *state, char *event, char *env),
+
+	TP_ARGS(state, event, env),
+
+	TP_STRUCT__entry(
+		__string( state, state )
+		__string( event, event )
+		__string( env, env )
+	),
+
+	TP_fast_assign(
+		__assign_str(state);
+		__assign_str(event);
+		__assign_str(env);
+	),
+
+	TP_printk("event %s not expected in the state %s with env %s",
+		__get_str(event),
+		__get_str(state),
+		__get_str(env))
+);
+
+#include <monitors/opid/opid_trace.h>
+// Add new monitors based on CONFIG_HA_MON_EVENTS_IMPLICIT here
+
+#endif
 
 #endif /* CONFIG_DA_MON_EVENTS_IMPLICIT */
···
 #include <monitors/nrp/nrp_trace.h>
 #include <monitors/sssw/sssw_trace.h>
 // Add new monitors based on CONFIG_DA_MON_EVENTS_ID here
+
+#ifdef CONFIG_HA_MON_EVENTS_ID
+/* For simplicity this class is marked as DA although relevant only for HA */
+DECLARE_EVENT_CLASS(error_env_da_monitor_id,
+
+	TP_PROTO(int id, char *state, char *event, char *env),
+
+	TP_ARGS(id, state, event, env),
+
+	TP_STRUCT__entry(
+		__field( int, id )
+		__string( state, state )
+		__string( event, event )
+		__string( env, env )
+	),
+
+	TP_fast_assign(
+		__assign_str(state);
+		__assign_str(event);
+		__assign_str(env);
+		__entry->id = id;
+	),
+
+	TP_printk("%d: event %s not expected in the state %s with env %s",
+		__entry->id,
+		__get_str(event),
+		__get_str(state),
+		__get_str(env))
+);
+
+#include <monitors/stall/stall_trace.h>
+#include <monitors/nomiss/nomiss_trace.h>
+// Add new monitors based on CONFIG_HA_MON_EVENTS_ID here
+
+#endif
 
 #endif /* CONFIG_DA_MON_EVENTS_ID */
 #ifdef CONFIG_LTL_MON_EVENTS_ID
```
tools/verification/models/deadline/nomiss.dot (+41)
```dot
digraph state_automaton {
	center = true;
	size = "7,11";
	{node [shape = circle] "idle"};
	{node [shape = plaintext, style=invis, label=""] "__init_ready"};
	{node [shape = doublecircle] "ready"};
	{node [shape = circle] "ready"};
	{node [shape = circle] "running"};
	{node [shape = circle] "sleeping"};
	{node [shape = circle] "throttled"};
	"__init_ready" -> "ready";
	"idle" [label = "idle"];
	"idle" -> "idle" [ label = "dl_server_idle" ];
	"idle" -> "ready" [ label = "dl_replenish;reset(clk)" ];
	"idle" -> "running" [ label = "sched_switch_in" ];
	"idle" -> "sleeping" [ label = "dl_server_stop" ];
	"idle" -> "throttled" [ label = "dl_throttle" ];
	"ready" [label = "ready\nclk < DEADLINE_NS()", color = green3];
	"ready" -> "idle" [ label = "dl_server_idle" ];
	"ready" -> "ready" [ label = "sched_wakeup\ndl_replenish;reset(clk)" ];
	"ready" -> "running" [ label = "sched_switch_in" ];
	"ready" -> "sleeping" [ label = "dl_server_stop" ];
	"ready" -> "throttled" [ label = "dl_throttle;is_defer == 1" ];
	"running" [label = "running\nclk < DEADLINE_NS()"];
	"running" -> "idle" [ label = "dl_server_idle" ];
	"running" -> "running" [ label = "dl_replenish;reset(clk)\nsched_switch_in\nsched_wakeup" ];
	"running" -> "sleeping" [ label = "sched_switch_suspend\ndl_server_stop" ];
	"running" -> "throttled" [ label = "dl_throttle" ];
	"sleeping" [label = "sleeping"];
	"sleeping" -> "ready" [ label = "sched_wakeup\ndl_replenish;reset(clk)" ];
	"sleeping" -> "running" [ label = "sched_switch_in" ];
	"sleeping" -> "sleeping" [ label = "dl_server_stop\ndl_server_idle" ];
	"sleeping" -> "throttled" [ label = "dl_throttle;is_constr_dl == 1 || is_defer == 1" ];
	"throttled" [label = "throttled"];
	"throttled" -> "ready" [ label = "dl_replenish;reset(clk)" ];
	"throttled" -> "throttled" [ label = "sched_switch_suspend\nsched_wakeup\ndl_server_idle\ndl_throttle" ];
	{ rank = min ;
		"__init_ready";
		"ready";
	}
}
```
tools/verification/models/rtapp/sleep.ltl (+1)
```diff
 
 RT_VALID_SLEEP_REASON = FUTEX_WAIT
                         or RT_FRIENDLY_NANOSLEEP
+                        or EPOLL_WAIT
 
 RT_FRIENDLY_NANOSLEEP = CLOCK_NANOSLEEP
                         and NANOSLEEP_TIMER_ABSTIME
```
tools/verification/models/sched/opid.dot (+7 -29)
```diff
 digraph state_automaton {
 	center = true;
 	size = "7,11";
-	{node [shape = plaintext, style=invis, label=""] "__init_disabled"};
-	{node [shape = circle] "disabled"};
-	{node [shape = doublecircle] "enabled"};
-	{node [shape = circle] "enabled"};
-	{node [shape = circle] "in_irq"};
-	{node [shape = circle] "irq_disabled"};
-	{node [shape = circle] "preempt_disabled"};
-	"__init_disabled" -> "disabled";
-	"disabled" [label = "disabled"];
-	"disabled" -> "disabled" [ label = "sched_need_resched\nsched_waking\nirq_entry" ];
-	"disabled" -> "irq_disabled" [ label = "preempt_enable" ];
-	"disabled" -> "preempt_disabled" [ label = "irq_enable" ];
-	"enabled" [label = "enabled", color = green3];
-	"enabled" -> "enabled" [ label = "preempt_enable" ];
-	"enabled" -> "irq_disabled" [ label = "irq_disable" ];
-	"enabled" -> "preempt_disabled" [ label = "preempt_disable" ];
-	"in_irq" [label = "in_irq"];
-	"in_irq" -> "enabled" [ label = "irq_enable" ];
-	"in_irq" -> "in_irq" [ label = "sched_need_resched\nsched_waking\nirq_entry" ];
-	"irq_disabled" [label = "irq_disabled"];
-	"irq_disabled" -> "disabled" [ label = "preempt_disable" ];
-	"irq_disabled" -> "enabled" [ label = "irq_enable" ];
-	"irq_disabled" -> "in_irq" [ label = "irq_entry" ];
-	"irq_disabled" -> "irq_disabled" [ label = "sched_need_resched" ];
-	"preempt_disabled" [label = "preempt_disabled"];
-	"preempt_disabled" -> "disabled" [ label = "irq_disable" ];
-	"preempt_disabled" -> "enabled" [ label = "preempt_enable" ];
+	{node [shape = plaintext, style=invis, label=""] "__init_any"};
+	{node [shape = doublecircle] "any"};
+	"__init_any" -> "any";
+	"any" [label = "any", color = green3];
+	"any" -> "any" [ label = "sched_need_resched;irq_off == 1\nsched_waking;irq_off == 1 && preempt_off == 1" ];
 	{ rank = min ;
-		"__init_disabled";
-		"disabled";
+		"__init_any";
+		"any";
 	}
 }
```
tools/verification/models/stall.dot (+22)
```dot
digraph state_automaton {
	center = true;
	size = "7,11";
	{node [shape = circle] "enqueued"};
	{node [shape = plaintext, style=invis, label=""] "__init_dequeued"};
	{node [shape = doublecircle] "dequeued"};
	{node [shape = circle] "running"};
	"__init_dequeued" -> "dequeued";
	"enqueued" [label = "enqueued\nclk < threshold_jiffies"];
	"running" [label = "running"];
	"dequeued" [label = "dequeued", color = green3];
	"running" -> "running" [ label = "sched_switch_in\nsched_wakeup" ];
	"enqueued" -> "enqueued" [ label = "sched_wakeup" ];
	"enqueued" -> "running" [ label = "sched_switch_in" ];
	"running" -> "dequeued" [ label = "sched_switch_wait" ];
	"dequeued" -> "enqueued" [ label = "sched_wakeup;reset(clk)" ];
	"running" -> "enqueued" [ label = "sched_switch_preempt;reset(clk)" ];
	{ rank = min ;
		"__init_dequeued";
		"dequeued";
	}
}
```
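In this hybrid-automaton .dot dialect, an edge label such as `sched_wakeup;reset(clk)` packs the event together with optional constraint/reset actions separated by `;`. A simplified illustration of that split (not the actual rvgen parser, which lives in rvgen/automata.py):

```python
# Simplified sketch of how a hybrid-automaton edge label decomposes into
# event + constraint parts. Illustrative only; rvgen's real parsing differs.
import re

RESET = re.compile(r"^reset\((?P<env>\w+)\)$")

def split_label(label: str):
    # first ";"-separated field is the event, the rest are constraints/resets
    event, *constraints = label.split(";")
    resets = [m["env"] for c in constraints if (m := RESET.match(c))]
    return event, constraints, resets

print(split_label("sched_wakeup;reset(clk)"))
# ('sched_wakeup', ['reset(clk)'], ['clk'])
```

Environment variables that appear in a `reset(...)` need backing storage in the generated monitor, which is why rvgen tracks them separately (`env_stored`).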
tools/verification/rvgen/__main__.py (+15 -12)
```diff
 # Documentation/trace/rv/da_monitor_synthesis.rst
 
 if __name__ == '__main__':
-    from rvgen.dot2k import dot2k
+    from rvgen.dot2k import da2k, ha2k
     from rvgen.generator import Monitor
     from rvgen.container import Container
     from rvgen.ltl2k import ltl2k
+    from rvgen.automata import AutomataError
     import argparse
     import sys
···
     monitor_parser.add_argument('-n', "--model_name", dest="model_name")
     monitor_parser.add_argument("-p", "--parent", dest="parent",
                                 required=False, help="Create a monitor nested to parent")
-    monitor_parser.add_argument('-c', "--class", dest="monitor_class",
-                                help="Monitor class, either \"da\" or \"ltl\"")
-    monitor_parser.add_argument('-s', "--spec", dest="spec", help="Monitor specification file")
-    monitor_parser.add_argument('-t', "--monitor_type", dest="monitor_type",
+    monitor_parser.add_argument('-c', "--class", dest="monitor_class", required=True,
+                                help="Monitor class, either \"da\", \"ha\" or \"ltl\"")
+    monitor_parser.add_argument('-s', "--spec", dest="spec", required=True,
+                                help="Monitor specification file")
+    monitor_parser.add_argument('-t', "--monitor_type", dest="monitor_type", required=True,
                                 help=f"Available options: {', '.join(Monitor.monitor_types.keys())}")
 
     container_parser = subparsers.add_parser("container")
···
     try:
         if params.subcmd == "monitor":
-            print("Opening and parsing the specification file %s" % params.spec)
+            print(f"Opening and parsing the specification file {params.spec}")
             if params.monitor_class == "da":
-                monitor = dot2k(params.spec, params.monitor_type, vars(params))
+                monitor = da2k(params.spec, params.monitor_type, vars(params))
+            elif params.monitor_class == "ha":
+                monitor = ha2k(params.spec, params.monitor_type, vars(params))
             elif params.monitor_class == "ltl":
                 monitor = ltl2k(params.spec, params.monitor_type, vars(params))
             else:
···
                 sys.exit(1)
         else:
             monitor = Container(vars(params))
-    except Exception as e:
-        print('Error: '+ str(e))
-        print("Sorry : :-(")
+    except AutomataError as e:
+        print(f"There was an error processing {params.spec}: {e}", file=sys.stderr)
         sys.exit(1)
 
-    print("Writing the monitor into the directory %s" % monitor.name)
+    print(f"Writing the monitor into the directory {monitor.name}")
     monitor.print_files()
     print("Almost done, checklist")
     if params.subcmd == "monitor":
-        print(" - Edit the %s/%s.c to add the instrumentation" % (monitor.name, monitor.name))
+        print(f" - Edit the {monitor.name}/{monitor.name}.c to add the instrumentation")
         print(monitor.fill_tracepoint_tooltip())
         print(monitor.fill_makefile_tooltip())
         print(monitor.fill_kconfig_tooltip())
```
tools/verification/rvgen/dot2c (-1)
```diff
 if __name__ == '__main__':
     from rvgen import dot2c
     import argparse
-    import sys
 
     parser = argparse.ArgumentParser(description='dot2c: converts a .dot file into a C structure')
     parser.add_argument('dot_file', help='The dot file to be converted')
```
tools/verification/rvgen/rvgen/automata.py (+227 -65)
```diff
 #
 # Copyright (C) 2019-2022 Red Hat, Inc. Daniel Bristot de Oliveira <bristot@kernel.org>
 #
-# Automata object: parse an automata in dot file digraph format into a python object
+# Automata class: parse an automaton in dot file digraph format into a python object
 #
 # For further information, see:
 # Documentation/trace/rv/deterministic_automata.rst
 
 import ntpath
+import re
+from typing import Iterator
+from itertools import islice
+
+class _ConstraintKey:
+    """Base class for constraint keys."""
+
+class _StateConstraintKey(_ConstraintKey, int):
+    """Key for a state constraint. Under the hood just state_id."""
+    def __new__(cls, state_id: int):
+        return super().__new__(cls, state_id)
+
+class _EventConstraintKey(_ConstraintKey, tuple):
+    """Key for an event constraint. Under the hood just tuple(state_id,event_id)."""
+    def __new__(cls, state_id: int, event_id: int):
+        return super().__new__(cls, (state_id, event_id))
+
+class AutomataError(Exception):
+    """Exception raised for errors in automata parsing and validation.
+
+    Raised when DOT file processing fails due to invalid format, I/O errors,
+    or malformed automaton definitions.
+    """
 
 class Automata:
-    """Automata class: Reads a dot file and part it as an automata.
+    """Automata class: Reads a dot file and parses it as an automaton.
+
+    It supports both deterministic and hybrid automata.
 
     Attributes:
         dot_file: A dot file with an state_automaton definition.
     """
 
     invalid_state_str = "INVALID_STATE"
+    init_marker = "__init_"
+    node_marker = "{node"
+    # val can be numerical, uppercase (constant or macro), lowercase (parameter or function)
+    # only numerical values should have units
+    constraint_rule = re.compile(r"""
+        ^
+        (?P<env>[a-zA-Z_][a-zA-Z0-9_]+)  # C-like identifier for the env var
+        (?P<op>[!<=>]{1,2})              # operator
+        (?P<val>
+            [0-9]+ |                     # numerical value
+            [A-Z_]+\(\) |                # macro
+            [A-Z_]+ |                    # constant
+            [a-z_]+\(\) |                # function
+            [a-z_]+                      # parameter
+        )
+        (?P<unit>[a-z]{1,2})?            # optional unit for numerical values
+        """, re.VERBOSE)
+    constraint_reset = re.compile(r"^reset\((?P<env>[a-zA-Z_][a-zA-Z0-9_]+)\)")
 
     def __init__(self, file_path, model_name=None):
         self.__dot_path = file_path
         self.name = model_name or self.__get_model_name()
         self.__dot_lines = self.__open_dot()
         self.states, self.initial_state, self.final_states = self.__get_state_variables()
-        self.events = self.__get_event_variables()
-        self.function = self.__create_matrix()
+        self.env_types = {}
+        self.env_stored = set()
+        self.constraint_vars = set()
+        self.self_loop_reset_events = set()
+        self.events, self.envs = self.__get_event_variables()
+        self.function, self.constraints = self.__create_matrix()
         self.events_start, self.events_start_run = self.__store_init_events()
+        self.env_stored = sorted(self.env_stored)
+        self.constraint_vars = sorted(self.constraint_vars)
+        self.self_loop_reset_events = sorted(self.self_loop_reset_events)
 
     def __get_model_name(self) -> str:
         basename = ntpath.basename(self.__dot_path)
         if not basename.endswith(".dot") and not basename.endswith(".gv"):
             print("not a dot file")
-            raise Exception("not a dot file: %s" % self.__dot_path)
+            raise AutomataError(f"not a dot file: {self.__dot_path}")
 
         model_name = ntpath.splitext(basename)[0]
-        if model_name.__len__() == 0:
-            raise Exception("not a dot file: %s" % self.__dot_path)
+        if not model_name:
+            raise AutomataError(f"not a dot file: {self.__dot_path}")
 
         return model_name
 
     def __open_dot(self) -> list[str]:
-        cursor = 0
         dot_lines = []
         try:
-            dot_file = open(self.__dot_path)
-        except:
-            raise Exception("Cannot open the file: %s" % self.__dot_path)
+            with open(self.__dot_path) as dot_file:
+                dot_lines = dot_file.readlines()
+        except OSError as exc:
+            raise AutomataError(exc.strerror) from exc
 
-        dot_lines = dot_file.read().splitlines()
-        dot_file.close()
+        if not dot_lines:
+            raise AutomataError(f"{self.__dot_path} is empty")
 
         # checking the first line:
-        line = dot_lines[cursor].split()
+        line = dot_lines[0].split()
 
-        if (line[0] != "digraph") and (line[1] != "state_automaton"):
-            raise Exception("Not a valid .dot format: %s" % self.__dot_path)
-        else:
-            cursor += 1
+        if len(line) < 2 or line[0] != "digraph" or line[1] != "state_automaton":
+            raise AutomataError(f"Not a valid .dot format: {self.__dot_path}")
+
         return dot_lines
 
     def __get_cursor_begin_states(self) -> int:
-        cursor = 0
-        while self.__dot_lines[cursor].split()[0] != "{node":
-            cursor += 1
-        return cursor
+        for cursor, line in enumerate(self.__dot_lines):
+            split_line = line.split()
+
+            if len(split_line) and split_line[0] == self.node_marker:
+                return cursor
+
+        raise AutomataError("Could not find a beginning state")
 
     def __get_cursor_begin_events(self) -> int:
-        cursor = 0
-        while self.__dot_lines[cursor].split()[0] != "{node":
-            cursor += 1
-        while self.__dot_lines[cursor].split()[0] == "{node":
-            cursor += 1
-        # skip initial state transition
-        cursor += 1
+        state = 0
+        cursor = 0  # make pyright happy
+
+        for cursor, line in enumerate(self.__dot_lines):
+            line = line.split()
+            if not line:
+                continue
+
+            if state == 0:
+                if line[0] == self.node_marker:
+                    state = 1
+            elif line[0] != self.node_marker:
+                break
+        else:
+            raise AutomataError("Could not find beginning event")
+
+        cursor += 1  # skip initial state transition
+        if cursor == len(self.__dot_lines):
+            raise AutomataError("Dot file ended after event beginning")
+
         return cursor
 
     def __get_state_variables(self) -> tuple[list[str], str, list[str]]:
         # wait for node declaration
         states = []
         final_states = []
+        initial_state = ""
 
         has_final_states = False
         cursor = self.__get_cursor_begin_states()
 
         # process nodes
-        while self.__dot_lines[cursor].split()[0] == "{node":
-            line = self.__dot_lines[cursor].split()
-            raw_state = line[-1]
+        for line in islice(self.__dot_lines, cursor, None):
+            split_line = line.split()
+            if not split_line or split_line[0] != self.node_marker:
+                break
+
+            raw_state = split_line[-1]
 
             # "enabled_fired"}; -> enabled_fired
-            state = raw_state.replace('"', '').replace('};', '').replace(',','_')
-            if state[0:7] == "__init_":
-                initial_state = state[7:]
+            state = raw_state.replace('"', '').replace('};', '').replace(',', '_')
+            if state.startswith(self.init_marker):
+                initial_state = state[len(self.init_marker):]
             else:
                 states.append(state)
-                if "doublecircle" in self.__dot_lines[cursor]:
+                if "doublecircle" in line:
                     final_states.append(state)
                     has_final_states = True
 
-                if "ellipse" in self.__dot_lines[cursor]:
+                if "ellipse" in line:
                     final_states.append(state)
                     has_final_states = True
 
-            cursor += 1
+        if not initial_state:
+            raise AutomataError("The automaton doesn't have an initial state")
 
         states = sorted(set(states))
         states.remove(initial_state)
 
-        # Insert the initial state at the bein og the states
+        # Insert the initial state at the beginning of the states
         states.insert(0, initial_state)
 
         if not has_final_states:
···
         return states, initial_state, final_states
 
-    def __get_event_variables(self) -> list[str]:
+    def __get_event_variables(self) -> tuple[list[str], list[str]]:
+        events: list[str] = []
+        envs: list[str] = []
         # here we are at the begin of transitions, take a note, we will return later.
         cursor = self.__get_cursor_begin_events()
 
-        events = []
-        while self.__dot_lines[cursor].lstrip()[0] == '"':
+        for line in map(str.lstrip, islice(self.__dot_lines, cursor, None)):
+            if not line.startswith('"'):
+                break
+
             # transitions have the format:
             # "all_fired" -> "both_fired" [ label = "disable_irq" ];
             # ------------ event is here ------------^^^^^
-            if self.__dot_lines[cursor].split()[1] == "->":
-                line = self.__dot_lines[cursor].split()
-                event = line[-2].replace('"','')
+            split_line = line.split()
+            if len(split_line) > 1 and split_line[1] == "->":
+                event = "".join(split_line[split_line.index("label") + 2:-1]).replace('"', '')
 
-                # when a transition has more than one lables, they are like this
+                # when a transition has more than one label, they are like this
                 # "local_irq_enable\nhw_local_irq_enable_n"
                 # so split them.
 
-                event = event.replace("\\n", " ")
-                for i in event.split():
-                    events.append(i)
-            cursor += 1
+                for i in event.split("\\n"):
+                    # if the event contains a constraint (hybrid automata),
+                    # it will be separated by a ";":
+                    # "sched_switch;x<1000;reset(x)"
+                    ev, *constr = i.split(";")
+                    if constr:
+                        if len(constr) > 2:
+                            raise AutomataError("Only 1 constraint and 1 reset are supported")
+                        envs += self.__extract_env_var(constr)
+                    events.append(ev)
+            else:
+                # state labels have the format:
+                # "enable_fired" [label = "enable_fired\ncondition"];
+                # ----- label is here -----^^^^^
+                # label and node name must be the same, condition is optional
+                state = line.split("label")[1].split('"')[1]
+                _, *constr = state.split("\\n")
+                if constr:
+                    if len(constr) > 1:
+                        raise AutomataError("Only 1 constraint is supported in the state")
+                    envs += self.__extract_env_var([constr[0].replace(" ", "")])
 
-        return sorted(set(events))
+        return sorted(set(events)), sorted(set(envs))
 
-    def __create_matrix(self) -> list[list[str]]:
+    def _split_constraint_expr(self, constr: list[str]) -> Iterator[tuple[str, str | None]]:
+        """
+        Get a list of strings of the type constr1 && constr2 and returns a list of
+        constraints and separators: [[constr1,"&&"],[constr2,None]]
+        """
+        exprs = []
+        seps = []
+        for c in constr:
+            while "&&" in c or "||" in c:
+                a = c.find("&&")
+                o = c.find("||")
+                pos = a if o < 0 or 0 < a < o else o
+                exprs.append(c[:pos].replace(" ", ""))
+                seps.append(c[pos:pos + 2].replace(" ", ""))
+                c = c[pos + 2:].replace(" ", "")
+            exprs.append(c)
+            seps.append(None)
+        return zip(exprs, seps)
+
+    def __extract_env_var(self, constraint: list[str]) -> list[str]:
+        env = []
+        for c, _ in self._split_constraint_expr(constraint):
+            rule = self.constraint_rule.search(c)
+            reset = self.constraint_reset.search(c)
+            if rule:
+                env.append(rule["env"])
+                if rule["unit"]:
+                    self.env_types[rule["env"]] = rule["unit"]
+                if rule["val"][0].isalpha():
+                    self.constraint_vars.add(rule["val"])
+                    # try to infer unit from constants or parameters
+                    val_for_unit = rule["val"].lower().replace("()", "")
+                    if val_for_unit.endswith("_ns"):
+                        self.env_types[rule["env"]] = "ns"
+                    if val_for_unit.endswith("_jiffies"):
+                        self.env_types[rule["env"]] = "j"
+            if reset:
+                env.append(reset["env"])
+                # environment variables that are reset need a storage
+                self.env_stored.add(reset["env"])
+        return env
+
+    def __create_matrix(self) -> tuple[list[list[str]], dict[_ConstraintKey, list[str]]]:
         # transform the array into a dictionary
         events = self.events
         states = self.states
···
             nr_state += 1
 
         # declare the matrix....
-        matrix = [[ self.invalid_state_str for x in range(nr_event)] for y in range(nr_state)]
+        matrix = [[self.invalid_state_str for _ in range(nr_event)] for _ in range(nr_state)]
+        constraints: dict[_ConstraintKey, list[str]] = {}
 
         # and we are back! Let's fill the matrix
         cursor = self.__get_cursor_begin_events()
 
-        while self.__dot_lines[cursor].lstrip()[0] == '"':
-            if self.__dot_lines[cursor].split()[1] == "->":
-                line = self.__dot_lines[cursor].split()
-                origin_state = line[0].replace('"','').replace(',','_')
-                dest_state = line[2].replace('"','').replace(',','_')
-                possible_events = line[-2].replace('"','').replace("\\n", " ")
-                for event in possible_events.split():
-                    matrix[states_dict[origin_state]][events_dict[event]] = dest_state
-            cursor += 1
+        for line in map(str.lstrip, islice(self.__dot_lines, cursor, None)):
 
-        return matrix
+            if not line or line[0] != '"':
+                break
+
+            split_line = line.split()
+
+            if len(split_line) > 2 and split_line[1] == "->":
+                origin_state = split_line[0].replace('"', '').replace(',', '_')
+                dest_state = split_line[2].replace('"', '').replace(',', '_')
+                possible_events = "".join(split_line[split_line.index("label") + 2:-1]).replace('"', '')
+                for event in possible_events.split("\\n"):
+                    event, *constr = event.split(";")
+                    if constr:
+                        key = _EventConstraintKey(states_dict[origin_state], events_dict[event])
+                        constraints[key] = constr
+                        # those events reset also on self loops
+                        if origin_state == dest_state and "reset" in "".join(constr):
+                            self.self_loop_reset_events.add(event)
+                    matrix[states_dict[origin_state]][events_dict[event]] = dest_state
+            else:
+                state = line.split("label")[1].split('"')[1]
+                state, *constr = state.replace(" ", "").split("\\n")
+                if constr:
+                    constraints[_StateConstraintKey(states_dict[state])] = constr
+
+        return matrix, constraints
 
     def __store_init_events(self) -> tuple[list[bool], list[bool]]:
         events_start = [False] * len(self.events)
         events_start_run = [False] * len(self.events)
-        for i, _ in enumerate(self.events):
+        for i in range(len(self.events)):
             curr_event_will_init = 0
             curr_event_from_init = False
             curr_event_used = 0
-            for j, _ in enumerate(self.states):
+            for j in range(len(self.states)):
                 if self.function[j][i] != self.invalid_state_str:
                     curr_event_used += 1
                 if self.function[j][i] == self.initial_state:
···
         if any(self.events_start):
             return False
         return self.events_start_run[self.events.index(event)]
+
+    def is_hybrid_automata(self) -> bool:
+        return bool(self.envs)
+
+    def is_event_constraint(self, key: _ConstraintKey) -> bool:
+        """
+        Given the key in self.constraints return true if it is an event
+        constraint, false if it is a state constraint
+        """
+        return isinstance(key, _EventConstraintKey)
```
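The `constraint_rule` grammar above can be exercised stand-alone; this sketch copies the pattern verbatim and shows how expressions like `clk < threshold_jiffies` (spaces stripped first, as the parser expects) decompose into env/op/val/unit:

```python
# Stand-alone exercise of the constraint grammar used by rvgen's Automata
# class (pattern copied from constraint_rule above; the surrounding class
# machinery is omitted).
import re

constraint_rule = re.compile(r"""
    ^
    (?P<env>[a-zA-Z_][a-zA-Z0-9_]+)  # C-like identifier for the env var
    (?P<op>[!<=>]{1,2})              # operator
    (?P<val>
        [0-9]+ |                     # numerical value
        [A-Z_]+\(\) |                # macro
        [A-Z_]+ |                    # constant
        [a-z_]+\(\) |                # function
        [a-z_]+                      # parameter
    )
    (?P<unit>[a-z]{1,2})?            # optional unit for numerical values
    """, re.VERBOSE)

for expr in ("clk<threshold_jiffies", "clk<DEADLINE_NS()", "clk<1000j"):
    m = constraint_rule.search(expr)
    print(m["env"], m["op"], m["val"], m["unit"])
```

Note that a unit suffix is only picked up on numerical values (`1000j`); symbolic values like `threshold_jiffies` carry no unit group, which is why `__extract_env_var` additionally infers units from `_ns`/`_jiffies` name suffixes.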
tools/verification/rvgen/rvgen/dot2c.py (+76 -29)
```diff
 #
 # Copyright (C) 2019-2022 Red Hat, Inc. Daniel Bristot de Oliveira <bristot@kernel.org>
 #
-# dot2c: parse an automata in dot file digraph format into a C
+# dot2c: parse an automaton in dot file digraph format into a C
 #
 # This program was written in the development of this paper:
 # de Oliveira, D. B. and Cucinotta, T. and de Oliveira, R. S.
···
 # For further information, see:
 # Documentation/trace/rv/deterministic_automata.rst
 
-from .automata import Automata
+from .automata import Automata, AutomataError
 
 class Dot2c(Automata):
     enum_suffix = ""
     enum_states_def = "states"
     enum_events_def = "events"
+    enum_envs_def = "envs"
     struct_automaton_def = "automaton"
     var_automaton_def = "aut"
···
     def __get_enum_states_content(self) -> list[str]:
         buff = []
-        buff.append("\t%s%s," % (self.initial_state, self.enum_suffix))
+        buff.append(f"\t{self.initial_state}{self.enum_suffix},")
         for state in self.states:
             if state != self.initial_state:
-                buff.append("\t%s%s," % (state, self.enum_suffix))
-        buff.append("\tstate_max%s," % (self.enum_suffix))
+                buff.append(f"\t{state}{self.enum_suffix},")
+        buff.append(f"\tstate_max{self.enum_suffix},")
 
         return buff
 
     def format_states_enum(self) -> list[str]:
         buff = []
-        buff.append("enum %s {" % self.enum_states_def)
+        buff.append(f"enum {self.enum_states_def} {{")
         buff += self.__get_enum_states_content()
         buff.append("};\n")
 
         return buff
 
     def __get_enum_events_content(self) -> list[str]:
         buff = []
         for event in self.events:
-            buff.append("\t%s%s," % (event, self.enum_suffix))
+            buff.append(f"\t{event}{self.enum_suffix},")
 
-        buff.append("\tevent_max%s," % self.enum_suffix)
+        buff.append(f"\tevent_max{self.enum_suffix},")
 
         return buff
 
     def format_events_enum(self) -> list[str]:
         buff = []
-        buff.append("enum %s {" % self.enum_events_def)
+        buff.append(f"enum {self.enum_events_def} {{")
         buff += self.__get_enum_events_content()
         buff.append("};\n")
 
         return buff
 
+    def __get_non_stored_envs(self) -> list[str]:
+        return [e for e in self.envs if e not in self.env_stored]
+
+    def __get_enum_envs_content(self) -> list[str]:
+        buff = []
+        # We first place env variables that have a u64 storage.
+        # Those are limited by MAX_HA_ENV_LEN, other variables
+        # are read only and don't require a storage.
+        unstored = self.__get_non_stored_envs()
+        for env in list(self.env_stored) + unstored:
+            buff.append(f"\t{env}{self.enum_suffix},")
+
+        buff.append(f"\tenv_max{self.enum_suffix},")
+        max_stored = unstored[0] if len(unstored) else "env_max"
+        buff.append(f"\tenv_max_stored{self.enum_suffix} = {max_stored}{self.enum_suffix},")
+
+        return buff
+
+    def format_envs_enum(self) -> list[str]:
+        buff = []
+        if self.is_hybrid_automata():
+            buff.append(f"enum {self.enum_envs_def} {{")
+            buff += self.__get_enum_envs_content()
+            buff.append("};\n")
+            buff.append(f"_Static_assert(env_max_stored{self.enum_suffix} <= MAX_HA_ENV_LEN,"
+                        ' "Not enough slots");')
+            if {"ns", "us", "ms", "s"}.intersection(self.env_types.values()):
+                buff.append("#define HA_CLK_NS")
+            buff.append("")
+        return buff
+
     def get_minimun_type(self) -> str:
         min_type = "unsigned char"
 
-        if self.states.__len__() > 255:
+        if len(self.states) > 255:
             min_type = "unsigned short"
 
-        if self.states.__len__() > 65535:
+        if len(self.states) > 65535:
             min_type = "unsigned int"
 
-        if self.states.__len__() > 1000000:
-            raise Exception("Too many states: %d" % self.states.__len__())
+        if len(self.states) > 1000000:
+            raise AutomataError(f"Too many states: {len(self.states)}")
 
         return min_type
 
     def format_automaton_definition(self) -> list[str]:
         min_type = self.get_minimun_type()
         buff = []
-        buff.append("struct %s {" % self.struct_automaton_def)
-        buff.append("\tchar *state_names[state_max%s];" % (self.enum_suffix))
-        buff.append("\tchar *event_names[event_max%s];" % (self.enum_suffix))
-        buff.append("\t%s function[state_max%s][event_max%s];" % (min_type, self.enum_suffix, self.enum_suffix))
-        buff.append("\t%s initial_state;" % min_type)
-        buff.append("\tbool final_states[state_max%s];" % (self.enum_suffix))
+        buff.append(f"struct {self.struct_automaton_def} {{")
+        buff.append(f"\tchar *state_names[state_max{self.enum_suffix}];")
+        buff.append(f"\tchar *event_names[event_max{self.enum_suffix}];")
+        if self.is_hybrid_automata():
+            buff.append(f"\tchar *env_names[env_max{self.enum_suffix}];")
+        buff.append(f"\t{min_type} function[state_max{self.enum_suffix}][event_max{self.enum_suffix}];")
+        buff.append(f"\t{min_type} initial_state;")
+        buff.append(f"\tbool final_states[state_max{self.enum_suffix}];")
         buff.append("};\n")
         return buff
 
     def format_aut_init_header(self) -> list[str]:
         buff = []
-        buff.append("static const struct %s %s = {" % (self.struct_automaton_def, self.var_automaton_def))
+        buff.append(f"static const struct {self.struct_automaton_def} {self.var_automaton_def} = {{")
         return buff
 
     def __get_string_vector_per_line_content(self, entries: list[str]) -> str:
···
 
         return buff
 
+    def format_aut_init_envs_string(self) -> list[str]:
+        buff = []
+        if self.is_hybrid_automata():
+            buff.append("\t.env_names = {")
+            # maintain consistent order with the enum
+            ordered_envs = list(self.env_stored) + self.__get_non_stored_envs()
+            buff.append(self.__get_string_vector_per_line_content(ordered_envs))
+            buff.append("\t},")
+
+        return buff
+
     def
```
(The diff view is truncated here, mid-method.)
__get_max_strlen_of_states(self) -> int: 151 - max_state_name = max(self.states, key = len).__len__() 152 - return max(max_state_name, self.invalid_state_str.__len__()) 128 + max_state_name = len(max(self.states, key=len)) 129 + return max(max_state_name, len(self.invalid_state_str)) 153 130 154 131 def get_aut_init_function(self) -> str: 155 - nr_states = self.states.__len__() 156 - nr_events = self.events.__len__() 132 + nr_states = len(self.states) 133 + nr_events = len(self.events) 157 134 buff = [] 158 135 159 136 maxlen = self.__get_max_strlen_of_states() + len(self.enum_suffix) ··· 179 134 next_state = self.function[x][y] + self.enum_suffix 180 135 181 136 if linetoolong: 182 - line += "\t\t\t%s" % next_state 137 + line += f"\t\t\t{next_state}" 183 138 else: 184 - line += "%*s" % (maxlen, next_state) 185 - if y != nr_events-1: 139 + line += f"{next_state:>{maxlen}}" 140 + if y != nr_events - 1: 186 141 line += ",\n" if linetoolong else ", " 187 142 else: 188 143 line += ",\n\t\t}," if linetoolong else " }," ··· 225 180 226 181 def format_aut_init_final_states(self) -> list[str]: 227 182 buff = [] 228 - buff.append("\t.final_states = { %s }," % self.get_aut_init_final_states()) 183 + buff.append(f"\t.final_states = {{ {self.get_aut_init_final_states()} }},") 229 184 230 185 return buff 231 186 ··· 241 196 242 197 def format_invalid_state(self) -> list[str]: 243 198 buff = [] 244 - buff.append("#define %s state_max%s\n" % (self.invalid_state_str, self.enum_suffix)) 199 + buff.append(f"#define {self.invalid_state_str} state_max{self.enum_suffix}\n") 245 200 246 201 return buff 247 202 ··· 250 205 buff += self.format_states_enum() 251 206 buff += self.format_invalid_state() 252 207 buff += self.format_events_enum() 208 + buff += self.format_envs_enum() 253 209 buff += self.format_automaton_definition() 254 210 buff += self.format_aut_init_header() 255 211 buff += self.format_aut_init_states_string() 256 212 buff += self.format_aut_init_events_string() 213 + buff 
+= self.format_aut_init_envs_string() 257 214 buff += self.format_aut_init_function() 258 215 buff += self.format_aut_init_initial_state() 259 216 buff += self.format_aut_init_final_states()
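The dot2c changes above modernize the enum/struct emission to f-strings without changing the generated C. As a minimal standalone sketch of the state-enum technique (initial state first, remaining states next, a `state_max` sentinel last), assuming a toy state list and a hypothetical `_wip` suffix rather than the real `Dot2c` class:

```python
# Illustrative sketch of the state-enum emission technique used by Dot2c:
# the initial state is listed first, the remaining states follow, and a
# "state_max" sentinel closes the enum. Names here are made up; the real
# implementation lives in tools/verification/rvgen/rvgen/dot2c.py.
def format_states_enum(initial_state: str, states: list[str],
                       enum_suffix: str = "_wip") -> str:
    buff = [f"enum states{enum_suffix} {{"]
    buff.append(f"\t{initial_state}{enum_suffix},")
    for state in states:
        if state != initial_state:
            buff.append(f"\t{state}{enum_suffix},")
    buff.append(f"\tstate_max{enum_suffix},")
    buff.append("};")
    return "\n".join(buff)

print(format_states_enum("preemptive", ["preemptive", "non_preemptive"]))
```

Keeping the initial state first means the generated C can use enum value 0 as the automaton's starting point without a separate lookup.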
+502 -22
tools/verification/rvgen/rvgen/dot2k.py
··· 8 8 # For further information, see: 9 9 # Documentation/trace/rv/da_monitor_synthesis.rst 10 10 11 + from collections import deque 11 12 from .dot2c import Dot2c 12 13 from .generator import Monitor 14 + from .automata import _EventConstraintKey, _StateConstraintKey, AutomataError 13 15 14 16 15 17 class dot2k(Monitor, Dot2c): ··· 21 19 self.monitor_type = MonitorType 22 20 Monitor.__init__(self, extra_params) 23 21 Dot2c.__init__(self, file_path, extra_params.get("model_name")) 24 - self.enum_suffix = "_%s" % self.name 22 + self.enum_suffix = f"_{self.name}" 24 + self.monitor_class = extra_params["monitor_class"] 25 25 26 26 def fill_monitor_type(self) -> str: 27 - return self.monitor_type.upper() 27 + buff = [ self.monitor_type.upper() ] 28 + buff += self._fill_timer_type() 29 + if self.monitor_type == "per_obj": 30 + buff.append("typedef /* XXX: define the target type */ *monitor_target;") 31 + return "\n".join(buff) 28 32 29 33 def fill_tracepoint_handlers_skel(self) -> str: 30 34 buff = [] 35 + buff += self._fill_hybrid_definitions() 31 36 for event in self.events: 32 - buff.append("static void handle_%s(void *data, /* XXX: fill header */)" % event) 37 + buff.append(f"static void handle_{event}(void *data, /* XXX: fill header */)") 33 38 buff.append("{") 34 39 handle = "handle_event" 35 40 if self.is_start_event(event): ··· 46 37 buff.append("\t/* XXX: validate that this event is only valid in the initial state */") 47 38 handle = "handle_start_run_event" 48 39 if self.monitor_type == "per_task": 49 - buff.append("\tstruct task_struct *p = /* XXX: how do I get p? */;"); 50 - buff.append("\tda_%s(p, %s%s);" % (handle, event, self.enum_suffix)); 40 + buff.append("\tstruct task_struct *p = /* XXX: how do I get p? */;") 41 + buff.append(f"\tda_{handle}(p, {event}{self.enum_suffix});") 42 + elif self.monitor_type == "per_obj": 43 + buff.append("\tint id = /* XXX: how do I get the id? 
*/;") 44 + buff.append("\tmonitor_target t = /* XXX: how do I get t? */;") 45 + buff.append(f"\tda_{handle}(id, t, {event}{self.enum_suffix});") 51 46 else: 52 - buff.append("\tda_%s(%s%s);" % (handle, event, self.enum_suffix)); 47 + buff.append(f"\tda_{handle}({event}{self.enum_suffix});") 53 48 buff.append("}") 54 49 buff.append("") 55 50 return '\n'.join(buff) ··· 61 48 def fill_tracepoint_attach_probe(self) -> str: 62 49 buff = [] 63 50 for event in self.events: 64 - buff.append("\trv_attach_trace_probe(\"%s\", /* XXX: tracepoint */, handle_%s);" % (self.name, event)) 51 + buff.append(f"\trv_attach_trace_probe(\"{self.name}\", /* XXX: tracepoint */, handle_{event});") 65 52 return '\n'.join(buff) 66 53 67 54 def fill_tracepoint_detach_helper(self) -> str: 68 55 buff = [] 69 56 for event in self.events: 70 - buff.append("\trv_detach_trace_probe(\"%s\", /* XXX: tracepoint */, handle_%s);" % (self.name, event)) 57 + buff.append(f"\trv_detach_trace_probe(\"{self.name}\", /* XXX: tracepoint */, handle_{event});") 71 58 return '\n'.join(buff) 72 59 73 60 def fill_model_h_header(self) -> list[str]: 74 61 buff = [] 75 62 buff.append("/* SPDX-License-Identifier: GPL-2.0 */") 76 63 buff.append("/*") 77 - buff.append(" * Automatically generated C representation of %s automaton" % (self.name)) 64 + buff.append(f" * Automatically generated C representation of {self.name} automaton") 78 65 buff.append(" * For further information about this format, see kernel documentation:") 79 66 buff.append(" * Documentation/trace/rv/deterministic_automata.rst") 80 67 buff.append(" */") 81 68 buff.append("") 82 - buff.append("#define MONITOR_NAME %s" % (self.name)) 69 + buff.append(f"#define MONITOR_NAME {self.name}") 83 70 buff.append("") 84 71 85 72 return buff ··· 88 75 # 89 76 # Adjust the definition names 90 77 # 91 - self.enum_states_def = "states_%s" % self.name 92 - self.enum_events_def = "events_%s" % self.name 93 - self.struct_automaton_def = "automaton_%s" % self.name 94 - 
self.var_automaton_def = "automaton_%s" % self.name 78 + self.enum_states_def = f"states_{self.name}" 79 + self.enum_events_def = f"events_{self.name}" 80 + self.enum_envs_def = f"envs_{self.name}" 81 + self.struct_automaton_def = f"automaton_{self.name}" 82 + self.var_automaton_def = f"automaton_{self.name}" 95 83 96 84 buff = self.fill_model_h_header() 97 85 buff += self.format_model() 98 86 99 87 return '\n'.join(buff) 100 88 89 + def _is_id_monitor(self) -> bool: 90 + return self.monitor_type in ("per_task", "per_obj") 91 + 101 92 def fill_monitor_class_type(self) -> str: 102 - if self.monitor_type == "per_task": 93 + if self._is_id_monitor(): 103 94 return "DA_MON_EVENTS_ID" 104 95 return "DA_MON_EVENTS_IMPLICIT" 105 96 106 97 def fill_monitor_class(self) -> str: 107 - if self.monitor_type == "per_task": 98 + if self._is_id_monitor(): 108 99 return "da_monitor_id" 109 100 return "da_monitor" 110 101 ··· 124 107 ("char *", "state"), 125 108 ("char *", "event"), 126 109 ] 110 + tp_args_error_env = tp_args_error + [("char *", "env")] 111 + tp_args_dict = { 112 + "event": tp_args_event, 113 + "error": tp_args_error, 114 + "error_env": tp_args_error_env 115 + } 127 116 tp_args_id = ("int ", "id") 128 - tp_args = tp_args_event if tp_type == "event" else tp_args_error 129 - if self.monitor_type == "per_task": 117 + tp_args = tp_args_dict[tp_type] 118 + if self._is_id_monitor(): 130 119 tp_args.insert(0, tp_args_id) 131 - tp_proto_c = ", ".join([a+b for a,b in tp_args]) 132 - tp_args_c = ", ".join([b for a,b in tp_args]) 133 - buff.append(" TP_PROTO(%s)," % tp_proto_c) 134 - buff.append(" TP_ARGS(%s)" % tp_args_c) 120 + tp_proto_c = ", ".join([a + b for a, b in tp_args]) 121 + tp_args_c = ", ".join([b for a, b in tp_args]) 122 + buff.append(f" TP_PROTO({tp_proto_c}),") 123 + buff.append(f" TP_ARGS({tp_args_c})") 135 124 return '\n'.join(buff) 125 + 126 + def _fill_hybrid_definitions(self) -> list: 127 + """Stub, not valid for deterministic automata""" 128 + return [] 
129 + 130 + def _fill_timer_type(self) -> list: 131 + """Stub, not valid for deterministic automata""" 132 + return [] 136 133 137 134 def fill_main_c(self) -> str: 138 135 main_c = super().fill_main_c() ··· 158 127 main_c = main_c.replace("%%MIN_TYPE%%", min_type) 159 128 main_c = main_c.replace("%%NR_EVENTS%%", str(nr_events)) 160 129 main_c = main_c.replace("%%MONITOR_TYPE%%", monitor_type) 130 + main_c = main_c.replace("%%MONITOR_CLASS%%", self.monitor_class) 161 131 162 132 return main_c 133 + 134 + class da2k(dot2k): 135 + """Deterministic automata only""" 136 + def __init__(self, *args, **kwargs): 137 + super().__init__(*args, **kwargs) 138 + if self.is_hybrid_automata(): 139 + raise AutomataError("Detected hybrid automaton, use the 'ha' class") 140 + 141 + class ha2k(dot2k): 142 + """Hybrid automata only""" 143 + def __init__(self, *args, **kwargs): 144 + super().__init__(*args, **kwargs) 145 + if not self.is_hybrid_automata(): 146 + raise AutomataError("Detected deterministic automaton, use the 'da' class") 147 + self.trace_h = self._read_template_file("trace_hybrid.h") 148 + self.__parse_constraints() 149 + 150 + def fill_monitor_class_type(self) -> str: 151 + if self._is_id_monitor(): 152 + return "HA_MON_EVENTS_ID" 153 + return "HA_MON_EVENTS_IMPLICIT" 154 + 155 + def fill_monitor_class(self) -> str: 156 + """ 157 + Used for tracepoint classes, since they are shared we keep da 158 + instead of ha (also for the ha specific tracepoints). 159 + The tracepoint class is not visible to the tools. 
160 + """ 161 + return super().fill_monitor_class() 162 + 163 + def __adjust_value(self, value: str | int, unit: str | None) -> str: 164 + """Adjust the value in ns""" 165 + try: 166 + value = int(value) 167 + except ValueError: 168 + # it's a constant, a parameter or a function 169 + if value.endswith("()"): 170 + return value.replace("()", "(ha_mon)") 171 + return value 172 + match unit: 173 + case "us": 174 + value *= 10**3 175 + case "ms": 176 + value *= 10**6 177 + case "s": 178 + value *= 10**9 179 + return str(value) + "ull" 180 + 181 + def __parse_single_constraint(self, rule: dict, value: str) -> str: 182 + return f"ha_get_env(ha_mon, {rule["env"]}{self.enum_suffix}, time_ns) {rule["op"]} {value}" 183 + 184 + def __get_constraint_env(self, constr: str) -> str: 185 + """Extract the second argument from an ha_ function""" 186 + env = constr.split("(")[1].split()[1].rstrip(")").rstrip(",") 187 + assert env.rstrip(f"_{self.name}") in self.envs 188 + return env 189 + 190 + def __start_to_invariant_check(self, constr: str) -> str: 191 + # by default assume the timer has ns expiration 192 + env = self.__get_constraint_env(constr) 193 + clock_type = "ns" 194 + if self.env_types.get(env.rstrip(f"_{self.name}")) == "j": 195 + clock_type = "jiffy" 196 + 197 + return f"return ha_check_invariant_{clock_type}(ha_mon, {env}, time_ns)" 198 + 199 + def __start_to_conv(self, constr: str) -> str: 200 + """ 201 + Undo the storage conversion done by ha_start_timer_ 202 + """ 203 + return "ha_inv_to_guard" + constr[constr.find("("):] 204 + 205 + def __parse_timer_constraint(self, rule: dict, value: str) -> str: 206 + # by default assume the timer has ns expiration 207 + clock_type = "ns" 208 + if self.env_types.get(rule["env"]) == "j": 209 + clock_type = "jiffy" 210 + 211 + return (f"ha_start_timer_{clock_type}(ha_mon, {rule["env"]}{self.enum_suffix}," 212 + f" {value}, time_ns)") 213 + 214 + def __format_guard_rules(self, rules: list[str]) -> list[str]: 215 + """ 216 + Merge 
guard constraints into a single C return statement. 217 + If the rules include a stored env, also check its validity. 218 + Break lines in a best-effort way that tries to keep readability. 219 + """ 220 + if not rules: 221 + return [] 222 + 223 + invalid_checks = [f"ha_monitor_env_invalid(ha_mon, {env}{self.enum_suffix}) ||" 224 + for env in self.env_stored if any(env in rule for rule in rules)] 225 + if invalid_checks and len(rules) > 1: 226 + rules[0] = "(" + rules[0] 227 + rules[-1] = rules[-1] + ")" 228 + rules = invalid_checks + rules 229 + 230 + separator = "\n\t\t " if sum(len(r) for r in rules) > 80 else " " 231 + return ["res = " + separator.join(rules)] 232 + 233 + def __validate_constraint(self, key: tuple[int, int] | int, constr: str, 234 + rule, reset) -> None: 235 + # event constraints are tuples and allow both rules and reset 236 + # state constraints are only used for expirations (e.g. clk<N) 237 + if self.is_event_constraint(key): 238 + if not rule and not reset: 239 + raise AutomataError("Unrecognised event constraint " 240 + f"({self.states[key[0]]}/{self.events[key[1]]}: {constr})") 241 + if rule and (rule["env"] in self.env_types and 242 + rule["env"] not in self.env_stored): 243 + raise AutomataError("Clocks in hybrid automata always require a storage" 244 + f" ({rule["env"]})") 245 + else: 246 + if not rule: 247 + raise AutomataError("Unrecognised state constraint " 248 + f"({self.states[key]}: {constr})") 249 + if rule["env"] not in self.env_stored: 250 + raise AutomataError("State constraints always require a storage " 251 + f"({rule["env"]})") 252 + if rule["op"] not in ["<", "<="]: 253 + raise AutomataError("State constraints must be clock expirations like" 254 + f" clk<N ({rule.string})") 255 + 256 + def __parse_constraints(self) -> None: 257 + self.guards: dict[_EventConstraintKey, str] = {} 258 + self.invariants: dict[_StateConstraintKey, str] = {} 259 + for key, constraint in self.constraints.items(): 260 + rules = [] 261 + resets = []
262 + for c, sep in self._split_constraint_expr(constraint): 263 + rule = self.constraint_rule.search(c) 264 + reset = self.constraint_reset.search(c) 265 + self.__validate_constraint(key, c, rule, reset) 266 + if rule: 267 + value = rule["val"] 268 + value_len = len(rule["val"]) 269 + unit = None 270 + if rule.groupdict().get("unit"): 271 + value_len += len(rule["unit"]) 272 + unit = rule["unit"] 273 + c = c[:-(value_len)] 274 + value = self.__adjust_value(value, unit) 275 + if self.is_event_constraint(key): 276 + c = self.__parse_single_constraint(rule, value) 277 + if sep: 278 + c += f" {sep}" 279 + else: 280 + c = self.__parse_timer_constraint(rule, value) 281 + rules.append(c) 282 + if reset: 283 + c = f"ha_reset_env(ha_mon, {reset["env"]}{self.enum_suffix}, time_ns)" 284 + resets.append(c) 285 + if self.is_event_constraint(key): 286 + res = self.__format_guard_rules(rules) + resets 287 + self.guards[key] = ";".join(res) 288 + else: 289 + self.invariants[key] = rules[0] 290 + 291 + def __fill_verify_invariants_func(self) -> list[str]: 292 + buff = [] 293 + if not self.invariants: 294 + return [] 295 + 296 + buff.append( 297 + f"""static inline bool ha_verify_invariants(struct ha_monitor *ha_mon, 298 + \t\t\t\t\tenum {self.enum_states_def} curr_state, enum {self.enum_events_def} event, 299 + \t\t\t\t\tenum {self.enum_states_def} next_state, u64 time_ns) 300 + {{""") 301 + 302 + _else = "" 303 + for state, constr in sorted(self.invariants.items()): 304 + check_str = self.__start_to_invariant_check(constr) 305 + buff.append(f"\t{_else}if (curr_state == {self.states[state]}{self.enum_suffix})") 306 + buff.append(f"\t\t{check_str};") 307 + _else = "else " 308 + 309 + buff.append("\treturn true;\n}\n") 310 + return buff 311 + 312 + def __fill_convert_inv_guard_func(self) -> list[str]: 313 + buff = [] 314 + if not self.invariants: 315 + return [] 316 + 317 + conflict_guards, conflict_invs = self.__find_inv_conflicts() 318 + if not conflict_guards and not 
conflict_invs: 319 + return [] 320 + 321 + buff.append( 322 + f"""static inline void ha_convert_inv_guard(struct ha_monitor *ha_mon, 323 + \t\t\t\t\tenum {self.enum_states_def} curr_state, enum {self.enum_events_def} event, 324 + \t\t\t\t\tenum {self.enum_states_def} next_state, u64 time_ns) 325 + {{""") 326 + buff.append("\tif (curr_state == next_state)\n\t\treturn;") 327 + 328 + _else = "" 329 + for state, constr in sorted(self.invariants.items()): 330 + # a state with invariant can reach us without reset 331 + # multiple conflicts must have the same invariant, otherwise we cannot 332 + # know how to reset the value 333 + conf_i = [start for start, end in conflict_invs if end == state] 334 + # we can reach a guard without reset 335 + conf_g = [e for s, e in conflict_guards if s == state] 336 + if not conf_i and not conf_g: 337 + continue 338 + buff.append(f"\t{_else}if (curr_state == {self.states[state]}{self.enum_suffix})") 339 + 340 + buff.append(f"\t\t{self.__start_to_conv(constr)};") 341 + _else = "else " 342 + 343 + buff.append("}\n") 344 + return buff 345 + 346 + def __fill_verify_guards_func(self) -> list[str]: 347 + buff = [] 348 + if not self.guards: 349 + return [] 350 + 351 + buff.append( 352 + f"""static inline bool ha_verify_guards(struct ha_monitor *ha_mon, 353 + \t\t\t\t enum {self.enum_states_def} curr_state, enum {self.enum_events_def} event, 354 + \t\t\t\t enum {self.enum_states_def} next_state, u64 time_ns) 355 + {{ 356 + \tbool res = true; 357 + """) 358 + 359 + _else = "" 360 + for edge, constr in sorted(self.guards.items()): 361 + buff.append(f"\t{_else}if (curr_state == " 362 + f"{self.states[edge[0]]}{self.enum_suffix} && " 363 + f"event == {self.events[edge[1]]}{self.enum_suffix})") 364 + if constr.count(";") > 0: 365 + buff[-1] += " {" 366 + buff += [f"\t\t{c};" for c in constr.split(";")] 367 + if constr.count(";") > 0: 368 + _else = "} else " 369 + else: 370 + _else = "else " 371 + if _else[0] == "}": 372 + buff.append("\t}") 373 + 
buff.append("\treturn res;\n}\n") 374 + return buff 375 + 376 + def __find_inv_conflicts(self) -> tuple[set[tuple[int, _EventConstraintKey]], 377 + set[tuple[int, _StateConstraintKey]]]: 378 + """ 379 + Run a breadth first search from all states with an invariant. 380 + Find any conflicting constraints reachable from there, this can be 381 + another state with an invariant or an edge with a non-reset guard. 382 + Stop when we find a reset. 383 + 384 + Return the set of conflicting guards and invariants as tuples of 385 + conflicting state and constraint key. 386 + """ 387 + conflict_guards: set[tuple[int, _EventConstraintKey]] = set() 388 + conflict_invs: set[tuple[int, _StateConstraintKey]] = set() 389 + for start_idx in self.invariants: 390 + queue = deque([(start_idx, 0)]) # (state_idx, distance) 391 + env = self.__get_constraint_env(self.invariants[start_idx]) 392 + 393 + while queue: 394 + curr_idx, distance = queue.popleft() 395 + 396 + # Check state condition 397 + if curr_idx != start_idx and curr_idx in self.invariants: 398 + conflict_invs.add((start_idx, _StateConstraintKey(curr_idx))) 399 + continue 400 + 401 + # Check if we should stop 402 + if distance > len(self.states): 403 + break 404 + if curr_idx != start_idx and distance > 1: 405 + continue 406 + 407 + for event_idx, next_state_name in enumerate(self.function[curr_idx]): 408 + if next_state_name == self.invalid_state_str: 409 + continue 410 + curr_guard = self.guards.get((curr_idx, event_idx), "") 411 + if "reset" in curr_guard and env in curr_guard: 412 + continue 413 + 414 + if env in curr_guard: 415 + conflict_guards.add((start_idx, 416 + _EventConstraintKey(curr_idx, event_idx))) 417 + continue 418 + 419 + next_idx = self.states.index(next_state_name) 420 + queue.append((next_idx, distance + 1)) 421 + 422 + return conflict_guards, conflict_invs 423 + 424 + def __fill_setup_invariants_func(self) -> list[str]: 425 + buff = [] 426 + if not self.invariants: 427 + return [] 428 + 429 + 
buff.append( 430 + f"""static inline void ha_setup_invariants(struct ha_monitor *ha_mon, 431 + \t\t\t\t enum {self.enum_states_def} curr_state, enum {self.enum_events_def} event, 432 + \t\t\t\t enum {self.enum_states_def} next_state, u64 time_ns) 433 + {{""") 434 + 435 + conditions = ["next_state == curr_state"] 436 + conditions += [f"event != {e}{self.enum_suffix}" 437 + for e in self.self_loop_reset_events] 438 + condition_str = " && ".join(conditions) 439 + buff.append(f"\tif ({condition_str})\n\t\treturn;") 440 + 441 + _else = "" 442 + for state, constr in sorted(self.invariants.items()): 443 + buff.append(f"\t{_else}if (next_state == {self.states[state]}{self.enum_suffix})") 444 + buff.append(f"\t\t{constr};") 445 + _else = "else " 446 + 447 + for state in self.invariants: 448 + buff.append(f"\telse if (curr_state == {self.states[state]}{self.enum_suffix})") 449 + buff.append("\t\tha_cancel_timer(ha_mon);") 450 + 451 + buff.append("}\n") 452 + return buff 453 + 454 + def __fill_constr_func(self) -> list[str]: 455 + buff = [] 456 + if not self.constraints: 457 + return [] 458 + 459 + buff.append( 460 + """/* 461 + * These functions are used to validate state transitions. 462 + * 463 + * They are generated by parsing the model, there is usually no need to change them. 464 + * If the monitor requires a timer, there are functions responsible to arm it when 465 + * the next state has a constraint, cancel it in any other case and to check 466 + * that it didn't expire before the callback run. Transitions to the same state 467 + * without a reset never affect timers. 468 + * Due to the different representations between invariants and guards, there is 469 + * a function to convert it in case invariants or guards are reachable from 470 + * another invariant without reset. Those are not present if not required in 471 + * the model. This is all automatic but is worth checking because it may show 472 + * errors in the model (e.g. missing resets). 
473 + */""") 474 + 475 + buff += self.__fill_verify_invariants_func() 476 + inv_conflicts = self.__fill_convert_inv_guard_func() 477 + buff += inv_conflicts 478 + buff += self.__fill_verify_guards_func() 479 + buff += self.__fill_setup_invariants_func() 480 + 481 + buff.append( 482 + f"""static bool ha_verify_constraint(struct ha_monitor *ha_mon, 483 + \t\t\t\t enum {self.enum_states_def} curr_state, enum {self.enum_events_def} event, 484 + \t\t\t\t enum {self.enum_states_def} next_state, u64 time_ns) 485 + {{""") 486 + 487 + if self.invariants: 488 + buff.append("\tif (!ha_verify_invariants(ha_mon, curr_state, " 489 + "event, next_state, time_ns))\n\t\treturn false;\n") 490 + if inv_conflicts: 491 + buff.append("\tha_convert_inv_guard(ha_mon, curr_state, event, " 492 + "next_state, time_ns);\n") 493 + 494 + if self.guards: 495 + buff.append("\tif (!ha_verify_guards(ha_mon, curr_state, event, " 496 + "next_state, time_ns))\n\t\treturn false;\n") 497 + 498 + if self.invariants: 499 + buff.append("\tha_setup_invariants(ha_mon, curr_state, event, next_state, time_ns);\n") 500 + 501 + buff.append("\treturn true;\n}\n") 502 + return buff 503 + 504 + def __fill_env_getter(self, env: str) -> str: 505 + if env in self.env_types: 506 + match self.env_types[env]: 507 + case "ns" | "us" | "ms" | "s": 508 + return "ha_get_clk_ns(ha_mon, env, time_ns);" 509 + case "j": 510 + return "ha_get_clk_jiffy(ha_mon, env);" 511 + return f"/* XXX: how do I read {env}? */" 512 + 513 + def __fill_env_resetter(self, env: str) -> str: 514 + if env in self.env_types: 515 + match self.env_types[env]: 516 + case "ns" | "us" | "ms" | "s": 517 + return "ha_reset_clk_ns(ha_mon, env, time_ns);" 518 + case "j": 519 + return "ha_reset_clk_jiffy(ha_mon, env);" 520 + return f"/* XXX: how do I reset {env}? 
*/" 521 + 522 + def __fill_hybrid_get_reset_functions(self) -> list[str]: 523 + buff = [] 524 + if self.is_hybrid_automata(): 525 + for var in self.constraint_vars: 526 + if var.endswith("()"): 527 + func_name = var.replace("()", "") 528 + if func_name.isupper(): 529 + buff.append(f"#define {func_name}(ha_mon) " 530 + f"/* XXX: what is {func_name}(ha_mon)? */\n") 531 + else: 532 + buff.append(f"static inline u64 {func_name}(struct ha_monitor *ha_mon)\n{{") 533 + buff.append(f"\treturn /* XXX: what is {func_name}(ha_mon)? */;") 534 + buff.append("}\n") 535 + elif var.isupper(): 536 + buff.append(f"#define {var} /* XXX: what is {var}? */\n") 537 + else: 538 + buff.append(f"static u64 {var} = /* XXX: default value */;") 539 + buff.append(f"module_param({var}, ullong, 0644);\n") 540 + buff.append("""/* 541 + * These functions define how to read and reset the environment variable. 542 + * 543 + * Common environment variables like ns-based and jiffy-based clocks have 544 + * pre-defined getters and resetters you can use. The parser can infer the type 545 + * of the environment variable if you supply a measure unit in the constraint. 546 + * If you define your own functions, make sure to add appropriate memory 547 + * barriers if required. 548 + * Some environment variables don't require a storage as they read a system 549 + * state (e.g. preemption count). Those variables are never reset, so we don't 550 + * define a reset function for monitors relying only on this type of variable. 
551 + */""") 552 + buff.append("static u64 ha_get_env(struct ha_monitor *ha_mon, " 553 + f"enum envs{self.enum_suffix} env, u64 time_ns)\n{{") 554 + _else = "" 555 + for env in self.envs: 556 + buff.append(f"\t{_else}if (env == {env}{self.enum_suffix})") 557 + buff.append(f"\t\treturn {self.__fill_env_getter(env)}") 558 + _else = "else " 559 + buff.append("\treturn ENV_INVALID_VALUE;\n}\n") 560 + if len(self.env_stored): 561 + buff.append("static void ha_reset_env(struct ha_monitor *ha_mon, " 562 + f"enum envs{self.enum_suffix} env, u64 time_ns)\n{{") 563 + _else = "" 564 + for env in self.env_stored: 565 + buff.append(f"\t{_else}if (env == {env}{self.enum_suffix})") 566 + buff.append(f"\t\t{self.__fill_env_resetter(env)}") 567 + _else = "else " 568 + buff.append("}\n") 569 + return buff 570 + 571 + def _fill_hybrid_definitions(self) -> list[str]: 572 + return self.__fill_hybrid_get_reset_functions() + self.__fill_constr_func() 573 + 574 + def _fill_timer_type(self) -> list: 575 + if self.invariants: 576 + return [ 577 + "/* XXX: If the monitor has several instances, consider HA_TIMER_WHEEL */", 578 + "#define HA_TIMER_TYPE HA_TIMER_HRTIMER" 579 + ] 580 + return []
+37 -56
tools/verification/rvgen/rvgen/generator.py
··· 3 3 # 4 4 # Copyright (C) 2019-2022 Red Hat, Inc. Daniel Bristot de Oliveira <bristot@kernel.org> 5 5 # 6 - # Abtract class for generating kernel runtime verification monitors from specification file 6 + # Abstract class for generating kernel runtime verification monitors from specification file 7 7 8 8 import platform 9 9 import os ··· 40 40 if platform.system() != "Linux": 41 41 raise OSError("I can only run on Linux.") 42 42 43 - kernel_path = os.path.join("/lib/modules/%s/build" % platform.release(), self.rv_dir) 43 + kernel_path = os.path.join(f"/lib/modules/{platform.release()}/build", self.rv_dir) 44 44 45 45 # if the current kernel is from a distro this may not be a full kernel tree 46 46 # verify that one of the files we are going to modify is available ··· 51 51 raise FileNotFoundError("Could not find the rv directory, do you have the kernel source installed?") 52 52 53 53 def _read_file(self, path): 54 - try: 55 - fd = open(path, 'r') 56 - except OSError: 57 - raise Exception("Cannot open the file: %s" % path) 58 - 59 - content = fd.read() 60 - 61 - fd.close() 54 + with open(path, 'r') as fd: 55 + content = fd.read() 62 56 return content 63 57 64 58 def _read_template_file(self, file): 65 59 try: 66 60 path = os.path.join(self.abs_template_dir, file) 67 61 return self._read_file(path) 68 - except Exception: 62 + except OSError: 69 63 # Specific template file not found. 
         # Try the generic template file in the template/
         # directory, which is one level up
         path = os.path.join(self.abs_template_dir, "..", file)
         return self._read_file(path)

     def fill_parent(self):
-        return "&rv_%s" % self.parent if self.parent else "NULL"
+        return f"&rv_{self.parent}" if self.parent else "NULL"

     def fill_include_parent(self):
         if self.parent:
-            return "#include <monitors/%s/%s.h>\n" % (self.parent, self.parent)
+            return f"#include <monitors/{self.parent}/{self.parent}.h>\n"
         return ""

     def fill_tracepoint_handlers_skel(self):
···
         buff = []
         buff.append(" # XXX: add dependencies if there")
         if self.parent:
-            buff.append(" depends on RV_MON_%s" % self.parent.upper())
+            buff.append(f" depends on RV_MON_{self.parent.upper()}")
         buff.append(" default y")
         return '\n'.join(buff)
···
         monitor_class_type = self.fill_monitor_class_type()
         if self.auto_patch:
             self._patch_file("rv_trace.h",
-                             "// Add new monitors based on CONFIG_%s here" % monitor_class_type,
-                             "#include <monitors/%s/%s_trace.h>" % (self.name, self.name))
-            return " - Patching %s/rv_trace.h, double check the result" % self.rv_dir
+                             f"// Add new monitors based on CONFIG_{monitor_class_type} here",
+                             f"#include <monitors/{self.name}/{self.name}_trace.h>")
+            return f" - Patching {self.rv_dir}/rv_trace.h, double check the result"

-        return """ - Edit %s/rv_trace.h:
-    Add this line where other tracepoints are included and %s is defined:
-    #include <monitors/%s/%s_trace.h>
-    """ % (self.rv_dir, monitor_class_type, self.name, self.name)
+        return f""" - Edit {self.rv_dir}/rv_trace.h:
+    Add this line where other tracepoints are included and {monitor_class_type} is defined:
+    #include <monitors/{self.name}/{self.name}_trace.h>
+    """

     def _kconfig_marker(self, container=None) -> str:
-        return "# Add new %smonitors here" % (container + " "
-                                              if container else "")
+        return f"# Add new {container + ' ' if container else ''}monitors here"

     def fill_kconfig_tooltip(self):
         if self.auto_patch:
             # monitors with a container should stay together in the Kconfig
             self._patch_file("Kconfig",
                              self._kconfig_marker(self.parent),
-                             "source \"kernel/trace/rv/monitors/%s/Kconfig\"" % (self.name))
-            return " - Patching %s/Kconfig, double check the result" % self.rv_dir
+                             f"source \"kernel/trace/rv/monitors/{self.name}/Kconfig\"")
+            return f" - Patching {self.rv_dir}/Kconfig, double check the result"

-        return """ - Edit %s/Kconfig:
+        return f""" - Edit {self.rv_dir}/Kconfig:
     Add this line where other monitors are included:
-    source \"kernel/trace/rv/monitors/%s/Kconfig\"
-    """ % (self.rv_dir, self.name)
+    source \"kernel/trace/rv/monitors/{self.name}/Kconfig\"
+    """

     def fill_makefile_tooltip(self):
         name = self.name
···
         if self.auto_patch:
             self._patch_file("Makefile",
                              "# Add new monitors here",
-                             "obj-$(CONFIG_RV_MON_%s) += monitors/%s/%s.o" % (name_up, name, name))
-            return " - Patching %s/Makefile, double check the result" % self.rv_dir
+                             f"obj-$(CONFIG_RV_MON_{name_up}) += monitors/{name}/{name}.o")
+            return f" - Patching {self.rv_dir}/Makefile, double check the result"

-        return """ - Edit %s/Makefile:
+        return f""" - Edit {self.rv_dir}/Makefile:
     Add this line where other monitors are included:
-    obj-$(CONFIG_RV_MON_%s) += monitors/%s/%s.o
-    """ % (self.rv_dir, name_up, name, name)
+    obj-$(CONFIG_RV_MON_{name_up}) += monitors/{name}/{name}.o
+    """

     def fill_monitor_tooltip(self):
         if self.auto_patch:
-            return " - Monitor created in %s/monitors/%s" % (self.rv_dir, self.name)
-        return " - Move %s/ to the kernel's monitor directory (%s/monitors)" % (self.name, self.rv_dir)
+            return f" - Monitor created in {self.rv_dir}/monitors/{self.name}"
+        return f" - Move {self.name}/ to the kernel's monitor directory ({self.rv_dir}/monitors)"

     def __create_directory(self):
         path = self.name
···
             os.mkdir(path)
         except FileExistsError:
             return
-        except:
-            print("Fail creating the output dir: %s" % self.name)

     def __write_file(self, file_name, content):
-        try:
-            file = open(file_name, 'w')
-        except:
-            print("Fail writing to file: %s" % file_name)
-
-        file.write(content)
-
-        file.close()
+        with open(file_name, 'w') as file:
+            file.write(content)

     def _create_file(self, file_name, content):
-        path = "%s/%s" % (self.name, file_name)
+        path = f"{self.name}/{file_name}"
         if self.auto_patch:
             path = os.path.join(self.rv_dir, "monitors", path)
         self.__write_file(path, content)
-
-    def __get_main_name(self):
-        path = "%s/%s" % (self.name, "main.c")
-        if not os.path.exists(path):
-            return "main.c"
-        return "__main.c"

     def print_files(self):
         main_c = self.fill_main_c()

         self.__create_directory()

-        path = "%s.c" % self.name
+        path = f"{self.name}.c"
         self._create_file(path, main_c)

         model_h = self.fill_model_h()
-        path = "%s.h" % self.name
+        path = f"{self.name}.h"
         self._create_file(path, model_h)

         kconfig = self.fill_kconfig()
···


 class Monitor(RVGenerator):
-    monitor_types = { "global" : 1, "per_cpu" : 2, "per_task" : 3 }
+    monitor_types = {"global": 1, "per_cpu": 2, "per_task": 3, "per_obj": 4}

     def __init__(self, extra_params={}):
         super().__init__(extra_params)
···
         monitor_class_type = self.fill_monitor_class_type()
         tracepoint_args_skel_event = self.fill_tracepoint_args_skel("event")
         tracepoint_args_skel_error = self.fill_tracepoint_args_skel("error")
+        tracepoint_args_skel_error_env = self.fill_tracepoint_args_skel("error_env")
         trace_h = trace_h.replace("%%MODEL_NAME%%", self.name)
         trace_h = trace_h.replace("%%MODEL_NAME_UP%%", self.name.upper())
         trace_h = trace_h.replace("%%MONITOR_CLASS%%", monitor_class)
         trace_h = trace_h.replace("%%MONITOR_CLASS_TYPE%%", monitor_class_type)
         trace_h = trace_h.replace("%%TRACEPOINT_ARGS_SKEL_EVENT%%", tracepoint_args_skel_event)
         trace_h = trace_h.replace("%%TRACEPOINT_ARGS_SKEL_ERROR%%", tracepoint_args_skel_error)
+        trace_h = trace_h.replace("%%TRACEPOINT_ARGS_SKEL_ERROR_ENV%%", tracepoint_args_skel_error_env)
         return trace_h

     def print_files(self):
         super().print_files()
         trace_h = self.fill_trace_h()
-        path = "%s_trace.h" % self.name
+        path = f"{self.name}_trace.h"
         self._create_file(path, trace_h)
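The `_kconfig_marker()` simplification above collapses a two-line `%`-format into one f-string. A standalone sketch of that helper (renamed `kconfig_marker` here, outside its class) shows both branches:

```python
# Standalone sketch of the simplified _kconfig_marker() from the diff above.
# "container" stands in for a monitor's parent container name, or None.
def kconfig_marker(container=None) -> str:
    return f"# Add new {container + ' ' if container else ''}monitors here"

print(kconfig_marker())         # marker for top-level monitors
print(kconfig_marker("sched"))  # marker for monitors grouped under "sched"
```

With a container the marker reads `# Add new sched monitors here`, which is why monitors sharing a container stay together when `fill_kconfig_tooltip()` patches Kconfig.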
+6 -5
tools/verification/rvgen/rvgen/ltl2ba.py
···
 from ply.lex import lex
 from ply.yacc import yacc
+from .automata import AutomataError

 # Grammar:
 # ltl ::= opd | ( ltl ) | ltl binop ltl | unop ltl
···
 t_ignore = ' \t\n'

 def t_error(t):
-    raise ValueError(f"Illegal character '{t.value[0]}'")
+    raise AutomataError(f"Illegal character '{t.value[0]}'")

 lexer = lex()
···
     @staticmethod
     def expand(n: ASTNode, node: GraphNode, node_set) -> set[GraphNode]:
         for f in node.old:
-            if isinstance(f, NotOp) and f.op.child is n:
+            if isinstance(f.op, NotOp) and f.op.child is n:
                 return node_set
         node.old |= {n}
         return node.expand(node_set)
···
     elif p[1] == "not":
         op = NotOp(p[2])
     else:
-        raise ValueError(f"Invalid unary operator {p[1]}")
+        raise AutomataError(f"Invalid unary operator {p[1]}")

     p[0] = ASTNode(op)
···
     elif p[2] == "imply":
         op = ImplyOp(p[1], p[3])
     else:
-        raise ValueError(f"Invalid binary operator {p[2]}")
+        raise AutomataError(f"Invalid binary operator {p[2]}")

     p[0] = ASTNode(op)
···
         subexpr[assign[0]] = assign[1]

     if rule is None:
-        raise ValueError("Please define your specification in the \"RULE = <LTL spec>\" format")
+        raise AutomataError("Please define your specification in the \"RULE = <LTL spec>\" format")

     for node in rule:
         if not isinstance(node.op, Variable):
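The ltl2ba.py changes above swap `ValueError` for `AutomataError` so rvgen can catch expected specification errors without printing a backtrace. A minimal sketch of that pattern follows; the real `AutomataError` lives in rvgen/automata.py, and a plain `Exception` subclass is assumed here:

```python
# Minimal sketch of the error-handling convention from the diff above.
# Assumption: AutomataError is (or behaves like) a simple Exception subclass.
class AutomataError(Exception):
    pass

def t_error_like(ch: str):
    # Mirrors t_error(): a lexer failure raises AutomataError instead of
    # ValueError, marking it as an "expected" user-input error.
    raise AutomataError(f"Illegal character '{ch}'")

try:
    t_error_like("$")
except AutomataError as exc:
    print(exc)  # prints: Illegal character '$'
```

Callers can then treat `AutomataError` as a clean user-facing diagnostic while any other exception still surfaces with a full traceback.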
+30 -24
tools/verification/rvgen/rvgen/ltl2k.py
···
 from pathlib import Path
 from . import generator
 from . import ltl2ba
+from .automata import AutomataError

 COLUMN_LIMIT = 100
···
         skip = ["is", "by", "or", "and"]
         return '_'.join([x[:2] for x in s.lower().split('_') if x not in skip])

-    abbrs = []
-    for atom in atoms:
+    def find_share_length(atom: str) -> int:
         for i in range(len(atom), -1, -1):
             if sum(a.startswith(atom[:i]) for a in atoms) > 1:
-                break
-        share = atom[:i]
-        unique = atom[i:]
+                return i
+        return 0
+
+    abbrs = []
+    for atom in atoms:
+        share_len = find_share_length(atom)
+        share = atom[:share_len]
+        unique = atom[share_len:]
         abbrs.append((shorten(share) + shorten(unique)))
     return abbrs
···
         if MonitorType != "per_task":
             raise NotImplementedError("Only per_task monitor is supported for LTL")
         super().__init__(extra_params)
-        with open(file_path) as f:
-            self.atoms, self.ba, self.ltl = ltl2ba.create_graph(f.read())
+        try:
+            with open(file_path) as f:
+                self.atoms, self.ba, self.ltl = ltl2ba.create_graph(f.read())
+        except OSError as exc:
+            raise AutomataError(exc.strerror) from exc
         self.atoms_abbr = abbreviate_atoms(self.atoms)
         self.name = extra_params.get("model_name")
         if not self.name:
             self.name = Path(file_path).stem

-    def _fill_states(self) -> str:
+    def _fill_states(self) -> list[str]:
         buf = [
             "enum ltl_buchi_state {",
         ]

         for node in self.ba:
-            buf.append("\tS%i," % node.id)
+            buf.append(f"\tS{node.id},")
         buf.append("\tRV_NUM_BA_STATES")
         buf.append("};")
         buf.append("static_assert(RV_NUM_BA_STATES <= RV_MAX_BA_STATES);")
···
     def _fill_atoms(self):
         buf = ["enum ltl_atom {"]
         for a in sorted(self.atoms):
-            buf.append("\tLTL_%s," % a)
+            buf.append(f"\tLTL_{a},")
         buf.append("\tLTL_NUM_ATOM")
         buf.append("};")
         buf.append("static_assert(LTL_NUM_ATOM <= RV_MAX_LTL_ATOM);")
···
         ]

         for name in self.atoms_abbr:
-            buf.append("\t\t\"%s\"," % name)
+            buf.append(f"\t\t\"{name}\",")

         buf.extend([
             "\t};",
···
                 continue

             if isinstance(node.op, ltl2ba.AndOp):
-                buf.append("\tbool %s = %s && %s;" % (node, node.op.left, node.op.right))
+                buf.append(f"\tbool {node} = {node.op.left} && {node.op.right};")
                 required_values |= {str(node.op.left), str(node.op.right)}
             elif isinstance(node.op, ltl2ba.OrOp):
-                buf.append("\tbool %s = %s || %s;" % (node, node.op.left, node.op.right))
+                buf.append(f"\tbool {node} = {node.op.left} || {node.op.right};")
                 required_values |= {str(node.op.left), str(node.op.right)}
             elif isinstance(node.op, ltl2ba.NotOp):
-                buf.append("\tbool %s = !%s;" % (node, node.op.child))
+                buf.append(f"\tbool {node} = !{node.op.child};")
                 required_values.add(str(node.op.child))

         for atom in self.atoms:
             if atom.lower() not in required_values:
                 continue
-            buf.append("\tbool %s = test_bit(LTL_%s, mon->atoms);" % (atom.lower(), atom))
+            buf.append(f"\tbool {atom.lower()} = test_bit(LTL_{atom}, mon->atoms);")

         buf.reverse()
···
         ])

         for node in self.ba:
-            buf.append("\tcase S%i:" % node.id)
+            buf.append(f"\tcase S{node.id}:")

             for o in sorted(node.outgoing):
                 line = "\t\tif "
···
                 lines = break_long_line(line, indent)
                 buf.extend(lines)

-                buf.append("\t\t\t__set_bit(S%i, next);" % o.id)
+                buf.append(f"\t\t\t__set_bit(S{o.id}, next);")
             buf.append("\t\tbreak;")
         buf.extend([
             "\t}",
···
             lines = break_long_line(line, indent)
             buf.extend(lines)

-            buf.append("\t\t__set_bit(S%i, mon->states);" % node.id)
+            buf.append(f"\t\t__set_bit(S{node.id}, mon->states);")
         buf.append("}")
         return buf
···
         buff = []
         buff.append("static void handle_example_event(void *data, /* XXX: fill header */)")
         buff.append("{")
-        buff.append("\tltl_atom_update(task, LTL_%s, true/false);" % self.atoms[0])
+        buff.append(f"\tltl_atom_update(task, LTL_{self.atoms[0]}, true/false);")
         buff.append("}")
         buff.append("")
         return '\n'.join(buff)

     def fill_tracepoint_attach_probe(self):
-        return "\trv_attach_trace_probe(\"%s\", /* XXX: tracepoint */, handle_example_event);" \
-               % self.name
+        return f"\trv_attach_trace_probe(\"{self.name}\", /* XXX: tracepoint */, handle_example_event);"

     def fill_tracepoint_detach_helper(self):
-        return "\trv_detach_trace_probe(\"%s\", /* XXX: tracepoint */, handle_sample_event);" \
-               % self.name
+        return f"\trv_detach_trace_probe(\"{self.name}\", /* XXX: tracepoint */, handle_sample_event);"

     def fill_atoms_init(self):
         buff = []
         for a in self.atoms:
-            buff.append("\tltl_atom_set(mon, LTL_%s, true/false);" % a)
+            buff.append(f"\tltl_atom_set(mon, LTL_{a}, true/false);")
         return '\n'.join(buff)

     def fill_model_h(self):
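The `abbreviate_atoms()` refactor above replaces a `break`-and-reuse-loop-variable pattern with an explicit `find_share_length()` helper: each atom's longest prefix shared with another atom is shortened separately from its unique suffix. A self-contained sketch of the refactored function, with hypothetical atom names for illustration:

```python
# Self-contained sketch of the refactored abbreviate_atoms() from the diff
# above. Each '_'-separated word is cut to two letters; the prefix an atom
# shares with another atom and its unique remainder are shortened separately.
def abbreviate_atoms(atoms: list[str]) -> list[str]:
    def shorten(s: str) -> str:
        skip = ["is", "by", "or", "and"]
        return '_'.join([x[:2] for x in s.lower().split('_') if x not in skip])

    def find_share_length(atom: str) -> int:
        # Longest prefix of `atom` that at least one other atom also starts with
        for i in range(len(atom), -1, -1):
            if sum(a.startswith(atom[:i]) for a in atoms) > 1:
                return i
        return 0

    abbrs = []
    for atom in atoms:
        share_len = find_share_length(atom)
        abbrs.append(shorten(atom[:share_len]) + shorten(atom[share_len:]))
    return abbrs

# Hypothetical atom names, not taken from any in-tree monitor:
print(abbreviate_atoms(["TASK_SLEEP", "TASK_WAKE"]))  # ['ta_sl', 'ta_wa']
```

The helper makes the boundary between shared and unique parts explicit, instead of relying on the leftover value of the loop variable after `break`.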
+1 -1
tools/verification/rvgen/rvgen/templates/dot2k/main.c
···
  */
 #define RV_MON_TYPE RV_MON_%%MONITOR_TYPE%%
 #include "%%MODEL_NAME%%.h"
-#include <rv/da_monitor.h>
+#include <rv/%%MONITOR_CLASS%%_monitor.h>

 /*
  * This is the instrumentation part of the monitor.
+16
tools/verification/rvgen/rvgen/templates/dot2k/trace_hybrid.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+
+/*
+ * Snippet to be included in rv_trace.h
+ */
+
+#ifdef CONFIG_RV_MON_%%MODEL_NAME_UP%%
+DEFINE_EVENT(event_%%MONITOR_CLASS%%, event_%%MODEL_NAME%%,
+	     %%TRACEPOINT_ARGS_SKEL_EVENT%%);
+
+DEFINE_EVENT(error_%%MONITOR_CLASS%%, error_%%MODEL_NAME%%,
+	     %%TRACEPOINT_ARGS_SKEL_ERROR%%);
+
+DEFINE_EVENT(error_env_%%MONITOR_CLASS%%, error_env_%%MODEL_NAME%%,
+	     %%TRACEPOINT_ARGS_SKEL_ERROR_ENV%%);
+#endif /* CONFIG_RV_MON_%%MODEL_NAME_UP%% */
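The template above is instantiated by `fill_trace_h()` in generator.py, which substitutes each `%%PLACEHOLDER%%` token with plain `str.replace()`. A sketch of that substitution step, with hypothetical values (the monitor name "stall" and class "ha" are assumed here for illustration, echoing the stall hybrid-automaton monitor in this pull):

```python
# Sketch of how fill_trace_h() instantiates a trace header template via
# str.replace(). The template fragment and placeholder names match the
# file above; name/monitor_class values are hypothetical.
template = (
    "#ifdef CONFIG_RV_MON_%%MODEL_NAME_UP%%\n"
    "DEFINE_EVENT(event_%%MONITOR_CLASS%%, event_%%MODEL_NAME%%,\n"
)

name = "stall"         # hypothetical monitor name
monitor_class = "ha"   # hypothetical monitor class (hybrid automaton)

out = template.replace("%%MODEL_NAME%%", name)
out = out.replace("%%MODEL_NAME_UP%%", name.upper())
out = out.replace("%%MONITOR_CLASS%%", monitor_class)
print(out)
```

Note `%%MODEL_NAME%%` can be replaced before `%%MODEL_NAME_UP%%` because the `_UP` token does not contain the full shorter token, so no placeholder is partially consumed.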