we (web engine): Experimental web browser project to understand the limits of Claude

Implement baseline JIT compiler for hot bytecode functions

Add a single-pass baseline JIT that compiles frequently called bytecode
functions to AArch64 machine code. The compiler translates each bytecode
instruction into a call to a corresponding extern "C" helper function,
eliminating interpreter dispatch overhead while keeping the implementation
simple and correct.
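The translation strategy above can be modeled in miniature. This is an illustrative sketch only, with toy types (`Op`, `Helper`, `Regs`) that stand in for the engine's real bytecode and emitted machine code: each op is resolved once, up front, into a helper call, so execution just runs the recorded calls in order with no per-instruction dispatch switch.

```rust
// Toy model of single-pass "compile to helper calls" (illustrative names only).
#[derive(Clone, Copy)]
enum Op {
    LoadInt8(u8, i8),
    Add(u8, u8, u8),
    Return(u8),
}

type Regs = Vec<f64>;
type Helper = Box<dyn Fn(&mut Regs) -> Option<f64>>;

// "Compile": translate each op exactly once into a pre-resolved helper call.
fn compile(code: &[Op]) -> Vec<Helper> {
    code.iter()
        .map(|op| -> Helper {
            match *op {
                Op::LoadInt8(d, v) => Box::new(move |r| {
                    r[d as usize] = v as f64;
                    None
                }),
                Op::Add(d, a, b) => Box::new(move |r| {
                    r[d as usize] = r[a as usize] + r[b as usize];
                    None
                }),
                Op::Return(s) => Box::new(move |r| Some(r[s as usize])),
            }
        })
        .collect()
}

// "Execute": no opcode decoding remains, only straight-line helper calls.
fn run(helpers: &[Helper], regs: &mut Regs) -> f64 {
    for h in helpers {
        if let Some(v) = h(regs) {
            return v;
        }
    }
    f64::NAN
}

fn main() {
    let code = [
        Op::LoadInt8(0, 10),
        Op::LoadInt8(1, 32),
        Op::Add(2, 0, 1),
        Op::Return(2),
    ];
    let compiled = compile(&code);
    let mut regs = vec![0.0; 3];
    assert_eq!(run(&compiled, &mut regs), 42.0);
    println!("ok");
}
```

The real compiler emits native BLR call sequences instead of boxed closures, but the shape of the tradeoff is the same: dispatch cost is paid once at compile time rather than on every executed instruction.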

Key components:
- jit/compiler.rs: BaselineJit walks bytecode, emits native call sequences
and direct branches for control flow (Jump, JumpIfTrue, JumpIfFalse)
- jit/helpers.rs: extern "C" helper functions for all supported opcodes
including arithmetic, comparisons, property access with IC integration,
function calls, closures, and exception handling
- VM integration: per-function call counting (threshold=100), lazy JIT
buffer allocation, compiled code caching per GcRef, re-entrancy guard
to prevent recursive JIT dispatch, and run_to_depth for synchronous
callee execution from JIT code
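The per-function call counting and lazy compilation described above amounts to a small tier-up state machine. A hedged sketch, with hypothetical field names (`FunctionState`, `on_call`) that do not correspond to the actual VM structs:

```rust
// Hypothetical sketch of the tier-up decision; the real VM caches compiled
// code per GcRef and calls BaselineJit::compile, stubbed out here.
const JIT_THRESHOLD: u32 = 100;

struct FunctionState {
    call_count: u32,
    compiled: Option<usize>, // cached native entry point (address), if compiled
}

impl FunctionState {
    /// Called on every invocation; returns Some(entry) once the function
    /// is hot enough to have been compiled.
    fn on_call(&mut self) -> Option<usize> {
        self.call_count = self.call_count.saturating_add(1);
        if self.compiled.is_none() && self.call_count >= JIT_THRESHOLD {
            // Stand-in for: BaselineJit::compile(func, &mut jit_buffer)
            self.compiled = Some(0xdead_beef);
        }
        self.compiled
    }
}

fn main() {
    let mut f = FunctionState { call_count: 0, compiled: None };
    for _ in 0..99 {
        assert!(f.on_call().is_none()); // still interpreted
    }
    assert!(f.on_call().is_some()); // 100th call triggers compilation
    assert_eq!(f.call_count, 100);
    println!("tiered up at call {}", f.call_count);
}
```

Keeping compilation lazy means functions that never get hot pay only a counter increment, and the JIT buffer is not allocated until the first function crosses the threshold.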

Supported opcodes: LoadConst, LoadNull, LoadUndefined, LoadTrue, LoadFalse,
Move, LoadInt8, LoadGlobal, StoreGlobal, Add, Sub, Mul, Div, Rem, Neg,
BitAnd/Or/Xor, shifts, all comparisons, LogicalNot, TypeOf, Void, Jump,
JumpIfTrue/False/Nullish, Call, Return, Throw, CreateClosure, GetProperty,
SetProperty, GetPropertyByName, SetPropertyByName, CreateObject, CreateArray,
NewCell, CellLoad, CellStore, LoadUpvalue, StoreUpvalue, exception handlers.

Unsupported opcodes (Exp, InstanceOf, In, Delete, ForIn, Yield, Spread,
Await, SetPrototype/GetPrototype) bail out to the interpreter for correct
execution.
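The bail-out mechanism rides on the compiled function's u64 status code (0 = normal completion, 1 = exception, 2 = bail to interpreter, per the doc comment in jit/compiler.rs). A minimal sketch of how a caller can act on that convention; the names `JitExit` and `classify` are illustrative, not the VM's actual API:

```rust
// Decode the JIT return-status convention: 0/1/2 as documented in compiler.rs.
#[derive(Debug, PartialEq)]
enum JitExit {
    Completed,         // 0: return value already written to the frame
    Exception,         // 1: unwind via the VM's exception machinery
    BailToInterpreter, // 2: re-execute the function in the interpreter
}

fn classify(status: u64) -> JitExit {
    match status {
        0 => JitExit::Completed,
        1 => JitExit::Exception,
        // Treat 2 (and anything unexpected) as a conservative bail-out.
        _ => JitExit::BailToInterpreter,
    }
}

fn main() {
    assert_eq!(classify(0), JitExit::Completed);
    assert_eq!(classify(1), JitExit::Exception);
    assert_eq!(classify(2), JitExit::BailToInterpreter);
    println!("ok");
}
```

Treating the unknown case as a bail-out keeps the fallback path safe: the interpreter can always execute any opcode correctly, so bailing costs performance but never correctness.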

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

+2388 -6
+913
crates/js/src/jit/compiler.rs
//! Baseline JIT compiler: translates bytecode → AArch64 machine code.
//!
//! This is a simple single-pass compiler. For each bytecode instruction, it emits
//! a call to the corresponding helper function in `jit::helpers`. Control flow
//! (jumps, branches) is translated directly to native branches.
//!
//! The compiled function has the signature:
//!     `extern "C" fn(vm: *mut Vm) -> u64`
//!
//! where `vm` is passed in X0. The function uses callee-saved registers:
//!     X19 = *mut Vm (preserved across helper calls)
//!
//! Return values:
//!     0 = normal completion (return value already written to frame's return_reg)
//!     1 = exception thrown
//!     2 = bail out to interpreter (unsupported opcode)

use std::collections::HashMap;

use super::assembler::{self, Assembler, Label, X0, X1, X2, X3, X4, X19};
use super::buffer::{CodePtr, JitBuffer};
use super::memory::MemoryError;
use crate::bytecode::{Function, Op};

// ── Helper function references ───────────────────────────────────────────────
//
// Each helper is an `extern "C"` function in `jit::helpers`. We load its address
// as a 64-bit immediate and call via BLR.

use super::helpers::*;

/// Convert a function pointer to a usize address (properly via *const ()).
macro_rules! fn_addr {
    ($f:expr) => {
        $f as *const () as usize
    };
}

// ── Compile error ────────────────────────────────────────────────────────────

#[derive(Debug)]
pub enum CompileError {
    Memory(MemoryError),
    UnsupportedOpcode(u8),
}

impl From<MemoryError> for CompileError {
    fn from(e: MemoryError) -> Self {
        CompileError::Memory(e)
    }
}

// ── JIT compilation threshold ────────────────────────────────────────────────

/// Number of calls before a function is JIT-compiled.
pub const JIT_THRESHOLD: u32 = 100;

// ── Baseline JIT compiler ────────────────────────────────────────────────────

pub struct BaselineJit {
    asm: Assembler,
    /// Map from bytecode offset → assembler label (for jump targets).
    labels: HashMap<usize, Label>,
}

impl BaselineJit {
    /// Compile a bytecode function to native code.
    pub fn compile(func: &Function, buffer: &mut JitBuffer) -> Result<CodePtr, CompileError> {
        // Don't JIT generators or async functions — they have complex control flow.
        if func.is_generator || func.is_async {
            return Err(CompileError::UnsupportedOpcode(0));
        }

        let mut jit = Self {
            asm: Assembler::new(),
            labels: HashMap::new(),
        };

        // First pass: scan bytecode to find all jump targets and pre-create labels.
        jit.scan_jump_targets(func);

        // Emit the compiled function.
        jit.emit_function(func)?;

        // Finalize and write to executable memory.
        Ok(jit.asm.finalize(buffer)?)
    }

    /// Scan bytecode to find all jump targets and create labels for them.
    fn scan_jump_targets(&mut self, func: &Function) {
        let code = &func.code;
        let mut ip = 0;
        while ip < code.len() {
            let opcode = code[ip];
            ip += 1;

            match Op::from_byte(opcode) {
                Some(Op::Jump) => {
                    let offset = read_i32(code, &mut ip);
                    let target = (ip as i64 + offset as i64) as usize;
                    self.get_or_create_label(target);
                }
                Some(Op::JumpIfTrue) | Some(Op::JumpIfFalse) | Some(Op::JumpIfNullish) => {
                    ip += 1; // reg
                    let offset = read_i32(code, &mut ip);
                    let target = (ip as i64 + offset as i64) as usize;
                    self.get_or_create_label(target);
                }
                Some(Op::PushExceptionHandler) => {
                    ip += 1; // catch_reg
                    let offset = read_i32(code, &mut ip);
                    let target = (ip as i64 + offset as i64) as usize;
                    self.get_or_create_label(target);
                }
                Some(op) => {
                    ip += instruction_operand_size(op);
                }
                None => {
                    // Unknown opcode — skip (will bail at emit time).
                    break;
                }
            }
        }
    }

    fn get_or_create_label(&mut self, bc_offset: usize) -> Label {
        if let Some(&label) = self.labels.get(&bc_offset) {
            label
        } else {
            let label = self.asm.new_label();
            self.labels.insert(bc_offset, label);
            label
        }
    }

    /// Emit the function body.
    fn emit_function(&mut self, func: &Function) -> Result<(), CompileError> {
        // ── Prologue ──────────────────────────────────────────
        self.asm.emit_prologue();

        // X0 = *mut Vm on entry; save to callee-saved X19.
        self.asm.mov_reg(X19, X0);

        // Create labels for the bail-out path and the exception path.
        let bail_label = self.asm.new_label();
        let exception_label = self.asm.new_label();

        // ── Body: walk bytecode and emit native code ──────────
        let code = &func.code;
        let mut ip = 0;

        while ip < code.len() {
            // Bind label if this bytecode offset is a jump target.
            if let Some(&label) = self.labels.get(&ip) {
                self.asm.bind_label(label);
            }

            let opcode = code[ip];
            ip += 1;

            let Some(op) = Op::from_byte(opcode) else {
                return Err(CompileError::UnsupportedOpcode(opcode));
            };

            match op {
                // ── Register loads ────────────────────────────────
                Op::LoadConst => {
                    let dst = read_u8(code, &mut ip);
                    let idx = read_u16(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_load_const), dst as u64, idx as u64, 0);
                    self.emit_check_result(exception_label);
                }
                Op::LoadNull => {
                    let dst = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_load_null), dst as u64, 0);
                }
                Op::LoadUndefined => {
                    let dst = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_load_undefined), dst as u64, 0);
                }
                Op::LoadTrue => {
                    let dst = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_load_true), dst as u64, 0);
                }
                Op::LoadFalse => {
                    let dst = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_load_false), dst as u64, 0);
                }
                Op::Move => {
                    let dst = read_u8(code, &mut ip);
                    let src = read_u8(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_move), dst as u64, src as u64, 0);
                }
                Op::LoadInt8 => {
                    let dst = read_u8(code, &mut ip);
                    let val = read_u8(code, &mut ip) as i8;
                    self.emit_call3(
                        fn_addr!(jit_helper_load_int8),
                        dst as u64,
                        val as i32 as u32 as u64,
                        0,
                    );
                }

                // ── Global variable access ────────────────────────
                Op::LoadGlobal => {
                    let dst = read_u8(code, &mut ip);
                    let name_idx = read_u16(code, &mut ip);
                    self.emit_call3(
                        fn_addr!(jit_helper_load_global),
                        dst as u64,
                        name_idx as u64,
                        0,
                    );
                }
                Op::StoreGlobal => {
                    let name_idx = read_u16(code, &mut ip);
                    let src = read_u8(code, &mut ip);
                    self.emit_call3(
                        fn_addr!(jit_helper_store_global),
                        name_idx as u64,
                        src as u64,
                        0,
                    );
                }

                // ── Arithmetic ────────────────────────────────────
                Op::Add => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(fn_addr!(jit_helper_add), dst as u64, lhs as u64, rhs as u64);
                    self.emit_check_result(exception_label);
                }
                Op::Sub => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(fn_addr!(jit_helper_sub), dst as u64, lhs as u64, rhs as u64);
                }
                Op::Mul => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(fn_addr!(jit_helper_mul), dst as u64, lhs as u64, rhs as u64);
                }
                Op::Div => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(fn_addr!(jit_helper_div), dst as u64, lhs as u64, rhs as u64);
                }
                Op::Rem => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(fn_addr!(jit_helper_rem), dst as u64, lhs as u64, rhs as u64);
                }
                Op::Neg => {
                    let dst = read_u8(code, &mut ip);
                    let src = read_u8(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_neg), dst as u64, src as u64, 0);
                }
                Op::Exp => {
                    // Bail out — exponentiation is rare and complex.
                    self.emit_bail(bail_label);
                    // Still need to skip operands to keep IP aligned.
                    ip += 3; // dst, lhs, rhs
                }

                // ── Bitwise ───────────────────────────────────────
                Op::BitAnd => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_bit_and),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::BitOr => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_bit_or),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::BitXor => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_bit_xor),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::ShiftLeft => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_shift_left),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::ShiftRight => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_shift_right),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::UShiftRight => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_ushift_right),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::BitNot => {
                    let dst = read_u8(code, &mut ip);
                    let src = read_u8(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_bit_not), dst as u64, src as u64, 0);
                }

                // ── Comparison ────────────────────────────────────
                Op::Eq => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(fn_addr!(jit_helper_eq), dst as u64, lhs as u64, rhs as u64);
                }
                Op::StrictEq => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_strict_eq),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::NotEq => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_not_eq),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::StrictNotEq => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_strict_not_eq),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::LessThan => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_less_than),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::LessEq => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_less_eq),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::GreaterThan => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_greater_than),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }
                Op::GreaterEq => {
                    let dst = read_u8(code, &mut ip);
                    let lhs = read_u8(code, &mut ip);
                    let rhs = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_greater_eq),
                        dst as u64,
                        lhs as u64,
                        rhs as u64,
                    );
                }

                // ── Logical ───────────────────────────────────────
                Op::LogicalNot => {
                    let dst = read_u8(code, &mut ip);
                    let src = read_u8(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_logical_not), dst as u64, src as u64, 0);
                }
                Op::TypeOf => {
                    let dst = read_u8(code, &mut ip);
                    let src = read_u8(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_typeof), dst as u64, src as u64, 0);
                }
                Op::Void => {
                    let dst = read_u8(code, &mut ip);
                    let _src = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_void), dst as u64, 0);
                }
                Op::InstanceOf | Op::In => {
                    // Bail — complex semantics.
                    self.emit_bail(bail_label);
                    ip += 3;
                }

                // ── Control flow ──────────────────────────────────
                Op::Jump => {
                    let offset = read_i32(code, &mut ip);
                    let target = (ip as i64 + offset as i64) as usize;
                    let label = self.get_or_create_label(target);
                    self.asm.b(label);
                }
                Op::JumpIfTrue => {
                    let reg_idx = read_u8(code, &mut ip);
                    let offset = read_i32(code, &mut ip);
                    let target = (ip as i64 + offset as i64) as usize;
                    let label = self.get_or_create_label(target);

                    // Call is_truthy helper: returns 1 (truthy) or 0 (falsy) in X0.
                    self.emit_call2(fn_addr!(jit_helper_is_truthy), reg_idx as u64, 0);
                    // Branch if X0 != 0.
                    self.asm.cbnz(X0, label);
                }
                Op::JumpIfFalse => {
                    let reg_idx = read_u8(code, &mut ip);
                    let offset = read_i32(code, &mut ip);
                    let target = (ip as i64 + offset as i64) as usize;
                    let label = self.get_or_create_label(target);

                    self.emit_call2(fn_addr!(jit_helper_is_truthy), reg_idx as u64, 0);
                    // Branch if X0 == 0 (falsy).
                    self.asm.cbz(X0, label);
                }
                Op::JumpIfNullish => {
                    let reg_idx = read_u8(code, &mut ip);
                    let offset = read_i32(code, &mut ip);
                    let target = (ip as i64 + offset as i64) as usize;
                    let label = self.get_or_create_label(target);

                    self.emit_call2(fn_addr!(jit_helper_is_nullish), reg_idx as u64, 0);
                    self.asm.cbnz(X0, label);
                }

                // ── Functions / calls ─────────────────────────────
                Op::Call => {
                    let dst = read_u8(code, &mut ip);
                    let func_reg = read_u8(code, &mut ip);
                    let args_start = read_u8(code, &mut ip);
                    let arg_count = read_u8(code, &mut ip);
                    self.emit_call5(
                        fn_addr!(jit_helper_call),
                        dst as u64,
                        func_reg as u64,
                        args_start as u64,
                        arg_count as u64,
                    );
                    self.emit_check_result(exception_label);
                }
                Op::Return => {
                    let src = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_return), src as u64, 0);
                    // Return 0 (success) to caller.
                    self.asm.movz(X0, 0, 0);
                    self.asm.emit_epilogue();
                }
                Op::Throw => {
                    let src = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_throw), src as u64, 0);
                    self.emit_check_result(exception_label);
                }
                Op::CreateClosure => {
                    let dst = read_u8(code, &mut ip);
                    let func_idx = read_u16(code, &mut ip);
                    self.emit_call3(
                        fn_addr!(jit_helper_create_closure),
                        dst as u64,
                        func_idx as u64,
                        0,
                    );
                    self.emit_check_result(exception_label);
                }

                // ── Property access ───────────────────────────────
                Op::GetPropertyByName => {
                    let dst = read_u8(code, &mut ip);
                    let obj = read_u8(code, &mut ip);
                    let name_idx = read_u16(code, &mut ip);
                    let ic_idx = read_u16(code, &mut ip);
                    self.emit_call5(
                        fn_addr!(jit_helper_get_prop_name),
                        dst as u64,
                        obj as u64,
                        name_idx as u64,
                        ic_idx as u64,
                    );
                }
                Op::SetPropertyByName => {
                    let obj = read_u8(code, &mut ip);
                    let name_idx = read_u16(code, &mut ip);
                    let val = read_u8(code, &mut ip);
                    let ic_idx = read_u16(code, &mut ip);
                    self.emit_call5(
                        fn_addr!(jit_helper_set_prop_name),
                        obj as u64,
                        name_idx as u64,
                        val as u64,
                        ic_idx as u64,
                    );
                }
                Op::GetProperty => {
                    let dst = read_u8(code, &mut ip);
                    let obj = read_u8(code, &mut ip);
                    let key = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_get_property),
                        dst as u64,
                        obj as u64,
                        key as u64,
                    );
                }
                Op::SetProperty => {
                    let obj = read_u8(code, &mut ip);
                    let key = read_u8(code, &mut ip);
                    let val = read_u8(code, &mut ip);
                    self.emit_call4(
                        fn_addr!(jit_helper_set_property),
                        obj as u64,
                        key as u64,
                        val as u64,
                    );
                }
                Op::CreateObject => {
                    let dst = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_create_object), dst as u64, 0);
                }
                Op::CreateArray => {
                    let dst = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_create_array), dst as u64, 0);
                }

                // ── Cell / upvalue operations ─────────────────────
                Op::NewCell => {
                    let dst = read_u8(code, &mut ip);
                    self.emit_call2(fn_addr!(jit_helper_new_cell), dst as u64, 0);
                }
                Op::CellLoad => {
                    let dst = read_u8(code, &mut ip);
                    let cell = read_u8(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_cell_load), dst as u64, cell as u64, 0);
                    self.emit_check_result(bail_label);
                }
                Op::CellStore => {
                    let cell = read_u8(code, &mut ip);
                    let src = read_u8(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_cell_store), cell as u64, src as u64, 0);
                    self.emit_check_result(bail_label);
                }
                Op::LoadUpvalue => {
                    let dst = read_u8(code, &mut ip);
                    let idx = read_u8(code, &mut ip);
                    self.emit_call3(fn_addr!(jit_helper_load_upvalue), dst as u64, idx as u64, 0);
                    self.emit_check_result(bail_label);
                }
                Op::StoreUpvalue => {
                    let idx = read_u8(code, &mut ip);
                    let src = read_u8(code, &mut ip);
                    self.emit_call3(
                        fn_addr!(jit_helper_store_upvalue),
                        idx as u64,
                        src as u64,
                        0,
                    );
                    self.emit_check_result(bail_label);
                }

                // ── Exception handling ────────────────────────────
                Op::PushExceptionHandler => {
                    let catch_reg = read_u8(code, &mut ip);
                    let offset = read_i32(code, &mut ip);
                    let target = (ip as i64 + offset as i64) as usize;
                    self.emit_call3(
                        fn_addr!(jit_helper_push_exception_handler),
                        catch_reg as u64,
                        target as u64,
                        0,
                    );
                }
                Op::PopExceptionHandler => {
                    self.emit_call1(fn_addr!(jit_helper_pop_exception_handler));
                }

                // ── Unsupported — bail to interpreter ─────────────
                Op::Delete
                | Op::ForInInit
                | Op::ForInNext
                | Op::SetPrototype
                | Op::GetPrototype
                | Op::Yield
                | Op::Spread
                | Op::Await => {
                    self.emit_bail(bail_label);
                    ip += instruction_operand_size(op);
                }
            }
        }

        // ── Implicit return undefined (end of function) ───────
        // Bind any label pointing to end-of-code.
        if let Some(&label) = self.labels.get(&code.len()) {
            self.asm.bind_label(label);
        }
        // Falling off the end completes normally with an implicit undefined
        // return value: just return status 0 to the caller.
        self.asm.movz(X0, 0, 0);
        self.asm.emit_epilogue();

        // ── Exception path ────────────────────────────────────
        self.asm.bind_label(exception_label);
        self.asm.movz(X0, 1, 0); // return 1 = exception
        self.asm.emit_epilogue();

        // ── Bail-out path ─────────────────────────────────────
        self.asm.bind_label(bail_label);
        self.asm.movz(X0, 2, 0); // return 2 = bail to interpreter
        self.asm.emit_epilogue();

        Ok(())
    }

    // ── Code emission helpers ────────────────────────────────────────────────

    /// Emit a call to a helper function with 1 arg (just vm).
    fn emit_call1(&mut self, fn_addr: usize) {
        self.asm.mov_reg(X0, X19); // arg0 = vm
        self.asm.mov_imm64(X4, fn_addr as u64);
        self.asm.blr(X4);
    }

    /// Emit a call to a helper function: fn(vm, arg1).
    /// The second u64 parameter is unused padding for uniform emit_call* API.
    fn emit_call2(&mut self, fn_addr: usize, arg1: u64, _unused: u64) {
        self.asm.mov_reg(X0, X19); // arg0 = vm
        self.asm.mov_imm64(X1, arg1);
        self.asm.mov_imm64(X4, fn_addr as u64);
        self.asm.blr(X4);
    }

    /// Emit a call to a helper: fn(vm, arg1, arg2).
    fn emit_call3(&mut self, fn_addr: usize, arg1: u64, arg2: u64, _unused: u64) {
        self.asm.mov_reg(X0, X19);
        self.asm.mov_imm64(X1, arg1);
        self.asm.mov_imm64(X2, arg2);
        self.asm.mov_imm64(X4, fn_addr as u64);
        self.asm.blr(X4);
    }

    /// Emit a call to a helper: fn(vm, arg1, arg2, arg3).
    fn emit_call4(&mut self, fn_addr: usize, arg1: u64, arg2: u64, arg3: u64) {
        self.asm.mov_reg(X0, X19);
        self.asm.mov_imm64(X1, arg1);
        self.asm.mov_imm64(X2, arg2);
        self.asm.mov_imm64(X3, arg3);
        self.asm.mov_imm64(X4, fn_addr as u64);
        self.asm.blr(X4);
    }

    /// Emit a call to a helper: fn(vm, arg1, arg2, arg3, arg4).
    fn emit_call5(&mut self, fn_addr: usize, arg1: u64, arg2: u64, arg3: u64, arg4: u64) {
        self.asm.mov_reg(X0, X19);
        self.asm.mov_imm64(X1, arg1);
        self.asm.mov_imm64(X2, arg2);
        self.asm.mov_imm64(X3, arg3);
        self.asm.mov_imm64(X4, arg4);
        // X4 carries arg4 here, so load the function pointer into X5 instead.
        self.asm.mov_imm64(assembler::Reg::new(5), fn_addr as u64);
        self.asm.blr(assembler::Reg::new(5));
    }

    /// After a helper call, check if X0 != 0 and branch to the error label.
    fn emit_check_result(&mut self, error_label: Label) {
        self.asm.cbnz(X0, error_label);
    }

    /// Emit an unconditional jump to the bail-out path.
    fn emit_bail(&mut self, bail_label: Label) {
        self.asm.b(bail_label);
    }
}

// ── Bytecode reading helpers ─────────────────────────────────────────────────

fn read_u8(code: &[u8], ip: &mut usize) -> u8 {
    let b = code[*ip];
    *ip += 1;
    b
}

fn read_u16(code: &[u8], ip: &mut usize) -> u16 {
    let lo = code[*ip];
    let hi = code[*ip + 1];
    *ip += 2;
    u16::from_le_bytes([lo, hi])
}

fn read_i32(code: &[u8], ip: &mut usize) -> i32 {
    let bytes = [code[*ip], code[*ip + 1], code[*ip + 2], code[*ip + 3]];
    *ip += 4;
    i32::from_le_bytes(bytes)
}

/// Return the number of operand bytes for a given opcode (excluding the opcode byte itself).
fn instruction_operand_size(op: Op) -> usize {
    match op {
        // 0 operand bytes
        Op::PopExceptionHandler => 0,

        // 1 byte: dst or src
        Op::LoadNull
        | Op::LoadUndefined
        | Op::LoadTrue
        | Op::LoadFalse
        | Op::Return
        | Op::Throw
        | Op::CreateObject
        | Op::CreateArray
        | Op::NewCell => 1,

        // 2 bytes: dst+src or dst+u8
        Op::Move
        | Op::Neg
        | Op::BitNot
        | Op::LogicalNot
        | Op::TypeOf
        | Op::Void
        | Op::LoadInt8
        | Op::CellLoad
        | Op::CellStore
        | Op::LoadUpvalue
        | Op::StoreUpvalue
        | Op::Yield
        | Op::Await => 2,

        // 3 bytes: dst+idx(u16) or dst+lhs+rhs
        Op::LoadConst
        | Op::LoadGlobal
        | Op::CreateClosure
        | Op::Add
        | Op::Sub
        | Op::Mul
        | Op::Div
        | Op::Rem
        | Op::Exp
        | Op::BitAnd
        | Op::BitOr
        | Op::BitXor
        | Op::ShiftLeft
        | Op::ShiftRight
        | Op::UShiftRight
        | Op::Eq
        | Op::StrictEq
        | Op::NotEq
        | Op::StrictNotEq
        | Op::LessThan
        | Op::LessEq
        | Op::GreaterThan
        | Op::GreaterEq
        | Op::InstanceOf
        | Op::In
        | Op::GetProperty
        | Op::SetProperty
        | Op::Delete
        | Op::StoreGlobal
        | Op::Spread
        | Op::SetPrototype
        | Op::GetPrototype => 3,

        // 4 bytes: Call dst+func+args_start+arg_count, or ForInNext
        Op::Call | Op::ForInNext => 4,

        // 5 bytes: reg + i32 offset
        Op::JumpIfTrue | Op::JumpIfFalse | Op::JumpIfNullish | Op::PushExceptionHandler => 5,

        // 4 bytes: i32 offset
        Op::Jump => 4,

        // 2 bytes: dst + reg
        Op::ForInInit => 2,

        // GetPropertyByName: dst(1) + obj(1) + name_idx(2) + ic_idx(2) = 6
        Op::GetPropertyByName => 6,
        // SetPropertyByName: obj(1) + name_idx(2) + val(1) + ic_idx(2) = 6
        Op::SetPropertyByName => 6,
    }
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::bytecode::{BytecodeBuilder, Function, Op};

    /// Helper: build a simple function and try to compile it.
    fn compile_func(func: &Function) -> Result<CodePtr, CompileError> {
        let mut buffer = JitBuffer::new().expect("buffer creation failed");
        BaselineJit::compile(func, &mut buffer)
    }

    #[test]
    fn compile_empty_function() {
        let func = Function::new("test".to_string(), 0);
        let result = compile_func(&func);
        assert!(result.is_ok());
    }

    #[test]
    fn compile_simple_return() {
        let mut builder = BytecodeBuilder::new("test".to_string(), 0);
        builder.func.register_count = 1;
        builder.emit_load_int8(0, 42);
        builder.emit_reg(Op::Return, 0);
        let func = builder.finish();

        let result = compile_func(&func);
        assert!(result.is_ok());
    }

    #[test]
    fn compile_arithmetic() {
        let mut builder = BytecodeBuilder::new("test".to_string(), 0);
        builder.func.register_count = 4;
        builder.emit_load_int8(0, 10);
        builder.emit_load_int8(1, 20);
        builder.emit_reg3(Op::Add, 2, 0, 1);
        builder.emit_reg(Op::Return, 2);
        let func = builder.finish();

        let result = compile_func(&func);
        assert!(result.is_ok());
    }

    #[test]
    fn compile_with_jump() {
        let mut builder = BytecodeBuilder::new("test".to_string(), 0);
        builder.func.register_count = 2;
        builder.emit_load_int8(0, 1);
        let jump_pos = builder.emit_jump(Op::Jump);
        builder.emit_load_int8(1, 99); // should be skipped
        builder.patch_jump(jump_pos);
        builder.emit_reg(Op::Return, 0);

        let func = builder.finish();
        let result = compile_func(&func);
        assert!(result.is_ok());
    }

    #[test]
    fn generator_function_not_compiled() {
        let mut func = Function::new("gen".to_string(), 0);
        func.is_generator = true;
        let result = compile_func(&func);
        assert!(matches!(result, Err(CompileError::UnsupportedOpcode(_))));
    }
}
+652
crates/js/src/jit/helpers.rs
//! Helper functions called by JIT-compiled code via `extern "C"` ABI.
//!
//! Each helper operates on the VM's register file through the current call frame.
//! They return 0 on success. Non-zero indicates an error or condition:
//!     0 = success
//!     1 = exception thrown (handled by VM exception mechanism)
//!     2 = deoptimize / bail out to interpreter
//!
//! For conditional branches, `jit_helper_is_truthy` returns 1/0 directly.
//!
//! # Safety
//!
//! These functions are called from JIT-compiled native code which passes a raw
//! `*mut Vm` pointer. The pointer is always valid because:
//! 1. The Vm is alive for the duration of JIT execution.
//! 2. Only one JIT function executes at a time per VM (no concurrent access).
//! 3. The `jit_depth` guard prevents re-entrant JIT execution.
#![allow(clippy::not_unsafe_ptr_arg_deref)]

use crate::bytecode::Constant;
use crate::vm::{Value, Vm};

// ── Status codes ─────────────────────────────────────────────────────────────

const OK: u64 = 0;
const EXCEPTION: u64 = 1;
const BAIL: u64 = 2;

// ── Register access helpers ──────────────────────────────────────────────────

/// Read a value from the current frame's register file.
#[inline]
unsafe fn reg(vm: &Vm, idx: u32) -> &Value {
    let base = vm.current_frame_base();
    &vm.registers()[base + idx as usize]
}

/// Write a value to the current frame's register file.
#[inline]
unsafe fn set_reg(vm: &mut Vm, idx: u32, val: Value) {
    let base = vm.current_frame_base();
    vm.registers_mut()[base + idx as usize] = val;
}

// ── Load helpers ─────────────────────────────────────────────────────────────

#[no_mangle]
pub extern "C" fn jit_helper_load_const(vm: *mut Vm, dst: u32, const_idx: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let val = {
        let func = vm.current_frame_func();
        match &func.constants[const_idx as usize] {
            Constant::Number(n) => Value::Number(*n),
            Constant::String(s) => Value::String(s.clone()),
        }
    };
    unsafe { set_reg(vm, dst, val) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_load_null(vm: *mut Vm, dst: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    unsafe { set_reg(vm, dst, Value::Null) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_load_undefined(vm: *mut Vm, dst: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    unsafe { set_reg(vm, dst, Value::Undefined) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_load_true(vm: *mut Vm, dst: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    unsafe { set_reg(vm, dst, Value::Boolean(true)) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_load_false(vm: *mut Vm, dst: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    unsafe { set_reg(vm, dst, Value::Boolean(false)) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_load_int8(vm: *mut Vm, dst: u32, val: i32) -> u64 {
    let vm = unsafe { &mut *vm };
    unsafe { set_reg(vm, dst, Value::Number(val as f64)) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_move(vm: *mut Vm, dst: u32, src: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let val = unsafe { reg(vm, src).clone() };
    unsafe { set_reg(vm, dst, val) };
    OK
}

// ── Global variable access ───────────────────────────────────────────────────

#[no_mangle]
pub extern "C" fn jit_helper_load_global(vm: *mut Vm, dst: u32, name_idx: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let name = vm.current_frame_func().names[name_idx as usize].clone();
    let val = vm.globals().get(&name).cloned().unwrap_or(Value::Undefined);
    unsafe { set_reg(vm, dst, val) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_store_global(vm: *mut Vm, name_idx: u32, src: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let name = vm.current_frame_func().names[name_idx as usize].clone();
    let val = unsafe { reg(vm, src).clone() };
    vm.globals_mut().insert(name, val);
    OK
}

// ── Arithmetic helpers ───────────────────────────────────────────────────────

#[no_mangle]
pub extern "C" fn jit_helper_add(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let lv = unsafe { reg(vm, lhs).clone() };
    let rv = unsafe { reg(vm, rhs).clone() };
    let result = match (&lv, &rv) {
        (Value::Number(a), Value::Number(b)) => Value::Number(a + b),
        (Value::String(a), Value::String(b)) => Value::String(format!("{a}{b}")),
        (Value::String(a), Value::Number(b)) => Value::String(format!("{a}{}", format_number(*b))),
        (Value::Number(a), Value::String(b)) => Value::String(format!("{}{b}", format_number(*a))),
        (Value::String(a), _) => Value::String(format!("{a}{}", rv.to_js_string_helper(vm))),
        (_, Value::String(b)) => Value::String(format!("{}{b}", lv.to_js_string_helper(vm))),
        (Value::Number(a), _) => Value::Number(a + rv.to_number()),
        (_, Value::Number(b)) => Value::Number(lv.to_number() + b),
        _ => Value::Number(lv.to_number() + rv.to_number()),
    };
    unsafe { set_reg(vm, dst, result) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_sub(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let a = unsafe { reg(vm, lhs).to_number() };
    let b = unsafe { reg(vm, rhs).to_number() };
    unsafe { set_reg(vm, dst, Value::Number(a - b)) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_mul(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let a = unsafe { reg(vm, lhs).to_number() };
    let b = unsafe { reg(vm, rhs).to_number() };
    unsafe { set_reg(vm, dst, Value::Number(a * b)) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_div(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let a = unsafe { reg(vm, lhs).to_number() };
    let b = unsafe { reg(vm, rhs).to_number() };
    unsafe { set_reg(vm, dst, Value::Number(a / b)) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_rem(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let a = unsafe { reg(vm, lhs).to_number() };
    let b = unsafe { reg(vm, rhs).to_number() };
    unsafe { set_reg(vm, dst, Value::Number(a % b)) };
    OK
}

#[no_mangle]
pub extern "C" fn jit_helper_neg(vm: *mut Vm, dst: u32, src: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let a = unsafe { reg(vm, src).to_number() };
    unsafe { set_reg(vm, dst, Value::Number(-a)) };
    OK
}

// ── Bitwise helpers ──────────────────────────────────────────────────────────

#[no_mangle]
pub extern "C" fn jit_helper_bit_and(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 {
    let vm = unsafe { &mut *vm };
    let a = unsafe { reg(vm, lhs).to_number() } as i32;
    let b = unsafe { reg(vm, rhs).to_number() } as i32;
    unsafe { set_reg(vm, dst, Value::Number((a & b) as f64)) };
    OK
}

#[no_mangle]
202 + pub extern "C" fn jit_helper_bit_or(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 203 + let vm = unsafe { &mut *vm }; 204 + let a = unsafe { reg(vm, lhs).to_number() } as i32; 205 + let b = unsafe { reg(vm, rhs).to_number() } as i32; 206 + unsafe { set_reg(vm, dst, Value::Number((a | b) as f64)) }; 207 + OK 208 + } 209 + 210 + #[no_mangle] 211 + pub extern "C" fn jit_helper_bit_xor(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 212 + let vm = unsafe { &mut *vm }; 213 + let a = unsafe { reg(vm, lhs).to_number() } as i32; 214 + let b = unsafe { reg(vm, rhs).to_number() } as i32; 215 + unsafe { set_reg(vm, dst, Value::Number((a ^ b) as f64)) }; 216 + OK 217 + } 218 + 219 + #[no_mangle] 220 + pub extern "C" fn jit_helper_shift_left(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 221 + let vm = unsafe { &mut *vm }; 222 + let a = unsafe { reg(vm, lhs).to_number() } as i32; 223 + let b = (unsafe { reg(vm, rhs).to_number() } as u32) & 0x1f; 224 + unsafe { set_reg(vm, dst, Value::Number((a << b) as f64)) }; 225 + OK 226 + } 227 + 228 + #[no_mangle] 229 + pub extern "C" fn jit_helper_shift_right(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 230 + let vm = unsafe { &mut *vm }; 231 + let a = unsafe { reg(vm, lhs).to_number() } as i32; 232 + let b = (unsafe { reg(vm, rhs).to_number() } as u32) & 0x1f; 233 + unsafe { set_reg(vm, dst, Value::Number((a >> b) as f64)) }; 234 + OK 235 + } 236 + 237 + #[no_mangle] 238 + pub extern "C" fn jit_helper_ushift_right(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 239 + let vm = unsafe { &mut *vm }; 240 + let a = unsafe { reg(vm, lhs).to_number() } as u32; 241 + let b = (unsafe { reg(vm, rhs).to_number() } as u32) & 0x1f; 242 + unsafe { set_reg(vm, dst, Value::Number((a >> b) as f64)) }; 243 + OK 244 + } 245 + 246 + #[no_mangle] 247 + pub extern "C" fn jit_helper_bit_not(vm: *mut Vm, dst: u32, src: u32) -> u64 { 248 + let vm = unsafe { &mut *vm }; 249 + let a = unsafe { reg(vm, src).to_number() } as i32; 250 
+ unsafe { set_reg(vm, dst, Value::Number((!a) as f64)) }; 251 + OK 252 + } 253 + 254 + // ── Comparison helpers ─────────────────────────────────────────────────────── 255 + 256 + #[no_mangle] 257 + pub extern "C" fn jit_helper_eq(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 258 + let vm = unsafe { &mut *vm }; 259 + let result = unsafe { abstract_eq(reg(vm, lhs), reg(vm, rhs)) }; 260 + unsafe { set_reg(vm, dst, Value::Boolean(result)) }; 261 + OK 262 + } 263 + 264 + #[no_mangle] 265 + pub extern "C" fn jit_helper_strict_eq(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 266 + let vm = unsafe { &mut *vm }; 267 + let result = unsafe { strict_eq(reg(vm, lhs), reg(vm, rhs)) }; 268 + unsafe { set_reg(vm, dst, Value::Boolean(result)) }; 269 + OK 270 + } 271 + 272 + #[no_mangle] 273 + pub extern "C" fn jit_helper_not_eq(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 274 + let vm = unsafe { &mut *vm }; 275 + let result = unsafe { !abstract_eq(reg(vm, lhs), reg(vm, rhs)) }; 276 + unsafe { set_reg(vm, dst, Value::Boolean(result)) }; 277 + OK 278 + } 279 + 280 + #[no_mangle] 281 + pub extern "C" fn jit_helper_strict_not_eq(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 282 + let vm = unsafe { &mut *vm }; 283 + let result = unsafe { !strict_eq(reg(vm, lhs), reg(vm, rhs)) }; 284 + unsafe { set_reg(vm, dst, Value::Boolean(result)) }; 285 + OK 286 + } 287 + 288 + #[no_mangle] 289 + pub extern "C" fn jit_helper_less_than(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 290 + let vm = unsafe { &mut *vm }; 291 + let a = unsafe { reg(vm, lhs).to_number() }; 292 + let b = unsafe { reg(vm, rhs).to_number() }; 293 + unsafe { set_reg(vm, dst, Value::Boolean(a < b)) }; 294 + OK 295 + } 296 + 297 + #[no_mangle] 298 + pub extern "C" fn jit_helper_less_eq(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 299 + let vm = unsafe { &mut *vm }; 300 + let a = unsafe { reg(vm, lhs).to_number() }; 301 + let b = unsafe { reg(vm, rhs).to_number() }; 302 + unsafe { 
set_reg(vm, dst, Value::Boolean(a <= b)) }; 303 + OK 304 + } 305 + 306 + #[no_mangle] 307 + pub extern "C" fn jit_helper_greater_than(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 308 + let vm = unsafe { &mut *vm }; 309 + let a = unsafe { reg(vm, lhs).to_number() }; 310 + let b = unsafe { reg(vm, rhs).to_number() }; 311 + unsafe { set_reg(vm, dst, Value::Boolean(a > b)) }; 312 + OK 313 + } 314 + 315 + #[no_mangle] 316 + pub extern "C" fn jit_helper_greater_eq(vm: *mut Vm, dst: u32, lhs: u32, rhs: u32) -> u64 { 317 + let vm = unsafe { &mut *vm }; 318 + let a = unsafe { reg(vm, lhs).to_number() }; 319 + let b = unsafe { reg(vm, rhs).to_number() }; 320 + unsafe { set_reg(vm, dst, Value::Boolean(a >= b)) }; 321 + OK 322 + } 323 + 324 + // ── Logical helpers ────────────────────────────────────────────────────────── 325 + 326 + #[no_mangle] 327 + pub extern "C" fn jit_helper_logical_not(vm: *mut Vm, dst: u32, src: u32) -> u64 { 328 + let vm = unsafe { &mut *vm }; 329 + let val = unsafe { !reg(vm, src).to_boolean() }; 330 + unsafe { set_reg(vm, dst, Value::Boolean(val)) }; 331 + OK 332 + } 333 + 334 + /// Returns 1 if the value in `reg_idx` is truthy, 0 if falsy. 335 + /// Used by JumpIfTrue/JumpIfFalse to decide branch direction. 336 + #[no_mangle] 337 + pub extern "C" fn jit_helper_is_truthy(vm: *mut Vm, reg_idx: u32) -> u64 { 338 + let vm = unsafe { &*vm }; 339 + let val = unsafe { reg(vm, reg_idx) }; 340 + if val.to_boolean() { 341 + 1 342 + } else { 343 + 0 344 + } 345 + } 346 + 347 + /// Returns 1 if the value in `reg_idx` is null or undefined, 0 otherwise. 
348 + #[no_mangle] 349 + pub extern "C" fn jit_helper_is_nullish(vm: *mut Vm, reg_idx: u32) -> u64 { 350 + let vm = unsafe { &*vm }; 351 + let val = unsafe { reg(vm, reg_idx) }; 352 + match val { 353 + Value::Null | Value::Undefined => 1, 354 + _ => 0, 355 + } 356 + } 357 + 358 + // ── Property access ────────────────────────────────────────────────────────── 359 + 360 + #[no_mangle] 361 + pub extern "C" fn jit_helper_get_prop_name( 362 + vm: *mut Vm, 363 + dst: u32, 364 + obj: u32, 365 + name_idx: u32, 366 + ic_idx: u32, 367 + ) -> u64 { 368 + let vm = unsafe { &mut *vm }; 369 + // This delegates to the VM's property access logic with inline cache support. 370 + vm.jit_get_property_by_name(dst as u8, obj as u8, name_idx as u16, ic_idx as u16); 371 + OK 372 + } 373 + 374 + #[no_mangle] 375 + pub extern "C" fn jit_helper_set_prop_name( 376 + vm: *mut Vm, 377 + obj: u32, 378 + name_idx: u32, 379 + val: u32, 380 + ic_idx: u32, 381 + ) -> u64 { 382 + let vm = unsafe { &mut *vm }; 383 + vm.jit_set_property_by_name(obj as u8, name_idx as u16, val as u8, ic_idx as u16); 384 + OK 385 + } 386 + 387 + #[no_mangle] 388 + pub extern "C" fn jit_helper_get_property(vm: *mut Vm, dst: u32, obj: u32, key: u32) -> u64 { 389 + let vm = unsafe { &mut *vm }; 390 + vm.jit_get_property(dst as u8, obj as u8, key as u8); 391 + OK 392 + } 393 + 394 + #[no_mangle] 395 + pub extern "C" fn jit_helper_set_property(vm: *mut Vm, obj: u32, key: u32, val: u32) -> u64 { 396 + let vm = unsafe { &mut *vm }; 397 + vm.jit_set_property(obj as u8, key as u8, val as u8); 398 + OK 399 + } 400 + 401 + // ── Object/Array creation ──────────────────────────────────────────────────── 402 + 403 + #[no_mangle] 404 + pub extern "C" fn jit_helper_create_object(vm: *mut Vm, dst: u32) -> u64 { 405 + let vm = unsafe { &mut *vm }; 406 + let obj = vm.create_object_value(); 407 + unsafe { set_reg(vm, dst, obj) }; 408 + OK 409 + } 410 + 411 + #[no_mangle] 412 + pub extern "C" fn jit_helper_create_array(vm: *mut Vm, dst: 
u32) -> u64 { 413 + let vm = unsafe { &mut *vm }; 414 + let arr = vm.create_array_value(); 415 + unsafe { set_reg(vm, dst, arr) }; 416 + OK 417 + } 418 + 419 + // ── Function calls ─────────────────────────────────────────────────────────── 420 + 421 + /// Execute a function call from JIT code. This handles both native and bytecode 422 + /// callees, including JIT→JIT and JIT→interpreter transitions. 423 + /// 424 + /// Returns 0 on success (result in dst register) or 1 on exception. 425 + #[no_mangle] 426 + pub extern "C" fn jit_helper_call( 427 + vm: *mut Vm, 428 + dst: u32, 429 + func_reg: u32, 430 + args_start: u32, 431 + arg_count: u32, 432 + ) -> u64 { 433 + let vm = unsafe { &mut *vm }; 434 + match vm.jit_call(dst as u8, func_reg as u8, args_start as u8, arg_count as u8) { 435 + Ok(()) => OK, 436 + Err(_) => EXCEPTION, 437 + } 438 + } 439 + 440 + // ── Return ───────────────────────────────────────────────────────────────── 441 + 442 + /// Write the return value and signal that the JIT function is done. 
443 + #[no_mangle] 444 + pub extern "C" fn jit_helper_return(vm: *mut Vm, src: u32) -> u64 { 445 + let vm = unsafe { &mut *vm }; 446 + let val = unsafe { reg(vm, src).clone() }; 447 + vm.jit_return(val); 448 + OK 449 + } 450 + 451 + // ── Closure creation ───────────────────────────────────────────────────────── 452 + 453 + #[no_mangle] 454 + pub extern "C" fn jit_helper_create_closure(vm: *mut Vm, dst: u32, func_idx: u32) -> u64 { 455 + let vm = unsafe { &mut *vm }; 456 + match vm.jit_create_closure(dst as u8, func_idx as u16) { 457 + Ok(()) => OK, 458 + Err(_) => EXCEPTION, 459 + } 460 + } 461 + 462 + // ── Exception handling ─────────────────────────────────────────────────────── 463 + 464 + #[no_mangle] 465 + pub extern "C" fn jit_helper_throw(vm: *mut Vm, src: u32) -> u64 { 466 + let vm = unsafe { &mut *vm }; 467 + let val = unsafe { reg(vm, src).clone() }; 468 + if vm.handle_exception_pub(val) { 469 + OK 470 + } else { 471 + EXCEPTION 472 + } 473 + } 474 + 475 + // ── Typeof ─────────────────────────────────────────────────────────────────── 476 + 477 + #[no_mangle] 478 + pub extern "C" fn jit_helper_typeof(vm: *mut Vm, dst: u32, src: u32) -> u64 { 479 + let vm = unsafe { &mut *vm }; 480 + let type_str = match unsafe { reg(vm, src) } { 481 + Value::Undefined => "undefined", 482 + Value::Null => "object", 483 + Value::Boolean(_) => "boolean", 484 + Value::Number(_) => "number", 485 + Value::String(_) => "string", 486 + Value::Function(_) => "function", 487 + Value::Object(_) => "object", 488 + }; 489 + unsafe { set_reg(vm, dst, Value::String(type_str.to_string())) }; 490 + OK 491 + } 492 + 493 + // ── Void ───────────────────────────────────────────────────────────────────── 494 + 495 + #[no_mangle] 496 + pub extern "C" fn jit_helper_void(vm: *mut Vm, dst: u32) -> u64 { 497 + let vm = unsafe { &mut *vm }; 498 + unsafe { set_reg(vm, dst, Value::Undefined) }; 499 + OK 500 + } 501 + 502 + // ── Upvalue access 
─────────────────────────────────────────────────────────── 503 + 504 + #[no_mangle] 505 + pub extern "C" fn jit_helper_load_upvalue(vm: *mut Vm, dst: u32, idx: u32) -> u64 { 506 + let vm = unsafe { &mut *vm }; 507 + match vm.jit_load_upvalue(dst as u8, idx as u8) { 508 + Ok(()) => OK, 509 + Err(_) => EXCEPTION, 510 + } 511 + } 512 + 513 + #[no_mangle] 514 + pub extern "C" fn jit_helper_store_upvalue(vm: *mut Vm, idx: u32, src: u32) -> u64 { 515 + let vm = unsafe { &mut *vm }; 516 + match vm.jit_store_upvalue(idx as u8, src as u8) { 517 + Ok(()) => OK, 518 + Err(_) => EXCEPTION, 519 + } 520 + } 521 + 522 + // ── Cell operations ────────────────────────────────────────────────────────── 523 + 524 + #[no_mangle] 525 + pub extern "C" fn jit_helper_new_cell(vm: *mut Vm, dst: u32) -> u64 { 526 + let vm = unsafe { &mut *vm }; 527 + vm.jit_new_cell(dst as u8); 528 + OK 529 + } 530 + 531 + #[no_mangle] 532 + pub extern "C" fn jit_helper_cell_load(vm: *mut Vm, dst: u32, cell: u32) -> u64 { 533 + let vm = unsafe { &mut *vm }; 534 + match vm.jit_cell_load(dst as u8, cell as u8) { 535 + Ok(()) => OK, 536 + Err(_) => BAIL, 537 + } 538 + } 539 + 540 + #[no_mangle] 541 + pub extern "C" fn jit_helper_cell_store(vm: *mut Vm, cell: u32, src: u32) -> u64 { 542 + let vm = unsafe { &mut *vm }; 543 + match vm.jit_cell_store(cell as u8, src as u8) { 544 + Ok(()) => OK, 545 + Err(_) => BAIL, 546 + } 547 + } 548 + 549 + // ── Exception handler management ───────────────────────────────────────────── 550 + 551 + #[no_mangle] 552 + pub extern "C" fn jit_helper_push_exception_handler( 553 + vm: *mut Vm, 554 + catch_reg: u32, 555 + catch_ip: u32, 556 + ) -> u64 { 557 + let vm = unsafe { &mut *vm }; 558 + vm.jit_push_exception_handler(catch_reg as u8, catch_ip as usize); 559 + OK 560 + } 561 + 562 + #[no_mangle] 563 + pub extern "C" fn jit_helper_pop_exception_handler(vm: *mut Vm) -> u64 { 564 + let vm = unsafe { &mut *vm }; 565 + vm.jit_pop_exception_handler(); 566 + OK 567 + } 568 + 569 + // 
── Internal comparison functions ──────────────────────────────────────────── 570 + 571 + fn abstract_eq(a: &Value, b: &Value) -> bool { 572 + match (a, b) { 573 + (Value::Undefined, Value::Null) | (Value::Null, Value::Undefined) => true, 574 + (Value::Undefined, Value::Undefined) | (Value::Null, Value::Null) => true, 575 + (Value::Number(x), Value::Number(y)) => x == y, 576 + (Value::String(x), Value::String(y)) => x == y, 577 + (Value::Boolean(x), Value::Boolean(y)) => x == y, 578 + (Value::Object(x), Value::Object(y)) | (Value::Function(x), Value::Function(y)) => x == y, 579 + (Value::Number(x), Value::String(y)) => { 580 + if let Ok(n) = y.parse::<f64>() { 581 + *x == n 582 + } else { 583 + false 584 + } 585 + } 586 + (Value::String(x), Value::Number(y)) => { 587 + if let Ok(n) = x.parse::<f64>() { 588 + n == *y 589 + } else { 590 + false 591 + } 592 + } 593 + (Value::Boolean(x), _) => { 594 + let n = if *x { 1.0 } else { 0.0 }; 595 + abstract_eq(&Value::Number(n), b) 596 + } 597 + (_, Value::Boolean(y)) => { 598 + let n = if *y { 1.0 } else { 0.0 }; 599 + abstract_eq(a, &Value::Number(n)) 600 + } 601 + _ => false, 602 + } 603 + } 604 + 605 + fn strict_eq(a: &Value, b: &Value) -> bool { 606 + match (a, b) { 607 + (Value::Undefined, Value::Undefined) | (Value::Null, Value::Null) => true, 608 + (Value::Number(x), Value::Number(y)) => x == y, 609 + (Value::String(x), Value::String(y)) => x == y, 610 + (Value::Boolean(x), Value::Boolean(y)) => x == y, 611 + (Value::Object(x), Value::Object(y)) | (Value::Function(x), Value::Function(y)) => x == y, 612 + _ => false, 613 + } 614 + } 615 + 616 + /// Format a number for string concatenation (matching JS semantics). 
617 + fn format_number(n: f64) -> String { 618 + if n.is_nan() { 619 + "NaN".to_string() 620 + } else if n.is_infinite() { 621 + if n > 0.0 { 622 + "Infinity".to_string() 623 + } else { 624 + "-Infinity".to_string() 625 + } 626 + } else if n == 0.0 { 627 + "0".to_string() 628 + } else if n == (n as i64) as f64 && n.abs() < 1e15 { 629 + format!("{}", n as i64) 630 + } else { 631 + format!("{n}") 632 + } 633 + } 634 + 635 + /// Extension trait on Value for helper use. 636 + trait ValueJitExt { 637 + fn to_js_string_helper(&self, vm: &Vm) -> String; 638 + } 639 + 640 + impl ValueJitExt for Value { 641 + fn to_js_string_helper(&self, _vm: &Vm) -> String { 642 + match self { 643 + Value::Undefined => "undefined".to_string(), 644 + Value::Null => "null".to_string(), 645 + Value::Boolean(b) => b.to_string(), 646 + Value::Number(n) => format_number(*n), 647 + Value::String(s) => s.clone(), 648 + Value::Object(_) => "[object Object]".to_string(), 649 + Value::Function(_) => "function () { [native code] }".to_string(), 650 + } 651 + } 652 + }
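The `format_number` and `strict_eq` rules above can be exercised on their own. A minimal runnable sketch, using a trimmed-down `Value` enum as a stand-in for the engine's real type:

```rust
// Standalone sketch of the JS number-to-string and strict-equality rules
// mirrored from jit/helpers.rs. This `Value` is a stand-in, not the VM's type.
enum Value {
    Number(f64),
    String(String),
}

fn format_number(n: f64) -> String {
    if n.is_nan() {
        "NaN".to_string()
    } else if n.is_infinite() {
        if n > 0.0 { "Infinity".to_string() } else { "-Infinity".to_string() }
    } else if n == 0.0 {
        "0".to_string() // covers -0.0 too: JS prints both zeros as "0"
    } else if n == (n as i64) as f64 && n.abs() < 1e15 {
        format!("{}", n as i64) // integral values print without a decimal point
    } else {
        format!("{n}")
    }
}

fn strict_eq(a: &Value, b: &Value) -> bool {
    match (a, b) {
        // f64's `==` already encodes the JS rule that NaN !== NaN.
        (Value::Number(x), Value::Number(y)) => x == y,
        (Value::String(x), Value::String(y)) => x == y,
        _ => false,
    }
}

fn main() {
    assert_eq!(format_number(3.0), "3"); // no trailing ".0"
    assert_eq!(format_number(0.5), "0.5");
    assert_eq!(format_number(-0.0), "0");
    assert_eq!(format_number(f64::NAN), "NaN");
    assert!(!strict_eq(&Value::Number(f64::NAN), &Value::Number(f64::NAN)));
    assert!(strict_eq(&Value::String("a".into()), &Value::String("a".into())));
    println!("ok");
}
```

The `1e15` cutoff keeps the integer path safely inside f64's exactly-representable integer range (2^53 ≈ 9.0e15), so the `as i64` round trip never changes the value.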
+6 -2
crates/js/src/jit/mod.rs
··· 1 1 //! JIT compiler infrastructure for AArch64. 2 2 //! 3 - //! This module provides the low-level building blocks for JIT compilation: 3 + //! This module provides: 4 4 //! - Executable memory allocation with W^X protection 5 5 //! - AArch64 machine code assembler 6 - //! - Entry/exit stubs for VM ↔ JIT transitions 6 + //! - Baseline JIT compiler (bytecode → native code) 7 + //! - Helper functions called by JIT-compiled code 7 8 8 9 pub mod assembler; 9 10 pub mod buffer; 11 + pub mod compiler; 12 + pub mod helpers; 10 13 pub mod memory; 11 14 12 15 pub use assembler::Assembler; 13 16 pub use buffer::{CodePtr, JitBuffer}; 17 + pub use compiler::{BaselineJit, CompileError, JIT_THRESHOLD}; 14 18 pub use memory::ExecutableMemory;
+817 -4
crates/js/src/vm.rs
··· 7 7 8 8 use crate::bytecode::{Constant, Function, Op, Reg}; 9 9 use crate::gc::{Gc, GcRef, Traceable}; 10 + use crate::jit::buffer::CodePtr; 11 + use crate::jit::{BaselineJit, JitBuffer, JIT_THRESHOLD}; 10 12 use crate::shape::{PropertyAttrs, ShapeId, ShapeTable}; 11 13 use std::cell::RefCell; 12 14 use std::collections::HashMap; ··· 1145 1147 console_output: Box<dyn ConsoleOutput>, 1146 1148 /// DOM bridge for JS-DOM interop (set via `attach_document`). 1147 1149 pub(crate) dom_bridge: Option<Rc<DomBridge>>, 1150 + /// JIT code buffer for compiled native code. 1151 + jit_buffer: Option<JitBuffer>, 1152 + /// Per-function call counts (keyed by GcRef of the function). 1153 + jit_call_counts: HashMap<GcRef, u32>, 1154 + /// Compiled JIT code per function GcRef. 1155 + jit_compiled: HashMap<GcRef, CodePtr>, 1156 + /// JIT re-entrancy guard: > 0 when running JIT-compiled code. 1157 + /// Prevents the interpreter from re-entering JIT dispatch for nested calls. 1158 + jit_depth: u32, 1148 1159 } 1149 1160 1150 1161 /// Maximum register file size. ··· 1172 1183 promise_prototype: None, 1173 1184 console_output: Box::new(StdConsoleOutput), 1174 1185 dom_bridge: None, 1186 + jit_buffer: None, 1187 + jit_call_counts: HashMap::new(), 1188 + jit_compiled: HashMap::new(), 1189 + jit_depth: 0, 1175 1190 }; 1176 1191 crate::builtins::init_builtins(&mut vm); 1177 1192 vm ··· 2759 2774 2760 2775 /// Main dispatch loop. 2761 2776 fn run(&mut self) -> Result<Value, RuntimeError> { 2777 + self.run_inner(1) 2778 + } 2779 + 2780 + /// Run the interpreter until the frame count drops below `min_frames`. 2781 + /// Used by `jit_call` to execute a callee and return when it's done. 
2782 + fn run_to_depth(&mut self, min_frames: usize) -> Result<(), RuntimeError> { 2783 + self.run_inner(min_frames)?; 2784 + Ok(()) 2785 + } 2786 + 2787 + fn run_inner(&mut self, min_frames: usize) -> Result<Value, RuntimeError> { 2762 2788 loop { 2763 2789 let fi = self.frames.len() - 1; 2764 2790 2765 2791 // Check if we've reached the end of bytecode. 2766 2792 if self.frames[fi].ip >= self.frames[fi].func.code.len() { 2767 - if self.frames.len() == 1 { 2768 - self.frames.pop(); 2793 + if self.frames.len() <= min_frames { 2794 + let old = self.frames.pop().unwrap(); 2795 + if !self.frames.is_empty() { 2796 + self.registers[old.return_reg] = Value::Undefined; 2797 + } 2769 2798 return Ok(Value::Undefined); 2770 2799 } 2771 2800 let old = self.frames.pop().unwrap(); ··· 3531 3560 continue; 3532 3561 } 3533 3562 3563 + // ── JIT: check if function should be compiled/executed ── 3564 + if self.jit_should_compile(func_gc_ref) { 3565 + // Set up registers for the callee just like the interpreter. 3566 + let callee_base = 3567 + base + self.frames[fi].func.register_count as usize; 3568 + let callee_regs = callee_func.register_count as usize; 3569 + self.ensure_registers(callee_base + callee_regs); 3570 + for i in 0..callee_func.param_count.min(arg_count) { 3571 + self.registers[callee_base + i as usize] = 3572 + args[i as usize].clone(); 3573 + } 3574 + for i in arg_count..callee_func.param_count { 3575 + self.registers[callee_base + i as usize] = Value::Undefined; 3576 + } 3577 + self.frames.push(CallFrame { 3578 + func: callee_func.clone(), 3579 + ip: 0, 3580 + base: callee_base, 3581 + return_reg: base + dst as usize, 3582 + exception_handlers: Vec::new(), 3583 + upvalues: callee_upvalues.clone(), 3584 + }); 3585 + 3586 + if let Some(result) = self.try_jit_execute(func_gc_ref) { 3587 + match result { 3588 + Ok(()) => { 3589 + // JIT executed successfully; frame was popped 3590 + // by jit_return, result is in return_reg. 
3591 + continue; 3592 + } 3593 + Err(_) => { 3594 + // JIT bailed out — re-run via interpreter. 3595 + // The frame is still on the stack; reset IP 3596 + // and let the interpreter pick it up. 3597 + let fi_new = self.frames.len() - 1; 3598 + self.frames[fi_new].ip = 0; 3599 + continue; 3600 + } 3601 + } 3602 + } else { 3603 + // Compilation failed — interpreter will run this 3604 + // frame normally on the next loop iteration. 3605 + continue; 3606 + } 3607 + } 3608 + 3534 3609 let callee_base = base + self.frames[fi].func.register_count as usize; 3535 3610 let callee_regs = callee_func.register_count as usize; 3536 3611 self.ensure_registers(callee_base + callee_regs); ··· 3560 3635 let base = self.frames[fi].base; 3561 3636 let val = self.registers[base + reg as usize].clone(); 3562 3637 3563 - if self.frames.len() == 1 { 3564 - self.frames.pop(); 3638 + if self.frames.len() <= min_frames { 3639 + let old = self.frames.pop().unwrap(); 3640 + // Write return value if there's still a caller frame. 3641 + if !self.frames.is_empty() { 3642 + self.registers[old.return_reg] = val.clone(); 3643 + } 3565 3644 return Ok(val); 3566 3645 } 3567 3646 ··· 4316 4395 pub fn remove_global(&mut self, name: &str) { 4317 4396 self.globals.remove(name); 4318 4397 } 4398 + 4399 + // ── JIT integration ───────────────────────────────────────── 4400 + 4401 + /// Get the base register offset of the current (top) call frame. 4402 + pub fn current_frame_base(&self) -> usize { 4403 + self.frames.last().map_or(0, |f| f.base) 4404 + } 4405 + 4406 + /// Get a reference to the current frame's bytecode function. 4407 + pub fn current_frame_func(&self) -> &Function { 4408 + &self.frames.last().expect("no call frame").func 4409 + } 4410 + 4411 + /// Immutable access to the register file (used by JIT helpers). 4412 + pub fn registers(&self) -> &[Value] { 4413 + &self.registers 4414 + } 4415 + 4416 + /// Mutable access to the register file (used by JIT helpers). 
4417 + pub fn registers_mut(&mut self) -> &mut [Value] { 4418 + &mut self.registers 4419 + } 4420 + 4421 + /// Immutable access to globals (used by JIT helpers). 4422 + pub fn globals(&self) -> &HashMap<String, Value> { 4423 + &self.globals 4424 + } 4425 + 4426 + /// Mutable access to globals (used by JIT helpers). 4427 + pub fn globals_mut(&mut self) -> &mut HashMap<String, Value> { 4428 + &mut self.globals 4429 + } 4430 + 4431 + /// Try to JIT-compile and execute a function. Called when a bytecode function 4432 + /// has been called enough times to exceed the JIT threshold. 4433 + /// 4434 + /// Returns `Some(result)` if JIT code ran, where the result may report an 4435 + /// exception or bail-out; `None` if compilation failed (run in the interpreter). 4436 + fn try_jit_execute(&mut self, func_gc_ref: GcRef) -> Option<Result<(), RuntimeError>> { 4437 + // Check if we have cached compiled code. 4438 + if let Some(&code_ptr) = self.jit_compiled.get(&func_gc_ref) { 4439 + return Some(self.run_jit_code(code_ptr)); 4440 + } 4441 + 4442 + // Try to compile. 4443 + let func = match self.gc.get(func_gc_ref) { 4444 + Some(HeapObject::Function(fdata)) => match &fdata.kind { 4445 + FunctionKind::Bytecode(bc) => bc.func.clone(), 4446 + _ => return None, 4447 + }, 4448 + _ => return None, 4449 + }; 4450 + 4451 + // Initialize JIT buffer lazily. 4452 + if self.jit_buffer.is_none() { 4453 + self.jit_buffer = JitBuffer::new().ok(); 4454 + } 4455 + let buffer = self.jit_buffer.as_mut()?; 4456 + 4457 + let code_ptr = match BaselineJit::compile(&func, buffer) { 4458 + Ok(ptr) => ptr, 4459 + Err(_) => return None, // Compilation failed — fall back to interpreter 4460 + }; 4461 + 4462 + self.jit_compiled.insert(func_gc_ref, code_ptr); 4463 + Some(self.run_jit_code(code_ptr)) 4464 + } 4465 + 4466 + /// Execute JIT-compiled code. The call frame must already be set up. 
4467 + fn run_jit_code(&mut self, code_ptr: CodePtr) -> Result<(), RuntimeError> { 4468 + self.jit_depth += 1; 4469 + let vm_ptr = self as *mut Vm; 4470 + let result = unsafe { 4471 + let jit_fn: extern "C" fn(*mut Vm) -> u64 = std::mem::transmute(code_ptr.as_ptr()); 4472 + jit_fn(vm_ptr) 4473 + }; 4474 + self.jit_depth -= 1; 4475 + 4476 + match result { 4477 + 0 => Ok(()), // Normal return 4478 + 1 => Err(RuntimeError { 4479 + kind: ErrorKind::Error, 4480 + message: "exception in JIT code".into(), 4481 + }), 4482 + _ => Err(RuntimeError { 4483 + kind: ErrorKind::Error, 4484 + message: "JIT bail-out".into(), 4485 + }), 4486 + } 4487 + } 4488 + 4489 + /// Increment JIT call counter for a function and return true if it should be JIT'd. 4490 + /// Returns false if we're already inside JIT-compiled code (re-entrancy guard). 4491 + fn jit_should_compile(&mut self, func_gc_ref: GcRef) -> bool { 4492 + if self.jit_depth > 0 { 4493 + return false; 4494 + } 4495 + let count = self.jit_call_counts.entry(func_gc_ref).or_insert(0); 4496 + *count += 1; 4497 + *count >= JIT_THRESHOLD 4498 + } 4499 + 4500 + // ── JIT helper methods (called by extern "C" helpers) ──────── 4501 + 4502 + /// Property access with inline cache support (for JIT helpers). 
4503 +     pub fn jit_get_property_by_name(&mut self, dst: u8, obj_r: u8, name_idx: u16, ic_idx: u16) {
4504 +         let fi = self.frames.len() - 1;
4505 +         let base = self.frames[fi].base;
4506 +
4507 +         // IC fast path
4508 +         let mut ic_hit = false;
4509 +         if let Value::Object(gc_ref) = self.registers[base + obj_r as usize] {
4510 +             if let Some(HeapObject::Object(data)) = self.gc.get(gc_ref) {
4511 +                 if let ObjectStorage::Shaped { shape, slots } = &data.storage {
4512 +                     if let Some(slot_idx) =
4513 +                         self.frames[fi].func.inline_caches[ic_idx as usize].lookup(*shape)
4514 +                     {
4515 +                         self.registers[base + dst as usize] = slots[slot_idx as usize].clone();
4516 +                         ic_hit = true;
4517 +                     }
4518 +                 }
4519 +             }
4520 +         }
4521 +
4522 +         if !ic_hit {
4523 +             let key = self.frames[fi].func.names[name_idx as usize].clone();
4524 +             let obj_gc_ref = match self.registers[base + obj_r as usize] {
4525 +                 Value::Object(r) | Value::Function(r) => Some(r),
4526 +                 _ => None,
4527 +             };
4528 +             let val = match self.registers[base + obj_r as usize] {
4529 +                 Value::Object(gc_ref) => {
4530 +                     let (val, own_slot) = gc_get_property_ic(&self.gc, gc_ref, &key, &self.shapes);
4531 +                     if let Some((shape, slot_index)) = own_slot {
4532 +                         self.frames[fi].func.inline_caches[ic_idx as usize]
4533 +                             .update(shape, slot_index);
4534 +                     }
4535 +                     val
4536 +                 }
4537 +                 Value::Function(gc_ref) => gc_get_property(&self.gc, gc_ref, &key, &self.shapes),
4538 +                 Value::String(ref s) => {
4539 +                     let v = string_get_property(s, &key);
4540 +                     if matches!(v, Value::Undefined) {
4541 +                         self.string_prototype
4542 +                             .map(|p| gc_get_property(&self.gc, p, &key, &self.shapes))
4543 +                             .unwrap_or(Value::Undefined)
4544 +                     } else {
4545 +                         v
4546 +                     }
4547 +                 }
4548 +                 Value::Number(_) => self
4549 +                     .number_prototype
4550 +                     .map(|p| gc_get_property(&self.gc, p, &key, &self.shapes))
4551 +                     .unwrap_or(Value::Undefined),
4552 +                 Value::Boolean(_) => self
4553 +                     .boolean_prototype
4554 +                     .map(|p| gc_get_property(&self.gc, p, &key, &self.shapes))
4555 +                     .unwrap_or(Value::Undefined),
4556 +                 _ => Value::Undefined,
4557 +             };
4558 +             let val = match val {
4559 +                 Value::Undefined if obj_gc_ref.is_some() => self
4560 +                     .resolve_dom_property(obj_gc_ref.unwrap(), &key)
4561 +                     .unwrap_or(Value::Undefined),
4562 +                 other => other,
4563 +             };
4564 +             self.registers[base + dst as usize] = val;
4565 +         }
4566 +     }
4567 +
4568 +     /// Property set with inline cache support (for JIT helpers).
4569 +     pub fn jit_set_property_by_name(&mut self, obj_r: u8, name_idx: u16, val_r: u8, ic_idx: u16) {
4570 +         let fi = self.frames.len() - 1;
4571 +         let base = self.frames[fi].base;
4572 +
4573 +         let mut ic_hit = false;
4574 +         if let Value::Object(gc_ref) = self.registers[base + obj_r as usize] {
4575 +             let slot_idx_opt = if let Some(HeapObject::Object(data)) = self.gc.get(gc_ref) {
4576 +                 if let ObjectStorage::Shaped { shape, .. } = &data.storage {
4577 +                     self.frames[fi].func.inline_caches[ic_idx as usize].lookup(*shape)
4578 +                 } else {
4579 +                     None
4580 +                 }
4581 +             } else {
4582 +                 None
4583 +             };
4584 +             if let Some(slot_idx) = slot_idx_opt {
4585 +                 let val = self.registers[base + val_r as usize].clone();
4586 +                 if let Some(HeapObject::Object(data)) = self.gc.get_mut(gc_ref) {
4587 +                     if let ObjectStorage::Shaped { slots, .. } = &mut data.storage {
4588 +                         slots[slot_idx as usize] = val;
4589 +                         ic_hit = true;
4590 +                     }
4591 +                 }
4592 +             }
4593 +         }
4594 +
4595 +         if !ic_hit {
4596 +             let key = self.frames[fi].func.names[name_idx as usize].clone();
4597 +             let val = self.registers[base + val_r as usize].clone();
4598 +             let obj_gc = match self.registers[base + obj_r as usize] {
4599 +                 Value::Object(r) | Value::Function(r) => Some(r),
4600 +                 _ => None,
4601 +             };
4602 +             let dom_handled = obj_gc
4603 +                 .map(|r| self.handle_dom_property_set(r, &key, &val))
4604 +                 .unwrap_or(false);
4605 +             if !dom_handled {
4606 +                 if let Some(gc_ref) = obj_gc {
4607 +                     let shape_before = match self.gc.get(gc_ref) {
4608 +                         Some(HeapObject::Object(data)) => {
4609 +                             if let ObjectStorage::Shaped { shape, .. } = &data.storage {
4610 +                                 Some(*shape)
4611 +                             } else {
4612 +                                 None
4613 +                             }
4614 +                         }
4615 +                         _ => None,
4616 +                     };
4617 +                     match self.gc.get_mut(gc_ref) {
4618 +                         Some(HeapObject::Object(data)) => {
4619 +                             data.insert_property(key, Property::builtin(val), &mut self.shapes);
4620 +                             // Update IC
4621 +                             if let ObjectStorage::Shaped { shape, .. } = &data.storage {
4622 +                                 let _ = shape_before; // suppress warning
4623 +                                 let prop_name = &self.frames[fi].func.names[name_idx as usize];
4624 +                                 if let Some(slot) = self.shapes.lookup(*shape, prop_name) {
4625 +                                     self.frames[fi].func.inline_caches[ic_idx as usize]
4626 +                                         .update(*shape, slot.index);
4627 +                                 }
4628 +                             }
4629 +                         }
4630 +                         Some(HeapObject::Function(fdata)) => {
4631 +                             fdata.properties.insert(key, Property::builtin(val));
4632 +                         }
4633 +                         _ => {}
4634 +                     }
4635 +                 }
4636 +             }
4637 +         }
4638 +     }
4639 +
4640 +     /// Dynamic property get (for JIT helpers).
4641 +     pub fn jit_get_property(&mut self, dst: u8, obj_r: u8, key_r: u8) {
4642 +         let fi = self.frames.len() - 1;
4643 +         let base = self.frames[fi].base;
4644 +         let key = match &self.registers[base + key_r as usize] {
4645 +             Value::String(s) => s.clone(),
4646 +             Value::Number(n) => {
4647 +                 if *n == (*n as u32) as f64 {
4648 +                     (*n as u32).to_string()
4649 +                 } else {
4650 +                     format!("{n}")
4651 +                 }
4652 +             }
4653 +             other => other.to_js_string(&self.gc),
4654 +         };
4655 +         let val = match self.registers[base + obj_r as usize] {
4656 +             Value::Object(r) | Value::Function(r) => {
4657 +                 gc_get_property(&self.gc, r, &key, &self.shapes)
4658 +             }
4659 +             Value::String(ref s) => {
4660 +                 let v = string_get_property(s, &key);
4661 +                 if matches!(v, Value::Undefined) {
4662 +                     self.string_prototype
4663 +                         .map(|p| gc_get_property(&self.gc, p, &key, &self.shapes))
4664 +                         .unwrap_or(Value::Undefined)
4665 +                 } else {
4666 +                     v
4667 +                 }
4668 +             }
4669 +             _ => Value::Undefined,
4670 +         };
4671 +         self.registers[base + dst as usize] = val;
4672 +     }
4673 +
4674 +     /// Dynamic property set (for JIT helpers).
4675 +     pub fn jit_set_property(&mut self, obj_r: u8, key_r: u8, val_r: u8) {
4676 +         let fi = self.frames.len() - 1;
4677 +         let base = self.frames[fi].base;
4678 +         let key = match &self.registers[base + key_r as usize] {
4679 +             Value::String(s) => s.clone(),
4680 +             Value::Number(n) => {
4681 +                 if *n == (*n as u32) as f64 {
4682 +                     (*n as u32).to_string()
4683 +                 } else {
4684 +                     format!("{n}")
4685 +                 }
4686 +             }
4687 +             other => other.to_js_string(&self.gc),
4688 +         };
4689 +         let val = self.registers[base + val_r as usize].clone();
4690 +         match self.registers[base + obj_r as usize] {
4691 +             Value::Object(gc_ref) | Value::Function(gc_ref) => match self.gc.get_mut(gc_ref) {
4692 +                 Some(HeapObject::Object(data)) => {
4693 +                     data.insert_property(key, Property::builtin(val), &mut self.shapes);
4694 +                 }
4695 +                 Some(HeapObject::Function(fdata)) => {
4696 +                     fdata.properties.insert(key, Property::builtin(val));
4697 +                 }
4698 +                 _ => {}
4699 +             },
4700 +             _ => {}
4701 +         }
4702 +     }
4703 +
4704 +     /// Execute a function call from JIT code.
4705 +     pub fn jit_call(
4706 +         &mut self,
4707 +         dst: u8,
4708 +         func_r: u8,
4709 +         args_start: u8,
4710 +         arg_count: u8,
4711 +     ) -> Result<(), RuntimeError> {
4712 +         let fi = self.frames.len() - 1;
4713 +         let base = self.frames[fi].base;
4714 +
4715 +         let func_gc_ref = match self.registers[base + func_r as usize] {
4716 +             Value::Function(r) => r,
4717 +             _ => {
4718 +                 let desc = self.registers[base + func_r as usize].to_js_string(&self.gc);
4719 +                 return Err(RuntimeError::type_error(format!(
4720 +                     "{desc} is not a function"
4721 +                 )));
4722 +             }
4723 +         };
4724 +
4725 +         let mut args = Vec::with_capacity(arg_count as usize);
4726 +         for i in 0..arg_count {
4727 +             args.push(self.registers[base + (args_start + i) as usize].clone());
4728 +         }
4729 +
4730 +         let call_info = {
4731 +             match self.gc.get(func_gc_ref) {
4732 +                 Some(HeapObject::Function(fdata)) => match &fdata.kind {
4733 +                     FunctionKind::Native(n) => CallInfo::Native(n.callback),
4734 +                     FunctionKind::Bytecode(bc) => {
4735 +                         CallInfo::Bytecode(Box::new(bc.func.clone()), fdata.upvalues.clone())
4736 +                     }
4737 +                 },
4738 +                 _ => return Err(RuntimeError::type_error("not a function")),
4739 +             }
4740 +         };
4741 +
4742 +         match call_info {
4743 +             CallInfo::Native(callback) => {
4744 +                 let this = self
4745 +                     .globals
4746 +                     .get("this")
4747 +                     .cloned()
4748 +                     .unwrap_or(Value::Undefined);
4749 +                 let dom_ref = self.dom_bridge.as_deref();
4750 +                 let mut ctx = NativeContext {
4751 +                     gc: &mut self.gc,
4752 +                     shapes: &mut self.shapes,
4753 +                     this,
4754 +                     console_output: &*self.console_output,
4755 +                     dom_bridge: dom_ref,
4756 +                 };
4757 +                 match callback(&args, &mut ctx) {
4758 +                     Ok(val) => {
4759 +                         self.registers[base + dst as usize] = val;
4760 +                         Ok(())
4761 +                     }
4762 +                     Err(e) => Err(e),
4763 +                 }
4764 +             }
4765 +             CallInfo::Bytecode(callee_func, callee_upvalues) => {
4766 +                 let callee_func = *callee_func;
4767 +                 if callee_func.is_generator || callee_func.is_async {
4768 +                     // Generators and async functions need special handling.
4769 +                     // For now, fall back to creating generator objects.
4770 +                     if callee_func.is_async && !callee_func.is_generator {
4771 +                         let gen_ref =
4772 +                             self.create_raw_generator(callee_func, callee_upvalues, &args);
4773 +                         let result_promise = crate::builtins::create_promise_object_pub(
4774 +                             &mut self.gc,
4775 +                             &mut self.shapes,
4776 +                         );
4777 +                         self.drive_async_step(gen_ref, result_promise, Value::Undefined, false);
4778 +                         self.registers[base + dst as usize] = Value::Object(result_promise);
4779 +                         return Ok(());
4780 +                     }
4781 +                     if callee_func.is_generator {
4782 +                         let gen_obj =
4783 +                             self.create_generator_object(callee_func, callee_upvalues, &args);
4784 +                         self.registers[base + dst as usize] = Value::Object(gen_obj);
4785 +                         return Ok(());
4786 +                     }
4787 +                 }
4788 +
4789 +                 if self.frames.len() >= MAX_CALL_DEPTH {
4790 +                     return Err(RuntimeError::range_error(
4791 +                         "Maximum call stack size exceeded",
4792 +                     ));
4793 +                 }
4794 +
4795 +                 let callee_base = base + self.frames[fi].func.register_count as usize;
4796 +                 let callee_regs = callee_func.register_count as usize;
4797 +                 self.ensure_registers(callee_base + callee_regs);
4798 +
4799 +                 for i in 0..callee_func.param_count.min(arg_count) {
4800 +                     self.registers[callee_base + i as usize] = args[i as usize].clone();
4801 +                 }
4802 +                 for i in arg_count..callee_func.param_count {
4803 +                     self.registers[callee_base + i as usize] = Value::Undefined;
4804 +                 }
4805 +
4806 +                 self.frames.push(CallFrame {
4807 +                     func: callee_func,
4808 +                     ip: 0,
4809 +                     base: callee_base,
4810 +                     return_reg: base + dst as usize,
4811 +                     exception_handlers: Vec::new(),
4812 +                     upvalues: callee_upvalues,
4813 +                 });
4814 +
4815 +                 // Run the callee via the interpreter until its frame returns.
4816 +                 // min_frames = current frame count (after push) so the
4817 +                 // interpreter stops when this callee's frame is popped.
4818 +                 let min_frames = self.frames.len();
4819 +                 self.run_to_depth(min_frames)?;
4820 +                 Ok(())
4821 +             }
4822 +         }
4823 +     }
4824 +
4825 +     /// Write a return value from JIT code and pop the current frame.
4826 +     pub fn jit_return(&mut self, val: Value) {
4827 +         if self.frames.len() <= 1 {
4828 +             // Top-level frame: just store the value
4829 +             if let Some(frame) = self.frames.last() {
4830 +                 self.registers[frame.return_reg] = val;
4831 +             }
4832 +             return;
4833 +         }
4834 +         let old = self.frames.pop().unwrap();
4835 +         self.registers[old.return_reg] = val;
4836 +     }
4837 +
4838 +     /// Create a closure from JIT code.
4839 +     pub fn jit_create_closure(&mut self, dst: u8, func_idx: u16) -> Result<(), RuntimeError> {
4840 +         let fi = self.frames.len() - 1;
4841 +         let base = self.frames[fi].base;
4842 +         let inner_func = self.frames[fi].func.functions[func_idx as usize].clone();
4843 +         let name = inner_func.name.clone();
4844 +
4845 +         let mut upvalues = Vec::with_capacity(inner_func.upvalue_defs.len());
4846 +         for def in &inner_func.upvalue_defs {
4847 +             let cell_ref = if def.is_local {
4848 +                 match &self.registers[base + def.index as usize] {
4849 +                     Value::Object(r) => *r,
4850 +                     _ => return Err(RuntimeError::type_error("expected cell for upvalue")),
4851 +                 }
4852 +             } else {
4853 +                 self.frames[fi].upvalues[def.index as usize]
4854 +             };
4855 +             upvalues.push(cell_ref);
4856 +         }
4857 +
4858 +         let gc_ref = self.gc.alloc(HeapObject::Function(Box::new(FunctionData {
4859 +             name,
4860 +             kind: FunctionKind::Bytecode(BytecodeFunc { func: inner_func }),
4861 +             prototype_obj: None,
4862 +             properties: HashMap::new(),
4863 +             upvalues,
4864 +         })));
4865 +         self.registers[base + dst as usize] = Value::Function(gc_ref);
4866 +         Ok(())
4867 +     }
4868 +
4869 +     /// Load from upvalue (closure cell).
4870 +     pub fn jit_load_upvalue(&mut self, dst: u8, idx: u8) -> Result<(), RuntimeError> {
4871 +         let fi = self.frames.len() - 1;
4872 +         let base = self.frames[fi].base;
4873 +         let cell_ref = self.frames[fi].upvalues[idx as usize];
4874 +         let val = match self.gc.get(cell_ref) {
4875 +             Some(HeapObject::Cell(v)) => v.clone(),
4876 +             _ => return Err(RuntimeError::type_error("expected cell")),
4877 +         };
4878 +         self.registers[base + dst as usize] = val;
4879 +         Ok(())
4880 +     }
4881 +
4882 +     /// Store into upvalue (closure cell).
4883 +     pub fn jit_store_upvalue(&mut self, idx: u8, src: u8) -> Result<(), RuntimeError> {
4884 +         let fi = self.frames.len() - 1;
4885 +         let base = self.frames[fi].base;
4886 +         let cell_ref = self.frames[fi].upvalues[idx as usize];
4887 +         let val = self.registers[base + src as usize].clone();
4888 +         match self.gc.get_mut(cell_ref) {
4889 +             Some(HeapObject::Cell(v)) => {
4890 +                 *v = val;
4891 +                 Ok(())
4892 +             }
4893 +             _ => Err(RuntimeError::type_error("expected cell")),
4894 +         }
4895 +     }
4896 +
4897 +     /// Create a new cell (for closure captures).
4898 +     pub fn jit_new_cell(&mut self, dst: u8) {
4899 +         let fi = self.frames.len() - 1;
4900 +         let base = self.frames[fi].base;
4901 +         let cell_ref = self.gc.alloc(HeapObject::Cell(Value::Undefined));
4902 +         self.registers[base + dst as usize] = Value::Object(cell_ref);
4903 +     }
4904 +
4905 +     /// Load value from a cell.
4906 +     pub fn jit_cell_load(&mut self, dst: u8, cell_r: u8) -> Result<(), RuntimeError> {
4907 +         let fi = self.frames.len() - 1;
4908 +         let base = self.frames[fi].base;
4909 +         let cell_ref = match &self.registers[base + cell_r as usize] {
4910 +             Value::Object(r) => *r,
4911 +             _ => return Err(RuntimeError::type_error("expected cell")),
4912 +         };
4913 +         let val = match self.gc.get(cell_ref) {
4914 +             Some(HeapObject::Cell(v)) => v.clone(),
4915 +             _ => return Err(RuntimeError::type_error("expected cell")),
4916 +         };
4917 +         self.registers[base + dst as usize] = val;
4918 +         Ok(())
4919 +     }
4920 +
4921 +     /// Store value into a cell.
4922 +     pub fn jit_cell_store(&mut self, cell_r: u8, src: u8) -> Result<(), RuntimeError> {
4923 +         let fi = self.frames.len() - 1;
4924 +         let base = self.frames[fi].base;
4925 +         let cell_ref = match &self.registers[base + cell_r as usize] {
4926 +             Value::Object(r) => *r,
4927 +             _ => return Err(RuntimeError::type_error("expected cell")),
4928 +         };
4929 +         let val = self.registers[base + src as usize].clone();
4930 +         match self.gc.get_mut(cell_ref) {
4931 +             Some(HeapObject::Cell(v)) => {
4932 +                 *v = val;
4933 +                 Ok(())
4934 +             }
4935 +             _ => Err(RuntimeError::type_error("expected cell")),
4936 +         }
4937 +     }
4938 +
4939 +     /// Create a new plain object.
4940 +     pub fn create_object_value(&mut self) -> Value {
4941 +         let obj = ObjectData::new();
4942 +         let gc_ref = self.gc.alloc(HeapObject::Object(obj));
4943 +         Value::Object(gc_ref)
4944 +     }
4945 +
4946 +     /// Create a new array object.
4947 +     pub fn create_array_value(&mut self) -> Value {
4948 +         let mut obj = ObjectData::new();
4949 +         obj.prototype = self.array_prototype;
4950 +         obj.insert_property(
4951 +             "length".to_string(),
4952 +             Property {
4953 +                 value: Value::Number(0.0),
4954 +                 writable: true,
4955 +                 enumerable: false,
4956 +                 configurable: false,
4957 +             },
4958 +             &mut self.shapes,
4959 +         );
4960 +         let gc_ref = self.gc.alloc(HeapObject::Object(obj));
4961 +         Value::Object(gc_ref)
4962 +     }
4963 +
4964 +     /// Push an exception handler from JIT code.
4965 +     pub fn jit_push_exception_handler(&mut self, catch_reg: u8, catch_ip: usize) {
4966 +         let fi = self.frames.len() - 1;
4967 +         self.frames[fi].exception_handlers.push(ExceptionHandler {
4968 +             catch_ip,
4969 +             catch_reg,
4970 +         });
4971 +     }
4972 +
4973 +     /// Pop an exception handler from JIT code.
4974 +     pub fn jit_pop_exception_handler(&mut self) {
4975 +         let fi = self.frames.len() - 1;
4976 +         self.frames[fi].exception_handlers.pop();
4977 +     }
4978 +
4979 +     /// Public wrapper for handle_exception (for JIT helpers).
4980 +     pub fn handle_exception_pub(&mut self, value: Value) -> bool {
4981 +         self.handle_exception(value)
4982 +     }
4319 4983 }
4320 4984
4321 4985 impl Default for Vm {
···
8800 9464         match result.unwrap() {
8801 9465             Value::Number(n) => assert_eq!(n, 60.0),
8802 9466             v => panic!("expected 60, got {v:?}"),
9467 +         }
9468 +     }
9469 +
9470 +     // ── JIT tests ───────────────────────────────────────────────
9471 +
9472 +     #[test]
9473 +     fn test_jit_hot_function_arithmetic() {
9474 +         // Call a function > JIT_THRESHOLD times to trigger JIT compilation.
9475 +         // The function does simple arithmetic that the JIT should handle.
9476 +         let result = eval(
9477 +             "function add(a, b) { return a + b; }
9478 +              var result = 0;
9479 +              for (var i = 0; i < 200; i++) {
9480 +                  result = add(result, 1);
9481 +              }
9482 +              result",
9483 +         );
9484 +         match result.unwrap() {
9485 +             Value::Number(n) => assert_eq!(n, 200.0),
9486 +             v => panic!("expected 200, got {v:?}"),
9487 +         }
9488 +     }
9489 +
9490 +     #[test]
9491 +     fn test_jit_hot_function_comparison() {
9492 +         // JIT with comparison and conditional logic.
9493 +         let result = eval(
9494 +             "function max(a, b) {
9495 +                  if (a > b) { return a; }
9496 +                  return b;
9497 +              }
9498 +              var m = 0;
9499 +              for (var i = 0; i < 200; i++) {
9500 +                  m = max(m, i);
9501 +              }
9502 +              m",
9503 +         );
9504 +         match result.unwrap() {
9505 +             Value::Number(n) => assert_eq!(n, 199.0),
9506 +             v => panic!("expected 199, got {v:?}"),
9507 +         }
9508 +     }
9509 +
9510 +     #[test]
9511 +     fn test_jit_hot_function_property_access() {
9512 +         // JIT with property access (tests inline cache integration).
9513 +         let result = eval(
9514 +             "function getX(obj) { return obj.x; }
9515 +              var o = {x: 42};
9516 +              var r = 0;
9517 +              for (var i = 0; i < 200; i++) {
9518 +                  r = getX(o);
9519 +              }
9520 +              r",
9521 +         );
9522 +         match result.unwrap() {
9523 +             Value::Number(n) => assert_eq!(n, 42.0),
9524 +             v => panic!("expected 42, got {v:?}"),
9525 +         }
9526 +     }
9527 +
9528 +     #[test]
9529 +     fn test_jit_hot_function_nested_calls() {
9530 +         // JIT with nested function calls.
9531 +         let result = eval(
9532 +             "function double(x) { return x * 2; }
9533 +              function quadruple(x) { return double(double(x)); }
9534 +              var r = 1;
9535 +              for (var i = 0; i < 200; i++) {
9536 +                  r = quadruple(i);
9537 +              }
9538 +              r",
9539 +         );
9540 +         match result.unwrap() {
9541 +             Value::Number(n) => assert_eq!(n, 796.0), // 199 * 4
9542 +             v => panic!("expected 796, got {v:?}"),
9543 +         }
9544 +     }
9545 +
9546 +     #[test]
9547 +     fn test_jit_call_count_tracking() {
9548 +         // A function called fewer than JIT_THRESHOLD times stays in the
9549 +         // interpreter; this verifies the result is correct either way
9550 +         // (the test cannot observe which path actually ran).
9551 +         let result = eval(
9552 +             "function small(x) { return x + 1; }
9553 +              var r = 0;
9554 +              for (var i = 0; i < 50; i++) {
9555 +                  r = small(r);
9556 +              }
9557 +              r",
9558 +         );
9559 +         match result.unwrap() {
9560 +             Value::Number(n) => assert_eq!(n, 50.0),
9561 +             v => panic!("expected 50, got {v:?}"),
9562 +         }
9563 +     }
9564 +
9565 +     #[test]
9566 +     fn test_jit_fibonacci() {
9567 +         // Recursive fibonacci: exercises Call, Return, comparison, arithmetic.
9568 +         let result = eval(
9569 +             "function fib(n) {
9570 +                  if (n <= 1) return n;
9571 +                  return fib(n - 1) + fib(n - 2);
9572 +              }
9573 +              fib(15)",
9574 +         );
9575 +         match result.unwrap() {
9576 +             Value::Number(n) => assert_eq!(n, 610.0),
9577 +             v => panic!("expected 610, got {v:?}"),
9578 +         }
9579 +     }
9580 +
9581 +     #[test]
9582 +     fn test_jit_string_concatenation() {
9583 +         // Test that Add with strings works correctly through JIT.
9584 +         let result = eval(
9585 +             "function greet(name) { return 'Hello, ' + name + '!'; }
9586 +              var r = '';
9587 +              for (var i = 0; i < 200; i++) {
9588 +                  r = greet('world');
9589 +              }
9590 +              r",
9591 +         );
9592 +         match result.unwrap() {
9593 +             Value::String(s) => assert_eq!(s, "Hello, world!"),
9594 +             v => panic!("expected 'Hello, world!', got {v:?}"),
9595 +         }
9596 +     }
9597 +
9598 +     #[test]
9599 +     fn test_jit_closure() {
9600 +         // Test closures work correctly through JIT.
9601 +         let result = eval(
9602 +             "function makeCounter() {
9603 +                  var count = 0;
9604 +                  return function() { count = count + 1; return count; };
9605 +              }
9606 +              var counter = makeCounter();
9607 +              var r = 0;
9608 +              for (var i = 0; i < 200; i++) {
9609 +                  r = counter();
9610 +              }
9611 +              r",
9612 +         );
9613 +         match result.unwrap() {
9614 +             Value::Number(n) => assert_eq!(n, 200.0),
9615 +             v => panic!("expected 200, got {v:?}"),
8803 9616         }
8804 9617     }
8805 9618 }
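The hot-function dispatch this PR describes (per-function call counting with threshold=100, lazy compilation, per-function code caching, and a re-entrancy guard against recursive JIT dispatch) can be sketched in isolation. This is a hypothetical, simplified model, not the VM's actual code: `Dispatcher`, the `u32` function ids, and the stand-in `interpret` "compiled code" are all illustrative; the real implementation keys its cache by `GcRef` and emits AArch64 via `BaselineJit`.

```rust
// Hypothetical sketch of threshold-based JIT dispatch (not the VM's real types).
use std::collections::HashMap;

const JIT_THRESHOLD: u32 = 100; // matches the threshold stated in the PR description

struct Dispatcher {
    call_counts: HashMap<u32, u32>,         // per-function invocation counter
    compiled: HashMap<u32, fn(i64) -> i64>, // cached "compiled" code, one entry per function
    in_jit: bool,                           // re-entrancy guard
}

impl Dispatcher {
    fn new() -> Self {
        Dispatcher {
            call_counts: HashMap::new(),
            compiled: HashMap::new(),
            in_jit: false,
        }
    }

    /// Interpreter stand-in: what runs while the function is still cold.
    fn interpret(x: i64) -> i64 {
        x + 1
    }

    /// Called on every invocation; compiles lazily once the counter crosses
    /// the threshold, then dispatches to the cached code on later calls.
    fn call(&mut self, func_id: u32, arg: i64) -> i64 {
        let count = self.call_counts.entry(func_id).or_insert(0);
        *count += 1;
        let hot = *count >= JIT_THRESHOLD;
        if hot && !self.compiled.contains_key(&func_id) {
            // "Compile" exactly once; the real VM emits machine code here
            // and caches the resulting code pointer.
            self.compiled.insert(func_id, Self::interpret as fn(i64) -> i64);
        }
        match self.compiled.get(&func_id).copied() {
            Some(code) if !self.in_jit => {
                self.in_jit = true; // block recursive JIT dispatch
                let r = code(arg);
                self.in_jit = false;
                r
            }
            _ => Self::interpret(arg), // cold path, or re-entered while in JIT code
        }
    }
}
```

Because the stand-in compiled code computes the same result as the interpreter, switching paths at the threshold is observable only through the cache, mirroring how the tests above can only check results, not which tier ran.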