A 5e storytelling engine with an LLM DM

Collapse ToolContext into a real singleton

The DM prompt has been showing "[Day 1, 06:30]" forever even though the
log on disk was clearly advancing through the afternoon. Tracked it
down to start_mcp_server: every caller (engine, seed_world, plot_arc,
the background advancement evaluator, the tick_world ticker) was
constructing its own CampaignLog / EntityIndex / VectorIndex and
calling init_ctx, which clobbered the global _ctx each time. After the
advancement evaluator first fired (around turn 5), set_scene was
mutating the evaluator's stale CampaignLog while the engine's display
kept reading its own original instance — frozen at whatever time it
was when the swap happened. Same hazard applied to the entity_index
cache and the initiative tracker; we just hadn't noticed yet.

Storied only ever serves one campaign at a time, so the honest fix is
to make the singleton actually a singleton. Add get_or_create_ctx as
the production entry point: first caller wins, subsequent callers get
the same instance back, mismatched world_id/player_id raises rather
than silently rebinding. start_mcp_server, the engine, planner,
advancement, and the ticker all stop building their own slices and
read from the shared ctx. The engine pulls _campaign_log straight off
the handle so display reads can't drift from tool mutations.

init_ctx survives as the test-only override the conftest fixture
already uses (paired with reset_ctx for teardown).

With the slices truly shared across the engine thread and the various
MCP uvicorn worker threads, the previously-implicit "only one writer
at a time" assumption no longer holds. Add per-instance RLocks to:

- CampaignLog.append_entry (read-modify-write on current_time and
the day-file save), plus format_for_context / get_recent_entries /
get_all_entries / time_since_rest so list iteration can't see a
half-mutated current_entries.
- VectorIndex around the shared sqlite connection — check_same_thread
is already off, but concurrent execute calls from different MCP
threads can still race on commit boundaries.
- InitiativeTracker around every public mutator and format_for_context.

EntityIndex stays unlocked: it's pure dict get/set with no
read-modify-write, so the GIL is enough.
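The RLock choice (rather than a plain Lock) matters because locked methods call other locked methods on the same instance; reduced to a sketch with a hypothetical Tracker, not the real class:

```python
import threading


class Tracker:
    def __init__(self):
        self._lock = threading.RLock()
        self.combatants = []

    def remove_combatant(self, name):
        with self._lock:
            self.combatants = [c for c in self.combatants if c != name]
            # Re-entrant acquisition: with a plain Lock this nested call
            # would deadlock the owning thread; RLock lets the same
            # thread acquire again.
            return self.format_for_context()

    def format_for_context(self):
        with self._lock:
            return ", ".join(self.combatants) or "(empty)"


t = Tracker()
t.combatants = ["goblin", "wolf"]
print(t.remove_combatant("wolf"))  # goblin
```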

Two test_engine.py fixtures had a hand-rolled fake Ctx that was
missing campaign_log; added it.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

+519 -385
+3 -6
src/storied/advancement.py
@@
 from storied.character import format_character_context, load_character
 from storied.claude import run_with_tools
 from storied.engine import load_prompt
-from storied.log import CampaignLog
 from storied.mcp_server import start_server as start_mcp_server
 from storied.paths import data_home
 from storied.session import load_session
+from storied.tools._context import get_or_create_ctx


 @dataclass
@@
     parts.append(char_context)

     # Campaign log — entries since last level-up
-    log = CampaignLog(world_id)
+    log = get_or_create_ctx(world_id, player_id).campaign_log
     parts.append(f"## Campaign Time: {log.get_current_time()}")

     entries_since_level = log.get_entries_since_tag("level")
@@
     system_prompt = load_prompt("xp-evaluator")

-    campaign_log = CampaignLog(world_id)
-    mcp = start_mcp_server(
-        world_id, player_id, "advancement", campaign_log,
-    )
+    mcp = start_mcp_server(world_id, player_id, "advancement")

     progress(f"Evaluating advancement with {model}...")
+10 -6
src/storied/engine.py
@@
     stream_with_tools,
 )
 from storied.content import ContentResolver
-from storied.log import CampaignLog, TranscriptLog
+from storied.log import TranscriptLog
 from storied.mcp_server import start_server as start_mcp_server
 from storied.notification_formatters import (
     DEFERRED_FORMATTERS,
@@
         if transcript_path:
             transcript_path.parent.mkdir(parents=True, exist_ok=True)

-        # Campaign log for time tracking (world-scoped, shared with MCP server)
-        self._campaign_log = CampaignLog(self.world_id)
-
         # Transcript log for conversation history
         self._transcript = TranscriptLog(self.world_id)

-        # Start in-process MCP server (shares CampaignLog with engine)
+        # Start in-process MCP server. start_server resolves the singleton
+        # ToolContext (constructing it on first call), so seed_world,
+        # plot_arc, the engine, and the background ticker/advancement
+        # threads all share the same CampaignLog, EntityIndex, VectorIndex,
+        # and InitiativeTracker.
         self._mcp = start_mcp_server(
             world_id=self.world_id,
             player_id=self.player_id,
             tool_set="dm",
-            campaign_log=self._campaign_log,
         )
+
+        # Read the campaign log straight off the singleton so display
+        # reads always see the same instance the tools mutate.
+        self._campaign_log = self._mcp.ctx.campaign_log

         # Build system prompt with full context
         self._prompt_name = prompt_name
+203 -160
src/storied/initiative.py
@@
 a circular import).
 """

+import threading
 from dataclasses import dataclass, field

@@
         self.combatants: list[Combatant] = []
         self.current_index: int = 0
         self.round: int = 0
+        # Single per-instance lock guarding all combatant-state mutations.
+        # The tracker is a process-wide singleton accessed from the engine
+        # main thread (display reads) and from MCP server threads (tool
+        # mutations), so naive iteration over self.combatants can race
+        # with insert/remove. RLock so format_for_context can be called
+        # from inside other locked methods if that ever comes up.
+        self._lock = threading.RLock()

     @property
     def current_combatant(self) -> Combatant | None:
-        if not self.active or not self.combatants:
-            return None
-        return self.combatants[self.current_index]
+        with self._lock:
+            if not self.active or not self.combatants:
+                return None
+            return self.combatants[self.current_index]

     def begin(self, combatants: list[Combatant]) -> str:
         """Enter initiative mode with the given combatants in turn order."""
-        self.combatants = list(combatants)
-        self.current_index = 0
-        self.round = 1
-        self.active = True
+        with self._lock:
+            self.combatants = list(combatants)
+            self.current_index = 0
+            self.round = 1
+            self.active = True

-        first = self.combatants[0]
-        lines = [f"Initiative started — Round 1", ""]
-        lines.append(self._format_order())
-        lines.append("")
-        lines.append(f"**{first.name}** goes first.")
-        return "\n".join(lines)
+            first = self.combatants[0]
+            lines = [f"Initiative started — Round 1", ""]
+            lines.append(self._format_order())
+            lines.append("")
+            lines.append(f"**{first.name}** goes first.")
+            return "\n".join(lines)

     def next_turn(self) -> str:
         """Advance to the next combatant's turn."""
-        if not self.active:
-            return "Initiative is not active."
+        with self._lock:
+            if not self.active:
+                return "Initiative is not active."

-        # Process end-of-turn effects for the combatant we're leaving
-        leaving = self.combatants[self.current_index]
-        expired = self._process_effects(leaving.name, "end")
+            # Process end-of-turn effects for the combatant we're leaving
+            leaving = self.combatants[self.current_index]
+            expired = self._process_effects(leaving.name, "end")

-        # Advance to next living combatant
-        self.current_index = self._next_living_index(self.current_index)
+            # Advance to next living combatant
+            self.current_index = self._next_living_index(self.current_index)

-        current = self.combatants[self.current_index]
-        started_turn = self._process_effects(current.name, "start")
+            current = self.combatants[self.current_index]
+            started_turn = self._process_effects(current.name, "start")

-        parts = []
-        if expired:
-            parts.append(f"Expired: {', '.join(expired)}")
-        if started_turn:
-            parts.append(f"Expired: {', '.join(started_turn)}")
+            parts = []
+            if expired:
+                parts.append(f"Expired: {', '.join(expired)}")
+            if started_turn:
+                parts.append(f"Expired: {', '.join(started_turn)}")

-        parts.append(
-            f"Round {self.round} — **{current.name}**'s turn "
-            f"({current.hp}/{current.hp_max} HP, AC {current.ac})"
-        )
+            parts.append(
+                f"Round {self.round} — **{current.name}**'s turn "
+                f"({current.hp}/{current.hp_max} HP, AC {current.ac})"
+            )

-        if current.conditions:
-            cond_str = ", ".join(self._format_condition(c) for c in current.conditions)
-            parts.append(f"Conditions: {cond_str}")
+            if current.conditions:
+                cond_str = ", ".join(
+                    self._format_condition(c) for c in current.conditions
+                )
+                parts.append(f"Conditions: {cond_str}")

-        # Check if only one side remains
-        hint = self._one_side_hint()
-        if hint:
-            parts.append(hint)
+            # Check if only one side remains
+            hint = self._one_side_hint()
+            if hint:
+                parts.append(hint)

-        return "\n".join(parts)
+            return "\n".join(parts)

     def apply_damage(self, target: str, amount: int) -> str:
         """Deal damage to a combatant."""
-        combatant = self._find(target)
-        if combatant is None:
-            return f"Combatant '{target}' not found."
+        with self._lock:
+            combatant = self._find(target)
+            if combatant is None:
+                return f"Combatant '{target}' not found."

-        old_hp = combatant.hp
-        combatant.hp = max(0, combatant.hp - amount)
-        result = f"{target} takes {amount} damage ({old_hp} \u2192 {combatant.hp}/{combatant.hp_max} HP)"
+            old_hp = combatant.hp
+            combatant.hp = max(0, combatant.hp - amount)
+            result = (
+                f"{target} takes {amount} damage "
+                f"({old_hp} \u2192 {combatant.hp}/{combatant.hp_max} HP)"
+            )

-        if combatant.hp == 0:
-            combatant.defeated = True
-            result += " \u2014 DOWN!"
-        elif combatant.hp <= combatant.hp_max // 2 < old_hp:
-            result += " \u2014 Bloodied"
+            if combatant.hp == 0:
+                combatant.defeated = True
+                result += " \u2014 DOWN!"
+            elif combatant.hp <= combatant.hp_max // 2 < old_hp:
+                result += " \u2014 Bloodied"

-        return result
+            return result

     def apply_heal(self, target: str, amount: int) -> str:
         """Heal a combatant."""
-        combatant = self._find(target)
-        if combatant is None:
-            return f"Combatant '{target}' not found."
+        with self._lock:
+            combatant = self._find(target)
+            if combatant is None:
+                return f"Combatant '{target}' not found."

-        old_hp = combatant.hp
-        combatant.hp = min(combatant.hp_max, combatant.hp + amount)
-        actual = combatant.hp - old_hp
-        if combatant.defeated and combatant.hp > 0:
-            combatant.defeated = False
-        return f"{target} heals {actual} HP ({old_hp} \u2192 {combatant.hp}/{combatant.hp_max})"
+            old_hp = combatant.hp
+            combatant.hp = min(combatant.hp_max, combatant.hp + amount)
+            actual = combatant.hp - old_hp
+            if combatant.defeated and combatant.hp > 0:
+                combatant.defeated = False
+            return (
+                f"{target} heals {actual} HP "
+                f"({old_hp} \u2192 {combatant.hp}/{combatant.hp_max})"
+            )

     def add_condition(
         self,
@@
         source: str = "",
     ) -> str:
         """Apply a condition to a combatant."""
-        combatant = self._find(target)
-        if combatant is None:
-            return f"Combatant '{target}' not found."
+        with self._lock:
+            combatant = self._find(target)
+            if combatant is None:
+                return f"Combatant '{target}' not found."

-        tc = TrackedCondition(
-            name=condition, source=source, duration=duration, ends_on=ends_on,
-        )
-        combatant.conditions.append(tc)
+            tc = TrackedCondition(
+                name=condition, source=source, duration=duration, ends_on=ends_on,
+            )
+            combatant.conditions.append(tc)

-        dur_str = f" ({duration} rds)" if duration > 0 else ""
-        return f"{target} is now {condition}{dur_str}"
+            dur_str = f" ({duration} rds)" if duration > 0 else ""
+            return f"{target} is now {condition}{dur_str}"

     def remove_condition(self, target: str, condition: str) -> str:
         """Remove a condition from a combatant."""
-        combatant = self._find(target)
-        if combatant is None:
-            return f"Combatant '{target}' not found."
+        with self._lock:
+            combatant = self._find(target)
+            if combatant is None:
+                return f"Combatant '{target}' not found."

-        before = len(combatant.conditions)
-        combatant.conditions = [c for c in combatant.conditions if c.name != condition]
-        if len(combatant.conditions) == before:
-            return f"{target} does not have {condition}."
-        return f"{condition} removed from {target}."
+            before = len(combatant.conditions)
+            combatant.conditions = [
+                c for c in combatant.conditions if c.name != condition
+            ]
+            if len(combatant.conditions) == before:
+                return f"{target} does not have {condition}."
+            return f"{condition} removed from {target}."

     def add_combatant(self, combatant: Combatant) -> str:
         """Add a combatant at the correct initiative position."""
-        insert_idx = len(self.combatants)
-        for i, c in enumerate(self.combatants):
-            if combatant.initiative > c.initiative:
-                insert_idx = i
-                break
+        with self._lock:
+            insert_idx = len(self.combatants)
+            for i, c in enumerate(self.combatants):
+                if combatant.initiative > c.initiative:
+                    insert_idx = i
+                    break

-        self.combatants.insert(insert_idx, combatant)
+            self.combatants.insert(insert_idx, combatant)

-        if insert_idx <= self.current_index:
-            self.current_index += 1
+            if insert_idx <= self.current_index:
+                self.current_index += 1

-        return f"{combatant.name} joins initiative (initiative {combatant.initiative})"
+            return (
+                f"{combatant.name} joins initiative "
+                f"(initiative {combatant.initiative})"
+            )

     def remove_combatant(self, name: str) -> str:
         """Remove a combatant from initiative."""
-        idx = None
-        for i, c in enumerate(self.combatants):
-            if c.name.lower() == name.lower():
-                idx = i
-                break
+        with self._lock:
+            idx = None
+            for i, c in enumerate(self.combatants):
+                if c.name.lower() == name.lower():
+                    idx = i
+                    break

-        if idx is None:
-            return f"Combatant '{name}' not found."
+            if idx is None:
+                return f"Combatant '{name}' not found."

-        removed = self.combatants.pop(idx)
+            removed = self.combatants.pop(idx)

-        if not self.combatants:
-            self.active = False
-            return f"{removed.name} removed. No combatants remain — initiative ended."
+            if not self.combatants:
+                self.active = False
+                return (
+                    f"{removed.name} removed. No combatants remain — "
+                    f"initiative ended."
+                )

-        if idx < self.current_index:
-            self.current_index -= 1
-        elif idx == self.current_index:
-            if self.current_index >= len(self.combatants):
-                self.current_index = 0
-                self.round += 1
+            if idx < self.current_index:
+                self.current_index -= 1
+            elif idx == self.current_index:
+                if self.current_index >= len(self.combatants):
+                    self.current_index = 0
+                    self.round += 1

-        return f"{removed.name} removed from initiative."
+            return f"{removed.name} removed from initiative."

     def end(self) -> str:
         """End initiative and return a summary."""
-        defeated = [c for c in self.combatants if c.defeated]
-        survivors = [c for c in self.combatants if not c.defeated]
-        rounds = self.round
-        duration_sec = rounds * 6
+        with self._lock:
+            defeated = [c for c in self.combatants if c.defeated]
+            survivors = [c for c in self.combatants if not c.defeated]
+            rounds = self.round
+            duration_sec = rounds * 6

-        lines = [f"Initiative ended after {rounds} rounds ({duration_sec} seconds)."]
+            lines = [
+                f"Initiative ended after {rounds} rounds "
+                f"({duration_sec} seconds)."
+            ]

-        if defeated:
-            names = ", ".join(c.name for c in defeated)
-            lines.append(f"Defeated: {names}")
+            if defeated:
+                names = ", ".join(c.name for c in defeated)
+                lines.append(f"Defeated: {names}")

-        if survivors:
-            parts = []
-            for c in survivors:
-                conds = ""
-                if c.conditions:
-                    conds = f" [{', '.join(co.name for co in c.conditions)}]"
-                parts.append(f"{c.name} ({c.hp}/{c.hp_max} HP{conds})")
-            lines.append(f"Survivors: {', '.join(parts)}")
+            if survivors:
+                parts = []
+                for c in survivors:
+                    conds = ""
+                    if c.conditions:
+                        conds = f" [{', '.join(co.name for co in c.conditions)}]"
+                    parts.append(f"{c.name} ({c.hp}/{c.hp_max} HP{conds})")
+                lines.append(f"Survivors: {', '.join(parts)}")

-        self.active = False
-        self.combatants = []
-        self.current_index = 0
-        self.round = 0
+            self.active = False
+            self.combatants = []
+            self.current_index = 0
+            self.round = 0

-        return "\n".join(lines)
+            return "\n".join(lines)

     def format_for_context(self) -> str:
         """Format full initiative state for system prompt injection."""
-        if not self.active:
-            return ""
+        with self._lock:
+            if not self.active:
+                return ""

-        current = self.combatants[self.current_index]
+            current = self.combatants[self.current_index]

-        # Find who's next (next living combatant after current)
-        next_idx = self._peek_next_living(self.current_index)
-        next_up = self.combatants[next_idx] if next_idx is not None else None
+            # Find who's next (next living combatant after current)
+            next_idx = self._peek_next_living(self.current_index)
+            next_up = self.combatants[next_idx] if next_idx is not None else None

-        lines = [f"## Active Initiative \u2014 Round {self.round}", ""]
-        lines.append("| # | Combatant | Init | HP | AC | Conditions |")
-        lines.append("|---|-----------|------|----|----|------------|")
+            lines = [f"## Active Initiative \u2014 Round {self.round}", ""]
+            lines.append("| # | Combatant | Init | HP | AC | Conditions |")
+            lines.append("|---|-----------|------|----|----|------------|")

-        for i, c in enumerate(self.combatants):
-            marker = " > " if i == self.current_index else "   "
-            name = f"**{c.name}**" if i == self.current_index else c.name
-            if c.defeated:
-                name = f"~~{c.name}~~"
-            hp_str = f"{c.hp}/{c.hp_max}"
-            conds = ", ".join(self._format_condition(co) for co in c.conditions)
-            if c.defeated and not conds:
-                conds = "Defeated"
-            lines.append(f"|{marker}| {name} | {c.initiative} | {hp_str} | {c.ac} | {conds} |")
+            for i, c in enumerate(self.combatants):
+                marker = " > " if i == self.current_index else "   "
+                name = f"**{c.name}**" if i == self.current_index else c.name
+                if c.defeated:
+                    name = f"~~{c.name}~~"
+                hp_str = f"{c.hp}/{c.hp_max}"
+                conds = ", ".join(
+                    self._format_condition(co) for co in c.conditions
+                )
+                if c.defeated and not conds:
+                    conds = "Defeated"
+                lines.append(
+                    f"|{marker}| {name} | {c.initiative} | {hp_str} "
+                    f"| {c.ac} | {conds} |"
+                )

-        lines.append("")
+            lines.append("")

-        cond_str = ""
-        if current.conditions:
-            cond_str = ", " + ", ".join(co.name for co in current.conditions)
-        lines.append(
-            f"**Current turn:** {current.name} "
-            f"({current.hp}/{current.hp_max} HP, AC {current.ac}{cond_str})"
-        )
+            cond_str = ""
+            if current.conditions:
+                cond_str = ", " + ", ".join(co.name for co in current.conditions)
+            lines.append(
+                f"**Current turn:** {current.name} "
+                f"({current.hp}/{current.hp_max} HP, AC {current.ac}{cond_str})"
+            )

-        if next_up:
-            lines.append(f"**Up next:** {next_up.name}")
+            if next_up:
+                lines.append(f"**Up next:** {next_up.name}")

-        lines.append(f"**Round:** {self.round}")
-        lines.append("")
-        lines.append(
-            f"Resolve {current.name}'s turn, then call `next_turn` to advance."
-        )
+            lines.append(f"**Round:** {self.round}")
+            lines.append("")
+            lines.append(
+                f"Resolve {current.name}'s turn, then call `next_turn` to advance."
+            )

-        return "\n".join(lines)
+            return "\n".join(lines)

     def _find(self, name: str) -> Combatant | None:
         for c in self.combatants:
+67 -51
src/storied/log.py
@@
 from __future__ import annotations

 import re
+import threading
 from dataclasses import dataclass, field
 from pathlib import Path

@@
         self.base_path = base_path or data_home()
         self.log_dir = self.base_path / "worlds" / world_id / "log"

+        # Single per-instance lock guarding mutation paths. The campaign
+        # log is now a process-wide singleton shared across the engine
+        # thread, the engine's MCP uvicorn thread, and any background
+        # agents (advancement, ticker), so concurrent set_scene calls
+        # could otherwise race the read-modify-write on current_time and
+        # stomp each other's day-file saves.
+        self._lock = threading.RLock()
+
         # Load or initialize state
         self._load_state()

@@
         if isinstance(duration, str):
             duration = Duration.parse(duration)

-        anchor = self.current_time.to_anchor()
-        entry = LogEntry(
-            anchor=anchor,
-            event=event,
-            duration=duration,
-            tags=tags or [],
-        )
-        self.current_entries.append(entry)
+        with self._lock:
+            anchor = self.current_time.to_anchor()
+            entry = LogEntry(
+                anchor=anchor,
+                event=event,
+                duration=duration,
+                tags=tags or [],
+            )
+            self.current_entries.append(entry)

-        if advance_time:
-            self.current_time = self.current_time.add_duration(duration)
+            if advance_time:
+                self.current_time = self.current_time.add_duration(duration)

-            # Check if we crossed into a new day
-            if self.current_time.day > self.current_day:
-                self._roll_day()
+                # Check if we crossed into a new day
+                if self.current_time.day > self.current_day:
+                    self._roll_day()

-        # Save current day entries
-        self._save_day_file(self.current_day, self.current_entries)
-        self._save_index()
-        return anchor
+            # Save current day entries
+            self._save_day_file(self.current_day, self.current_entries)
+            self._save_index()
+            return anchor

     def _roll_day(self) -> None:
         """Archive current day and start a new one."""
@@
         days, this returns every entry — useful for scanning the log for
         casually mentioned entities.
         """
-        entries: list[LogEntry] = []
-        start_day = max(1, self.current_day - days + 1)
-        for day in range(start_day, self.current_day + 1):
-            entries.extend(self._load_day_entries(day))
-        return entries
+        with self._lock:
+            entries: list[LogEntry] = []
+            start_day = max(1, self.current_day - days + 1)
+            for day in range(start_day, self.current_day + 1):
+                entries.extend(self._load_day_entries(day))
+            return entries

     def format_for_context(self) -> str:
         """Format the log for inclusion in system prompt.
@@
         top of the DM's context — see ``DMEngine._format_time_header``.
         This block is just the recent event history.
         """
-        lines: list[str] = []
-
-        if self.previous_summaries:
-            lines.append("## Campaign Log")
-            lines.append("")
-            lines.append("**Previous Days:**")
-            for summary in self.previous_summaries[-3:]:  # Last 3 days
-                lines.append(f"- {summary}")
-
-        if self.current_entries:
-            if not lines:
-                lines.append("## Campaign Log")
-            lines.append("")
-            lines.append(f"**Today (Day {self.current_day}):**")
-            if len(self.current_entries) > 10:
-                lines.append(f"({len(self.current_entries) - 10} earlier entries today)")
-            for entry in self.current_entries[-10:]:
-                lines.append(f"- {entry.event}")
-
-        return "\n".join(lines)
+        with self._lock:
+            lines: list[str] = []
+
+            if self.previous_summaries:
+                lines.append("## Campaign Log")
+                lines.append("")
+                lines.append("**Previous Days:**")
+                for summary in self.previous_summaries[-3:]:  # Last 3 days
+                    lines.append(f"- {summary}")
+
+            if self.current_entries:
+                if not lines:
+                    lines.append("## Campaign Log")
+                lines.append("")
+                lines.append(f"**Today (Day {self.current_day}):**")
+                if len(self.current_entries) > 10:
+                    lines.append(
+                        f"({len(self.current_entries) - 10} earlier entries today)"
+                    )
+                for entry in self.current_entries[-10:]:
+                    lines.append(f"- {entry.event}")
+
+            return "\n".join(lines)

     def get_all_entries(self) -> list[LogEntry]:
         """Get every log entry from day 1 through the current day."""
-        entries: list[LogEntry] = []
-        for day in range(1, self.current_day + 1):
-            entries.extend(self._load_day_entries(day))
-        return entries
+        with self._lock:
+            entries: list[LogEntry] = []
+            for day in range(1, self.current_day + 1):
+                entries.extend(self._load_day_entries(day))
+            return entries

     def get_entries_since_tag(self, tag: str) -> list[LogEntry]:
         """Get all entries after the last occurrence of a tag.
@@
         """Calculate time since last rest of given type."""
         tag = f"rest:{rest_type}"

-        # Search backwards through entries
-        total_minutes = 0
-        for entry in reversed(self.current_entries):
-            if tag in entry.tags:
-                return Duration(minutes=total_minutes)
-            total_minutes += entry.duration.total_minutes
+        with self._lock:
+            # Search backwards through entries
+            total_minutes = 0
+            for entry in reversed(self.current_entries):
+                if tag in entry.tags:
+                    return Duration(minutes=total_minutes)
+                total_minutes += entry.duration.total_minutes

-        # If not found in current day, it's been longer
-        return Duration(minutes=total_minutes + 8 * 60)  # Add 8 hours as estimate
+            # If not found in current day, it's been longer
+            return Duration(minutes=total_minutes + 8 * 60)  # +8h estimate


 def load_log(world_id: str = "default", base_path: Path | None = None) -> CampaignLog:
+21 -23
src/storied/mcp_server.py
@@
 start_server() launches a FastMCP server (SSE transport) on a free localhost
 port in a background thread. Each call composes a per-role top-level server
 by mounting the tools/*.py module-level FastMCP instances and applying
-tag-based visibility filters. ToolContext is process-global and accessed by
-tools via the Dependency subclasses in storied.tools._context.
+tag-based visibility filters. The ToolContext is a process-wide singleton
+fetched via :func:`get_or_create_ctx`, so every server in the process — DM,
+planner, ticker, advancement, seeder — reads and writes the same in-memory
+game state.
 """

 import asyncio
@@
 from fastmcp import FastMCP

 from storied import paths
-from storied.log import CampaignLog
 from storied.search import VectorIndex
-from storied.tools import character, combat, entities, mechanics, run_code, scene
+from storied.tools import (
+    character,
+    combat,
+    entities,
+    mechanics,
+    run_code,
+    scene,
+)
 from storied.tools._context import (
-    EntityIndex,
     ToolContext,
-    init_ctx,
+    get_or_create_ctx,
 )

 ALL_ROLES = {"dm", "planner", "seeder", "advancement", "arc_architect"}
@@
     world_id: str,
     player_id: str,
     tool_set: str = "dm",
-    campaign_log: CampaignLog | None = None,
 ) -> MCPServerHandle:
     """Start an in-process FastMCP HTTP server on a free localhost port.

     Returns an MCPServerHandle with the URL to pass to --mcp-config.
-    The server runs in a daemon thread and shares the process-global
-    ToolContext (set via init_ctx) with the caller.
+    The server runs in a daemon thread and reads from the singleton
+    ToolContext (constructed lazily by ``get_or_create_ctx`` on the
+    first call). Subsequent calls — engine, planner, ticker, advancement,
+    seeder — all bind to the same context, so set_scene mutations made
+    via any role are visible to every reader.

     Paths are resolved via :mod:`storied.paths` (the data home is set
     once at CLI startup via ``configure()``), so this function takes
@@
     in tests/test_mcp_server.py; mocking out uvicorn here would test the
     mock, not the launcher.
     """
-    if campaign_log is None:
-        campaign_log = CampaignLog(world_id)
-
-    world_dir = paths.world_path(world_id)
+    ctx = get_or_create_ctx(world_id, player_id)

-    vector_index = VectorIndex(world_dir / "search.db")
     # Populate eagerly so the first recall never races the transcript
     # upsert at turn end. `_populate_index` is idempotent — subsequent
     # calls skip the SRD reseed and mtime-check the user/world layers.
-    _populate_index(world_dir, vector_index)
-
-    ctx = init_ctx(
-        world_id=world_id,
-        player_id=player_id,
-        campaign_log=campaign_log,
-        entity_index=EntityIndex(world_dir),
-        vector_index=vector_index,
-    )
+    world_dir = paths.world_path(world_id)
+    _populate_index(world_dir, ctx.vector_index)

     server = asyncio.run(_compose_server(tool_set))
+7 -18
src/storied/planner.py
@@
 from storied.claude import run_prompt, run_with_tools
 from storied.engine import load_prompt
 from storied.log import CampaignLog
+from storied.tools._context import get_or_create_ctx
 from storied.mcp_server import start_server as start_mcp_server
 from storied.paths import data_home, world_path
 from storied.session import (
@@
     parts.append(body)

     # Campaign log — full recent entries so the planner can spot casual mentions
-    log = CampaignLog(world_id)
+    log = get_or_create_ctx(world_id, player_id).campaign_log
     parts.append(f"## Campaign Time: {log.get_current_time()}")

     recent = log.get_recent_entries(days=2)
@@
     context = build_planning_context(world_id, player_id, candidate_pairs)
     system_prompt = load_prompt("planner-system")

-    campaign_log = CampaignLog(world_id)
-    mcp = start_mcp_server(
-        world_id, player_id, "planner", campaign_log,
-    )
+    mcp = start_mcp_server(world_id, player_id, "planner")

     progress(f"Planning with {model}...")
@@
     )
     system_prompt = load_prompt("world-seed")

-    campaign_log = CampaignLog(world_id)
-    mcp = start_mcp_server(
-        world_id, player_id, "seeder", campaign_log,
-    )
+    mcp = start_mcp_server(world_id, player_id, "seeder")

     progress(f"Seeding with {model}...")
@@
         + char_block
     )

-    campaign_log = CampaignLog(world_id)
-    mcp = start_mcp_server(
-        world_id, player_id, "arc_architect", campaign_log,
-    )
+    mcp = start_mcp_server(world_id, player_id, "arc_architect")

     def on_tool(name: str) -> None:
         if on_progress:
@@
         parts.append(prefs.rstrip())

     # Campaign log and time
-    log = CampaignLog(world_id)
+    log = get_or_create_ctx(world_id, player_id).campaign_log
     current_time = log.get_current_time()
     parts.append(f"## Current Game Time: {current_time}")

@@
     context = build_tick_context(world_id, player_id, entities)
     system_prompt = load_prompt("world-tick")

-    campaign_log = CampaignLog(world_id)
-    mcp = start_mcp_server(
-        world_id, player_id, "planner", campaign_log,
-    )
+    mcp = start_mcp_server(world_id, player_id, "planner")

     progress(f"Ticking with {model}...")
+125 -107
src/storied/search.py

```diff
 import re
 import shutil
 import struct
+import threading
 from collections.abc import Callable
 from dataclasses import dataclass
 from pathlib import Path
···
     ):
         self._db_path = db_path
         self._embed_fn: Callable[[list[str]], list[list[float]]] = _default_embed
+        # The shared sqlite connection is opened with check_same_thread=False
+        # so it can be used from any uvicorn worker thread, but a single
+        # connection still serializes its statements through one cursor —
+        # concurrent execute calls from different MCP server threads (DM,
+        # planner, advancement) can race on commit boundaries. RLock so
+        # ``reindex_directory`` can call ``upsert`` re-entrantly.
+        self._lock = threading.RLock()
         self._conn = self._open_or_recreate()

     @staticmethod
···
     def reseed(self, seed_path: Path) -> None:
         """Replace this index's DB with a copy of the seed and reconnect."""
-        self._conn.close()
-        shutil.copy2(seed_path, self._db_path)
-        self._conn = self._open_or_recreate()
+        with self._lock:
+            self._conn.close()
+            shutil.copy2(seed_path, self._db_path)
+            self._conn = self._open_or_recreate()

     def has_source(self, source: str) -> bool:
         """True if at least one document is already indexed from ``source``."""
-        row = self._conn.execute(
-            "SELECT 1 FROM documents WHERE source = ? LIMIT 1",
-            (source,),
-        ).fetchone()
-        return row is not None
+        with self._lock:
+            row = self._conn.execute(
+                "SELECT 1 FROM documents WHERE source = ? LIMIT 1",
+                (source,),
+            ).fetchone()
+            return row is not None

     def close(self) -> None:
         """Close the database connection."""
-        self._conn.close()
+        with self._lock:
+            self._conn.close()

     def _open_or_recreate(self) -> sqlite3.Connection:
         """Open the database, recreating if corrupt."""
···
         blob = _serialize_f32(vec)
         preview = text[:200].strip()

-        self._conn.execute(
-            "DELETE FROM vec_documents WHERE doc_id = ?", (doc_id,)
-        )
-        self._conn.execute(
-            "DELETE FROM documents WHERE doc_id = ?", (doc_id,)
-        )
+        with self._lock:
+            self._conn.execute(
+                "DELETE FROM vec_documents WHERE doc_id = ?", (doc_id,)
+            )
+            self._conn.execute(
+                "DELETE FROM documents WHERE doc_id = ?", (doc_id,)
+            )

-        self._conn.execute(
-            """INSERT INTO documents
-            (doc_id, path, source, content_type, chunk_index,
-             title, body_preview, game_day, updated_at)
-            VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)""",
-            (
-                doc_id,
-                metadata.get("path", ""),
-                metadata["source"],
-                metadata.get("content_type"),
-                metadata.get("chunk_index", 0),
-                metadata.get("title"),
-                preview,
-                metadata.get("game_day"),
-                metadata.get("updated_at", 0.0),
-            ),
-        )
-        self._conn.execute(
-            "INSERT INTO vec_documents (doc_id, embedding) VALUES (?, ?)",
-            (doc_id, blob),
-        )
-        self._conn.commit()
+            self._conn.execute(
+                """INSERT INTO documents
+                (doc_id, path, source, content_type, chunk_index,
+                 title, body_preview, game_day, updated_at)
+                VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)""",
+                (
+                    doc_id,
+                    metadata.get("path", ""),
+                    metadata["source"],
+                    metadata.get("content_type"),
+                    metadata.get("chunk_index", 0),
+                    metadata.get("title"),
+                    preview,
+                    metadata.get("game_day"),
+                    metadata.get("updated_at", 0.0),
+                ),
+            )
+            self._conn.execute(
+                "INSERT INTO vec_documents (doc_id, embedding) VALUES (?, ?)",
+                (doc_id, blob),
+            )
+            self._conn.commit()

     def delete(self, doc_id: str) -> None:
         """Remove a document and its embedding."""
-        self._conn.execute(
-            "DELETE FROM vec_documents WHERE doc_id = ?", (doc_id,)
-        )
-        self._conn.execute(
-            "DELETE FROM documents WHERE doc_id = ?", (doc_id,)
-        )
-        self._conn.commit()
+        with self._lock:
+            self._conn.execute(
+                "DELETE FROM vec_documents WHERE doc_id = ?", (doc_id,)
+            )
+            self._conn.execute(
+                "DELETE FROM documents WHERE doc_id = ?", (doc_id,)
+            )
+            self._conn.commit()

     def search(
         self,
···

         fetch_limit = limit * 3

-        rows = self._conn.execute(
-            """SELECT v.doc_id, v.distance, d.path, d.source,
-                      d.content_type, d.body_preview, d.game_day
-               FROM vec_documents v
-               JOIN documents d ON v.doc_id = d.doc_id
-               WHERE v.embedding MATCH ?
-                 AND k = ?
-               ORDER BY v.distance""",
-            (blob, fetch_limit),
-        ).fetchall()
+        with self._lock:
+            rows = self._conn.execute(
+                """SELECT v.doc_id, v.distance, d.path, d.source,
+                          d.content_type, d.body_preview, d.game_day
+                   FROM vec_documents v
+                   JOIN documents d ON v.doc_id = d.doc_id
+                   WHERE v.embedding MATCH ?
+                     AND k = ?
+                   ORDER BY v.distance""",
+                (blob, fetch_limit),
+            ).fetchall()

         hits: list[SearchHit] = []
         for doc_id, distance, path, source, ctype, preview, game_day in rows:
···
         ``transcripts/`` — those files are indexed separately by the
         engine under ``source="transcript"``.
         """
-        existing = {}
-        for row in self._conn.execute(
-            "SELECT doc_id, updated_at FROM documents WHERE source = ?",
-            (source,),
-        ):
-            existing[row[0]] = row[1]
+        with self._lock:
+            existing = {}
+            for row in self._conn.execute(
+                "SELECT doc_id, updated_at FROM documents WHERE source = ?",
+                (source,),
+            ):
+                existing[row[0]] = row[1]

-        seen_doc_ids: set[str] = set()
-        count = 0
+            seen_doc_ids: set[str] = set()
+            count = 0

-        for md_file in sorted(directory.rglob("*.md")):
-            rel = md_file.relative_to(directory)
-            if skip_subdirs and rel.parts and rel.parts[0] in skip_subdirs:
-                continue
-            content = md_file.read_text()
-            mtime = md_file.stat().st_mtime
-            content_type = rel.parts[0] if len(rel.parts) > 1 else ""
+            for md_file in sorted(directory.rglob("*.md")):
+                rel = md_file.relative_to(directory)
+                if skip_subdirs and rel.parts and rel.parts[0] in skip_subdirs:
+                    continue
+                content = md_file.read_text()
+                mtime = md_file.stat().st_mtime
+                content_type = rel.parts[0] if len(rel.parts) > 1 else ""

-            chunks = chunk_document(md_file, content)
+                chunks = chunk_document(md_file, content)

-            for chunk_idx, chunk_text in chunks:
-                doc_id = f"{source}:{rel}:{chunk_idx}"
-                seen_doc_ids.add(doc_id)
+                for chunk_idx, chunk_text in chunks:
+                    doc_id = f"{source}:{rel}:{chunk_idx}"
+                    seen_doc_ids.add(doc_id)

-                if doc_id in existing and existing[doc_id] == mtime:
-                    count += 1
-                    continue
+                    if doc_id in existing and existing[doc_id] == mtime:
+                        count += 1
+                        continue

-                title_match = re.match(r"^#\s+(.+)", content)
-                title = (
-                    title_match.group(1).strip() if title_match else md_file.stem
-                )
+                    title_match = re.match(r"^#\s+(.+)", content)
+                    title = (
+                        title_match.group(1).strip()
+                        if title_match
+                        else md_file.stem
+                    )

-                game_day = None
-                day_match = re.match(r"day([+-]\d+)", md_file.stem)
-                if day_match:
-                    game_day = int(day_match.group(1))
+                    game_day = None
+                    day_match = re.match(r"day([+-]\d+)", md_file.stem)
+                    if day_match:
+                        game_day = int(day_match.group(1))

-                self.upsert(doc_id, chunk_text, {
-                    "source": source,
-                    "content_type": content_type,
-                    "path": str(md_file),
-                    "title": title,
-                    "chunk_index": chunk_idx,
-                    "game_day": game_day,
-                    "updated_at": mtime,
-                })
-                count += 1
+                    self.upsert(doc_id, chunk_text, {
+                        "source": source,
+                        "content_type": content_type,
+                        "path": str(md_file),
+                        "title": title,
+                        "chunk_index": chunk_idx,
+                        "game_day": game_day,
+                        "updated_at": mtime,
+                    })
+                    count += 1

-        stale = set(existing.keys()) - seen_doc_ids
-        for doc_id in stale:
-            self.delete(doc_id)
+            stale = set(existing.keys()) - seen_doc_ids
+            for doc_id in stale:
+                self.delete(doc_id)

-        return count
+            return count

     def stats(self) -> dict:
         """Return index statistics."""
-        total = self._conn.execute(
-            "SELECT count(*) FROM documents"
-        ).fetchone()[0]
+        with self._lock:
+            total = self._conn.execute(
+                "SELECT count(*) FROM documents"
+            ).fetchone()[0]

-        by_source: dict[str, int] = {}
-        for source, cnt in self._conn.execute(
-            "SELECT source, count(*) FROM documents GROUP BY source"
-        ):
-            by_source[source] = cnt
+            by_source: dict[str, int] = {}
+            for source, cnt in self._conn.execute(
+                "SELECT source, count(*) FROM documents GROUP BY source"
+            ):
+                by_source[source] = cnt

-        return {"total_documents": total, "by_source": by_source}
+            return {"total_documents": total, "by_source": by_source}
```
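The reason an `RLock` (not a plain `Lock`) is needed here: `reindex_directory` takes the lock for the whole batch and then calls `upsert` and `delete`, which take it again. A standalone sketch of that pattern under the same assumptions (hypothetical `MiniIndex`, not the real `VectorIndex`):

```python
import sqlite3
import threading

class MiniIndex:
    """One shared connection, one re-entrant lock (toy version)."""

    def __init__(self):
        self._lock = threading.RLock()
        # check_same_thread=False mirrors the real index: any worker
        # thread may use the connection, so we serialize with the lock.
        self._conn = sqlite3.connect(":memory:", check_same_thread=False)
        self._conn.execute("CREATE TABLE documents (doc_id TEXT PRIMARY KEY)")

    def upsert(self, doc_id):
        with self._lock:  # re-entrant: safe even when reindex already holds it
            self._conn.execute(
                "INSERT OR REPLACE INTO documents (doc_id) VALUES (?)",
                (doc_id,),
            )
            self._conn.commit()

    def reindex(self, doc_ids):
        with self._lock:        # acquire once for the whole batch...
            for d in doc_ids:
                self.upsert(d)  # ...and re-enter per item without deadlocking
            return self._conn.execute(
                "SELECT count(*) FROM documents"
            ).fetchone()[0]

idx = MiniIndex()
assert idx.reindex(["a", "b", "a"]) == 2
```

With a non-re-entrant `threading.Lock`, the inner `upsert` call would block forever on the lock its own thread already holds.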
+4
src/storied/tools/__init__.py

```diff
     World,
     _get_file_lock,
     _sync_player_hp,
+    current_ctx,
+    get_or_create_ctx,
     init_ctx,
     reset_ctx,
 )
···
     "_get_file_lock",
     "_load_entity",
     "_sync_player_hp",
+    "current_ctx",
+    "get_or_create_ctx",
     "init_ctx",
     "reset_ctx",
 ]
```
+74 -14
src/storied/tools/_context.py

```diff
 class ToolContext:
     """Shared infrastructure for all tool calls.

-    Created once per process via init_ctx() and exposed to tools through
-    the Dependency subclasses below. Tools should never reach for the
-    full ToolContext — they ask for the specific slices they need.
+    Genuine singleton — one per process, shared by every FastMCP server
+    (DM, planner, seeder, advancement, arc_architect) so they all read
+    and write the same in-memory game state. The DMEngine keeps a
+    reference to this same instance for its display reads. Tools never
+    reach for the full ToolContext; they ask for the slice they need
+    via the Dependency subclasses below.

     Filesystem paths live in :mod:`storied.paths` (module globals,
     configured at CLI startup), not on the ToolContext. The context
···
 # --- Process-global ToolContext ---------------------------------------------
+#
+# Storied serves exactly one campaign at a time, so we collapse the per-MCP
+# ToolContext into a true process-wide singleton. Multiple FastMCP servers
+# (engine, planner, ticker, advancement, seeder) all retrieve the same
+# instance via ``get_or_create_ctx``, so set_scene mutations made in any
+# server are visible to every reader. The earlier "init on each start_server"
+# pattern caused background agents to silently swap in fresh CampaignLog /
+# EntityIndex / VectorIndex copies, leaving the engine reading a stale view.

 _ctx: ToolContext | None = None
+_ctx_lock = threading.Lock()


 def init_ctx(
···
     entity_index: EntityIndex,
     vector_index: VectorIndex,
 ) -> ToolContext:
-    """Initialize the process-global ToolContext.
+    """Set the process-global ToolContext directly.

-    Idempotent — last writer wins. Tests use this to reset state between cases.
+    Test-only entry point. Production code should call
+    :func:`get_or_create_ctx` instead — it constructs the slices itself
+    and refuses to clobber an existing context. Tests pair this with
+    :func:`reset_ctx` in fixture teardown so each test starts clean.
     """
     global _ctx
-    _ctx = ToolContext(
-        world_id=world_id,
-        player_id=player_id,
-        campaign_log=campaign_log,
-        entity_index=entity_index,
-        vector_index=vector_index,
-    )
+    with _ctx_lock:
+        _ctx = ToolContext(
+            world_id=world_id,
+            player_id=player_id,
+            campaign_log=campaign_log,
+            entity_index=entity_index,
+            vector_index=vector_index,
+        )
+    return _ctx
+
+
+def get_or_create_ctx(world_id: str, player_id: str) -> ToolContext:
+    """Return the singleton ToolContext, constructing it on first call.
+
+    Used by every production code path that needs to attach an MCP
+    server or background agent to the live game state. The first call
+    builds the CampaignLog, EntityIndex, and VectorIndex from disk;
+    subsequent callers get the same instance back. Mismatched
+    world_id/player_id raises rather than silently rebinding, since the
+    process is committed to one campaign.
+    """
+    from storied import paths
+    from storied.search import VectorIndex as _VectorIndex
+
+    global _ctx
+    with _ctx_lock:
+        if _ctx is not None:
+            if _ctx.world_id != world_id or _ctx.player_id != player_id:
+                raise RuntimeError(
+                    f"ToolContext already bound to "
+                    f"world={_ctx.world_id!r} player={_ctx.player_id!r}; "
+                    f"refusing to rebind to "
+                    f"world={world_id!r} player={player_id!r}"
+                )
+            return _ctx
+
+        world_dir = paths.world_path(world_id)
+        _ctx = ToolContext(
+            world_id=world_id,
+            player_id=player_id,
+            campaign_log=CampaignLog(world_id),
+            entity_index=EntityIndex(world_dir),
+            vector_index=_VectorIndex(world_dir / "search.db"),
+        )
+        return _ctx
+
+
+def current_ctx() -> ToolContext | None:
+    """Return the current ToolContext without constructing one."""
     return _ctx


 def reset_ctx() -> None:
     """Clear the process-global ToolContext (for test teardown)."""
     global _ctx
-    _ctx = None
+    with _ctx_lock:
+        _ctx = None


 def _require() -> ToolContext:
     if _ctx is None:
-        raise RuntimeError("ToolContext not initialized; call init_ctx() first")
+        raise RuntimeError(
+            "ToolContext not initialized; call get_or_create_ctx() first"
+        )
     return _ctx
```
+5
tests/test_engine.py

```diff

     with patch("storied.engine.start_mcp_server") as mock_mcp:
         from storied.initiative import InitiativeTracker
+        from storied.log import CampaignLog
         from storied.tools import EntityIndex

         mock_mcp.return_value = type("Handle", (), {
             "url": "http://localhost:0/sse",
             "ctx": type("Ctx", (), {
+                "campaign_log": CampaignLog("test"),
                 "entity_index": EntityIndex(world_dir),
                 "vector_index": None,
                 "initiative": InitiativeTracker(),
···

     transcript_path = tmp_path / "transcripts" / "session.jsonl"
     with patch("storied.engine.start_mcp_server") as mock_mcp:
+        from storied.log import CampaignLog
+
         mock_mcp.return_value = type("Handle", (), {
             "url": "http://localhost:0/sse",
             "ctx": type("Ctx", (), {
+                "campaign_log": CampaignLog("test"),
                 "entity_index": EntityIndex(tmp_path / "worlds" / "test"),
                 "vector_index": None,
                 "initiative": InitiativeTracker(),
```
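The test stubs rely on three-argument `type()` to fabricate a handle whose `ctx` now also exposes `campaign_log`, since the engine reads the log off the handle. A quick sketch of that trick with toy attribute values:

```python
# type(name, bases, namespace) builds a throwaway class whose class
# attributes mimic the MCP handle shape the engine expects.
Ctx = type("Ctx", (), {"campaign_log": "log-stub", "vector_index": None})
Handle = type("Handle", (), {"url": "http://localhost:0/sse", "ctx": Ctx})

assert Handle.ctx.campaign_log == "log-stub"  # attribute lookup just works
assert Handle.url.startswith("http://localhost")
```

Because the engine only reads attributes, class-level stubs are enough; no instance needs to be constructed.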