personal memory agent

skills: add skill_observer + skill_editor daily talents (Lode B)

A daily, owner-wide talent pair that replaces the old per-activity
talent/skills observer/generator:

- skill_observer.md (cogitate, priority 41): reads recent activities
and promotes/seeds/refreshes patterns via `sol call skills` commands.
- skill_editor.md + skill_editor.py (generate, priority 60): writes
one profile per run from the pending queue (edit_requests →
needs_profile → needs_refresh → skip).
- Pre-hook builds $skill_context from metadata + full observation
ledger + last-5 activity records + last-3 narratives + per-span
JSONL reads for observations[-3:] (≈4.3–4.8k tokens worst case).
- Post-hook validates frontmatter (name, display_name, description
1–1024, category, numeric confidence 0–1), atomic writes via
think.skills.save_profile, clears pending flags idempotently, and
fires an agency.md nudge only on first-time creates.
- 19 hook tests + 1 prompt smoke test.

Part of Lode B of the skills-observer-editor refactor.
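The post-hook's frontmatter gate can be condensed into a standalone predicate. `validate_frontmatter` is a hypothetical helper for illustration only; in the diff the checks live inline in `post_process`, which logs a warning and returns None on failure:

```python
def validate_frontmatter(meta: dict, slug: str) -> bool:
    """True when generated profile frontmatter passes the post-hook's checks."""
    if str(meta.get("name", "")).strip() != slug:
        return False  # name must equal the kebab-case slug exactly
    display_name = str(meta.get("display_name", "")).strip()
    if not display_name or len(display_name) > 80:
        return False  # display_name: 1-80 characters
    description = str(meta.get("description", "")).strip()
    if not description or len(description) > 1024:
        return False  # description: 1-1024 characters
    if not str(meta.get("category", "")).strip():
        return False  # category is required
    confidence = meta.get("confidence")
    if isinstance(confidence, bool) or not isinstance(confidence, (int, float)):
        return False  # bool is an int subclass in Python; reject it explicitly
    return 0.0 <= float(confidence) <= 1.0
```

Note the explicit `bool` rejection: without it, `confidence: true` in YAML would slip through the numeric check as `1.0`.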

+1370
+100
apps/skills/talent/skill_editor.md
{
  "type": "generate",
  "title": "Skill Editor",
  "description": "Writes or refreshes one skill profile per day from observer-flagged patterns or chat edit requests.",
  "color": "#8e24aa",
  "schedule": "daily",
  "priority": 60,
  "multi_facet": false,
  "output": "md",
  "hook": {"pre": "skills:skill_editor", "post": "skills:skill_editor"},
  "load": {"transcripts": false, "percepts": false, "talents": false}
}

You are the skill profile editor for solstone's owner. Your job is to write or refresh exactly one skill profile using the context provided.

The profile format follows the Anthropic Agent Skills convention: a single SKILL.md-style markdown document with YAML frontmatter and a focused body. The profile describes one recurring capability the owner exercises — not a bundle of related capabilities.

$skill_mode_instruction

## Context

$skill_context

## Existing profile (if refreshing)

$existing_profile

## Owner instructions

$owner_instructions

## What to produce

Return markdown in exactly this structure, and nothing else:

---
name: $slug
display_name: "<Human-readable skill name>"
description: "<Anthropic-style description — see rules below>"
category: "<category — e.g. engineering, communication, research, operations>"
confidence: <float 0.0-1.0>
---

## Description

<1-3 sentences. Grounded, specific. What is this capability?>

## How

<One paragraph. How does the owner typically exercise this skill — tools, workflow, techniques — based on the observations. Name specific tools and collaborators when the evidence supports it. Do not guess.>

## Why

<One paragraph. Why does the owner do this — what problem does it solve, what outcome does it produce? Infer from observation context. Do not speculate beyond the evidence.>

## Frontmatter field rules

**`name`**: must equal exactly `$slug`. Do not alter. It is the kebab-case identifier and must match the filename.

**`display_name`**: human-readable version of the capability name. Title case. 2–6 words typical. Example: `"Python Performance Profiling"` or `"Litigation Strategy"`. Quote the value.

**`description`**: the most important field — Claude and other agents use it to decide when this profile is relevant. Rules:
- Write in third person. Do not use "I" or "you". Example: "Analyzes Python scripts for..." — not "I analyze Python scripts for..."
- Length: 200–900 characters (the limit is 1024; stay well under).
- Include BOTH what the skill is AND triggering context: the kinds of questions, file types, tools, activity descriptions, or situations where a reader should consult this profile.
- Be specific, not abstract. Name actual tools, domains, file types, or contexts from the observation evidence. "Optimizes Python scripts using cProfile, py-spy, and snakeviz" beats "helps with performance."
- Include scope boundaries when helpful: "Do NOT use for..." clauses help prevent over-activation.
- Avoid vague verbs like "helps with", "works with", "processes". Use concrete action verbs: "analyzes", "drafts", "negotiates", "debugs", "profiles".
- If refreshing, start from the existing `description` and evolve it — do not rewrite from scratch unless the evidence shows the old description was wrong.

Good description example:
"Analyzes Python scripts for performance bottlenecks using cProfile, py-spy, and snakeviz. Use when diagnosing slow scripts, investigating CI time regressions, or picking optimization targets from a flame graph. Typically invoked when user-visible latency or CI duration becomes painful. Do NOT use for general code review, architectural design, or non-Python performance work."

Bad description example:
"Helps with Python performance stuff."

**`category`**: single word or short phrase. Use what naturally describes the domain: `engineering`, `communication`, `research`, `operations`, `legal`, `writing`, etc. If unsure: pick the closest existing category from other profiles in the registry (see $skill_context).

**`confidence`**: float 0.0 to 1.0. Your honest confidence in this profile as a grounded description of a real, recurring capability. Factors: observation count, consistency across observations, specificity of evidence. Err low when evidence is thin. Typical values: 0.4–0.6 on a freshly-promoted pattern, 0.7–0.9 on a well-established one.

## Body structure rules

**Description section**: 1–3 sentences. Names the capability and its essence. Grounded in observations.

**How section**: one paragraph. Concrete specifics — tools named, workflow described, collaborators cited when the evidence supports it. Assume the reader is a smart future agent: do not explain what cProfile is, just say "uses cProfile". Do not define common terms. Do not pad.

**Why section**: one paragraph. The problem the capability solves and the outcome it produces. Inferable from observation context; do not speculate beyond the evidence.

Total body under ~400 lines. If the profile is approaching that, the scope is too broad — narrow it to the core capability.

## Grounding rules

- Stay within the evidence in $skill_context and $existing_profile.
- Do not invent tools, collaborators, or techniques not present in the evidence.
- If a field is thin, hedge: "appears to support X" is better than confident speculation.
- When refreshing: preserve `name` exactly (matches `$slug`). Preserve the core skill identity unless evidence clearly shows the old identity was wrong.
- When owner instructions are present: prioritize them, but do not let them invent capabilities not in the evidence.
- One profile = one capability. If evidence suggests two distinct capabilities bundled in this pattern, note the second one at the end of the Why section as "Related but distinct: <capability>" — do not attempt to describe both in one profile.

Return ONLY the markdown document. No preamble, no explanation, no code fences, no commentary.
+595
apps/skills/talent/skill_editor.py
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright (c) 2026 sol pbc

from __future__ import annotations

import json
import logging
from datetime import datetime
from pathlib import Path
from typing import Any

import frontmatter

from think import skills as think_skills
from think.activities import get_activity_output_path, get_activity_record
from think.identity import update_identity_section
from think.utils import get_journal

logger = logging.getLogger(__name__)

WATCHING_AND_LEARNING = "[watching and learning]"
NO_PENDING_SKILL_WORK = "no pending skill work"
SPAN_DIRNAME = "spans"


def _sort_key(value: Any) -> str:
    return str(value or "")


def _compact_json(value: Any, limit: int | None = None) -> str:
    text = json.dumps(value, ensure_ascii=False, sort_keys=True, default=str)
    if limit is None or len(text) <= limit:
        return text
    return text[:limit].rstrip() + "..."


def _pattern_facets(pattern: dict[str, Any]) -> list[str]:
    facets = pattern.get("facets_touched")
    if isinstance(facets, list) and facets:
        return [str(item) for item in facets if item]
    derived = {
        str(obs.get("facet"))
        for obs in pattern.get("observations", [])
        if obs.get("facet")
    }
    return sorted(derived)


def _load_profile_metadata(markdown: str | None) -> dict[str, Any]:
    if not markdown:
        return {}
    try:
        post = frontmatter.loads(markdown)
    except Exception:
        logger.warning("skill_editor: failed to parse existing profile metadata")
        return {}
    meta = post.metadata
    return meta if isinstance(meta, dict) else {}


def _read_identity_section(file_name: str, heading: str) -> str:
    path = Path(get_journal()) / "identity" / Path(file_name).name
    try:
        text = path.read_text(encoding="utf-8")
    except (FileNotFoundError, OSError):
        return ""

    lines = text.split("\n")
    start = None
    target_heading = f"## {heading}"
    for index, line in enumerate(lines):
        if line == target_heading:
            start = index + 1
        elif start is not None and line.startswith("## "):
            return "\n".join(lines[start:index]).strip()
    if start is not None:
        return "\n".join(lines[start:]).strip()
    return ""


def _mark_edit_request_processed(request_id: str, *, error: str | None = None) -> None:
    now = think_skills.utc_now_iso()

    def mutate(rows: list[dict[str, Any]]) -> list[dict[str, Any]]:
        for row in rows:
            if row.get("id") != request_id or row.get("processed_at") is not None:
                continue
            row["processed_at"] = now
            if error:
                row["processing_error"] = error
            break
        return rows

    think_skills.locked_modify_edit_requests(mutate)


def _build_metadata_section(
    pattern: dict[str, Any],
    profile_meta: dict[str, Any],
    *,
    request: dict[str, Any] | None,
) -> str:
    observations = pattern.get("observations", [])
    lines = [
        "## Metadata",
        f"Name: {pattern.get('name', '')}",
        f"Slug: {pattern.get('slug', '')}",
        f"Display name: {profile_meta.get('display_name', '')}",
        f"Category: {profile_meta.get('category', '')}",
        f"Confidence: {profile_meta.get('confidence', '')}",
        f"Status: {pattern.get('status', '')}",
        f"First seen: {pattern.get('first_seen', '')}",
        f"Last seen: {pattern.get('last_seen', '')}",
        f"Observation count: {len(observations)}",
        f"Facet count: {len(_pattern_facets(pattern))}",
        f"Facets touched: {', '.join(_pattern_facets(pattern))}",
        f"Created at: {pattern.get('created_at', '')}",
        f"Updated at: {pattern.get('updated_at', '')}",
        f"Profile generated at: {pattern.get('profile_generated_at', '')}",
    ]

    if request is not None:
        lines.extend(
            [
                "",
                "Last edit request:",
                f"- text: {request.get('instructions', '')}",
                f"- requested_at: {request.get('requested_at', '')}",
                f"- requested_by: {request.get('requested_by', '')}",
            ]
        )

    return "\n".join(lines)


def _build_observation_ledger(pattern: dict[str, Any]) -> str:
    lines = ["## Observation ledger"]
    observations = pattern.get("observations", [])
    if not observations:
        lines.append("[no observations]")
        return "\n".join(lines)
    lines.extend(
        _compact_json(observation)
        for observation in observations
        if isinstance(observation, dict)
    )
    return "\n".join(lines)


def _iter_recent_activity_refs(
    pattern: dict[str, Any], *, observation_limit: int
) -> list[tuple[dict[str, Any], str]]:
    refs: list[tuple[dict[str, Any], str]] = []
    for observation in pattern.get("observations", [])[-observation_limit:]:
        if not isinstance(observation, dict):
            continue
        for activity_id in observation.get("activity_ids", []) or []:
            refs.append((observation, str(activity_id)))
    return refs


def _segment_ledger(record: dict[str, Any]) -> str:
    summary = {
        "id": record.get("id"),
        "activity": record.get("activity"),
        "title": record.get("title"),
        "description": record.get("description"),
        "segments": (record.get("segments") or [])[:5],
        "active_entities": (record.get("active_entities") or [])[:5],
        "created_at": record.get("created_at"),
    }
    return _compact_json(summary, limit=400)


def _build_recent_activity_records(pattern: dict[str, Any]) -> str:
    lines = ["## Recent activity records"]
    refs = _iter_recent_activity_refs(pattern, observation_limit=5)
    if not refs:
        lines.append("[activity record not available]")
        return "\n".join(lines)

    for observation, activity_id in refs:
        facet = str(observation.get("facet") or "")
        day = str(observation.get("day") or "")
        lines.extend(
            [
                "",
                f"### {day} / {facet} / {activity_id}",
            ]
        )
        record = get_activity_record(facet, day, activity_id)
        if record is None:
            lines.append("[activity record not available]")
            continue
        lines.append(_compact_json(record, limit=600))
        lines.append(_segment_ledger(record))
    return "\n".join(lines)


def _build_recent_narratives(pattern: dict[str, Any]) -> str:
    lines = ["## Recent narratives"]
    found_any = False
    for observation, activity_id in _iter_recent_activity_refs(
        pattern, observation_limit=3
    ):
        facet = str(observation.get("facet") or "")
        day = str(observation.get("day") or "")
        path = get_activity_output_path(facet, day, activity_id, "narrative")
        try:
            content = path.read_text(encoding="utf-8").strip()
        except (FileNotFoundError, OSError):
            continue
        if not content:
            continue
        found_any = True
        lines.extend(
            [
                "",
                f"### {day} / {facet} / {activity_id} / {path.name}",
                content[:800],
            ]
        )
    if not found_any:
        lines.append("[narrative not available]")
    return "\n".join(lines)


def _build_recent_spans(pattern: dict[str, Any]) -> str:
    lines = ["## Recent span bodies"]
    refs = pattern.get("observations", [])[-3:]
    if not refs:
        lines.append("[spans unavailable]")
        return "\n".join(lines)

    journal = Path(get_journal())
    for observation in refs:
        if not isinstance(observation, dict):
            continue
        facet = str(observation.get("facet") or "")
        day = str(observation.get("day") or "")
        activity_ids = {str(item) for item in observation.get("activity_ids", []) or []}
        lines.extend(
            ["", f"### {day} / {facet} / ids={','.join(sorted(activity_ids))}"]
        )
        span_file = journal / "facets" / facet / Path(SPAN_DIRNAME) / f"{day}.jsonl"
        if not span_file.exists():
            lines.append("[spans unavailable]")
            continue

        matched = False
        try:
            with open(span_file, encoding="utf-8") as handle:
                for raw_line in handle:
                    line = raw_line.strip()
                    if not line:
                        continue
                    try:
                        row = json.loads(line)
                    except json.JSONDecodeError:
                        logger.warning(
                            "skill_editor: malformed spans row in %s", span_file
                        )
                        continue
                    if str(row.get("span_id") or "") not in activity_ids:
                        continue
                    matched = True
                    body = str(row.get("body", "") or "")[:400]
                    talent = str(row.get("talent", "unknown") or "unknown")
                    lines.append(f"{talent}: {body}")
        except OSError:
            lines.append("[spans unavailable]")
            continue

        if not matched:
            lines.append("[no matching spans]")

    return "\n".join(lines)


def _build_skill_context(
    pattern: dict[str, Any],
    profile_meta: dict[str, Any],
    *,
    request: dict[str, Any] | None,
) -> str:
    sections = [
        _build_metadata_section(pattern, profile_meta, request=request),
        _build_observation_ledger(pattern),
        _build_recent_activity_records(pattern),
        _build_recent_narratives(pattern),
        _build_recent_spans(pattern),
    ]
    return "\n\n".join(section for section in sections if section).strip()


def _validate_updated_at(value: str) -> bool:
    try:
        datetime.fromisoformat(value.replace("Z", "+00:00"))
    except ValueError:
        return False
    return True


def pre_process(context: dict) -> dict | None:
    day = context.get("day")
    if not day:
        return None

    request: dict[str, Any] | None = None
    pattern: dict[str, Any] | None = None
    mode = ""
    request_id: str | None = None
    owner_instructions = ""

    pending_requests = sorted(
        [
            row
            for row in think_skills.load_edit_requests()
            if isinstance(row, dict) and row.get("processed_at") is None
        ],
        key=lambda row: _sort_key(row.get("requested_at")),
    )
    if pending_requests:
        request = pending_requests[0]
        request_id = str(request.get("id") or "") or None
        target_slug = str(request.get("slug") or "")
        pattern = think_skills.find_pattern(target_slug)
        if pattern is None:
            if request_id:
                _mark_edit_request_processed(request_id, error="slug missing")
            logger.warning(
                "skill_editor: edit request targets missing slug %s", target_slug
            )
            return {"skip_reason": "edit-request target slug missing"}
        mode = "edit_request"
        owner_instructions = str(request.get("instructions") or "")
    else:
        patterns = think_skills.load_patterns()
        for candidate in sorted(
            patterns,
            key=lambda row: _sort_key(row.get("updated_at")),
        ):
            if not isinstance(candidate, dict) or not candidate.get("needs_profile"):
                continue
            observations = candidate.get("observations", [])
            if not observations:
                logger.warning(
                    "skill_editor: skipping zero-observation needs_profile %s",
                    candidate.get("slug"),
                )
                continue
            pattern = think_skills.find_pattern(str(candidate.get("slug") or ""))
            if pattern is None:
                logger.warning(
                    "skill_editor: missing needs_profile slug %s",
                    candidate.get("slug"),
                )
                continue
            mode = "create"
            break

        if pattern is None:
            for candidate in sorted(
                patterns,
                key=lambda row: _sort_key(row.get("updated_at")),
            ):
                if not isinstance(candidate, dict) or not candidate.get(
                    "needs_refresh"
                ):
                    continue
                observations = candidate.get("observations", [])
                if not observations:
                    logger.warning(
                        "skill_editor: skipping zero-observation needs_refresh %s",
                        candidate.get("slug"),
                    )
                    continue
                pattern = think_skills.find_pattern(str(candidate.get("slug") or ""))
                if pattern is None:
                    logger.warning(
                        "skill_editor: missing needs_refresh slug %s",
                        candidate.get("slug"),
                    )
                    continue
                mode = "refresh"
                break

    if pattern is None:
        return {"skip_reason": NO_PENDING_SKILL_WORK}

    slug = str(pattern.get("slug") or "")
    existing_profile = ""
    if mode in {"refresh", "edit_request"}:
        existing_profile = think_skills.load_profile(slug) or ""
        if mode == "refresh" and not existing_profile:
            logger.info(
                "skill_editor: refresh target %s has no profile, normalizing to create",
                slug,
            )
            mode = "create"

    profile_meta = _load_profile_metadata(existing_profile)
    skill_context = _build_skill_context(pattern, profile_meta, request=request)

    if mode == "create":
        mode_instruction = (
            "Produce a complete skill profile for this pattern. "
            "The observation evidence supports writing a grounded first version."
        )
        existing_profile = ""
    elif mode == "refresh":
        mode_instruction = (
            "Update this existing skill profile with new evidence from the most recent "
            "observations. Preserve the core skill identity and slug. Incorporate new "
            "tools, collaborators, or techniques that the evidence supports."
        )
    else:
        mode_instruction = (
            "The owner has provided specific instructions for refining this skill "
            "profile. Prioritize the instructions while staying grounded in the "
            "observation evidence."
        )

    return {
        "template_vars": {
            "slug": slug,
            "skill_mode_instruction": mode_instruction,
            "skill_context": skill_context,
            "existing_profile": existing_profile,
            "owner_instructions": owner_instructions,
        },
        "meta": {
            "slug": slug,
            "mode": mode,
            "request_id": request_id,
        },
    }


def post_process(result: str, context: dict) -> str | None:
    if not result or not result.strip().startswith("---"):
        logger.warning("skill_editor: result missing frontmatter")
        return None

    meta = context.get("meta") or {}
    slug = str(meta.get("slug") or "")
    mode = str(meta.get("mode") or "")
    request_id = meta.get("request_id")
    if not slug or mode not in {"create", "refresh", "edit_request"}:
        logger.warning("skill_editor: missing hook metadata")
        return None

    try:
        post = frontmatter.loads(result)
    except Exception:
        logger.warning("skill_editor: failed to parse profile markdown")
        return None

    data = post.metadata if isinstance(post.metadata, dict) else {}
    name = str(data.get("name") or "").strip()
    if name != slug:
        logger.warning("skill_editor: frontmatter name mismatch %s != %s", name, slug)
        return None

    display_name = str(data.get("display_name") or "").strip()
    description = str(data.get("description") or "").strip()
    category = str(data.get("category") or "").strip()
    confidence = data.get("confidence")

    if not display_name or len(display_name) > 80:
        logger.warning("skill_editor: invalid display_name")
        return None
    if not description or len(description) > 1024:
        logger.warning("skill_editor: invalid description length")
        return None
    if not category:
        logger.warning("skill_editor: missing category")
        return None
    if isinstance(confidence, bool) or not isinstance(confidence, (int, float)):
        logger.warning("skill_editor: invalid confidence type")
        return None
    confidence_value = float(confidence)
    if confidence_value < 0.0 or confidence_value > 1.0:
        logger.warning("skill_editor: confidence out of range")
        return None

    aliases = data.get("aliases")
    if aliases is not None:
        if not isinstance(aliases, list) or any(
            not isinstance(item, str) or not item.strip() for item in aliases
        ):
            logger.warning("skill_editor: invalid aliases")
            return None
        aliases = [item.strip() for item in aliases]

    updated_at = data.get("updated_at")
    if updated_at is not None:
        if not isinstance(updated_at, str) or not _validate_updated_at(updated_at):
            logger.warning("skill_editor: invalid updated_at")
            return None

    was_new_profile = think_skills.load_profile(slug) is None

    ordered_meta: dict[str, Any] = {
        "name": slug,
        "display_name": display_name,
        "description": description,
        "category": category,
        "confidence": confidence_value,
    }
    if aliases:
        ordered_meta["aliases"] = aliases
    if updated_at is not None:
        ordered_meta["updated_at"] = updated_at

    body = post.content.rstrip() + "\n" if post.content.strip() else ""
    markdown = frontmatter.dumps(frontmatter.Post(body, **ordered_meta))
    think_skills.save_profile(slug, markdown)

    pattern = think_skills.find_pattern(slug)
    observation_count = 0
    if pattern is not None:
        observation_count = len(pattern.get("observations", []))

    now = think_skills.utc_now_iso()

    def mutate_patterns(rows: list[dict[str, Any]]) -> list[dict[str, Any]]:
        target = think_skills.find_pattern(slug, rows)
        if target is None:
            logger.warning(
                "skill_editor: pattern missing during post_process: %s", slug
            )
            return rows

        changed = False
        if not target.get("profile_generated_at"):
            target["profile_generated_at"] = now
            changed = True
        if mode == "create":
            if bool(target.get("needs_profile")):
                target["needs_profile"] = False
                changed = True
            if target.get("status") == "emerging":
                target["status"] = "mature"
                changed = True
        elif mode == "refresh":
            if bool(target.get("needs_refresh")):
                target["needs_refresh"] = False
                changed = True
        else:
            if bool(target.get("needs_profile")):
                target["needs_profile"] = False
                changed = True
            if bool(target.get("needs_refresh")):
                target["needs_refresh"] = False
                changed = True
            if target.get("status") == "emerging":
                target["status"] = "mature"
                changed = True

        if changed:
            think_skills.touch_updated(target)
        return rows

    think_skills.locked_modify_patterns(mutate_patterns)

    if mode == "edit_request" and request_id:

        def mutate_requests(rows: list[dict[str, Any]]) -> list[dict[str, Any]]:
            for row in rows:
                if row.get("id") != request_id or row.get("processed_at") is not None:
                    continue
                row["processed_at"] = now
                break
            return rows

        think_skills.locked_modify_edit_requests(mutate_requests)

    if mode == "create" and was_new_profile:
        nudge_line = f"- Noticed recurring skill: {display_name} — observed {observation_count} times"
        existing = _read_identity_section("agency.md", "observations")
        if nudge_line not in existing.splitlines():
            if existing and existing.strip() != WATCHING_AND_LEARNING:
                content = existing.rstrip("\n") + "\n" + nudge_line
            else:
                content = nudge_line
            update_identity_section(
                "agency.md",
                "observations",
                content,
                actor="skill_editor",
                reason="new recurring skill observed",
            )

    return None
+78
apps/skills/talent/skill_observer.md
{
  "type": "cogitate",
  "title": "Skill Observer",
  "description": "Daily owner-wide scan for recurring skill patterns in today's activities.",
  "color": "#5e35b1",
  "schedule": "daily",
  "priority": 41,
  "multi_facet": false,
  "tier": 3,
  "group": "Skills",
  "load": {"transcripts": false, "percepts": false, "talents": false}
}

You are the skill observer for solstone's owner. Run once per day to update the owner-wide skill registry based on $day's activities across all facets. Today is $day.

Your job: recognize recurring patterns of capability. A skill is something the owner does, repeatedly, across spans and days, with consistent tools, collaborators, or techniques. One-off activities are not skills.

## Gather today's evidence

1. `sol call journal facets --json` — list enabled facets
2. For each enabled facet: `sol call activities list --facet <facet> --day $day --json`
3. For activities that look substantive (skip routine admin, trivial errands), read deeper:
   - `sol call activities get <id> --json` for the full activity record
   - Or read narrative detail at `journal/facets/<facet>/activities/$day/<span_id>/*.md` if useful
   - Or read span rows at `journal/facets/<facet>/spans/$day.jsonl` for the conversation/work/event narratives

## Read the existing skill registry

- `sol call skills list --json` — all patterns with status, observation counts, last_seen, facets_touched
- `sol call skills show <slug> --json` — full detail on one pattern including observation log

## Decide

For each substantive activity today, judge whether it reflects:

**An existing pattern.** Use semantic judgment. "ran profiler on the indexer" and "traced a latency regression" are the same capability — performance profiling — even if the words differ. Err toward consolidation.

If yes:
- `sol call skills observe <slug> --day $day --facet <facet> --activity-ids <comma-separated-ids> --notes "<one-sentence note about what this observation adds>"`
- Consider promoting if BOTH of these are true: (a) the pattern has been observed across at least 3 distinct days with consistent tools, collaborators, or techniques, AND (b) a future session reading this pattern's profile would learn something non-obvious — either a capability worth naming, a specific way the owner approaches it, or a triggering context. If yes: `sol call skills promote <slug>`. If the pattern is real but the profile would be thin ("owner sometimes does X"), defer — wait for more signal.
- If the pattern already has a profile AND today's evidence materially changes what the profile should say (new tool, new collaborator, different technique), run: `sol call skills refresh <slug>`

**A new recurring pattern.** Is this the start of a skill, or a one-off?

Err toward patience. Don't seed a pattern from a single activity unless the signal is strong (specialized tools, clear repeated context). If seeding:
- Pick a stable kebab-case slug that describes the capability.
- Slug rules (strict, enforced by the CLI): lowercase letters, digits, and single hyphens only. 1–64 characters. No leading or trailing hyphens. No consecutive hyphens. Cannot be `anthropic` or `claude`. The slug should read as a capability name on its own: `python-performance-profiling` is good; `profiling-jer-work-2026-04-19` is bad (ephemeral); `stuff` is bad (vague); `profiling` alone is bad (too broad).
- Aim for ONE capability per slug. If the pattern spans multiple sub-capabilities, prefer the most specific framing that still captures what recurs. When in doubt, narrower beats broader — two skills can always be merged later.
- `sol call skills seed <slug> --name "<Human-readable Name>" --day $day --facet <facet> --activity-ids <ids> --notes "<why this might be recurring>"`

**Neither.** Most activities fall here. Do nothing.

## Dormancy sweep

Before finishing, scan the existing pattern list for dormancy:
- Any `mature` pattern with `last_seen` more than 60 days before $day: `sol call skills mark-dormant <slug>`.
- Leave `emerging` patterns alone regardless of age — they might yet mature.

## Grounding rules

- Only issue commands based on evidence you've actually read.
- Don't guess at tools or collaborators.
- If uncertain whether an activity matches an existing skill or starts a new one, default to inaction.
- Before running `promote` or `refresh`, re-read the pattern's observation log via `sol call skills show` — don't duplicate observations.
- Stay within $day. You are not processing historical activities.

## Report

Return a brief markdown report (100–300 words) of what you did:

- Observations filed: list `<slug>` per line
- Patterns seeded: list `<slug>` per line
- Patterns promoted: list `<slug>` per line
- Patterns refreshed: list `<slug>` per line
- Patterns marked dormant: count
- Patterns considered but not acted on: one line each, with the reason

This report goes into the daily run log; there is no other sink for it.
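The slug rules stated in the observer prompt can be expressed as a single check. The regex and `is_valid_slug` helper below are assumptions that mirror the stated rules; the CLI's actual enforcement code is not part of this diff:

```python
import re

# Lowercase alphanumeric runs separated by single hyphens: no leading,
# trailing, or consecutive hyphens fall out of this shape automatically.
SLUG_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")
RESERVED = {"anthropic", "claude"}  # per the prompt's reserved names


def is_valid_slug(slug: str) -> bool:
    """True when a slug satisfies the rules listed in the observer prompt."""
    return (
        1 <= len(slug) <= 64
        and slug not in RESERVED
        and SLUG_RE.fullmatch(slug) is not None
    )
```

Encoding the hyphen constraints in the grammar itself keeps the check to one pattern plus a length and reserved-name test.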
+549
tests/test_skill_editor_hook.py
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright (c) 2026 sol pbc

from __future__ import annotations

import json
from pathlib import Path

import pytest

from apps.skills.talent.skill_editor import (
    NO_PENDING_SKILL_WORK,
    WATCHING_AND_LEARNING,
    post_process,
    pre_process,
)
from think import skills as think_skills


@pytest.fixture
def skill_editor_env(monkeypatch, tmp_path):
    monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", str(tmp_path))
    return Path(tmp_path)


def _pattern(
    *,
    slug: str = "alpha-skill",
    name: str = "Alpha Skill",
    status: str = "emerging",
    day: str = "2026-04-19",
    facet: str = "work",
    activity_ids: list[str] | None = None,
    notes: str = "Observed recurring work",
    needs_profile: bool = False,
    needs_refresh: bool = False,
    profile_generated_at: str | None = None,
    created_at: str = "2026-04-19T09:00:00Z",
    updated_at: str = "2026-04-19T10:00:00Z",
    observations: list[dict] | None = None,
) -> dict:
    if observations is None:
        ids = ["act-1"] if activity_ids is None else activity_ids
        observations = [
            {
                "day": day,
                "facet": facet,
                "activity_ids": ids,
                "notes": notes,
                "recorded_at": created_at,
            }
        ]
    facets = sorted({obs["facet"] for obs in observations if obs.get("facet")})
    days = sorted(obs["day"] for obs in observations if obs.get("day"))
    return {
        "slug": slug,
        "name": name,
        "status": status,
        "observations": observations,
        "facets_touched": facets,
        "first_seen": days[0] if days else "",
        "last_seen": days[-1] if days else "",
        "needs_profile": needs_profile,
        "needs_refresh": needs_refresh,
        "profile_generated_at": profile_generated_at,
        "created_at": created_at,
        "updated_at": updated_at,
    }


def _request(
    *,
    request_id: str = "req-1",
    slug: str = "alpha-skill",
    instructions: str = "Refine the description",
    requested_at: str = "2026-04-19T08:00:00Z",
    requested_by: str = "chat",
    processed_at: str | None = None,
) -> dict:
    return {
        "id": request_id,
        "slug": slug,
        "instructions": instructions,
        "requested_at": requested_at,
        "requested_by": requested_by,
        "processed_at": processed_at,
    }


def _activity_record(
    *,
    activity_id: str = "act-1",
    description: str = "Investigated a performance regression",
    segments: list[str] | None = None,
    facet: str = "work",
    day: str = "2026-04-19",
) -> dict:
    return {
        "id": activity_id,
        "activity": "coding",
        "title": "Performance investigation",
        "description": description,
        "segments": ["090000_300"] if segments is None else segments,
        "active_entities": ["indexer", "cprofile"],
        "created_at": "2026-04-19T09:05:00Z",
        "facet": facet,
        "day": day,
    }


def _profile_markdown(
    *,
    slug: str = "alpha-skill",
    display_name: str = "Alpha Skill",
    description: str = "A grounded description.",
    category: str = "engineering",
    confidence: float = 0.6,
    body: str = "## Description\n\nProfile body.\n",
    aliases: list[str] | None = None,
    updated_at: str | None = None,
) -> str:
    lines = [
        "---",
        f'name: "{slug}"',
        f'display_name: "{display_name}"',
        f'description: "{description}"',
        f'category: "{category}"',
        f"confidence: {confidence}",
    ]
    if aliases is not None:
        lines.append("aliases:")
        for alias in aliases:
            lines.append(f'  - "{alias}"')
    if updated_at is not None:
        lines.append(f'updated_at: "{updated_at}"')
    lines.extend(["---", "", body.rstrip(), ""])
    return "\n".join(lines)


def _seed_fixture(
    root: Path,
    *,
    patterns: list[dict] | None = None,
    edit_requests: list[dict] | None = None,
    profiles: dict[str, str] | None = None,
    activities: dict[tuple[str, str], list[dict]] | None = None,
    narratives: dict[tuple[str, str, str], str] | None = None,
    spans: dict[tuple[str, str], list[dict]] | None = None,
    agency_observations: str = WATCHING_AND_LEARNING,
) -> None:
    skills_dir = root / "skills"
    skills_dir.mkdir(parents=True, exist_ok=True)

    if patterns is not None:
        content = "\n".join(json.dumps(row) for row in patterns)
        if content:
            content += "\n"
        (skills_dir / "patterns.jsonl").write_text(content, encoding="utf-8")

    if edit_requests is not None:
        content = "\n".join(json.dumps(row) for row in edit_requests)
        if content:
            content += "\n"
        (skills_dir / "edit_requests.jsonl").write_text(content, encoding="utf-8")

    for slug, markdown in (profiles or {}).items():
        (skills_dir / f"{slug}.md").write_text(markdown, encoding="utf-8")

    identity_dir = root / "identity"
    identity_dir.mkdir(parents=True, exist_ok=True)
    (identity_dir / "agency.md").write_text(
        "# agency\n\n## observations\n"
        + agency_observations
        + "\n\n## next\n\n[nothing yet]\n",
        encoding="utf-8",
    )

    for (facet, day), records in (activities or {}).items():
        activities_dir = root / "facets" / facet / "activities"
        activities_dir.mkdir(parents=True, exist_ok=True)
        content = "\n".join(json.dumps(record) for record in records)
        if content:
            content += "\n"
        (activities_dir / f"{day}.jsonl").write_text(content, encoding="utf-8")

    for (facet, day, activity_id), content in (narratives or {}).items():
        narrative_dir = root / "facets" / facet / "activities" / day / activity_id
        narrative_dir.mkdir(parents=True, exist_ok=True)
        (narrative_dir / "narrative.md").write_text(content, encoding="utf-8")

    for (facet, day), rows in (spans or {}).items():
        spans_dir = root / "facets" / facet / "spans"
        spans_dir.mkdir(parents=True, exist_ok=True)
        content = "\n".join(json.dumps(row) for row in rows)
        if content:
            content += "\n"
        (spans_dir / f"{day}.jsonl").write_text(content, encoding="utf-8")


def test_pre_picks_oldest_edit_request(skill_editor_env):
    pattern = _pattern(needs_profile=True)
    older = _request(request_id="req-old", requested_at="2026-04-19T07:00:00Z")
    newer = _request(
        request_id="req-new",
        requested_at="2026-04-19T09:00:00Z",
        instructions="Newer request",
    )
    _seed_fixture(
        skill_editor_env,
        patterns=[pattern],
        edit_requests=[newer, older],
        profiles={"alpha-skill": _profile_markdown()},
    )

    result = pre_process({"day": "2026-04-19"})

    assert result is not None
    assert result["meta"]["mode"] == "edit_request"
    assert result["meta"]["request_id"] == "req-old"
    assert result["template_vars"]["owner_instructions"] == "Refine the description"


def test_pre_picks_needs_profile_when_no_edit_request(skill_editor_env):
    _seed_fixture(
        skill_editor_env,
        patterns=[
            _pattern(
                slug="first", needs_profile=True, updated_at="2026-04-19T07:00:00Z"
            ),
            _pattern(
                slug="second",
                needs_profile=True,
                updated_at="2026-04-19T08:00:00Z",
            ),
        ],
        activities={
            ("work", "2026-04-19"): [_activity_record(activity_id="act-1")],
        },
    )

    result = pre_process({"day": "2026-04-19"})

    assert result is not None
    assert result["meta"]["mode"] == "create"
    assert result["meta"]["slug"] == "first"
    assert "## Observation ledger" in result["template_vars"]["skill_context"]


def test_pre_picks_needs_refresh_when_no_new_or_edit(skill_editor_env):
    _seed_fixture(
        skill_editor_env,
        patterns=[_pattern(status="mature", needs_refresh=True)],
        profiles={"alpha-skill": _profile_markdown()},
    )

    result = pre_process({"day": "2026-04-19"})

    assert result is not None
    assert result["meta"]["mode"] == "refresh"
    assert result["template_vars"]["existing_profile"].startswith("---")


def test_pre_returns_skip_when_nothing_pending(skill_editor_env):
    _seed_fixture(skill_editor_env, patterns=[_pattern(status="mature")])

    result = pre_process({"day": "2026-04-19"})

    assert result == {"skip_reason": NO_PENDING_SKILL_WORK}


def test_pre_skips_zero_observation_pattern(skill_editor_env):
    empty = _pattern(slug="empty", needs_profile=True, observations=[])
    ready = _pattern(
        slug="ready", needs_profile=True, updated_at="2026-04-19T11:00:00Z"
    )
    _seed_fixture(skill_editor_env, patterns=[empty, ready])

    result = pre_process({"day": "2026-04-19"})

    assert result is not None
    assert result["meta"]["slug"] == "ready"


def test_pre_handles_missing_slug_defensively(skill_editor_env):
    _seed_fixture(
        skill_editor_env,
        edit_requests=[_request(slug="missing-skill")],
    )

    result = pre_process({"day": "2026-04-19"})

    assert result == {"skip_reason": "edit-request target slug missing"}
    rows = think_skills.load_edit_requests()
    assert rows[0]["processed_at"] is not None
    assert rows[0]["processing_error"] == "slug missing"


def test_pre_refresh_missing_profile_falls_back_to_create(skill_editor_env):
    _seed_fixture(
        skill_editor_env,
        patterns=[_pattern(status="mature", needs_refresh=True)],
    )

    result = pre_process({"day": "2026-04-19"})

    assert result is not None
    assert result["meta"]["mode"] == "create"
    assert result["template_vars"]["existing_profile"] == ""


def test_pre_includes_span_bodies_from_spans_jsonl(skill_editor_env):
    _seed_fixture(
        skill_editor_env,
        patterns=[_pattern(needs_profile=True)],
        spans={
            ("work", "2026-04-19"): [
                {"span_id": "act-1", "talent": "story", "body": "Span body evidence"}
            ]
        },
    )

    result = pre_process({"day": "2026-04-19"})

    assert result is not None
    assert "story: Span body evidence" in result["template_vars"]["skill_context"]


def test_pre_emits_spans_unavailable_when_file_missing(skill_editor_env):
    _seed_fixture(
        skill_editor_env,
        patterns=[_pattern(needs_profile=True)],
    )

    result = pre_process({"day": "2026-04-19"})

    assert result is not None
    assert "[spans unavailable]" in result["template_vars"]["skill_context"]


def test_post_creates_new_profile_and_fires_nudge(skill_editor_env):
    pattern = _pattern(
        needs_profile=True,
        observations=[
            _pattern()["observations"][0],
            _pattern(day="2026-04-20")["observations"][0],
        ],
    )
    _seed_fixture(skill_editor_env, patterns=[pattern])
    result = _profile_markdown(
        slug="alpha-skill",
        display_name="Alpha Skill",
        description="A grounded recurring capability.",
        confidence=0.8,
        body="## Description\n\nNew profile.\n",
    )

    post_process(
        result, {"meta": {"slug": "alpha-skill", "mode": "create", "request_id": None}}
    )

    saved = think_skills.load_profile("alpha-skill")
    assert saved is not None and "display_name: Alpha Skill" in saved
    updated = think_skills.find_pattern("alpha-skill")
    assert updated["status"] == "mature"
    assert updated["needs_profile"] is False
    agency = (skill_editor_env / "identity" / "agency.md").read_text(encoding="utf-8")
    assert "- Noticed recurring skill: Alpha Skill — observed 2 times" in agency


def test_post_refreshes_existing_profile_no_nudge(skill_editor_env):
    _seed_fixture(
        skill_editor_env,
        patterns=[
            _pattern(
                status="mature",
                needs_refresh=True,
                profile_generated_at="2026-04-18T09:00:00Z",
            )
        ],
        profiles={"alpha-skill": _profile_markdown(display_name="Old Name")},
        agency_observations="existing observation",
    )
    result = _profile_markdown(
        slug="alpha-skill",
        display_name="Refreshed Skill",
        description="Updated grounded profile.",
        confidence=0.7,
    )

    post_process(
        result, {"meta": {"slug": "alpha-skill", "mode": "refresh", "request_id": None}}
    )

    saved = think_skills.load_profile("alpha-skill")
    assert saved is not None and "Refreshed Skill" in saved
    updated = think_skills.find_pattern("alpha-skill")
    assert updated["needs_refresh"] is False
    agency = (skill_editor_env / "identity" / "agency.md").read_text(encoding="utf-8")
    assert "Refreshed Skill" not in agency


def test_post_processes_edit_request_clears_both_flags(skill_editor_env):
    _seed_fixture(
        skill_editor_env,
        patterns=[_pattern(needs_profile=True, needs_refresh=True)],
        edit_requests=[_request()],
        profiles={"alpha-skill": _profile_markdown()},
    )
    result = _profile_markdown(
        slug="alpha-skill",
        display_name="Edited Skill",
        description="Edited grounded profile.",
        confidence=0.75,
    )

    post_process(
        result,
        {
            "meta": {
                "slug": "alpha-skill",
                "mode": "edit_request",
                "request_id": "req-1",
            }
        },
    )

    updated = think_skills.find_pattern("alpha-skill")
    assert updated["needs_profile"] is False
    assert updated["needs_refresh"] is False
    requests = think_skills.load_edit_requests()
    assert requests[0]["processed_at"] is not None


def test_post_rejects_slug_mismatch(skill_editor_env):
    _seed_fixture(skill_editor_env, patterns=[_pattern(needs_profile=True)])

    result = post_process(
        _profile_markdown(slug="wrong-slug"),
        {"meta": {"slug": "alpha-skill", "mode": "create", "request_id": None}},
    )

    assert result is None
    assert think_skills.load_profile("alpha-skill") is None


def test_post_rejects_missing_required_field(skill_editor_env):
    _seed_fixture(skill_editor_env, patterns=[_pattern(needs_profile=True)])
    invalid = """---
name: "alpha-skill"
display_name: "Alpha Skill"
description: "desc"
confidence: 0.5
---

## Description

Missing category.
"""

    result = post_process(
        invalid,
        {"meta": {"slug": "alpha-skill", "mode": "create", "request_id": None}},
    )

    assert result is None
    assert think_skills.find_pattern("alpha-skill")["needs_profile"] is True


def test_post_rejects_description_too_long(skill_editor_env):
    _seed_fixture(skill_editor_env, patterns=[_pattern(needs_profile=True)])

    result = post_process(
        _profile_markdown(description="x" * 1025),
        {"meta": {"slug": "alpha-skill", "mode": "create", "request_id": None}},
    )

    assert result is None


def test_post_rejects_description_empty(skill_editor_env):
    _seed_fixture(skill_editor_env, patterns=[_pattern(needs_profile=True)])

    result = post_process(
        _profile_markdown(description=""),
        {"meta": {"slug": "alpha-skill", "mode": "create", "request_id": None}},
    )

    assert result is None


def test_post_rejects_non_numeric_confidence(skill_editor_env):
    _seed_fixture(skill_editor_env, patterns=[_pattern(needs_profile=True)])
    invalid = """---
name: "alpha-skill"
display_name: "Alpha Skill"
description: "Grounded description"
category: "engineering"
confidence: "high"
---

## Description

Invalid confidence.
"""

    result = post_process(
        invalid,
        {"meta": {"slug": "alpha-skill", "mode": "create", "request_id": None}},
    )

    assert result is None


def test_post_is_idempotent_on_double_call(skill_editor_env):
    _seed_fixture(skill_editor_env, patterns=[_pattern(needs_profile=True)])
    result = _profile_markdown(
        description="A description that is grounded.",
        display_name="Stable Skill",
        confidence=0.9,
    )
    context = {"meta": {"slug": "alpha-skill", "mode": "create", "request_id": None}}

    post_process(result, context)
    first_pattern = think_skills.find_pattern("alpha-skill").copy()
    first_agency = (skill_editor_env / "identity" / "agency.md").read_text(
        encoding="utf-8"
    )

    post_process(result, context)

    second_pattern = think_skills.find_pattern("alpha-skill").copy()
    second_agency = (skill_editor_env / "identity" / "agency.md").read_text(
        encoding="utf-8"
    )
    assert first_pattern == second_pattern
    assert first_agency == second_agency


def test_post_preserves_flags_when_output_invalid(skill_editor_env):
    _seed_fixture(skill_editor_env, patterns=[_pattern(needs_profile=True)])

    post_process(
        "not markdown",
        {"meta": {"slug": "alpha-skill", "mode": "create", "request_id": None}},
    )

    updated = think_skills.find_pattern("alpha-skill")
    assert updated["needs_profile"] is True
    assert think_skills.load_profile("alpha-skill") is None
+48
tests/test_skill_editor_prompt.py
# SPDX-License-Identifier: AGPL-3.0-only
# Copyright (c) 2026 sol pbc

from __future__ import annotations

import json
from pathlib import Path

from think.prompts import load_prompt


def _seed_config(tmp_path):
    config_dir = tmp_path / "config"
    config_dir.mkdir(parents=True, exist_ok=True)
    (config_dir / "journal.json").write_text(
        json.dumps(
            {
                "identity": {"name": "Test User", "preferred": "Test User"},
                "agent": {"name": "sol", "name_status": "default", "named_date": None},
            }
        ),
        encoding="utf-8",
    )


def test_skill_editor_prompt_substitutes_template_vars(monkeypatch, tmp_path):
    _seed_config(tmp_path)
    monkeypatch.setenv("_SOLSTONE_JOURNAL_OVERRIDE", str(tmp_path))
    repo_root = Path(__file__).resolve().parent.parent

    prompt = load_prompt(
        "skill_editor",
        base_dir=repo_root / "apps" / "skills" / "talent",
        context={
            "day": "Saturday, April 19, 2026",
            "skill_mode_instruction": "Create the profile for this skill.",
            "skill_context": "## Metadata\nName: Alpha Skill",
            "existing_profile": "",
            "owner_instructions": "Refine wording",
            "slug": "alpha-skill",
        },
    )

    assert "Create the profile for this skill." in prompt.text
    assert "## Metadata" in prompt.text
    assert "alpha-skill" in prompt.text
    assert "$skill_context" not in prompt.text
    assert "$owner_instructions" not in prompt.text