personal memory agent


Muse agent import alignment: stream-aware preambles, content filtering, prompt updates

Hop A — Foundation:
- Add $stream, $content_description, $import_guidance template variables to
_build_prompt_context() with stream-specific descriptions for all 7 importers
- Update segment/daily/activity preamble templates to use $content_description
instead of hardcoded "audio transcription and screen recording"
- Add exclude_streams filtering to run_prompts_by_priority() in dream.py
(fnmatch-based glob matching, e.g. "import.*")
- Add exclude_streams: ["import.*"] to speakers.md and observation.md
- Verified screen.md auto-skips imports via percepts: "required"
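The `exclude_streams` filter described above leans on stdlib glob matching. A minimal standalone sketch of the intended matching behavior (the `is_excluded` helper name is illustrative, not the actual function in dream.py):

```python
import fnmatch

def is_excluded(stream: str, patterns: list[str]) -> bool:
    """Return True when the stream name matches any exclude glob."""
    return any(fnmatch.fnmatch(stream, pat) for pat in patterns)

print(is_excluded("import.chatgpt", ["import.*"]))  # True
print(is_excluded("archon", ["import.*"]))          # False
```

Note that `fnmatch` treats `*` as matching any characters including dots, so `"import.*"` covers every import sub-stream with a single pattern.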

Hop B — Activity agent rewrite:
- Rewrite activity.md to be content-agnostic with $import_guidance variable
- Live capture gets frame comparison + spoken audio guidance
- AI chat imports get conversation analysis guidance
- Calendar/note/reading imports each get content-type-specific guidance

Hop C — Minor prompt updates:
- entities.md: remove "Visible on screen" reference
- decisions.md: "Audio quotes" → "Transcript quotes"
- followups.md: "screen cues" → "contextual cues"
- knowledge_graph.md: generalize multi-tasking note

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

+134 -43
+7 -30
muse/activity.md
```diff
   "type": "generate",
 
   "title": "Activity Synthesis",
-  "description": "Synthesizes segment activity from screenshots and audio, focusing on observable changes and searchability.",
+  "description": "Synthesizes segment activity from content, focusing on observable changes and searchability.",
   "color": "#00bcd4",
   "schedule": "segment",
   "priority": 10,
···
 
 # Segment Activity Synthesis
 
-## Core Rule
-
-ONLY report what CHANGED between screenshots or was SPOKEN in audio.
-If content looks the same across frames, skip it entirely.
+Report the key activities, discussions, and actions observable in this content.
 
-## Your Inputs
-
-- **Screenshots**: Sampled across this segment. Compare frames - what's different?
-- **Audio**: Transcript of speech. What was said?
+$import_guidance
 
 ## Banned Language
 
-Never use these words - they describe presence, not action:
+Never use these words — they describe presence, not action:
 - reviewing, monitoring, tracking, checking, observing, maintaining, managing
 
-Use action verbs instead: wrote, sent, received, created, deleted, switched to, typed, said, discussed, decided
+Use action verbs instead: wrote, sent, received, created, deleted, switched to, typed, said, discussed, decided, asked, proposed, resolved
 
 ## What to Report
 
-For each item, identify the CHANGE:
-- "Typed message to X about Y" (text appeared)
-- "Switched from Gmail to Terminal" (window focus changed)
-- "Received reply from X" (new message appeared)
-- "Said X about Y in meeting" (audio evidence)
-
-If you cannot name the specific change, do not include it.
+For each item, identify what happened — the specific action, change, or exchange.
 
 ### Facets
 Which project/context? Every segment has at least one.
···
 
 ## Before Writing
 
-For each item, ask:
-- Can I point to a SPECIFIC CHANGE between screenshots?
-- Or SPECIFIC WORDS spoken in audio?
-
-If neither, omit it.
-
-## SKIP Entirely
-
-- Windows that look identical in first and last frame
-- Apps open but showing same content throughout
-- Background windows never brought to focus
-- Anything you'd describe as "had open" or "was visible"
+For each item, ask: can I point to a SPECIFIC action, exchange, or change in the content? If not, omit it.
 
 ## Output Format
 
```
+3 -3
muse/decisions.md
```diff
 - **Entities:** people, teams, groups, projects/issues, repos/branches, docs/artifacts, meetings, environments, orgs
 - **Impact Surface:** approx # people affected; external stakeholders? breadth (low/med/high); criticality flags (time_sensitive, high_centrality, irreversible)
 - **Evidence:**
-  - Audio quotes (<= 20 words each, 1–2 max)
-  - Screen phrases (OCR/visual cues of enactment)
+  - Transcript quotes (<= 20 words each, 1–2 max)
+  - Screen phrases (OCR/visual cues of enactment, when available)
   - Metadata notes (audience size, env flags, etc.)
 - **Stakes for Others:** <= 30 words on likely consequences
 - **Confidence:** 0.0–1.0 calibration (0.50 maybe, 0.70 likely, 0.85+ clear)
···
 STRICT RULES
 - Do not fabricate entities or counts; estimate only from inputs.
 - Anchor times to actual segment boundaries.
-- Evidence should include both intent (often audio) and enactment (often screen/metadata) when available.
+- Evidence should include both intent (often transcript) and enactment (often screen/metadata) when available.
 - Maintain Markdown only; no JSON or code blocks in the final output.
```
+2 -2
muse/entities.md
```diff
 * Type: Entity Name - Description
 
 Example:
-* Person: Alice Smith - Mentioned in Slack discussing the project timeline
-* Tool: Grafana - Visible on screen showing metrics dashboards
+* Person: Alice Smith - Mentioned in discussion about the project timeline
+* Tool: Grafana - Referenced for monitoring metrics dashboards
```
+1 -1
muse/followups.md
```diff
 
 1. **Sequential Review**
    - Read the transcript chronologically, one block at a time.
-   - Look for statements or screen cues indicating outstanding tasks, open questions, or commitments to reconnect later.
+   - Look for statements or contextual cues indicating outstanding tasks, open questions, or commitments to reconnect later.
 
 2. **Recognize Follow-up Triggers**
    - Phrases such as "I'll do that tomorrow," "Let's talk later," or "Need to check".
```
+2 -2
muse/knowledge_graph.md
```diff
 * A qualitative description of what a visual network diagram of this day would highlight. Include specific examples of the 2-3 most interesting or unexpected connections discovered, explaining why they are noteworthy (e.g., "An interesting connection is Person A using Tool Z, typically associated with Project Q, for an ad-hoc task related to Concept R. This suggests a novel application or workaround.").
 
 **Key Considerations:**
-* Synthesize information from both audio and screen transcript data within each chunk.
+* Synthesize information from all transcript content within each chunk.
 * Disambiguate entities: e.g., "John" referring to "John Doe."
 * Infer implicit relationships where explicit statements are lacking but context strongly suggests a connection.
 * Focus on the most relevant and significant entities and relationships to avoid an overly noisy graph.
-* $Preferred often multi-tasks where joined on a team zoom in the background while working on an unrelated task, so the audio transcripts may not always align with the screen transcripts.
+* For live capture, $preferred often multi-tasks — e.g., joined on a team zoom in the background while working on an unrelated task — so different content streams may not always align.
 * Take time to consider all of the nuance of the interactions from the day, deeply think through how best to prioritize the most important aspects and understandings, formulate the best approach for each step of the analysis.
```
+1
muse/observation.md
```diff
   "tier": 3,
   "thinking_budget": 2048,
   "max_output_tokens": 2048,
+  "exclude_streams": ["import.*"],
   "instructions": {
     "sources": {"transcripts": true, "percepts": true, "agents": false}
   }
```
+1
muse/speakers.md
```diff
   "priority": 10,
   "output": "json",
   "color": "#e64a19",
+  "exclude_streams": ["import.*"],
   "instructions": {
     "sources": {"transcripts": "required", "percepts": true, "agents": false}
   }
```
+102
think/agents.py
```diff
 # =============================================================================
 
 
+def _stream_content_description(stream: str | None) -> str:
+    """Return a human-readable content description for a stream.
+
+    Used in preamble templates so agents know what kind of content they're
+    analyzing (live capture vs imported conversations, notes, etc.).
+    """
+    if not stream:
+        return "audio transcription and screen recording"
+
+    STREAM_DESCRIPTIONS = {
+        "archon": "audio transcription and screen recording",
+        "import.chatgpt": "an imported ChatGPT conversation",
+        "import.claude": "an imported Claude conversation",
+        "import.gemini": "an imported Gemini conversation",
+        "import.ics": "an imported calendar event",
+        "import.obsidian": "an imported note from Obsidian",
+        "import.kindle": "imported Kindle reading highlights",
+    }
+
+    if stream in STREAM_DESCRIPTIONS:
+        return STREAM_DESCRIPTIONS[stream]
+
+    # Fallback for unknown import streams
+    if stream.startswith("import."):
+        source = stream.split(".", 1)[1]
+        return f"imported content from {source}"
+
+    return "captured content"
+
+
+def _stream_import_guidance(stream: str | None) -> str:
+    """Return stream-conditional guidance for the activity agent.
+
+    For live capture, returns guidance about frame comparison and spoken audio.
+    For imports, returns content-type-specific analysis instructions.
+    Returns empty string for unknown streams.
+    """
+    if not stream or stream == "archon":
+        return (
+            "## Live Capture Guidance\n\n"
+            "ONLY report what CHANGED between screenshots or was SPOKEN in audio. "
+            "If content looks the same across frames, skip it entirely.\n\n"
+            "### Your Inputs\n\n"
+            "- **Screenshots**: Sampled across this segment. Compare frames — what's different?\n"
+            "- **Audio**: Transcript of speech. What was said?\n\n"
+            "### SKIP Entirely\n\n"
+            "- Windows that look identical in first and last frame\n"
+            "- Apps open but showing same content throughout\n"
+            "- Background windows never brought to focus\n"
+            "- Anything you'd describe as \"had open\" or \"was visible\""
+        )
+
+    IMPORT_GUIDANCE = {
+        "import.chatgpt": (
+            "This is an AI conversation. Summarize the key topics discussed, "
+            "questions asked, solutions proposed, and decisions reached. "
+            "Focus on what the human was trying to accomplish and what they learned or decided."
+        ),
+        "import.claude": (
+            "This is an AI conversation. Summarize the key topics discussed, "
+            "questions asked, solutions proposed, and decisions reached. "
+            "Focus on what the human was trying to accomplish and what they learned or decided."
+        ),
+        "import.gemini": (
+            "This is an AI conversation. Summarize the key topics discussed, "
+            "questions asked, solutions proposed, and decisions reached. "
+            "Focus on what the human was trying to accomplish and what they learned or decided."
+        ),
+        "import.ics": (
+            "This is a calendar event. Describe the event: its purpose, "
+            "participants, and any context from the description about why it was scheduled."
+        ),
+        "import.obsidian": (
+            "This is a note. Summarize the key ideas, references, and connections. "
+            "What was the author thinking about and working through?"
+        ),
+        "import.kindle": (
+            "These are reading highlights. Describe what was being read and what "
+            "the reader found noteworthy. What themes or ideas do these highlights capture?"
+        ),
+    }
+
+    if stream in IMPORT_GUIDANCE:
+        return f"## Content Guidance\n\n{IMPORT_GUIDANCE[stream]}"
+
+    if stream.startswith("import."):
+        return (
+            "## Content Guidance\n\n"
+            "This is imported content. Summarize the key topics, actions, "
+            "and takeaways present in this segment."
+        )
+
+    return ""
+
+
 def _build_prompt_context(
     day: str | None,
     segment: str | None,
···
     - day: Friendly format (e.g., "Sunday, February 2, 2025")
     - day_YYYYMMDD: Raw day string (e.g., "20250202")
     - segment_start, segment_end: Time strings if segment/span provided
+    - stream, content_description: Stream name and human-readable description
     - activity_*: Activity fields if activity record provided
     """
     context: dict[str, str] = {}
···
     context["day"] = format_day(day)
     context["day_YYYYMMDD"] = day
+
+    # Stream-aware content description and import guidance
+    stream = os.environ.get("SOL_STREAM")
+    context["stream"] = stream or "archon"
+    context["content_description"] = _stream_content_description(stream)
+    context["import_guidance"] = _stream_import_guidance(stream)
 
     if segment:
         start_str, end_str = format_segment_times(segment)
```
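The preamble templates consume these context keys via `$`-style placeholders. A minimal sketch of how the new variables land in a template, using stdlib `string.Template` (which matches the `$variable` syntax in the preamble files; the exact substitution mechanism used by think/agents.py is an assumption here):

```python
from string import Template

# Hypothetical one-line stand-in for a preamble template file
preamble = Template("This segment contains $content_description.")

# Context as _build_prompt_context() would populate it for a ChatGPT import
context = {"content_description": "an imported ChatGPT conversation"}
print(preamble.substitute(context))
# This segment contains an imported ChatGPT conversation.
```

Because `_stream_content_description` always returns a non-empty string, `$content_description` can never leave a hole in the rendered preamble.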
+10
think/dream.py
```diff
 """
 
 import argparse
+import fnmatch
 import logging
 import sys
 import threading
···
 
     for prompt_name, config in prompts_list:
         is_generate = config["type"] == "generate"
+
+        # Check exclude_streams filter
+        exclude_patterns = config.get("exclude_streams")
+        if exclude_patterns and stream:
+            if any(fnmatch.fnmatch(stream, pat) for pat in exclude_patterns):
+                logging.info(
+                    f"Skipping {prompt_name}: stream '{stream}' matches exclude_streams"
+                )
+                continue
 
         try:
             if config.get("multi_facet"):
```
+2 -2
think/templates/activity_preamble.md
```diff
-You are analyzing a **$activity_type** activity from $preferred's workday on **$day** ($day_YYYYMMDD), covering **$segment_start to $segment_end** (~$activity_duration minutes).
+You are analyzing a **$activity_type** activity from $preferred's journal on **$day** ($day_YYYYMMDD), covering **$segment_start to $segment_end** (~$activity_duration minutes).
 
 **Activity:** $activity_type
 **Description:** $activity_description
 **Entities involved:** $activity_entities
 
-The transcript below contains all audio and screen data from the recording segments where this activity occurred. These segments may also contain content from other concurrent activities — focus your analysis ONLY on content related to this $activity_type activity.
+The transcript below contains $content_description from the segments where this activity occurred. These segments may also contain content from other concurrent activities — focus your analysis ONLY on content related to this $activity_type activity.
```
+1 -1
think/templates/daily_preamble.md
```diff
-You are an expert analyst tasked with analyzing $preferred's full workday transcript from **$day** ($day_YYYYMMDD). The transcript contains both audio conversations and screen activity data, organized into recording segments with timestamps.
+You are an expert analyst tasked with analyzing $preferred's full day journal from **$day** ($day_YYYYMMDD). The content is organized into segments with timestamps, containing $content_description.
 
 You will be given the transcripts followed by a detailed request for how to process them. Follow those instructions carefully. Take time to consider all of the nuance of the interactions from the day, think through how best to prioritize the most important aspects, and formulate the best approach for each step of the analysis.
```
+2 -2
think/templates/segment_preamble.md
```diff
-You are analyzing a recording segment from $preferred's workday on **$day** ($day_YYYYMMDD), covering **$segment_start to $segment_end**. This segment captures a specific time window of activity through audio transcription and screen recording.
+You are analyzing a segment from $preferred's journal on **$day** ($day_YYYYMMDD), covering **$segment_start to $segment_end**. This segment contains $content_description.
 
-Focus your analysis on this discrete period - its context, activities, and significance within the broader day.
+Focus your analysis on this discrete period — its context, content, and significance within the broader day.
```