
```
ai-sandbox/
├── about.md                    # who I am — loaded into every session
├── investing-frameworks.md     # how I evaluate deals
├── investing-track-record.md   # historical performance
├── context-layer-design.md     # design notes + journey log
├── MAP.md                      # system schema and search patterns
├── startups/
│   ├── deals-index.yaml        # central registry, filterable by any field
│   ├── taxonomy.md             # sectors, stages, statuses, entities
│   ├── templates/
│   ├── startup/
│   │   ├── startup.md          # living memo — canonical info
│   │   └── assets/
│   │       ├── 2026-02-05-call-transcript.md
│   │       ├── 2026-02-05-151824.png   # screenshot from call
│   │       └── 2026-02-05-151824.md    # metadata sidecar
│   └── ...
├── learning/
│   ├── index.md                # distilled insights, not file listings
│   ├── ai-learning/
│   ├── gtm-learning/
│   ├── enterprise-ai/
│   └── vertical-learning/
├── workflows/                  # SOPs — single source of truth
│   ├── deal-processing.md
│   ├── reflections.md
│   ├── inbox.md
│   ├── image-triage.md
│   ├── conversation-analysis.md
│   ├── learning.md
│   └── broad-context.md
├── writing/
│   └── (essays and drafts — flat folder)
├── tools/
│   ├── callmemo/               # generates .docx investment memos
│   ├── gamma/                  # presentations from markdown
│   ├── gtask/                  # Google Tasks CLI
│   ├── img/                    # screenshot capture
│   └── triage/                 # inbox processing
├── inbox/                      # raw captures from phone — zero structure required
│   └── assets/images/inbox/    # screenshots before triage
└── bin/                        # symlinks to tools (in PATH)
```
Key file types and what they do
Canonical files

CLAUDE.md is the master instruction set that tells the AI agent how the system works, where to find things, which workflows to trigger, and how to maintain consistency. Every time I open a Claude Code session, the agent already knows the system. It knows that if I paste call notes, it should check the deal template, create the folder, populate the YAML, and update the index. It knows that reflections need frontmatter with themes and a headline. It knows the naming conventions, the status values, the wikilink policy.

about.md holds essential knowledge about me. MAP.md gives the complete view of my context layer.

The long-term challenge is to evolve these files while keeping them clean and on-point. If the agent has to read through 50k words in every session, it's going to struggle.

Startup tracking

Every startup I evaluate gets its own folder. Inside: a living memo that's always up to date (as far as I know), so I can query it or produce content from current information. Every single file has YAML metadata to make it easily retrievable — memo frontmatter covers stage, sector, status, ARR, valuation, and deal terms. A central YAML index makes all deals searchable by any field. Raw call transcripts, investor updates, assets, and the like live in a separate folder.

Thinking frameworks

Evolving documents that represent my "cognitive patterns":

- How I evaluate investments. My decisions update these.
- My mental models. Used to challenge the way I think.
- Reflections. I run daily reflection sessions. Claude Code uses my context to probe deeper, push back, and then summarize into compact takeaways, and these get "passed" into a higher-level index.md file that holds my latest cognitive patterns.

Learning notes

Research organized by opinionated title — not "article about context" but "context is the moat." Each note gets source attribution, topic tags, a confidence level, and flags for actionability and writing fuel.
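To make this concrete, here is a sketch of what a startup memo's frontmatter might look like. The field names are illustrative assumptions on my part, not the actual template:

```yaml
---
# illustrative fields only, not the real deal template
name: ExampleCo
stage: seed
sector: enterprise-ai
status: evaluating
arr_usd: 1200000
valuation_usd: 25000000
last_updated: 2026-02-05
---
```

Because every deal carries the same fields, the central index can answer queries like "all seed-stage enterprise-AI deals still in evaluation" with a simple filter.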
Inbox

A flat folder where raw thoughts land from my phone (via an iPhone Shortcut through Obsidian) or from quick screen captures on my Mac, with zero structure required. A triage process reads each dump, infers where it belongs, adds the right frontmatter, and files it into the correct destination.
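A minimal sketch of what that triage step could look like, with made-up routing rules (the real rules live in a workflow SOP, not shown in this post):

```python
import re

# Hypothetical routing rules; the real ones live in workflows/inbox.md
ROUTES = {
    r"\b(deal|valuation|term sheet)\b": "startups",
    r"\b(learned|insight|article)\b": "learning",
    r"\b(reflect|today I)\b": "reflections",
}

def route(note: str) -> str:
    """Infer a destination folder from raw note text."""
    for pattern, dest in ROUTES.items():
        if re.search(pattern, note, re.IGNORECASE):
            return dest
    return "inbox"  # leave unrouted notes for manual triage

def add_frontmatter(note: str, dest: str) -> str:
    """Wrap a raw capture in minimal YAML frontmatter."""
    return f"---\ndestination: {dest}\nstatus: triaged\n---\n{note}"
```

The point isn't the regexes; an LLM agent does the inference far better. The point is the shape of the contract: raw text in, destination plus frontmatter out.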
The "no more data entry" moment
On January 22nd, something fundamental shifted, and I wrote it down (or rather, I told Claude Code to remember it) because I knew it was important. Once the deal template, data schema, and agent instructions were all in place, I stopped doing data entry. Completely.

Raw input goes in — call transcripts, pasted research, stream-of-thought notes — and structured output comes out. The deal memo gets written. YAML metadata populates. The index updates. Wikilinks connect entities across the system. I focus on thinking. Asking questions. Having conversations. Doing research. The agent handles all the structured data maintenance. This is the context layer working as designed: intelligence in the middle, between the human and structured knowledge. The system of record updates itself.

I want to sit with that for a second, because it's genuinely a paradigm shift. Traditional systems of record — CRMs, deal databases, project management tools — require enormous human effort to keep current. People hate maintaining them. The data degrades. Half the fields are stale. Everyone knows this.

What changes when you have an intelligent layer in the middle is that the human inputs thinking and the machine outputs structure. The human never fills in a form. The human never manually updates a status field. The human just... does human stuff. And the system stays current.

It sounds magical, but it's actually complex, because everything rests on the AI having the right core instructions and tool access and templates and naming conventions and... yet it's all written in an elegantly simple .md file. We shall see whether over time that file becomes so cluttered and messy that agent performance crashes.
The moment the system began producing
This happened literally in the past week or so. My firm has a specific format for investment memos — a Word document called the "Call Memo" with checkmark bullets, a header table with checkboxes for deal category and stage, 10pt Arial, single-spaced, very precise formatting. Writing these memos used to take hours: the analysis itself, plus wrestling with Microsoft Word's formatting quirks.

Now I run a single command: /callmemo. The agent reads the structured deal data from my context layer, writes the memo in my voice (guided by a style guide Claude Code created from my past memos), generates a structured JSON file, and a Python script produces a formatted .docx. I upload it to Google Docs, make minor tweaks, and it's done. 3-5 minutes at most.

It took about 10 iterations to get right. And here's the funny part: the hard part wasn't the intelligence. The analysis and judgment were already captured in the deal folders — that's literally what the context layer is for. The hard part was getting Microsoft Word's bullet types to produce consistent line heights. A custom font (Noto Sans Symbols) caused inflated spacing in Google Docs. The fix? Switch to Arial for bullet characters — same font as body text, no metric mismatch. The last mile of AI automation isn't intelligence. It's formatting compliance with tools nobody would design today.

But the pattern is what matters: context in → agentic command → production output. The context layer isn't a note-taking system. It's a production input.
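The post doesn't show the intermediate JSON, but the handoff can be sketched as: structured deal data in, a JSON payload out for the .docx-rendering script to consume. Every field name below is an assumption for illustration:

```python
import json

def build_memo_payload(deal: dict) -> str:
    """Assemble the intermediate JSON a docx-rendering script could consume.
    Field names are illustrative; the real schema is internal to the tool."""
    payload = {
        "header": {
            "company": deal["name"],
            "stage": deal["stage"],
            "category": deal["sector"],
        },
        "sections": [
            {"title": "Products & Competition", "bullets": deal.get("product_notes", [])},
            {"title": "Financials", "bullets": deal.get("financial_notes", [])},
        ],
    }
    return json.dumps(payload, indent=2)

deal = {"name": "ExampleCo", "stage": "seed", "sector": "enterprise-ai",
        "product_notes": ["Agent platform for underwriting"]}
memo_json = build_memo_payload(deal)
```

Splitting the pipeline at a JSON boundary is what makes the formatting fight tractable: the agent only has to get the content right, and the deterministic script only has to get Word right.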
The "holy crap I can build anything" moment
Outputting .docx call memos was awesome. But I still had to manually insert images (we like for humans to see visuals, of course). So the idea came: what if I could capture images and label them in seconds, then let AI handle the rest — triage into the right startup folder, then insertion into the right places in memos and presentations?

I put that to Claude Code, and it said: "oh simple, we just build a macOS app for it." Ummm, okay? We can do that? Yes, yes we could. And 30 minutes later, I had a working app on my MacBook Air, with its own shortcut (cmd + shift + 0).

Here's how it works. I was on a call with a startup and they showed their system architecture. I pressed the shortcut and selected the area I wanted (with their permission). Up popped a dialog box, and I typed "deal [startup name] architecture". That's it. The image would show up automatically in memos and presentations, because another workflow went in and read each screenshot, generated a metadata sidecar (section, order, caption — is this a product shot? a financials slide? a technology diagram?), and filed everything into the correct startup folder.

Then I ran /callmemo — and the images inserted themselves into the right sections of the Word document. Product diagrams into Products & Competition, partner slides into Financials. Seven embedded visuals, centered with captions. Zero manual image placement.

The pipeline: screenshot → label → triage (metadata + filing) → callmemo (reads metadata, embeds images). Three workflows chained together through nothing but markdown metadata files. Each step is independent, but the metadata format is the contract between them.

The principle: metadata sidecars are the glue. The .md file sitting next to each .png carries enough information (section, order, caption) for any downstream consumer to know what the image is and where it belongs. The memo-generation script doesn't need to "understand" images. It just reads placement instructions from metadata.
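Assuming a simple "key: value" sidecar format (the actual format isn't specified in the post), the glue logic can be sketched like this:

```python
def parse_sidecar(text: str) -> dict:
    """Parse a minimal 'key: value' metadata sidecar (assumed format)."""
    meta = {}
    for line in text.splitlines():
        if ":" in line and not line.startswith("---"):
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def placements(sidecars: list) -> dict:
    """Group images by memo section, ordered by their 'order' field,
    so a downstream renderer knows where each image belongs."""
    by_section = {}
    for meta in sidecars:
        by_section.setdefault(meta["section"], []).append(meta)
    for images in by_section.values():
        images.sort(key=lambda m: int(m.get("order", 0)))
    return by_section

# Two sidecars for the same memo section, captured out of order
sidecars = [
    parse_sidecar("section: Financials\norder: 2\ncaption: Partner slide"),
    parse_sidecar("section: Financials\norder: 1\ncaption: Revenue chart"),
]
```

Any consumer that speaks this tiny contract, memo script or deck generator, can place the images without ever looking at the pixels.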
The same pattern could power presentations, deal decks, or any visual deliverable.
Inbox capture from my phone
I built an iPhone Shortcut that dumps raw voice-to-text or typed notes into the inbox/ folder via Obsidian. No structure required. No frontmatter. Just raw thoughts. A triage process — run manually or as a cron job — picks up each dump, reads the content, infers the destination (is this a deal impression? a learning note? a reflection? a writing idea?), creates a properly structured file with full frontmatter, updates the relevant indexes, and marks the inbox item as filed.

The capture side is ruthlessly minimal: open phone, dictate thought, done. All the structure gets added downstream by the agent, guided by routing rules I defined in a workflow SOP.

The principle: separate capture from structure. The human dumps raw thoughts; the system handles all categorization and formatting. This is the opposite of most productivity tools, which demand that you categorize at the moment of capture — exactly when you have the least mental bandwidth to do it.

Presentations from deal memos

Because my deal memos are structured markdown with YAML metadata, I can generate investor presentations directly from them. One command, and a startup pitch deck materializes from the same source data that feeds the investment memo. Same context, different output format.

This is what I mean by the context layer being a production input. It's not a place where information goes to die. It's an upstream source that feeds multiple downstream deliverables — memos, presentations, portfolio summaries, deal analyses — all from the same structured files.
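The "same context, different output format" idea reduces to parsing the structured source once and rendering it through different templates. A toy sketch, with hypothetical field names:

```python
def render_memo(deal: dict) -> str:
    """Render deal data as a markdown memo (toy template)."""
    return f"# {deal['name']}: Investment Memo\nStage: {deal['stage']}"

def render_deck(deal: dict) -> list:
    """Render the same deal data as a list of slide titles (toy template)."""
    return [deal["name"], f"Stage: {deal['stage']}"]

# One structured source, two production outputs
deal = {"name": "ExampleCo", "stage": "seed"}
memo = render_memo(deal)
slides = render_deck(deal)
```

Each new deliverable is just another renderer over the same files; the context layer never has to change to support it.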