Why I Built a Tool That Lets AI Read and Update My Local Files

Last Updated: January 21, 2026 · 11 min read · Tags: #system design

    Standard chatbots have a fatal flaw for professional work: they start from zero every time.

    Sure, you can drag and drop your strategy files into the chat window at the start of every session. I did that for months. It works, until you forget one file and the AI builds a strategy based on last month’s thinking.

    At first, I solved this by building custom GPTs and Gemini Gems pre-loaded with both specific instructions and important documents like my brand voice, pricing models, and audience research. This gave the AI the permanent context it needed to generate high-quality outputs.

    But that solution then created a new nightmare: Version Control Hell.

    Every time I tweaked a pricing model or updated a roadmap in my internal documents (something that would happen often), my custom bots instantly became obsolete. To keep them accurate, I had to manually log into each bot, delete the old JSON file, and upload the new one.

    The sheer tedium of keeping a dozen disparate bots synchronized with my local files was obnoxious. Yet, I knew first-hand that Contextual Accuracy was the only way to get helpful responses.

    [!alert] This friction caused “Strategic Drift.”

    If I decided to pivot my pricing strategy in the chat, the AI would agree, but my local files remained unchanged. To keep my documentation relevant, I had to manually hunt down and update each of the different strategic blueprints.

    This is doable with a small number of files, but it becomes unsustainable as the number of files grows.

    For me, this created a dangerous gap. My JSON documents are supposed to be my “Source of Truth.” If I hired a freelancer tomorrow and handed them those files, they would unknowingly be working from stale information. If I asked a chatbot to plan my Q2 strategy based on Q1 files, it would give me advice based on old data.

    Better prompts couldn’t solve this. I needed a tool that could reference blueprints automatically and, more importantly, update those documents the moment a decision was made.

    I cleared my schedule for four days, argued with the Gemini API until 3 AM (I don’t write code; I bully LLMs until they write it for me), and built a solution I call Helios.

    Helios is a custom dashboard that connects to Gemini 3. I used my prompt-engineering skills to construct a “Second Brain” that holds my blueprints, remembers my context, and has the power to update my documentation as I work.

    The Helios Console interface, showing the sidebar controls for Model, Persona, and Mode selection alongside the main chat area.
    The Helios Console. A local-first AI workspace with persistent memory, swappable personas, and the ability to read and write directly to my files.

    Here is how I built it to solve context amnesia.

    [!warning] A Note on the Data

    The tool shown below is real and functional, but for privacy, the screenshots in this article use demo material for a fictional SaaS company called “Orbit.”

    A Brain That Never Forgets

    The first problem I had to solve was retrieval.

    In a standard workflow, if I want the AI to know about, say, my pricing model, I have to find the right file and upload it, or spend the time typing it all in. That is “human middleware”, and humans make mistakes. I wanted to eliminate the risk of uploading the wrong version or the wrong file, or of forgetting a detail entirely.

    The Business Requirement

    I needed an automated librarian. I wanted to simply ask, “Are we charging enough for Enterprise?” and have the system know where to look without me dragging and dropping a single file or typing anything extra.

    The Technical Fix

    I built a logic filter called The Manifest.

    I avoided complex vector databases because fuzzy matching logic risks missing things. Instead, I gave the AI a simple JSON “menu” of every file in my project (which I currently update manually, though I’m working on automating that step).

    When I ask Helios a question, the request hits a lightweight “Router” first. It scans the menu, identifies which files are relevant, pulls the relevant text from my local hard drive, and injects it into the context window.

    Helios automatically retrieving local files based on a user query.
    I don’t need to drag-and-drop, or spend any time providing additional context. I just ask. The ‘Manifest’ logic scans my index, identifies that I’m asking about budget, and autonomously pulls two separate files, ‘Q1_Roadmap.json’ and ‘Budget_Allocation.json’, from my hard drive to construct the answer.
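
    To make the mechanics concrete, here is a minimal sketch of the manifest-plus-router idea. It is illustrative only: the `MANIFEST` structure, the keyword-based `pick_relevant_files` stand-in (the real router can lean on the model itself), and the file paths are my assumptions, not the actual Helios code.

```python
from pathlib import Path

# Hypothetical manifest: a plain "menu" describing every file the AI may read.
# In Helios this lives in its own JSON file; it is inlined here for brevity.
MANIFEST = [
    {"path": "Q1_Roadmap.json", "topics": ["roadmap", "milestone", "q1", "budget"]},
    {"path": "Budget_Allocation.json", "topics": ["budget", "spend", "allocation"]},
    {"path": "Ideal_Customer_Profile.json", "topics": ["persona", "customer", "icp"]},
]

def pick_relevant_files(question: str) -> list[Path]:
    """Toy router: select files whose topic keywords appear in the question."""
    q = question.lower()
    return [Path(e["path"]) for e in MANIFEST if any(t in q for t in e["topics"])]

def build_context(question: str) -> str:
    """Pull the selected files off disk and inject them ahead of the question."""
    blocks = [
        f"--- {p.name} ---\n{p.read_text(encoding='utf-8')}"
        for p in pick_relevant_files(question)
    ]
    return "\n\n".join(blocks) + f"\n\nQuestion: {question}"

if __name__ == "__main__":
    # Assumes the demo files above exist beside this script.
    # The assembled string is what gets sent to the model as grounded context.
    print(build_context("How much budget is left for the Q1 roadmap?"))
```

    The key design point is that nothing is embedded or fuzzy-matched: the router either finds a file in the menu or it doesn’t, which keeps retrieval auditable.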

    The Business Outcome

    I can ask a specific question about a project I haven’t touched in three weeks, and the system instantly summons the correct documentation. It guarantees that every answer is grounded in the latest data, not my memory.

    [!tip] On Efficiency vs. Accuracy
    This method burns more tokens than a standard chat. I don’t care.

    In my opinion, the cost of a hallucination (wasted hours, inaccurate responses) is infinitely higher than the cost of tokens (pennies). I optimized Helios for accuracy, not token savings.

    I would rather pay $0.50 for a conversation that is 100% accurate than $0.05 for one that risks hallucinating constraints.

    Simulating a Boardroom to Stress-Test Bad Ideas

    Solving the memory problem was Step 1. But I quickly ran into a second, more subtle problem.

    Standard LLMs are designed to be helpful assistants. They are agreeable. If I pitch a mediocre idea, they say, “Great approach!”

    In the real world, agreement is dangerous. Agreement erodes value. A senior leader points out security flaws. A CFO asks about ROI. Friction is where the bad ideas get filtered out.

    The Business Requirement

    As a solo operator, I must think like a full team. To build enterprise-grade systems, I need a “Critic” to tear my ideas apart and an “Academic” to structure them.

    The Technical Fix

    I implemented Persona Switching.

    I built the system so I can swap the mindset of the machine mid-conversation without wiping its memory. Because the Manifest (described above) handles the context, I can switch personalities and the new persona still has access to all the same files.

    Rather than building one “super-assistant,” I designed a roster of specialists—each with their own cognitive framework and constraints.

    The Helios Persona dropdown, showing a list of available AI specialists including Head of Product, Head of Sales, Lead Designer, PR Coach, Senior Engineer, Social Media Expert, The Arbiter, and The Critic.
    The full ‘Boardroom.’ Each persona is a separate text file with custom instructions that define how it thinks, what it prioritizes, and what it’s allowed to critique. I can swap between them mid-conversation without losing context.
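
    Very little machinery is needed for this. The sketch below is my own simplified illustration, not the Helios source: the persona file paths and the `Boardroom` class are hypothetical, but they show the core trick, which is that only the system instruction changes while the message history is shared.

```python
from pathlib import Path

# Hypothetical persona files: each is plain text holding that specialist's instructions.
PERSONAS = {
    "Social Media Expert": Path("personas/social_media_expert.txt"),
    "The Critic": Path("personas/the_critic.txt"),
}

class Boardroom:
    """One shared conversation; swapping personas never clears the history."""

    def __init__(self, persona: str = "Social Media Expert") -> None:
        self.persona = persona
        self.history: list[dict] = []  # shared across every persona swap

    def switch(self, persona: str) -> None:
        # Only the active system instruction changes; self.history is untouched,
        # so the new specialist still sees everything said so far.
        self.persona = persona

    def build_messages(self, user_message: str) -> list[dict]:
        """Assemble the persona's instructions plus the full shared history."""
        system = PERSONAS[self.persona].read_text(encoding="utf-8")
        self.history.append({"role": "user", "content": user_message})
        return [{"role": "system", "content": system}, *self.history]

# Usage (assuming the persona files above exist on disk):
#   room = Boardroom()
#   room.build_messages("Draft a LinkedIn announcement for the report.")
#   room.switch("The Critic")
#   room.build_messages("Review the draft above. How could this be better?")
```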

    Here is an example workflow I use to enforce our strict brand standards:

    Step 1: The Setup

    I load the Social Media Expert persona and give it a standard prompt: “Draft a LinkedIn announcement for the ‘State of Async 2026’ report. Focus on the metric that teams using Orbit cut meetings by 40%. Keep it professional.”

    It produces a polished, standard announcement. For most companies, this would likely pass as a “good” result as-is.

    The Social Media Expert persona generating a standard draft.
    The ‘Social Media Expert’ persona produces a competent, safe, and professional draft using the correct guideline files.

    Step 2: The Audit

    This is where standard workflows stop—and where Helios begins. I don’t want “safe.” I want “Orbit.”

    In the same chat window, I switch the persona to The Critic. I don’t need to re-upload the file or paste the text again. I simply ask: “Review the draft above produced by the social media expert. How could this be better?”

    Because Helios maintains a continuous memory stream across persona swaps, The Critic immediately tears the previous draft apart against our Social_Media_Strategy.json rules.

    The Critic persona auditing the previous draft.
    The ‘Boardroom’ feature in action. I switch to ‘The Critic’—without losing context—and force the AI to stress-test the copy. It correctly flags that the first draft violates our ‘Anti-Hype’ mandate and alienates our core persona (Sarah).

    I am essentially orchestrating a synthetic debate. I force the system to stress-test itself rather than just asking for an answer.


    The “Safety-First” Editor

    We have the Shared Brain and the Boardroom. But we still face the bottleneck that kills most projects: Execution.

    Previously, after a good strategy session with the AI, I’d have to perform the manual labor myself: find the specific JSON file, scroll to line 400, copy the new text from the chat, paste it in, and format it. Or upload each JSON one-by-one to the AI and ask it to make the updates itself.

    None of that is impossible, but the sheer volume of files makes it a tedious, error-prone process. And if you also need to update the files referenced by numerous personas on top of that? A waste of time at best, a liability at worst.

    The Business Requirement

    I needed the AI to have the power to update the source of truth as we worked without me leaving the console.

    The Technical Fix

    I built the Surgical Editor (Wizard Mode).

    Many AI tools are reckless. I’ve seen generic agents accidentally delete important chunks (sometimes nearly an entire file) while trying to update one section. To guard against that, I installed a safety protocol.

    For example, immediately after The Critic (in the previous step) pointed out that our messaging failed to address our target customer profiles’ fear of “Surveillance,” I didn’t have to manually hunt down the file. I simply switched modes and told Helios:

    Update Ideal_Customer_Profile.json accordingly. Change the ‘Value Proposition’ to be more specific that Orbit provides ‘Visibility without Surveillance.’

    Helios then executed the safety protocol:

    1. The Integrity Check: The AI reads the entire file and rewrites the full document with the new changes incorporated. This guarantees syntax integrity.
    2. The Reality Check: I act as the ‘Editor-in-Chief.’ I review the Diff View (green lines for new ideas, red lines for cuts). If the logic holds, I click ‘Accept’ and the actual file on my hard drive updates instantly.

    The Surgical Editor showing a side-by-side 'Diff View' of a file update.
    This is the ‘Surgical Editor.’ Instead of blindly overwriting my files, Helios proposes the change (Green) against the original (Red). I only click ‘Accept’ if I approve. This prevents the AI from accidentally deleting important data.

    It’s the same logic as approving a pull request, applied to business strategy. Helios only updates one file at a time (for precision), but it can work through as many files in one sitting as necessary. It also lets me give feedback if I need the AI to handle anything differently, or to skip a specific file’s edits altogether.
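
    For illustration, here is roughly what that propose, diff, and approve loop could look like in code. This is a minimal sketch under my own assumptions (the `propose_update` helper and the console prompt are stand-ins for the Helios UI), not the actual implementation.

```python
import difflib
import json
from pathlib import Path

def propose_update(path: Path, proposed_text: str) -> None:
    """Show the model's full rewrite as a diff and only write on explicit approval."""
    # Integrity check: the proposal must still parse as valid JSON before it is shown.
    json.loads(proposed_text)

    original = path.read_text(encoding="utf-8")
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        proposed_text.splitlines(keepends=True),
        fromfile=f"{path.name} (current)",
        tofile=f"{path.name} (proposed)",
    )
    print("".join(diff))

    # Reality check: nothing touches the disk until a human explicitly accepts.
    if input("Accept this change? [y/N] ").strip().lower() == "y":
        path.write_text(proposed_text, encoding="utf-8")
        print(f"Updated {path}")
    else:
        print("Rejected; file left untouched.")
```

    In the real dashboard the diff is rendered as the green/red ‘Diff View’ and the approval is a button click, but the contract is the same: the model proposes a complete new version, and a human signs off before anything is written.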

    [!warning] The “Time Machine” Safety Layer

    To be extra careful, I added a Granular Version Control system directly into the UI.

    It’s not merely an “Undo” button but rather a Snapshot & Revert system.

    • Snapshots: Before a major change, I click “Save” to name a restore point (e.g., “Pre-Q3 Strategy Update”). This creates a verifiable Git commit.

    • Revert: If I realize I made a mistake three turns ago, I click “Revert.” A modal pops up listing my recent git commits. I simply select the version I want, and the system restores the entire folder on my local hard drive to that exact moment in time.
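
    Under the hood this can be a very thin wrapper around Git. The sketch below is a rough illustration under my assumptions (the `PROJECT_DIR` folder name and the function names are hypothetical, and it assumes the strategy folder is already a Git repository), not the exact Helios code.

```python
import subprocess

PROJECT_DIR = "strategy_docs"  # hypothetical folder holding the JSON blueprints

def git(*args: str) -> str:
    """Run a git command inside the project folder and return its output."""
    result = subprocess.run(
        ["git", *args], cwd=PROJECT_DIR, check=True, capture_output=True, text=True
    )
    return result.stdout

def snapshot(label: str) -> None:
    """'Save': stage everything and create a named restore point (a Git commit)."""
    git("add", "-A")
    git("commit", "-m", label)

def list_snapshots(limit: int = 10) -> str:
    """Feed the 'Revert' modal with the most recent commits."""
    return git("log", "--oneline", "-n", str(limit))

def revert_to(commit: str) -> None:
    """Restore every tracked file in the folder to the chosen snapshot."""
    git("checkout", commit, "--", ".")
```

    In practice that means calling something like snapshot("Pre-Q3 Strategy Update") before a big change, and revert_to() with the chosen commit hash if the change turns out to be a mistake.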

    The Version Control modal allowing a rollback to previous states.
    Safety is paramount. Because Helios integrates with Git, every change creates a restore point. If I (or the AI) make a mistake, I can simply open the ‘Time Machine’ and revert the entire folder to its state at the last snapshot.

    How This Applies to a Growth Team

    To be clear: Helios is currently a “Single-Player” tool.

    I built it to optimize my specific workflow as a solo operator, which means it currently lacks the multi-user safeguards a larger team would require. However, it serves as a functional Proof of Concept for a better way for organizations to work.

    The core architecture—separating a “Source of Truth” (The Manifest) from the reasoning engine—scales well beyond a single user. If you moved these files from a local folder to a secure, shared cloud, the principle of “Context Persistence” would become a force multiplier for the entire company.

    Eliminating the “Misalignment Tax”

    Imagine if your Sales, Marketing, and Dev teams could all query a secure, internal model that references the exact same “Q4 Strategy” document.

    If the Marketing Lead updates the Pricing Tier in the morning, the Sales Rep’s AI knows about the new price five seconds later. No memos to miss. No Slack threads to ignore. The source of truth propagates instantly across the organization.

    Operational Velocity

    When I start a task now, I never start from zero. I load the “Project Manifest” and the system immediately knows the goal, the constraints, and the history. This saves 15 minutes of context-setting per task. Scaled across a 10-person team, that compounds into hundreds of hours of saved engineering and strategy time annually.
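
    As a rough back-of-envelope check (my assumption: one such task per person per working day, roughly 250 working days a year): 15 minutes × 10 people × 250 days ≈ 625 hours annually, consistent with the “hundreds of hours” estimate.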

    Future-Facing Questions

    [!faq]- Is this safe to use on a company codebase?

    It depends on your company’s risk tolerance. But here’s my thinking:

    1. The files live on your laptop. Unlike most SaaS tools where you upload your strategy to their cloud servers, Helios keeps your files physically on your computer. You retain full custody.

    2. You control the “Menu.” The AI cannot snoop through your hard drive. It can only read the specific files you give it access to; if a file isn’t in the Manifest, the AI can’t see it.

    3. Data Privacy. When you ask a question, the system sends only the relevant text to the AI provider (in my case, Gemini) to generate the answer. If your company already allows you to use ChatGPT or Gemini for work, this is no different.

    [!faq]- Why build this instead of just using Custom GPTs?

    Custom GPTs are powerful, but they are siloed. They live inside OpenAI’s walled garden. They cannot read or write to my local computer, and they cannot “talk” to each other easily. I needed an environment where the “Strategist” persona could update the “Builder’s” files directly. I find this far superior, personally.

    [!faq]- Could I use this for coding?

    Technically, yes. To Helios, a Python script is no different than a blog post draft or a pricing table. It reads, updates, and writes whatever files it must.

    But to be brutally honest: I don’t use it to replace my IDE.

    I still use tools like Cursor or Windsurf for the actual “bricklaying”—the syntax highlighting, the multi-file refactoring, and the debugging. Those tools are purpose-built for speed.

    I use Helios for the thinking: architecting the system, updating the documentation, and stress-testing the logic with the “Critic” persona before I open my IDE.

    • Cursor updates the code.
    • Helios updates the reasoning and the source of truth (the documentation) that guides the code.

    When I tried to build complex features in Cursor alone, I found my code would evolve but my documentation would rot. Helios makes it possible for me to update the blueprints first.


    By Justin Borge

    Marketing Growth Engineer. I design integrated systems (analytics, content, AI) to solve marketing chaos. This website is my 'backstage pass,' documenting what I'm working on, implementing, and learning.