I come from an SEO background. For 8 years I ran an SEO and Content Marketing agency.
I’ve done enough technical SEO audits and cleaned up enough messy internal link architectures to know that, over time, link rot impacts a LOT of teams.
If you publish content for years without a plan for managing and maintaining your internal links, you’ll inevitably create dead ends, orphaned pages, and broken links.
Observationally, this is all too common. For years, I watched teams treat SEO audits as a reactive cleanup phase. They would publish content for years, relying on a spreadsheet to track titles and URLs, only realizing later that they had created a structural mess with no plan for their internal linking or for the 404 errors piling up.
By the time it became a priority, they’d hire someone like me to spend weeks untangling it. Lots of SEOs depend on a tool called Screaming Frog to crawl a site and surface technical SEO issues, but even then… it just spits out a long, tedious to-do list of links that nobody wants to take responsibility for.
And the work isn’t just repairing broken links. It’s giving links purpose: ensuring files link up and down the hierarchy logically, and showing Googlebot and other crawlers the full range of your site’s topical authority (incredibly important these days, as SEO gets harder and harder).
This kind of work is expensive, exhausting, and frankly, preventable.
When I started designing the content system for GetViajo.com, I decided to solve this problem before it started. I wanted to connect the markdown files I was writing directly to a system that visualized the structural health of the site’s links in real-time.
I had used Obsidian for personal PKM notes for years, so the utility of its graph view was always clear.
Drawing the connection between its ability to visualize relationships and the practical need to manage topic clusters always made sense to me, and I was excited for the opportunity to use this for my own projects.
My goal was simple: maintaining proper internal linking should never be a headache, even as my content library grows to span hundreds of articles.
Smart internal linking should be a feature of the writing environment itself, ensuring my topic clusters form a true “mesh” system, not a mere hub-and-spoke.
The traditional SEO hub-and-spoke model (left) vs. the more robust mesh system (right). Goals.
Here is the headless CMS system I built to turn my writing environment into a live dashboard for operations (which makes this all possible).
Obsidian’s Graph as a Link Visualization Tool
Out of the box, Obsidian lets you view your interconnected files in a graph view, showing each file’s relationships to the others via wikilinks. As a visual learner and an SEO, I immediately saw the utility: I could see (and plan) topic clusters as part of the writing process rather than as an after-the-fact audit.
My system as of December 2025. Each line a link. Each node a file. My content forms an ecosystem where SEO crawlers never hit a dead end.
This visualization functions as an operational tool. Seeing the structure in real time gives me immediate feedback, before I push to production, on whether a file is properly anchored to a pillar hub and whether the spokes are interconnected with one another, ensuring that no piece of content is a dead end.
A zoomed-in view of a single cluster, where the central hub successfully supports several related, interconnected spokes.
Operational Dashboards for Structural Health
While visualizing the graph is satisfying, I also wanted something more practical: the site’s vital signs, visible directly in my CMS’s sidebar.
To achieve this, I use an Obsidian plugin called Dataview to query my content files like a database and surface the results in easy-to-read dashboards.
Dashboard 1: For Eliminating Orphaned Content
An “Orphaned” page is a file with zero inbound links. This means that if Googlebot or another crawler scanned your site, it wouldn’t be able to find it.
These are wasted assets that search engines struggle to index properly. In a traditional workflow, they pile up unnoticed unless you run a premium SEO tool like Screaming Frog (and my opinions on those are iffy, tbh).
My favorite view in my entire system is a list that, right now, has nothing on it.
An empty list in this dashboard proves that every single asset on my site is connected and discoverable by a user.
This dashboard scans my entire library for any file that is disconnected from the graph. If a file appears here, I simply add a link and fix it immediately (at which point it automatically drops off the list).
I do not need a paid crawler to tell me I have a problem; Obsidian tells me before the file ever gets pushed to production.
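Conceptually, the orphan check is simple: a note is orphaned if no other note links to it. Here is a minimal, self-contained JavaScript sketch of that logic (the note names and the `outlinks` map are made up for illustration; in practice, Dataview exposes this metadata for you):

```javascript
// Illustrative sketch: an orphan is any note with zero inbound links.
// "outlinks" maps each note to the notes it links out to.
const outlinks = new Map([
  ['hub-japan', ['tokyo-guide', 'kyoto-guide']],
  ['tokyo-guide', ['hub-japan']],
  ['kyoto-guide', ['hub-japan', 'tokyo-guide']],
  ['lonely-note', ['hub-japan']], // links out, but nothing links back to it
]);

function findOrphans(outlinks) {
  // Collect every note that receives at least one inbound link.
  const hasInbound = new Set();
  for (const targets of outlinks.values()) {
    for (const target of targets) hasInbound.add(target);
  }
  // Orphans are notes never appearing as a link target.
  return [...outlinks.keys()].filter((note) => !hasInbound.has(note));
}

console.log(findOrphans(outlinks)); // ['lonely-note']
```

The goal of the dashboard is for this list to stay empty forever.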
Dashboard 2: For Catching Dead Ends
For good UX and SEO alike, every page on your site must lead the user to a next step.
A “Dead End” is a page that might be linked TO, but which has zero outbound links of its own. This kills both the crawler’s journey and the user experience, increasing your site’s bounce rate
(i.e., without a link to click, the user is more likely to leave the site, and the rate of that happening is a known ranking metric).
This real-time audit ensures that every article acts as a bridge to another part of the system, enforcing a continuous user journey.
👆 At the moment, the file with the fewest outbound links still has 2.
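The dead-end check is the mirror image of the orphan check: instead of zero inbound links, we look for zero outbound links. A minimal sketch with made-up note names:

```javascript
// Illustrative sketch: a dead end is a note with zero outbound links,
// even if other notes link to it.
const outlinks = new Map([
  ['hub-japan', ['tokyo-guide', 'terminal-page']],
  ['tokyo-guide', ['hub-japan', 'terminal-page']],
  ['terminal-page', []], // linked to twice, but links out to nothing
]);

const deadEnds = [...outlinks]
  .filter(([, targets]) => targets.length === 0)
  .map(([note]) => note);

console.log(deadEnds); // ['terminal-page']
```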
Dashboard 3: Inbound Link Validation + Build Time Resolution
To make this system robust, I enforce a strict “one link” rule: every note must contain at least one contextual body link to another note (i.e., not a link from the footer or sidebar).
[!tip] Wikilinks for the win
Another cool thing about Obsidian is its native wikilinks (double brackets: [[ ]]). This is how links are handled across the program, as opposed to standard URLs.
With standard URLs, the moment a URL changes, every link pointing to it breaks.
An Obsidian wikilink, on the other hand, represents the file itself, not its URL, which decouples the content from its final web address. Rename a file and every reference to it updates across the site. You’ll never worry about broken internal links again.
This inbound link density keeps the site’s topical authority intact by preventing isolated “island” content.
Having graduated from WordPress to a static framework called Astro, I find this a huge benefit. When the site is built, Astro automatically resolves these internal wikilinks into clean, SEO-friendly URLs based on the category and slug defined in each file.
This allows me to reorganize categories or rename files in Obsidian without breaking a single link.
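Conceptually, the build step does something like the sketch below: look up each wikilink’s target in a title-to-frontmatter map and emit the final URL. The frontmatter map, note titles, and URL shape are all illustrative assumptions, not my production code:

```javascript
// Illustrative title -> frontmatter map (in Astro this would come
// from the content collection, not a hardcoded object).
const frontmatter = {
  'Tokyo Guide': { category: 'japan', slug: 'tokyo-guide' },
  'Packing List': { category: 'tips', slug: 'packing-list' },
};

function resolveWikilinks(markdown) {
  // Match [[Title]] or [[Title|label]] and rewrite to a markdown link.
  return markdown.replace(/\[\[([^\]|]+)(?:\|([^\]]+))?\]\]/g, (_, title, label) => {
    const meta = frontmatter[title.trim()];
    // A broken wikilink fails the build, never the reader.
    if (!meta) throw new Error(`Broken wikilink: ${title}`);
    return `[${label ?? title}](/${meta.category}/${meta.slug}/)`;
  });
}

console.log(resolveWikilinks('Read the [[Tokyo Guide]] and my [[Packing List|gear list]].'));
// Read the [Tokyo Guide](/japan/tokyo-guide/) and my [gear list](/tips/packing-list/).
```

The key design choice is that an unresolvable wikilink throws at build time, so a broken link can never reach production.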
[!tip] Still Use Redirects
Don’t forget: although you can depend on Astro + Obsidian to handle your links functionally, you still need to set up 301 redirects if your URLs have been indexed and subsequently changed, so as not to upset the Google overlords.
Dashboard 4: Identify Which Files Need Updating
Tracking the last updated date directly in my sidebar prevents content from slowly rotting over time.
Finally (and this is less about internal linking structure), keeping your content fresh is a big SEO win. Lots of companies use old-fashioned spreadsheets to track when their material was last updated, or don’t track it at all.
This Dataview system lets me see, at a glance, when each of my files was last updated.
See something getting old? Click into it, add some useful tidbits, update the ‘lastUpdated’ data in the metadata, and publish. Voila. Updated.
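The underlying check is trivial: compare each file’s `lastUpdated` frontmatter against a cutoff date. A made-up sketch (the field name, titles, and dates are illustrative):

```javascript
// Illustrative freshness check: flag anything whose lastUpdated
// frontmatter date is older than a chosen cutoff.
const files = [
  { title: 'Tokyo Guide', lastUpdated: '2025-11-20' },
  { title: 'Old Visa Post', lastUpdated: '2024-03-02' },
];

const cutoff = new Date('2025-06-01');
const stale = files.filter((f) => new Date(f.lastUpdated) < cutoff);

console.log(stale.map((f) => f.title)); // ['Old Visa Post']
```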
Gaining Velocity Through Visibility
When you remove the fear of breaking things, you move faster.
I’m confident that because I invested the time to set this up early, the system is built for the long haul. It effortlessly turns what used to be a reactive headache into a simple, proactive workflow.
The return is that I can publish articles endlessly into the future without ever worrying about infrastructure debt or crawlability issues. I’ll never need to spend time auditing broken links or dead ends, and I’ll sleep better at night knowing I’ve done everything I can for my topic cluster meshing system.
This is the kind of thing that might take a bit of time to set up right, but once it’s done, it’ll be smooth sailing from there on out. ⛵
How this applies to a growth team
Technical debt eventually slows down a marketing team, and clean-up technical SEO specialists can cost a fortune. You can apply these principles regardless of your specific tech stack.
- Bring data to the writer: Don’t lock your content’s data in a separate tool that only a specialist can see. Put the vital signs directly where the work is being done.
- Prevent debt by design: Create a system that makes broken links hard to create and proper clustering impossible to ignore. Use build-time resolution (if possible) so your links don’t break when you reorganize your site.
Future facing questions
[!faq]- Is this overkill for a small site? I don’t think so. Especially if you’re planning on investing in content in the long run. Over time, links can become big messes if you don’t have a plan for how they will be maintained, and if you’re not clustering them, you’re leaving money on the table. It is easier to build a logical system when you have 10 articles than when you have 1000. Enforcing these rules from day one ensures the site scales without accumulating debt.
[!faq]- Does this require a developer? The setup for resolving links during the build process does require engineering effort. However, the logic behind it is a universal strategy. Any team can adopt the one-link rule and a hub-and-spoke model to improve their topical authority.
If you like stuff like this, you might be interested in the other systems that power my getviajo.com engine. You can read about how I’ve ensured marketing data integrity or how I am going about solving AI hallucination.
Have questions? Reach out anytime.