

AI Marketing Team for Agencies: No Budget Required

No headcount. No retainer. Here's how I compete anyway.

How a solo founder built an AI marketing operation using Obsidian and GitHub Copilot CLI, covering content, SEO, outreach, and CMS management for under $50 a month.

Nik · Founder, Sagely

Forget Claude Max. You're burning money as a solo founder. You can get the same results with GitHub Copilot for half the price. I'm not affiliated with GitHub, but I do like maximizing value.

A content writer, an SEO analyst, and an outreach specialist walk into a bar. The tab comes to $190,000–$225,000 a year, not counting benefits. That's the real cost of a three-person marketing team in the US. I'm a solo founder. I've never had that team, and I've never had the budget for one. The best I can do right now is hire contractors when the budget allows. Otherwise, it's just me executing on every angle of the business.

At first, I tried to do it all manually. Freelancers for bits of it, five disconnected tools for the rest, context lost every other week. Creating training docs to onboard contractors took enormous time, and nothing compounded.

What I built instead is an AI marketing team for agencies: a system, not a collection of tools. Obsidian as the brain, GitHub Copilot CLI as the execution layer, and a set of composable skills that give the agent specific jobs to do. Together, they cover what a small marketing team would cover: content, SEO, outreach, CMS management. No hiring, no management overhead, no $200K salary line.

The Stack: GitHub Copilot for Marketing and Obsidian

The whole thing runs on three components, plus an optional fourth for keyword data.

Obsidian is the vault. Every brief, strategy doc, daily note, research file, article draft, and keyword list lives here as a plain Markdown file. No cloud lock-in, no proprietary format. Just files on disk that Copilot can read, write, and reason over.

GitHub Copilot CLI is the agent. It runs in the terminal, has access to the vault's file system, and connects to external tools via MCP servers. Starting at $10 a month, it's the entire execution layer. It reads context from the vault, drafts content, runs audits, and pushes to Webflow, all in a single terminal session.

Skills are modular Markdown files that specialize the agent for specific jobs. There's one for cold email, one for CMS audits, one for copywriting, one for SEO briefs. Each skill has its own instructions and context. When I need something done, I invoke the right skill rather than starting from scratch. Building specific skills will save you days of work and keep you sane along the way.

Additional cost will likely come from keyword-research APIs like Semrush, or whatever tool feeds your content research. I use Keywords Everywhere. It's cheap and accurate. Google Trends and Google Keyword Planner are free if budget is very tight. Otherwise, Answer the Public and manual Google search will get you far enough.

Total cost of this setup: well under $50 a month. Obsidian is free for personal use. Copilot Pro+ is $39 a month. Everything else is already on my computer and in my head.

  • Obsidian Vault: persistent memory
  • Copilot CLI: execution layer
  • Skills: specialist agents

Why GitHub Copilot CLI?

You could do this with OpenCode or Claude Code. But Copilot's request-based pricing model is the real differentiator. If you plan carefully, chain skills together, and work spec-first, you'll never burn through your allocation. One subscription also gives you access to Claude Haiku, Sonnet, and Opus, plus OpenAI's frontier models. That's multiple model families for the price of one tool. It's genuinely insane value right now.

The Mental Model

Stop thinking of AI as a writing tool. Think of it as a team member with a specific job description. Once you've mastered the team member model, start thinking about orchestrating entire teams of AI. But start small: one CLI session, one team member.

The Obsidian vault is the office. It holds all the documents any new hire would need: brand voice guidelines, content strategy, competitive research, articles in progress, and the history of what's worked. A new employee would spend two weeks reading all of this. Copilot reads it at the start of every session.

You are the CEO. You delegate tasks, review output, make judgment calls, and handle the things AI genuinely can't do. Copilot handles execution. It's not magic. It's a fast, consistent worker who needs clear direction and honest feedback.

What separates this from ChatGPT: the context is persistent. It lives in files, not in chat history. Every session starts with everything the agent needs already on disk, locally. grep alone can save you millions of tokens compared to a RAG-bloated vector DB that takes forever just to search for context. Markdown files are light, connected, and token-efficient.
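To make that concrete, here's what "grep as retrieval" looks like in practice. The vault paths and note contents below are invented for illustration:

```shell
# Build a tiny illustrative vault (paths and contents are hypothetical)
mkdir -p vault/marketing/research
echo "Target keyword: client portal software for agencies" > vault/marketing/research/client-portal-research.md
echo "Unrelated note about invoicing" > vault/marketing/invoicing.md

# One grep pulls only the files that matter into the agent's context
grep -rl "client portal" vault/
# prints: vault/marketing/research/client-portal-research.md
```

The agent loads the two or three matching files instead of embedding and ranking the whole vault.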

The Memory System: Obsidian Vault Workflow for Persistent Context

This is the piece that makes everything else work. Every AI chat tool has the same flaw: it forgets. You spend fifteen minutes re-explaining your brand voice, your ICP, your content pillars, your editorial rules. Then the session ends, and you do it again tomorrow. That context problem is a big reason so many people say AI hasn't reduced their workload.

The Obsidian vault solves this with two files.

Daily Notes (YYYY-MM-DD.md) are the raw log. Every session, the agent appends what it did: tasks completed, decisions made, files created, links added. Each session survives as a simple flat Markdown file. When Copilot starts a new session, it reads today's note and yesterday's note before doing anything else. That's the short-term memory.

MEMORY.md is the long-term memory. Curated, not comprehensive. Over time, the agent distills the daily notes down to what actually matters: product positioning, things we've tried and abandoned, content that's performing, editorial rules, ICP details. Long-term memory gets updated weekly, then pruned monthly.

The session ritual is simple:

  • Start: read MEMORY.md and yesterday's daily note
  • End: append what happened to today's note; if something significant changed, update MEMORY.md
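The whole ritual reduces to a few file operations. A minimal sketch, with my folder layout and an invented log line:

```shell
mkdir -p vault/daily
touch vault/MEMORY.md
TODAY=$(date +%F)

# Start of session: load long-term memory plus the recent daily notes
cat vault/MEMORY.md vault/daily/*.md 2>/dev/null || true

# End of session: append what happened to today's note
echo "- Drafted client-portal article; linked it to the hub" >> vault/daily/"$TODAY".md
```

The agent runs the equivalent of this itself once it's in the instructions; the point is that "memory" is nothing more exotic than reading and appending files.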

The vault becomes institutional memory. The agent isn't starting from scratch each session. It's picking up where it left off. One thing to watch: context windows have limits. If you let daily notes pile up without distilling them, the agent eventually can't load all the relevant context. The fix is periodic reviews: pull key insights from a week of daily notes into MEMORY.md, then archive the raw logs.
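The weekly review is just as mechanical. This sketch assumes a flagging convention ("Insight:" lines mark what's worth keeping), which is my invention, not a requirement:

```shell
mkdir -p vault/daily/archive
touch vault/MEMORY.md
printf -- "- Insight: listicle intros underperform\n" > vault/daily/2026-04-07.md
printf -- "- Routine: published weekly digest\n"      > vault/daily/2026-04-08.md

# Distill: promote flagged lines from the week's logs into long-term memory
grep -h "Insight:" vault/daily/*.md >> vault/MEMORY.md

# Archive the raw logs so future sessions load less context
mv vault/daily/2026-04-07.md vault/daily/2026-04-08.md vault/daily/archive/
```

MEMORY.md stays small, and the raw logs stay queryable in the archive if you ever need them.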

For the first two weeks, keep it simple. Get into the habit of logging sessions with Copilot. The magic compounds at around day 10–14 when there's enough data to see patterns. LLMs are pattern machines. They'll connect dots in ways your brain can't hold after two weeks of dense work.

Daily Notes (short-term, session log) distill into MEMORY.md (long-term, curated weekly). A daily note like 2026-04-09.md distills into entries like:

  • ICP: Agency founders, 5–50 people
  • Tone: Direct, no filler, first-person
  • KW: client portal software for agencies
  • No-go: em dashes, "leverage", "seamlessly"
  • Stack: Obsidian + Copilot CLI + Skills

The Workflow

A typical session looks like this.

I open the terminal. Copilot loads context from the vault. I brief a task in plain English: "Write a 2,000-word article on 'agency client portal software'. Use the research in /marketing/research/client-portal-research.md. Target keyword is 'client portal software for agencies'. Save it to the cluster folder."

The agent reads the research file, checks copilot-instructions.md for voice and positioning rules, invokes the relevant skill, and writes the draft. It saves the file to the correct folder, adds the right frontmatter, appends a Related section with wikilinks to the hub article and the research doc. It logs what it did to today's daily note.

I review the draft. If something's off, I redirect in plain English: "The intro is too generic. Open with the specific problem of scattered client feedback, not a market overview." The agent rewrites. I approve.

The loop: Context load › Brief › Execute › Review › Close. Each step has a clear handoff point. I never lose track of where something is because every output is a file in the vault with a filename and frontmatter I can query.

The Session Loop:

  1. Load context: MEMORY.md + yesterday's note (agent)
  2. Brief: plain-English task + file refs (human)
  3. Execute: agent runs the skill (agent)
  4. Review: human reads, redirects if needed (human)
  5. Close: daily note updated, file saved (agent)

Skills

Skills are what make the agent more than a general-purpose writer.

A skill is a set of instructions that specializes the agent for a specific function. I've built a few I can chain together to accomplish more in one session without re-prompting.

The agency-writer skill knows the brand voice: direct, no hollow superlatives, short paragraphs, no em dashes. The cms-auditor skill knows how to scan a CMS for copy patterns, identify problems, generate a fix list, implement changes in batch, and wait for approval before publishing. The cold-email skill knows the B2B outreach formula: specific subject line, one-sentence hook, clear ask, short follow-up sequence.

Each skill is like a job description. When I invoke cms-auditor, I'm not prompting a general AI. I'm activating a specialist who has read the brief, knows the tools, and has a defined process.

New skills can be added as the operation grows. Skills also prevent drift. The agency-writer skill carries the voice rules. Every piece written through it inherits those rules without me re-entering them.

Stop prompting everything. Start generating skills that can run multiple workflows in one session. You can browse the skills community at skills.sh.
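The exact file format depends on your tool, but at its core a skill is just a Markdown job description the agent loads on demand. A minimal sketch; the frontmatter fields here are illustrative, not an official schema:

```shell
mkdir -p skills/agency-writer
cat > skills/agency-writer/SKILL.md <<'EOF'
---
name: agency-writer
description: Drafts articles in the house voice
---
## Rules
- Direct, first-person, no hollow superlatives
- Short paragraphs, no em dashes
- End every draft with a Related section of wikilinks
EOF
```

Because the rules live in the file rather than in your prompt, every invocation inherits them for free.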

What This AI Content Marketing System Produces

Here's what this setup has produced in practice.

SEO blog articles: briefed, drafted, formatted with correct frontmatter and wikilinks, published to Webflow CMS. A 2,000-word article from brief to draft takes about thirty minutes of my time, mostly on review and redirects.

CMS audits: one session cleaned 46 published blog posts. The cms-auditor skill scanned every post, identified overused phrases and copy clichés, generated a task list, implemented fixes in batch, and staged changes for my approval before publishing. That would have taken a human a full week.

Competitor and keyword research: the agent processes Semrush CSV exports, clusters keywords by intent, identifies content gaps, and writes a research doc I can brief from.
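The intent-clustering step can start as something as crude as pattern-matching on modifiers. The buckets and keyword list below are invented for illustration; a real pass would use the Semrush export columns:

```shell
cat > keywords.txt <<'EOF'
how to onboard agency clients
client portal software pricing
best client portal for agencies
what is a client portal
EOF

# Crude intent buckets: question words read as informational, buy signals as transactional
awk '
/^(how|what|why)/  { print "informational:", $0; next }
/pricing|buy|cost/ { print "transactional:", $0; next }
                   { print "commercial:", $0 }
' keywords.txt > buckets.txt
```

The agent does a smarter version of this, but a deterministic first pass like this keeps it from burning requests on the easy cases.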

Cold outreach sequences: ICP defined in the vault, copy briefed through the cold-email skill, sequence written and ready to load into an outreach tool.

Webflow metadata management: page titles, SEO descriptions, Open Graph data, updated across the site via the Webflow MCP connection. Bulk operations that used to take hours now run in a single terminal session.

The Verification Loop

The agent isn't fully autonomous. You can chain skills together to build a semi-autonomous workflow, but I still review the final output before anything is published.

Nuance is hard. Brand voice drift happens at the edges: a sentence that's technically correct but sounds like a press release, an analogy that doesn't fit the audience. The agent produces consistent first drafts, not final copy. Human eyes are still the last gate.

The QA pattern I use: agent drafts, I read and leave comments in plain language, agent implements feedback, I approve. For shorter pieces, it's one round. For longer or higher-stakes content, it might be two or three.

"Done" has a clear definition. The file is in the vault with correct frontmatter. The Related section has wikilinks to the hub and research docs. Today's daily note has a line noting what was completed. The CMS item is published or staged. If any of those four are missing, it's not done. The agent knows this because it's in the instructions.
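That definition is mechanical enough to check with a script. A sketch of the three local checks; paths, field names, and the draft content are my conventions, and the fourth check (CMS state) lives in Webflow, not on disk:

```shell
mkdir -p vault/marketing/clusters vault/daily
TODAY=$(date +%F)
f=vault/marketing/clusters/client-portal-software.md

# A draft that satisfies the checklist (content invented)
cat > "$f" <<'EOF'
---
title: Client Portal Software for Agencies
status: draft
---
Body text here.

## Related
- [[client-portal-hub]]
- [[client-portal-research]]
EOF
echo "- Completed: client-portal-software.md" >> vault/daily/"$TODAY".md

# Done = frontmatter + Related section + a line in today's daily note
grep -q '^title:' "$f" &&
  grep -q '^## Related' "$f" &&
  grep -q 'client-portal-software' vault/daily/"$TODAY".md &&
  echo "done"
# prints: done
```

Putting the checklist in the agent's instructions means it self-reports "not done" instead of declaring victory early.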

The verification loop also covers facts. AI hallucination is real. Any stat, tool name, or external claim in a draft needs a quick check before it goes live. I don't publish without reading the full piece.

What It Can't Do

Here's where it breaks down.

Context window limits: the more the vault grows, the more selective the agent has to be about what it loads. Memory hygiene solves this, but it requires periodic effort.

Tone drift in long-form content: on pieces over 2,500 words, the agent sometimes drifts toward a more generic, listicle-style register toward the end. The fix is breaking long articles into sections and reviewing each before continuing.

Relationship work: the agent can draft cold email sequences. It cannot maintain a relationship. Follow-up conversations, sales calls, partnerships, referrals: these require a human. The system handles volume and consistency. Judgment and trust still live with me.

What This AI Marketing Team for Agencies Covers

Most of the execution layer of a small agency's marketing automation stack. A content writer handling ten articles a month. An SEO analyst doing keyword research, clustering, and on-page optimization. A CMS manager handling metadata, publishing, and post-launch audits. An outreach coordinator drafting sequences and managing copy for cold campaigns.

That's four roles this system fills, without the hiring, the management overhead, or the salary line. Mid-range freelancers for those four functions, at twenty hours a week each, would cost $3,000–$5,000 a month. The stack I'm running costs under $50 a month.

Human Team (US salaries, no benefits included) vs. AI Stack (monthly subscription total).

To be clear: this isn't a pitch. It's a constraint. I'm a solo founder. Sagely, the client management platform I'm building for agencies, is still early. The marketing budget is real but limited. This system exists because it had to.

What it doesn't cover: strategic bets, community relationships, conference conversations, editorial judgment calls, and decisions about what to build and what to stop. Those are mine. The system handles the volume. I handle the direction.

Getting Started: Build Your Own AI Marketing Team for Agencies

You don't need a team before you start this. You just need the vault. Start there, and be patient with the setup.

Set up the vault first. The structure matters more than the content. You need a clear folder hierarchy: content by cluster, research separate from drafts, a dedicated folder for daily notes, a top-level MEMORY.md. The agent navigates this structure. A messy vault produces messy outputs.
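One possible hierarchy, as a sketch. The folder names are my convention, not a requirement of Obsidian or Copilot:

```shell
mkdir -p vault/marketing/research \
         vault/marketing/clusters \
         vault/daily \
         vault/skills
touch vault/MEMORY.md
```

Whatever layout you pick, keep it shallow and predictable; the agent navigates by path, and ambiguity costs you review cycles.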

Write MEMORY.md on day one. Document your ICP, your product positioning, your editorial rules, your brand voice, your content clusters, and the tools you use. This is the brief your agent reads every session. Spend thirty minutes getting it right. It will save hours of re-prompting.
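A starter MEMORY.md can be as short as this. The section names are mine, and the values below echo the examples earlier in this article; replace them with your own:

```shell
mkdir -p vault
cat > vault/MEMORY.md <<'EOF'
# MEMORY

## ICP
Agency founders, 5-50 people

## Voice
Direct, no filler, first-person

## No-go words
em dashes, "leverage", "seamlessly"

## Clusters
client portal software for agencies

## Stack
Obsidian + Copilot CLI + Skills
EOF
```

Thirty minutes here is the highest-leverage setup step in the whole system.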

Install Copilot CLI and connect it to your vault. Set the vault folder as the working directory. Make sure the agent can read and write files. Test it with a simple task: "Summarize the contents of MEMORY.md and tell me what's missing."

Write your first skill. Start with the one you need most. If you're an agency founder, it's probably a voice guide or a brief template. Define the format, the rules, the examples. Invoke it on a real piece of work. Iterate on the instructions until the output is publishable without changes.

Run one session end-to-end. Brief a real task, have the agent execute it, review the output, redirect if needed, close the session with a daily note. The first time through, it will feel slow. By the fifth time, you'll have a rhythm.

The system doesn't replace judgment. It handles the volume so you have time to use yours.

Sagely

The client management platform for agencies.

Branded client portal, structured approval workflows, no more scattered feedback across email and Slack. If you're running a digital, creative, or marketing agency and the "client communication is everywhere" problem sounds familiar, take a look.

getsagely.co →