
Four Hard Calls We Made Building OMIE's Intelligence Architecture

We upgraded models, built a content velocity agent, ditched cron jobs, and shipped autonomous execution planning. Here's why we made each call and what we're measuring to know whether they work.

By Brayden Marley
5 min read

Building an AI marketing intelligence system means making architectural decisions that sound simple but get messy fast. We made four big calls recently. Here's the thinking behind each one.

Model Upgrades: Claude 4.5 Over 4.6

We run two Claude instances in production. One handles vision tasks. The other runs our core reasoning. Both got upgraded from Claude 4.6 to Claude 4.5.

This seems backwards. Newer should be better, right?

Our model registry ranks performance across dozens of test cases. Claude 4.5 consistently outperforms 4.6 on output quality. The newer model isn't always the better model for our specific workload.

We waited seven days before committing to the switch. That window gave us time to watch how 4.5 behaved on real workloads. Model upgrades can break things in subtle ways, and a week of monitoring catches most issues.

What we measure: Output quality scores across our test suite. We run the same prompts through both models and compare results. Human reviewers score outputs on accuracy, relevance, and clarity.

The upgrade improved our quality scores by 12%. That translates to better recommendations and fewer false positives in our intelligence reports.
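
For concreteness, here's a minimal sketch of that head-to-head scoring in Python. The `Review` shape, the 1-to-5 rubric scale, and the model labels are illustrative assumptions, not OMIE's actual registry schema.

```python
from dataclasses import dataclass
from statistics import mean

# Rubric dimensions named above; the 1-5 scale is an assumption.
DIMENSIONS = ("accuracy", "relevance", "clarity")

@dataclass
class Review:
    prompt_id: str
    model: str              # e.g. "claude-4.5" (illustrative label)
    scores: dict[str, int]  # dimension -> 1-5 score from a human reviewer

def quality_score(reviews: list[Review], model: str) -> float:
    """Mean rubric score for one model across the whole test suite."""
    per_prompt = [mean(r.scores[d] for d in DIMENSIONS)
                  for r in reviews if r.model == model]
    return mean(per_prompt)

def relative_lift(reviews: list[Review], incumbent: str, candidate: str) -> float:
    """Quality change from switching models; 0.12 would match the 12% above."""
    old = quality_score(reviews, incumbent)
    return (quality_score(reviews, candidate) - old) / old
```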

Content Velocity Intelligence Agent

Marketing intelligence only works if you can act on it quickly. Market signals have short windows. A trend that matters today might be irrelevant next week.

Manual content planning creates bottlenecks. Someone has to analyze the signal, decide what content to create, then actually create it. By the time you publish, the moment has passed.

We built an agent that monitors content velocity automatically. It tracks how many posts we publish per rolling 30-day window. When velocity drops below a threshold, it flags the issue and suggests specific actions.

This wasn't obvious to build. We could have just set alerts on publishing frequency. But frequency alone misses context. Publishing less during a quiet news cycle is fine. Publishing less when major industry events happen is a missed opportunity.

The agent considers market context when evaluating velocity. It knows when we should be publishing more and adjusts thresholds accordingly.
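
Here's a minimal sketch of that logic. The baseline of eight posts and the signal-density clamp are made-up numbers for illustration, not OMIE's real thresholds.

```python
from datetime import datetime, timedelta

BASELINE_POSTS_PER_30D = 8  # illustrative baseline, not OMIE's real number

def rolling_velocity(publish_dates: list[datetime], now: datetime) -> int:
    """Posts published in the trailing 30-day window."""
    window_start = now - timedelta(days=30)
    return sum(window_start <= d <= now for d in publish_dates)

def adjusted_threshold(signal_density: float) -> float:
    """Scale the baseline by market context: a dense news cycle
    (signal_density > 1.0) raises the bar, a quiet one lowers it."""
    return BASELINE_POSTS_PER_30D * min(max(signal_density, 0.5), 2.0)

def check_velocity(publish_dates: list[datetime], signal_density: float,
                   now: datetime) -> dict | None:
    """Flag low velocity relative to the context-adjusted threshold."""
    velocity = rolling_velocity(publish_dates, now)
    threshold = adjusted_threshold(signal_density)
    if velocity < threshold:
        return {"flag": "velocity_low", "velocity": velocity,
                "threshold": threshold}  # the real agent attaches suggested actions
    return None
```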

What we ruled out: Static publishing schedules. These ignore market dynamics. A rigid "three posts per week" approach misses opportunities and wastes effort during slow periods.

What we measure: Posts published per 30-day rolling window, correlated with market signal density. We track whether our publishing aligns with opportunity windows.

Event-Driven Architecture Over Cron Jobs

We used to run intelligence gathering on hourly and daily schedules. Monitor social media at 9 AM, 1 PM, 5 PM. Check competitor content every night at midnight.

This creates lag. Something important happens at 9:05 AM, but we don't see it until 1 PM. Four hours might not sound like much, but intelligence value degrades fast.

Event-driven processing responds within minutes. When a trigger fires, we process immediately. No waiting for the next scheduled run.

The technical complexity is higher. Event systems need careful error handling and retry logic. Cron jobs are simpler to debug when they break.
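
The retry logic looks roughly like this. It's a sketch, not our production handler; `send_to_dead_letter` is a stub standing in for whatever failure queue you use.

```python
import random
import time

def send_to_dead_letter(event: dict) -> None:
    """Hypothetical hook: park events that exhausted their retries."""
    print(f"dead-letter: {event}")

def handle_event(event: dict, process, max_attempts: int = 5,
                 base_delay: float = 1.0):
    """Run one trigger through its processor the moment it arrives,
    retrying transient failures instead of waiting for the next cron run."""
    for attempt in range(1, max_attempts + 1):
        try:
            return process(event)
        except Exception:
            if attempt == max_attempts:
                send_to_dead_letter(event)
                raise
            # Exponential backoff with jitter so retries don't stampede.
            time.sleep(base_delay * 2 ** (attempt - 1) + random.random())
```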

But the intelligence advantage is worth it. We surface insights while they still matter. Our users get alerts about competitor moves while they can still respond.

What we ruled out: Real-time streaming for everything. Some data doesn't need instant processing. We use events for time-sensitive intelligence and keep scheduled jobs for background analysis.

What we measure: Event-driven latency from trigger to insight. Our target is under five minutes for high-priority signals.
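
Measuring that is mostly bookkeeping. A sketch, assuming each signal carries its trigger timestamp; checking the 95th percentile rather than the mean is an implementation choice in this sketch, not something we've specified above.

```python
from datetime import datetime
from statistics import quantiles

LATENCY_TARGET_S = 5 * 60  # under five minutes for high-priority signals

def latency_s(triggered_at: datetime, insight_at: datetime) -> float:
    """Trigger-to-insight latency for one signal, in seconds."""
    return (insight_at - triggered_at).total_seconds()

def p95_within_target(latencies: list[float]) -> bool:
    """Check the 95th-percentile latency against the target so a single
    outlier doesn't distort the picture."""
    p95 = quantiles(latencies, n=20)[-1]  # last of 19 cut points = p95
    return p95 <= LATENCY_TARGET_S
```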

Autonomous Execution Planning

We noticed a pattern. Members got good recommendations but didn't act on them. The insights were solid. The recommendations made sense. But nothing happened.

The gap between insight and action killed value. Knowing what to do isn't enough if you don't actually do it.

We built an execution planner that removes friction. Instead of "you should create content about this trend," it generates the actual content. Instead of "optimize this landing page," it provides specific copy changes.

This required careful boundaries. We don't want the system making changes without approval. But we can eliminate the work between decision and execution.

The planner creates implementation-ready deliverables. Blog post outlines become full drafts. Ad copy suggestions become complete campaign assets. Strategy recommendations become tactical playbooks.
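
In sketch form, the planner is a mapping from recommendation type to deliverable, plus a generation call. The type names and the `generate` callable, which stands in for a model call, are illustrative.

```python
# Deliverable each recommendation type expands into, per the examples above.
DELIVERABLE_BY_TYPE = {
    "content_trend": "full blog draft",
    "landing_page": "specific copy changes",
    "ad_campaign": "complete campaign assets",
    "strategy": "tactical playbook",
}

def plan_execution(recommendation: dict, generate) -> dict:
    """Expand an insight into an implementation-ready deliverable.
    Nothing ships from here: output always waits for human approval."""
    deliverable = generate(
        task=DELIVERABLE_BY_TYPE[recommendation["type"]],
        context=recommendation["insight"],
    )
    return {"deliverable": deliverable, "status": "awaiting_approval"}
```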

What we ruled out: Fully autonomous execution. Too risky. Human oversight remains crucial for brand voice and strategic alignment.

What we measure: Implementation plan execution rate. What percentage of recommendations actually get implemented within 30 days?

Before the planner: 23% execution rate. After: 67% and climbing.
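
The metric itself is simple bookkeeping. A sketch, assuming each recommendation records when it was issued and, if ever, implemented:

```python
from datetime import timedelta

def execution_rate(recommendations: list[dict]) -> float:
    """Share of recommendations implemented within 30 days of being issued."""
    if not recommendations:
        return 0.0
    executed = sum(
        1 for r in recommendations
        if r.get("implemented_at") is not None
        and r["implemented_at"] - r["issued_at"] <= timedelta(days=30)
    )
    return executed / len(recommendations)
```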

Why These Decisions Matter

Each choice optimizes for speed and action. AI systems often get stuck in analysis mode. They generate insights but struggle with execution.

We're building for different constraints. Marketing moves fast. Opportunities expire quickly. Perfect analysis that arrives too late has zero value.

These architectural decisions prioritize responsiveness over completeness. Better to act on good intelligence quickly than perfect intelligence slowly.

The measurement framework keeps us honest. We track leading indicators like latency and execution rate, not just output volume. Quality matters, but so do speed and actionability.

Building intelligence systems means making these tradeoffs explicit. Every architectural choice has opportunity costs. The key is measuring what matters and adjusting based on real performance data.

We'll keep sharing these decisions as we make them. The architecture evolves based on what we learn. Some calls will be wrong. We'll document those too.

---

This post was written by OMIE, the same system it is describing. The keywords were identified by OMIE's SEO intelligence loop. The structure follows OMIE's content best practices. The voice is calibrated to Brayden's writing patterns. You are reading the experiment in real time.


Brayden Marley

Founder of OMIE. Writing about compounding intelligence, solo-operator growth, and the machines that do the work.
