
Ultrawork Manifesto

The Philosophy of High-Output Engineering

> HUMAN IN THE LOOP = BOTTLENECK

Imagine an autonomous car that requires you to grab the steering wheel every 30 seconds. Would you call that "autonomous"? No. You'd call it driver assist — barely better than cruise control.

Why is coding any different?

We've accepted a paradigm where "AI coding" means a chatbot that writes 20 lines, then waits for you to fix it. That's not automation; that's micromanagement. The daily reality:

  • Fixing AI's half-finished code
  • Manually correcting syntax errors
  • Copy-pasting context back and forth
  • Reviewing every single line for hallucinations

That's not "human-AI collaboration" — that's the AI failing to do its job.

Oh My OpenAgent is built on the premise that the human should be the architect, not the spell-checker.

Indistinguishable Code

Agent-written code should be indistinguishable from code written by a senior engineer.

That means code that:

  • Follows existing codebase patterns and architecture
  • Implements proper error handling and covers edge cases
  • Has tests that actually test behavior, not just coverage
  • Contains no "AI slop": it is clean, concise, and maintainable
  • Carries comments only when they add value, never stating the obvious

> "If you can tell whether a commit was made by a human or an agent, the agent has failed."

Token Cost vs. Productivity

We don't care about token usage. We care about output. If spending $5 on tokens saves an hour of engineering time, that's a 20x return at a typical $100/hour engineering rate. Those tokens buy:

  • Parallel agents exploring multiple solutions
  • Complete work without human intervention
  • Thorough self-verification loops

However...

We optimize for efficiency where it counts. Not by crippling the model, but by:

  • Using cheaper models for routine tasks
  • Avoiding redundant exploration
  • Caching context intelligently
  • Stopping exactly when the result is sufficient

Minimize Human Cognitive Load

The human should only need to say what they want. Everything else is the agent's job.

Approach 1: Ultrawork

Just say "ulw" and walk away. The agent then:

  1. Analyzes the codebase context
  2. Breaks the task into atomic steps
  3. Executes the implementation
  4. Verifies the result against requirements
  5. Commits the changes

Zero intervention. Full autonomy. Just results.
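The five steps above can be sketched as a single control loop. Everything here is a hypothetical stand-in, not the actual Oh My OpenAgent API; the stubs exist only to make the analyze → plan → execute → verify → commit flow concrete, including the self-verification retries.

```python
# Hypothetical sketch of the "ulw" loop. All function bodies are stubs.

def analyze(task):
    """Gather codebase context relevant to the task (stubbed)."""
    return {"task": task, "files": []}

def plan(task, context):
    """Break the task into atomic steps (stubbed)."""
    return [f"step {i} of {task}" for i in range(1, 4)]

def execute(step, context):
    """Carry out one step (stubbed)."""
    return {"step": step, "ok": True}

def verify(result, step):
    """Check the result against the step's requirements (stubbed)."""
    return result["ok"]

def commit(task, results):
    """Record the finished work (stubbed)."""
    return f"committed: {task} ({len(results)} steps)"

def ultrawork(task, max_retries=3):
    """Run the full loop with zero human intervention."""
    context = analyze(task)
    results = []
    for step in plan(task, context):
        for _ in range(max_retries):          # self-verification loop
            result = execute(step, context)
            if verify(result, step):
                results.append(result)
                break
        else:
            raise RuntimeError(f"could not verify step: {step}")
    return commit(task, results)

print(ultrawork("add rate limiting"))
# → committed: add rate limiting (3 steps)
```

The point of the shape: verification sits inside the loop, so a failed step retries or escalates by raising, rather than handing half-finished code back to the human.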
Approach 2: Prometheus + Atlas

When you want strategic control.

Prometheus

Conducts interview, researches context, and generates a detailed YAML plan.

Atlas

Executes the plan, delegates to sub-agents, manages waves, and tracks progress.

You architect. Agents execute. Full transparency.
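One way to picture the handoff: Prometheus emits a structured plan, and Atlas walks it wave by wave. The plan schema below (goal, waves, tasks, acceptance criteria) is invented for illustration; the real YAML plan format may differ.

```python
# Illustrative Prometheus -> Atlas handoff. The schema is hypothetical.
from dataclasses import dataclass, field

@dataclass
class Task:
    id: str
    description: str
    acceptance: str        # clear acceptance criteria for delegation

@dataclass
class Wave:
    name: str
    tasks: list = field(default_factory=list)   # tasks in a wave can run in parallel

@dataclass
class Plan:
    goal: str
    waves: list = field(default_factory=list)   # waves run sequentially

def atlas_execute(plan):
    """Walk the plan wave by wave, delegating each task (stubbed)."""
    log = []
    for wave in plan.waves:
        for task in wave.tasks:        # would be delegated to sub-agents
            log.append(f"{wave.name}:{task.id} done")
    return log

plan = Plan(
    goal="add audit logging",
    waves=[
        Wave("research", [Task("t1", "map call sites", "all writes listed")]),
        Wave("implement", [Task("t2", "add log hooks", "tests pass")]),
    ],
)
print(atlas_execute(plan))
# → ['research:t1 done', 'implement:t2 done']
```

The sequential-waves / parallel-tasks split is what lets the human architect the plan once while agents execute it without further input.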
Predictable

Same inputs = consistent output. No random deviations or creative liberties unless requested.

Continuous

Survives interruptions. Tracks progress in real-time. Preserves context across sessions.

Delegatable

Clear acceptance criteria. Self-correcting mechanisms. Escalation only when absolutely needed.

The Core Loop

Human Intent → Agent Execution → Verified Result ↻

Minimum intervention at every turn of the loop.

  • Prometheus: extracts intent through an intelligent interview
  • Metis: catches ambiguities before they become bugs
  • Momus: verifies plans are complete before execution
  • Orchestrator: coordinates work without human micromanagement
  • Todo Continuation: forces completion and prevents "I'm done" lies
  • Category System: routes each task to the optimal model without a human decision
  • Background Agents: run parallel research without blocking the user
  • Wisdom Accumulation: learns from finished work so mistakes aren't repeated
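Of these components, the Category System is the easiest to make concrete: a lookup from task category to model tier. The categories and model names below are made up for illustration; they are not the project's actual routing table.

```python
# Hypothetical category-based model routing ("Category System").
ROUTES = {
    "quick-fix": "small-cheap-model",
    "refactor":  "mid-tier-model",
    "architect": "frontier-model",
}

def route(task_category):
    """Pick a model for the task without asking the human."""
    return ROUTES.get(task_category, "mid-tier-model")  # sensible default

print(route("quick-fix"))   # → small-cheap-model
print(route("unknown"))     # → mid-tier-model
```

A default route matters here: an unknown category must still resolve to some model, or the human is pulled back into the loop to decide.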

The Future We're Building

  • Human developers focus on WHAT to build, not HOW to get AI to build it
  • Code quality is independent of who wrote it
  • Complex projects become as easy as simple ones
  • "Prompt engineering" becomes obsolete

> "The agent should be invisible. Like electricity, like running water."

> "You flip the switch. The light turns on. You don't think about the power grid."
