AI teams outgrew notebooks and generic workflows. Learn why AI delivery breaks in production and how Umaku was built to fix it.

Over the last few years, AI teams crossed a quiet threshold. We’re no longer just experimenting. We are shipping systems that have to run in production, integrate with real products, and serve real users.
But the workflows AI teams rely on were never designed for this reality. This is the story of how we overcame that wall—and why we built Umaku.

Journey from Omdena to Umaku
Omdena started in 2019 as a collaborative platform to build AI models for social good. Hundreds of contributors explored models and ideas together, collaborating with leading organizations such as WFP, UNHCR, UNICEF, and Save the Children.
That approach worked extremely well—for research.
But as expectations shifted, organizations didn’t just want AI models for research. They wanted demonstrable MVPs: systems with real architecture, frontends, and backends.
To deliver MVPs, on top of the collaborative platform, we formed smaller teams from the top 1–2% of Omdena contributors—people who combined data science depth with engineering judgment.
This approach worked well for the next few years until we had to deliver production-ready, scalable AI products.

AI Product Development Workflow
We were managing federated, remote AI teams spread across countries, time zones, and skill levels—data scientists, ML engineers, product thinkers, domain experts—at scale, while our clients were asking us for production-ready, scalable AI products.
This transition happened fast—often without new tools or processes to support it. We learned that AI delivery introduces new failure modes that traditional software and data science workflows were never designed to handle.
These weren’t beginner mistakes. They were symptoms of a workflow that hadn’t caught up with what AI teams were now expected to deliver.

Software Development vs AI Development
We’re no longer experimenting in isolation; we’re delivering production systems that integrate with real products and serve real users. But most of our tools still assume a research-only workflow. That mismatch is where everything starts to break.

Representation of Copilot Reviewing Code
We decided to use AI copilots and automated code reviewers, but the feedback they produced consistently missed the project context.
The Insight: An AI reviewing a PR without knowing the project maturity is like reviewing a book by reading one random paragraph. Technically impressive. Practically useless.

Kanban Boards and Code Live in Parallel Universes
Another daily frustration: project management tools don’t speak code. Kanban boards know which task is “in progress”, who is assigned, and when something is “done”—but they know nothing about the code behind those tasks.
Meanwhile, GitHub knows the code—but has no idea why it exists.
So teams spend time translating between the two worlds, and that translation tax compounds fast in federated teams.
Scrum and Kanban weren’t the enemy. They just weren’t built for AI work. Scrum assumes “known-knowns,” while AI is 80% “known-unknowns.” You can’t “Sprint” toward model accuracy; you can only “Explore.”
As AI engineers, we found that most Scrum ceremonies added friction, while only a few elements actually helped.

Fragile Jupyter Notebooks
And then there’s Jupyter. The backbone of AI work—and also one of the hardest artifacts to reason about automatically. We ran into the same problems repeatedly. Most systems either oversimplify notebooks or avoid them altogether. Our experience says: ‘The Ticket’ and ‘The Notebook’ is where AI projects go to die.
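Part of the difficulty is structural: a notebook is a JSON blob whose outputs and execution counts churn on every run, drowning out the real change. As a minimal sketch (standard library only, not Umaku’s implementation), isolating just the reviewable source looks like:

```python
import json

def extract_code_cells(ipynb_text: str) -> list[str]:
    """Return only the executable source of a notebook, dropping
    outputs, execution counts, and markdown that churn between runs."""
    nb = json.loads(ipynb_text)
    return [
        "".join(cell["source"])          # source is a list of line strings
        for cell in nb.get("cells", [])
        if cell.get("cell_type") == "code"
    ]
```

Real notebook-diffing tools such as nbdime go much further, but even this simple strip is enough to make a diff between two notebook versions readable.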

Project Overview in Umaku
Our goal was to keep only the practices that genuinely helped—and we dropped the rest. No performative planning. No story-point theater. No boards that look busy but explain nothing. We didn’t recreate Scrum boards or Kanban flows; we extracted what truly benefited us as AI engineers. What we needed was something that understands the project charter, the sprint goal, the intent behind each ticket, and how code and notebooks change against them.
Context-aware agentic feedback means the agent does not operate on artifacts in isolation. It reasons over the project charter, the explicit business objectives, and the current execution phase—model exploration, validation, hardening, or packaging for production. It ingests tickets, ticket comments, design decisions, and historical discussion to reconstruct why the work exists and what constraints shaped it.
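As a rough sketch of the idea—hypothetical names, not Umaku’s actual API—a context-aware reviewer bundles charter, phase, ticket, and discussion into the request it builds for each diff, rather than sending the diff in isolation:

```python
from dataclasses import dataclass, field
from enum import Enum

class Phase(Enum):
    # hypothetical execution phases mirroring the stages described above
    EXPLORATION = "exploration"
    VALIDATION = "validation"
    HARDENING = "hardening"
    PACKAGING = "packaging"

@dataclass
class ReviewContext:
    """Everything the agent reasons over besides the diff itself."""
    charter: str                 # project charter / business objective
    phase: Phase                 # current execution phase
    ticket: str                  # the ticket this change implements
    discussion: list[str] = field(default_factory=list)  # prior comments

def build_review_prompt(ctx: ReviewContext, diff: str) -> str:
    """Fold project context into the review request."""
    history = "\n".join(ctx.discussion) or "(none)"
    return (
        f"Charter: {ctx.charter}\n"
        f"Phase: {ctx.phase.value}\n"
        f"Ticket: {ctx.ticket}\n"
        f"Discussion so far:\n{history}\n\n"
        f"Review this diff against the goals above:\n{diff}"
    )
```

The point of the structure is that the phase and ticket travel with every review, so the same diff can be judged differently at different stages of the project.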
Code Quality Feedback in Umaku

Code Snippet Comparison in Umaku

Bug Finder Feedback in Umaku

Overall Agentic Feedback Dashboard in Umaku
This changes the nature of feedback entirely. Instead of flagging patterns blindly, the agent evaluates decisions relative to project goals and the delivery stage. A modeling shortcut during early experimentation is treated differently from the same shortcut during packaging. A hard-coded path is understood as a prototype artifact—or identified as a release-blocking risk—based on context, not heuristics.
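The same idea can be reduced to a toy rule table—the finding names, phase labels, and severities here are purely illustrative, not Umaku’s actual rules—where the phase, not the finding alone, decides severity:

```python
# Illustrative only: finding names and phase labels are made up.
EARLY_PHASES = {"exploration", "validation"}

def grade(finding: str, phase: str) -> str:
    """Grade the same finding differently depending on delivery phase."""
    if finding == "hard_coded_path":
        # a prototype artifact early on, a release-blocking risk later
        return "info" if phase in EARLY_PHASES else "blocker"
    if finding == "no_input_validation":
        return "warning" if phase in EARLY_PHASES else "blocker"
    return "warning"

print(grade("hard_coded_path", "exploration"))  # info
print(grade("hard_coded_path", "packaging"))    # blocker
```

A context-blind reviewer can only pick one row of this table and apply it everywhere; a phase-aware one picks the row that matches where the project actually is.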
Most teams today assemble this workflow from disconnected tools: sprint boards to track tasks, ticketing systems to capture intent, bug trackers to log failures, and AI copilots that analyze code without access to any of that context. Each handoff strips meaning away. By the time feedback is generated, the agent knows what changed—but not why.
Context is the primary object, not an afterthought. The agent is charter-aware, ticket-aware, and discussion-aware. It understands how decisions evolve over time and how expectations shift as a project moves from research-grade notebooks to production-ready systems.
We named our platform Umaku, from the Japanese word うまく (上手く/巧く/旨く), meaning “skillful”. That is the kind of feedback we set out to build: not generic, not stylistic, but grounded in the intent of the system being built.
Umaku exists because we lived every one of these pains firsthand.
Umaku sits between your task board and your Jupyter notebooks, ensuring the AI agent reviewing your PR knows whether you’re in ‘research mode’ or ‘production mode’.
In short, Umaku is a single platform where tasks, code, notebooks, and feedback share the same context.
We didn’t design it in theory. We built it to survive reality.
And now, we’re opening it up—because we know we’re not the only ones who felt this gap. We’re moving beyond the era of the ‘Experimental Notebook’ and into the era of the ‘AI Product.’
It’s time our workflows caught up. Welcome to Umaku.
Please sign up for a trial account.