
It's Not About the Model, It's About the Pipeline: How Agentic AI is Redefining the Next Era of Artificial Intelligence

Sean Barker
Developer

The recent boom in AI that’s been dominating headlines, VC boardrooms, and GitHub repositories was driven largely by one thing: increasingly sophisticated and affordable large language models (LLMs). Companies like OpenAI, Anthropic, Google, and Microsoft spent billions of dollars building ever more powerful models, unlocking possibilities that simply weren’t accessible to most users and companies before their release.

When GPT-3 was released in 2020, it demonstrated that simply scaling up the size of the model could unlock emergent capabilities: chain-of-thought reasoning, basic coding, and even early forms of logical inference. The release of GPT-4 further cemented the narrative that bigger models are better models. It was clear that Silicon Valley was at the start of a race to build the most powerful model.

However, as we move into 2025, it’s becoming increasingly clear that this story has shifted. The race is no longer only about the model.

It’s about the pipeline.

Why Bigger Models Are Becoming Less Important

To be clear, we still expect to see foundational models make improvements over time. There’s been no shortage of innovation in this regard and companies like OpenAI, Anthropic and Meta are still very much in the race to squeeze as much performance as possible out of their models.

Yet returns are diminishing.

Model benchmarking and evaluation tools like LMArena increasingly show that the top 10-15 models cluster extremely closely in user preference and win rates. To a growing degree, the industry is finding that most users don’t notice (or care) which model answers their question: they just want an answer.

Meanwhile, the most interesting innovations in AI today aren’t coming from companies squeezing an extra 2% of performance out of their foundational models. They’re coming from products that find ways to orchestrate, reason, retrieve, and execute tools across these models.

That’s where agentic AI comes in.

What Is Agentic AI?

Agentic AI is a broadly used term, but for the sake of this article, we'll define it as: a system where AI is not simply producing single-turn outputs (as traditional chatbots do), but is instead autonomously planning, reasoning, deciding, and executing tasks over time.

Rather than "answer this question" or "summarize this document," agentic AI might be told:

  • "Find the best supplier for this part, negotiate a discount, and draft a purchase order."

  • "Research five marketing strategies for a new product launch and present a ranked plan."

  • "Design and run experiments to improve a software system's reliability."

The concept of agentic systems is by no means a new one. Depending on how loosely you define it, the philosophical roots of the concept can trace all the way back to the mid-20th century, with Alan Turing’s foundational work on machine intelligence and autonomous reasoning.

However, if you set theory aside and focus on the current AI revolution, the concepts behind agentic systems really began to be tested in projects like AutoGPT and BabyAGI, which demonstrated that chaining simple model outputs together could accomplish surprisingly sophisticated tasks.
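The chaining idea those projects popularized can be sketched in a few lines: each model output is fed back in as context for the next call. This is a minimal illustration, not any project's actual code; `fake_llm` is a hypothetical stand-in for a real model API call.

```python
def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM API call; returns canned text."""
    if "brainstorm" in prompt:
        return "idea: reusable water bottles"
    return f"summary of ({prompt})"

def run_chain(task: str) -> str:
    # Step 1: ask the model to brainstorm ideas for the task.
    ideas = fake_llm(f"brainstorm: {task}")
    # Step 2: feed that output back in as context for a second call.
    return fake_llm(f"rank and refine: {ideas}")

result = run_chain("eco-friendly products")
```

Even this trivial two-step chain shows the core move: the "intelligence" lives less in any single completion and more in how outputs are threaded together.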

Later, more robust frameworks like OpenAI's Assistants API, LangChain, and crewAI built on this foundation, offering ways to coordinate memory, tools, retrievers, and LLMs into modular workflows.

The key difference is the system's ability to orchestrate.

Agentic systems don’t rely on any single model alone. They strategically plan a sequence of actions they can execute to solve the problem they’ve been presented with: retrieving long-term memory, calling APIs, writing and recording intermediate outputs, and evaluating progress along the way. At any point in this process, the system might swap out the model for one better suited to the current task.
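The orchestration loop described above can be sketched roughly as follows. All names here (`plan`, `TOOLS`, `evaluate`) are illustrative inventions, not a real framework's API: a production agent would have an LLM produce the plan and richer tools behind each entry.

```python
# Hypothetical tool registry; real agents would wrap search APIs, file
# systems, messaging clients, etc.
TOOLS = {
    "search": lambda query: f"results for {query}",
    "summarize": lambda text: f"summary: {text[:20]}",
}

def plan(goal: str) -> list[tuple[str, str]]:
    # A real agent would ask a model to generate this plan dynamically.
    return [("search", goal), ("summarize", f"results for {goal}")]

def evaluate(scratchpad: list[str]) -> bool:
    # Stop once a summary has been produced.
    return any(step.startswith("summary:") for step in scratchpad)

def run_agent(goal: str) -> list[str]:
    scratchpad: list[str] = []  # intermediate outputs act as working memory
    for tool_name, arg in plan(goal):
        scratchpad.append(TOOLS[tool_name](arg))
        if evaluate(scratchpad):
            break  # goal met; no need to run remaining steps
    return scratchpad

trace = run_agent("supplier pricing")
```

The essential structure is plan, act, record, evaluate, repeat; the model behind each step is just another swappable component.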


Pipelines Beat Prompts

In this new era, innovation won't be limited to having the “best” LLM.

The products that grab the headlines and emerge as trailblazers in the industry will be those that build flexible, resilient, pipeline-based systems where models are easily swappable, tasks are modularized, and complex workflows are intelligently managed.

A few concepts will be front and center:

  • Model Agnosticism: A system shouldn’t depend on any one model to produce an output. Swapping in the latest and greatest should be a core design principle of the product.

  • Memory and Retrieval: The best products will find innovative ways to fit the most comprehensive and efficient information into their context windows.

  • Tool Execution: A fundamental part of the agentic concept is allowing the system to act on your behalf. This means giving the system the ability to execute actions (perform a web search, check for similar documents in a drive, send a Slack message to inform the team of a potential outage) rather than relying purely on language prediction.

  • Self-Critique and Iteration: The best agentic systems will monitor their intermediate progress and pivot or adjust to make sure they’re producing the best possible output.
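Model agnosticism, the first concept above, often comes down to a thin routing layer: pipeline code depends on one generic interface, and the backend behind it is configuration. This is a minimal sketch with hypothetical stub backends in place of real SDK clients.

```python
from typing import Callable

# A backend is anything that maps a prompt to a completion.
Backend = Callable[[str], str]

# Hypothetical stubs; in practice these would wrap different vendors' SDKs.
BACKENDS: dict[str, Backend] = {
    "model-a": lambda prompt: f"[A] {prompt}",
    "model-b": lambda prompt: f"[B] {prompt}",
}

def complete(prompt: str, model: str = "model-a") -> str:
    """Route a prompt to whichever backend is currently configured."""
    return BACKENDS[model](prompt)

# Swapping models is a one-line config change, not a pipeline rewrite:
fast = complete("draft an outline", model="model-a")
strong = complete("draft an outline", model="model-b")
```

Because callers only ever see `complete()`, adopting next month's strongest model means registering one new backend rather than touching every task in the pipeline.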

These systems won’t rely on prompt engineering alone; instead, they’ll put a heavy focus on system design.

Conclusion: The Shift Is Underway

The first phase of this AI boom taught us to care about "the model."

The next phase will be about what you build on top of it.

In the future, the most powerful AI solutions won’t be "GPT-5 in a box." They’ll be pipelines — sprawling, modular, agentic architectures that plan, learn, reason, and act — swapping models as needed, plugging into live data streams, building real memory, and coordinating complex projects from start to finish.

The winners won’t just be those who have the smartest models.

They’ll be those who have the smartest pipelines.