Should you build with AI frameworks?


The entire world seems to be building AI products - new products, AI modules attached to existing products, and frameworks to help you bolt your new AI products together.

Frameworks you may have heard of include LangChain, CrewAI, and Langflow.

They purport to offer a simplified way of hooking up more complex AI workflows - linking different data sources, accessing your company’s core processes, switching between models, managing evaluations, and many other LLM-related challenges.

But in doing so they abstract away from the models themselves - sometimes leaving you with skills and knowledge of the framework, but not of the underlying LLM you are interacting with.

So what’s the right approach - interact with the LLMs directly, or learn a framework?

Interestingly, both OpenAI and Anthropic are quite public about how you should think about this topic - both where you invest as a builder and what you buy as a customer.

OpenAI encourages you to assume the models will improve

Just prior to Christmas Sam Altman presented at the OpenAI developer day via a live stream interview with Harry Stebbings of 20VC.

In this segment Harry asks Sam which companies should be worried about being steamrollered by OpenAI.

(go to 4:05 if the video doesn’t automatically start there)

In short:

Trillions of dollars of new market cap value will be created by companies that build new products on top of LLMs to allow users to do things that were not possible or impractical before.

But, he says, there is a small area that they want to make irrelevant by making the models better and better.

He says that if your bet is that the models will not improve, and you’ve built a tool to work around the model’s current limitations or to improve the way it works, you will see OpenAI remove the very problem you have built your hypothesis on.

If, however, your bet is that the models will improve, and that your product will get better as they do, then you are heading in the right direction and are unlikely to see OpenAI destroy the value of your product.

Anthropic encourages you to interact with the models directly

Also just prior to Christmas Anthropic (the company behind Claude) published a landmark blog post titled “Building effective agents”.

In the article they break down some definitions for workflows (systems where LLMs and tools are orchestrated through predefined code paths), and agents (systems where the LLMs dynamically direct their own processes and tool usage, maintaining control over how they accomplish tasks).
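That distinction can be sketched in a few lines of Python. Here `call_llm` is a hypothetical stand-in for a direct model API call (the canned replies are purely illustrative): in a workflow your code fixes the sequence of steps in advance, while in an agent the model’s own output decides what happens next.

```python
# Hypothetical stand-in for a direct LLM API call - a real version
# would call the OpenAI or Anthropic API here.
def call_llm(prompt: str) -> str:
    canned = {
        "outline: tyres": "1. history 2. physics",
        "draft: 1. history 2. physics": "Dunlop invented...",
        "plan: fix tyre": "USE_TOOL pump",
    }
    return canned.get(prompt, "DONE")

# Workflow: the *code* orchestrates a predefined path of LLM calls.
def workflow(topic: str) -> str:
    outline = call_llm(f"outline: {topic}")
    return call_llm(f"draft: {outline}")

# Agent: the *model's output* directs the process - it decides when to
# use a tool, and the tool result is fed back into the loop.
def agent(task: str, tools: dict) -> str:
    prompt = f"plan: {task}"
    while True:
        reply = call_llm(prompt)
        if reply.startswith("USE_TOOL"):
            tool_name = reply.split()[1]
            prompt = tools[tool_name]()  # feed the tool result back in
        else:
            return reply
```

Neither shape requires a framework - the orchestration is ordinary control flow.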

In the article they are explicit about when to use frameworks:

“These frameworks make it easy to get started by simplifying standard low-level tasks like calling LLMs, defining and parsing tools, and chaining calls together. However, they often create extra layers of abstraction that can obscure the underlying prompts and responses, making them harder to debug. They can also make it tempting to add complexity when a simpler setup would suffice.

We suggest that developers start by using LLM APIs directly: Many patterns can be implemented in a few lines of code. If you do use a framework, ensure you understand the underlying code. Incorrect assumptions about what's under the hood are a common source of customer error.”

Read that second paragraph again - “We suggest that developers start by using LLM APIs directly.”
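To illustrate the “few lines of code” point, here is a minimal prompt-chaining sketch. The `complete` function is a placeholder of my own; swapping in a real API client (e.g. the `openai` or `anthropic` Python package) is a one-line change.

```python
from typing import Callable, List

# Placeholder for a direct LLM API call - replace with a real client
# call such as client.chat.completions.create(...) from the openai package.
def complete(prompt: str) -> str:
    return f"response({prompt})"

# Prompt chaining without a framework: each step's output becomes
# part of the next step's prompt.
def chain(steps: List[str], user_input: str,
          llm: Callable[[str], str] = complete) -> str:
    result = user_input
    for step in steps:
        result = llm(f"{step}\n\nInput:\n{result}")
    return result
```

For example, `chain(["Summarise this article", "Translate to French"], article_text)` runs two dependent LLM calls - with every prompt and response in plain sight for debugging.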

In both the OpenAI and Anthropic APIs functionality is being rolled out that will start to offer many of the capabilities of chaining tool calls together, model switching, gathering context from your company’s environment, and providing agentic behaviour directly within the LLM platform.
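Tool calling, for instance, is already native to the provider APIs. A function schema like the one below (the `get_order_status` function is a hypothetical example) can be passed straight to OpenAI’s Chat Completions endpoint via its `tools` parameter, with no framework in between:

```python
import json

# A tool definition in the OpenAI Chat Completions "tools" format.
# Built as plain data here so nothing is sent; a real request would
# pass this list to client.chat.completions.create(..., tools=tools).
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",  # hypothetical company function
        "description": "Look up the status of a customer order",
        "parameters": {
            "type": "object",
            "properties": {
                "order_id": {"type": "string"},
            },
            "required": ["order_id"],
        },
    },
}]

payload = json.dumps(tools)  # what actually goes over the wire
```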

Where should you spend your time learning?

My advice would be to stay close to learning the LLM APIs directly rather than dedicating time to learning a framework.

Sam Altman couldn’t be any clearer - they are on a mission to make the need to have a framework irrelevant.

Anthropic are equally direct: use LLM APIs directly, because incorrect assumptions about what’s under the hood are a common source of customer error.

Remember Dunlop and the science of suspension

In 1887 John Dunlop invented the first pneumatic tyre for his son’s tricycle.

Prior to that the main form of suspension on a bike was springs that would go under the seat or in the frame.

What Dunlop figured out was that the further away the suspension was from the bumps in the road the less effective it was.

The best solution was to figure out how to remove the bumps “at source” by developing a form of suspension within the tyre itself.

When you are considering AI frameworks keep this in mind.

AI frameworks are trying to solve today’s LLM gaps by putting some springs under the seat.

Meanwhile Anthropic, OpenAI and the other frontier models are developing pneumatic tyres for the future.

If you do invest in learning and using frameworks, keep a close eye on what the LLMs are developing themselves so you have the flexibility to shift in future.


Get Started

Whenever you are ready there are three ways I can be helpful:

  1. YouTube Channel: Subscribe for regular updates on AI developments

  2. AI Adoption Accelerator: 12 week AI adoption program for companies

  3. Kowalah: AI-powered buying platform for technology buying groups
