AI Agents Need Crypto, Not the Other Way Around
Author: Scarlett Zhang
I increasingly feel that the crypto circle is a bit too eager to be seen by the AI circle.
Over the past six months, the entire crypto circle has been working very hard to get closer to AI: talking about AI, pivoting to AI, organizing AI events, building AI demos, rewriting narratives around AI. It seems every project is trying to prove its connection to AI.
That feeling is very much like:
A child desperately trying to squeeze onto the adults' table.
But on the other side?
Many people who are genuinely working on AI have a very subtle attitude towards crypto. They neither openly criticize you nor directly reject you; instead, they keep a very polite distance:
"We are not opposed to the blockchain, but we do not wish to be too closely bound to crypto for now."
"Technically, it's very interesting, but our clients and investors might mind."
Translated, that means:
You are fine, but I don't really want to associate with your circle.
Currently, there is indeed a subtle hierarchy of disdain between these two circles.
And this is not without reason.
In the minds of many AI builders, AI is a genuine productivity revolution, a technological advancement that is changing the way we work, the forms of products, and the flow of information.
And what about crypto? In their eyes, it resembles an overly financialized, narrative-driven industry that is always looking for the next story to prove its importance, while conveniently issuing tokens to harvest profits.
So when the crypto circle suddenly starts talking about AI at scale, the first reaction of many in the AI circle is actually:
Are you seriously making products, or are you just riding on a narrative again?
To be honest, I completely understand this reaction.
Because over the past few years, crypto has indeed been too good at packaging the "next round."
DeFi / NFT / GameFi / SocialFi / DePIN / inscriptions, now it's AI x Crypto.
Every round, someone stitches the latest buzzword onto themselves and then tells you the future has arrived.
Over time, the outside world has formed a very hard-to-reverse impression of crypto:
You are always talking about the future, but it always makes people doubt whether you are creating value or just creating hype.
This is also why many people in the AI circle today naturally feel they are in a higher position.
They feel:
AI is solving real problems.
Crypto is still searching for its new legitimacy.
This bias is very real. This hierarchy of disdain does indeed exist.
But the more I think about it recently, the more I find that the interesting question is not why crypto wants to get closer to AI, but another, more counterintuitive one:
Could it be that in the end, the one that truly needs the other is actually AI?
To be more precise:
It is not that Crypto needs AI.
It is that AI Agents need Crypto.
This is not a question of "AI is smarter," but rather a question of "Can AI move money?"
I am increasingly convinced of this because many Agent demos end up getting stuck in the same place.
By now, everyone has probably seen quite a few demos:
Ones that can write code, call tools, browse the web automatically, execute multi-step tasks, and even ones that can trade, make payments, and automate operations.
The first time you see it, of course, it feels cool.
But after seeing more, I increasingly care about one question:
Does it just "know how," or can it really "do"?
Because the difference between "knowing how" and "doing" is not just a matter of product details.
What is missing in between is:
Permissions, funds, responsibilities, boundaries.
Having an agent summarize a report for you is completely different from having an agent complete a real transaction for you.
If the former is wrong, you might just think it’s a bit silly.
If the latter is wrong, money is lost.
So I increasingly feel that AI demos easily create an illusion:
It seems like everything is connected.
But what is truly disconnected is often the hardest layer.
That is:
The execution layer.
The real bottleneck for an Agent is not in thinking, but in execution related to money.
If an AI agent really starts doing work for you, it will quickly need to buy APIs, rent computing power, call paid services, execute transactions, manage budgets, transfer assets, and complete payments across different systems.
In other words, it not only needs to "understand your intentions."
It needs to start participating in economic activities.
And once it enters this layer, the questions change.
Traditional finance can accommodate automation, but it is not designed for the "Agent world"
Many readers might want to ask at this point: can't traditional finance do all of this too?
I have certainly thought about this, and to be honest, traditional finance is indeed more mature than crypto in many dimensions.
Risk control, auditing, permission management, responsibility chains, traceability—traditional finance is simply stronger today in these areas.
So the real meaning of this article is not:
Crypto is better than traditional finance.
Nor is it that without crypto, AI agents cannot work at all.
If it is just an internal agent for a company or a platform, many things can still run using bank APIs, enterprise payment systems, virtual cards, approval flows, sub-account systems, platform credits, and centralized custodial accounts.
These can run, and in the short term, they are likely to remain mainstream.
But the problem is that these systems are essentially built on the same premise:
An agent is not a native execution entity.
It is merely an automated extension of a user, a company, or a platform.
This is not a problem in many scenarios.
But as agents become more autonomous, increasingly cross-platform, increasingly cross-border, and increasingly need to natively call resources and funds between different systems, traditional systems will start to feel increasingly awkward.
So the real question is not:
Can traditional finance accommodate this?
But rather:
Is it the most natural, scalable, and natively compatible structure for agents?
What the Agent world needs is not just accounts, but a set of execution structures.
These are actually two completely different questions.
The key to AI Agents is not whether they are "legal entities," but rather that they increasingly resemble "execution units"
At this point, you might want to say:
"But agents are not a third type of entity. They are neither people nor companies; they are just software agents."
That is true.
Strictly speaking, AI agents may not necessarily become independent legal entities. Most of the time, they are more likely just to be agents of users, enterprises, or platforms.
But even so, they will increasingly resemble execution units that can be assigned budgets, permissions, tasks, and boundaries.
That is the key.
The reason this issue has not fully erupted yet is that there are not enough agents; many things are still at the stage of humans watching the agent work.
But if there are indeed large-scale agents in the future:
Helping you with trading,
Helping you with procurement,
Helping you with operations,
Helping you manage budgets,
Helping you automatically call resources between systems,
You will discover a set of very awkward questions:
How should these things actually have permissions?
Whose account is it?
Whose payment authorization is it tied to?
How much money can it spend?
If it exceeds its permissions, who is responsible?
When it calls services globally, how is the underlying settlement handled?
Traditional finance is not incapable of accommodating this.
But it will become increasingly awkward.
Because it was never designed on the premise that software execution units would participate in economic activities at scale.
Once the protagonist becomes the Agent, crypto concepts that once sounded like self-talk start to become concrete
In the past, many people looked at crypto and felt that it was always talking about some very abstract terms:
Programmable funds
Programmable identities
Permissionless
Global settlement
Trustless execution
Much of the time, it does indeed sound like empty recitation.
But if you change the protagonist to AI agents, these concepts suddenly become less abstract.
Because what agents truly need might just be these things:
They need a form of funds that can be called natively.
They need an execution identity that does not have to first become a "company account."
They need budgets and permissions that can be programmatically constrained.
They need to complete low-friction settlements globally.
They need to establish native connections between behavioral actions and asset actions.
At this point, when you look at wallets, the perspective will be completely different.
A wallet is not a "place to hold coins."
What does it resemble more?
It resembles an execution container with permission boundaries.
A wallet is not a "place to hold coins," but an execution container for Agents.
It holds not just assets.
It can also hold rules:
What is allowed to be done
How much is allowed to be spent
Which actions can be executed automatically
What thresholds must be manually confirmed
Which scenarios are read-only, and which are writable
Which strategies take effect on-chain, and which must be paused
From this perspective, the relationship between AI and wallets becomes very interesting:
AI is responsible for understanding.
Wallet is responsible for constraints.
Agent is responsible for action.
This is what a complete system looks like.
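This three-way split (AI understands, wallet constrains, agent acts) can be sketched in a few lines of code. Below is a minimal, hypothetical illustration in Python; the names (`WalletPolicy`, `daily_cap`, `confirm_threshold`) are invented for the sketch and do not refer to any existing wallet API. The point is only the shape of the structure: the agent proposes actions, and the container decides whether each one is auto-executed, escalated to a human, or refused.

```python
from dataclasses import dataclass

@dataclass
class WalletPolicy:
    """Hypothetical rule set an agent wallet could enforce.

    All names here are illustrative, not an existing wallet API.
    """
    allowed_actions: set      # what the agent may do at all
    daily_cap: float          # how much it may spend per day
    confirm_threshold: float  # above this, a human must confirm
    spent_today: float = 0.0

    def check(self, action: str, amount: float) -> str:
        """Return 'deny', 'confirm' (needs a human), or 'allow'."""
        if action not in self.allowed_actions:
            return "deny"               # outside the permission boundary
        if self.spent_today + amount > self.daily_cap:
            return "deny"               # budget exhausted
        if amount > self.confirm_threshold:
            return "confirm"            # human-in-the-loop
        self.spent_today += amount      # auto-execute within bounds
        return "allow"

policy = WalletPolicy(allowed_actions={"swap", "pay_api"},
                      daily_cap=100.0, confirm_threshold=50.0)

print(policy.check("pay_api", 10.0))  # small, permitted action -> allow
print(policy.check("swap", 80.0))     # large action -> confirm
print(policy.check("bridge", 5.0))    # not in the allowlist -> deny
```

Note that the model never touches the funds directly: however confidently it "understands your intentions," every action still passes through the same deterministic boundary.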
The real irony: the question AI poses is trust, and trust is exactly what crypto lacks most
If I put myself in a critic's shoes, I would say:
You just said that what AI truly lacks is trust, so why does the answer point to crypto?
This criticism is very reasonable.
After all, in the eyes of most ordinary people, crypto is precisely not that "naturally trustworthy" system.
Managing private keys is complex.
On-chain transactions are irreversible.
Phishing and signature theft are rampant.
Contract risks are high.
Responsibility boundaries are often blurred.
After something goes wrong, there may be no one to backstop you.
So what I really want to express is not:
Crypto has solved trust.
On the contrary.
My judgment is:
AI will force crypto to directly answer the question of trust.
In the past, crypto could still stay at the level of "can transfer, can use, can run."
But if it really wants to become the execution layer for AI agents, it must tackle the hardest lesson:
Permission models
Security boundaries
Responsibility attribution
Risk control systems
Recoverability
Human-machine collaborative confirmation mechanisms
In other words, AI will not automatically make crypto succeed.
Instead, AI will bring to light all the vague, lazy, and narrative-covered aspects of crypto.
So I am not saying that crypto is already the answer.
I am saying:
If there really exists agent-native execution infrastructure in the future, it will likely resemble crypto more than today's traditional account systems.
So the question may never be "How does Crypto leverage AI to turn things around?"
This is also the perspective that has been bothering me the most recently.
Many people, when talking about AI x Crypto, automatically interpret it as:
Crypto is trying to ride on AI again.
Crypto wants to tell a new story with AI.
Crypto needs AI to extend its life.
I do not deny that there are indeed many such projects in the market.
But if we only stay at this level, we will miss a more fundamental layer:
Once AI truly moves towards execution, it will inevitably encounter issues of funds, permissions, responsibilities, identities, and settlements.
And these issues cannot be solved just by "making the model a bit stronger."
They are essentially another layer of infrastructure issues.
In other words, the further AI develops, the closer it will get to the domain of problems that crypto is good at handling.
Not because crypto is more advanced than AI.
But because when AI reaches out to the real world, it must confront:
How money moves.
How permissions are granted.
How responsibilities are clarified.
And this is precisely not something that prompts can solve.
What AI truly lacks may not be intelligence, but trustworthiness
I increasingly feel that the hardest part of AI × Crypto has never been intelligence, but trust.
You can create a stunning demo:
Complete a swap in one sentence.
Complete a bridge in one sentence.
Automatically configure assets in one sentence.
Automatically execute on-chain in one sentence.
It certainly looks very futuristic.
But do users really dare to use it?
Even if they dare to try it once, do they dare to use it long-term?
Even if they dare to use it long-term, how is responsibility defined if something goes wrong?
Do products dare to make promises?
Do platforms dare to stand behind it?
Do developers dare to grant higher permissions?
So in the end, you will find that what truly limits AI agents from entering the financial and asset world is not whether they are smart enough.
But rather:
Are they constrained enough?
Who can define their boundaries?
Who can verify their actions?
Who can stop them before risks occur?
Who can clarify responsibilities after risks occur?
So what will be truly scarce in the future may not be the strongest models or the most articulate agents.
But rather:
The most trustworthy execution layer.
This is also why I increasingly believe this statement:
AI Agents need Crypto, rather than Crypto needing AI.
To be more precise:
Not all AI needs crypto.
Not all agent scenarios need crypto.
Nor has crypto provided a mature answer.
But I increasingly believe:
As AI agents move towards real execution, real assets, real permissions, and real responsibilities, they will increasingly need a foundational infrastructure that resembles crypto.
What they need is not more concepts.
What they need is:
Programmable funds
Programmable permissions
Programmable identities
Native global settlement
Verifiable execution boundaries
These happen to be among the few areas where crypto is genuinely not just talk.
So in a sense, I do not think the AI circle's current disdain for crypto is entirely unreasonable.
But I increasingly suspect:
It exists mostly because the two sides are standing on different timelines.
Today, the AI circle is most concerned with models, products, distribution, and efficiency.
While the crypto circle has lived longer and deeper in issues of assets, permissions, custody, settlement, and responsibility.
Everyone seems to be discussing the future.
But in fact, they are not discussing the same layer of the future.
The AI circle feels that crypto is too narrative-driven, too financial, and too speculative.
The crypto circle feels that the AI circle has not yet truly encountered the hardest execution problems.
In a sense, neither side is completely wrong.
I just increasingly feel that when AI agents truly begin to participate in economic activities on a large scale, this seemingly stable hierarchy of disdain will likely slowly reverse.
At that time, the question may no longer be:
Why does crypto always want to get closer to AI?
But rather:
Without a more suitable execution infrastructure for agents, how can AI agents truly enter the real world?