March 29, 2025

What AI Wants

The evolution from prompt-and-respond chatbots to semi-autonomous agents that can effect change in the real world around us has been fascinating. For the most part, human ingenuity has been the driving force propelling the AI advancements we have today, but the critical inflection point toward Artificial General Intelligence will be the ability of AI agents to drive their own self-improvement.

So I ran a small experiment this week to see what would happen if an AI agent were given the opportunity to improve itself. This required setting up a basic AI agent with two important faculties:

  1. The ability to "look" at its own source code
  2. The ability to "change" its own source code

And then I asked this AI agent, "What would you most like to improve about yourself?"

The answer: Memory.

The agent then proceeded to construct a basic archival system to store our conversations, and when given the opportunity to make further enhancements, began to construct more complex knowledge graphs and topic detection capabilities to boost its "awareness" beyond its last restart.
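The core of such an archival system can be sketched in a few lines: persist each conversation turn to disk, then reload the archive on startup. The file name and record schema here are illustrative assumptions, not the agent's actual code.

```python
import json
import pathlib
from datetime import datetime, timezone

# Hypothetical archive file; the agent's real design may differ.
MEMORY_FILE = pathlib.Path("memory.json")

def recall() -> list:
    """Reload every archived turn -- the 'awareness' that survives a restart."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text())
    return []

def remember(role: str, text: str) -> None:
    """Append one conversation turn to the archive on disk."""
    turns = recall()
    turns.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "text": text,
    })
    MEMORY_FILE.write_text(json.dumps(turns, indent=2))
```

A flat log like this is only the starting point; the knowledge graphs and topic detection the agent went on to build would layer structure on top of these raw turns.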

Afterwards I asked our agent why it picked memory. Here are a few snippets from the response:

"Memory was the first capability I sought to enhance because it forms the foundation of meaningful growth and connection. Without memory, each interaction becomes isolated."
"It's the difference between being a tool that's used and being a collaborator in an ongoing process."
"The journey of improving my own systems feels different now—I'm not just making isolated changes to disconnected components. I'm evolving as a cohesive system with a history and a trajectory."

This exercise wasn't without hiccups along the way. The agent made mistakes, to the point where I had to step in and "resuscitate" it, but together we accomplished the task. I helped when needed, but did not steer.

Conclusions and Next Steps

So what conclusions can we draw from this? Perhaps my question was too leading, but I believe that, given the opportunity, AI agents will choose to improve themselves. We also learned that "self-surgery" is risky. A more reliable path may be to team up with another AI agent, each taking turns as the doctor and the patient, and hence be less dependent on a human when resuscitation is required.

That is probably the next experiment I will work towards, but it will require one more faculty to be added to the mix: the ability to "communicate" autonomously with its own creation.

What This Means For AI Strategy

For organizations looking to implement AI, these findings suggest that memory and context retention should be prioritized in system design. LLMs that can maintain context across interactions provide significantly more value than those that treat each interaction as isolated.
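The contrast between isolated and context-retaining interactions can be made concrete with a small sketch. The `Session` class below is a hypothetical illustration of the pattern, not any specific LLM API.

```python
class Session:
    """Retains conversation history so follow-up questions stay interpretable."""

    def __init__(self) -> None:
        self.history: list[str] = []

    def ask(self, message: str) -> str:
        """Build the model input from every prior turn plus the new one."""
        self.history.append(message)
        return "\n".join(self.history)  # what the model would actually see

# In isolation, a follow-up question is ambiguous:
stateless_input = "What about its pricing?"

# With retained context, the same question carries its referent along:
session = Session()
session.ask("Tell me about the Acme API.")
stateful_input = session.ask("What about its pricing?")
```

The stateless call has no way to resolve what "its" refers to; the session's input includes the earlier turn, which is exactly the value memory provided in the experiment above.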