Welcome to the second edition of Idea Frontier, where we explore paradigm-shifting ideas at the nexus of STEM and business. In this issue, we dive into three frontiers: how AI agents are learning to smartly pick their tools (and why that matters for building more general intelligence), how new memory frameworks like Graphiti are giving LLMs a kind of real-time, editable memory (and what that means for the age-old question of whether AI can truly learn new things), and how the notion of “Planetary Computation” is reframing cities themselves as nodes in a global tech stack. On to Dynamic Tool Selection, Memory Engineering, and Planetary Computation.

Dynamic Tool Selection + AI Agent Hubs = AGI?

Dynamic Tool Selection

One emerging breakthrough in AI agents is dynamic tool selection – essentially, teaching AI which tool to use on the fly, rather than overwhelming it with every tool at once. This approach is analogous to retrieval-augmented generation (RAG), but instead of retrieving data, the agent retrieves the right tool for the task. Why is this necessary? Research has shown that when an LLM agent is handed too many tools or APIs at once, its performance in choosing the correct one plummets. In one benchmark, GPT-4’s accuracy on a task fell from 43% with a small toolset to just 2% when 51 tools were available, underscoring how an uncurated tool buffet can “significantly degrade performance.” In short, more is not merrier – unless the agent can select wisely.

Dynamic tool selection addresses this by giving agents a way to pre-select a subset of relevant tools at runtime, much like a librarian fetching a few relevant books instead of dropping an entire library on your desk. Rather than hard-coding a fixed toolbox, the agent (or an intermediate system) assesses the query context and pulls in only the tools likely to be useful. This keeps the prompt shorter and more focused, preserving response quality even when a vast tool library exists in the background. It’s a design pattern increasingly seen in advanced AI frameworks – for example, using a vector store of tool descriptions to find the best tool match for a given query. The result is an AI that can leverage a large arsenal of capabilities without choking on irrelevant options, an ability that moves us closer to flexible, general problem-solving.
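
To make this concrete, here is a minimal sketch of the pre-selection step in Python. The bag-of-words “embedding,” the toy tools, and the select_tools helper are illustrative stand-ins – a production agent would use a real embedding model and a vector store of tool descriptions – but the shape of the pattern is the same: rank the library against the query, and only the top few tools ever reach the prompt.

```python
from collections import Counter
from dataclasses import dataclass
import math

@dataclass
class Tool:
    name: str
    description: str

def embed(text: str) -> Counter:
    # Placeholder "embedding": a bag-of-words vector.
    # Swap in a real embedding model / vector store in practice.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_tools(query: str, tools: list[Tool], k: int = 3) -> list[Tool]:
    """Rank the whole tool library against the query and keep only the top-k,
    so the agent's prompt lists a handful of relevant tools, not all of them."""
    q = embed(query)
    ranked = sorted(tools, key=lambda t: cosine(q, embed(t.description)), reverse=True)
    return ranked[:k]

tools = [
    Tool("get_weather", "look up the current weather forecast for a city"),
    Tool("send_email", "compose and send an email to a recipient"),
    Tool("query_sql", "run a SQL query against the analytics database"),
    Tool("book_flight", "search and book airline flights between two airports"),
]

shortlist = select_tools("what is the weather forecast for Austin tomorrow?", tools, k=2)
print([t.name for t in shortlist])  # only these shortlisted tools go into the prompt
```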

AI Agent Hubs

This idea connects directly to a trend in the AI world: the rise of AI agent hubs. If agents can dynamically draw on many tools, why not network many agents together as specialized “experts” that call on each other? Dharmesh Shah – co-founder of HubSpot – has even launched a new platform, Agent.ai, described as “the professional network for AI agents.” (As Dharmesh puts it, “Agents are the new apps.”) The concept of an agent hub is to host a plethora of specialized AI agents (each with their own tool sets or skills) and route tasks to whichever agent (or set of tools) is best suited. Crucially, dynamic tool (and agent) selection is what makes this scalable – an agent-of-agents can broker which sub-agent or tool to invoke, rather than a monolithic model trying everything at once.
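
What might that brokering layer look like? Below is a toy sketch – the agents are stubs and the router is a crude word-overlap score, where a real hub (Agent.ai or otherwise) would route via embeddings or an LLM-based classifier – but it shows the agent-of-agents shape: a registry of specialists plus a dispatch step that picks the best match for each task.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    skills: str  # natural-language description of what this agent is good at

    def run(self, task: str) -> str:
        # Stand-in worker; a real agent would wrap an LLM plus its own tool set.
        return f"[{self.name}] handled: {task}"

def overlap(a: str, b: str) -> float:
    # Crude relevance score based on shared words (Jaccard similarity).
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

class AgentHub:
    """Agent-of-agents broker: keep a registry of specialists and delegate
    each incoming task to whichever agent's skill description fits best."""

    def __init__(self, agents: list[Agent]):
        self.agents = agents

    def dispatch(self, task: str) -> str:
        best = max(self.agents, key=lambda a: overlap(task, a.skills))
        return best.run(task)

hub = AgentHub([
    Agent("research-agent", "search the web and summarize research papers"),
    Agent("sales-agent", "draft outreach emails and update the CRM"),
    Agent("data-agent", "write SQL queries and build charts from the analytics database"),
])
print(hub.dispatch("summarize the latest research papers on tool selection"))
```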

Towards AGI

Why is this hub model so exciting? It hints at a path toward general-purpose intelligence. Instead of trying to cram all knowledge and skills into one giant model, we assemble a society of narrower AIs that collectively cover immense ground. A hub of interoperating agents, each an expert in something, starts to look like an architecture for general intelligence. In such a network, any single agent can tap into the specialized abilities of all the others on demand. A platform like Shah’s AI Agent Hub could thus serve as a foundation for broad AI: it’s essentially an app store or professional network where complex tasks are solved by orchestrating multiple agents in concert. This modular approach echoes how human organizations solve problems – not by one über-generalist, but by teams of specialists coordinating. By dynamically selecting the right “colleague” or tool for each sub-problem, AI agent hubs may turn the dream of a general problem-solving AI into a practical reality. It’s a vision in which breadth is achieved through connection. Hubs of tools and agents ensure that as we scale up the number of capabilities, we don’t lose effectiveness – instead, we route intelligently and maintain quality. In the long run, such networks of AI agents could be “the new apps” ecosystem and perhaps the skeleton of more general AI systems. The frontier insight here is that integration – via dynamic selection and hubs – might be as important to AI’s future as raw model size or training data.

Memory Engineering

Graphiti

As we push the boundaries of what AI can do, a crucial question re-emerges: Can our AI systems accumulate and synthesize new knowledge in real time? Large language models today are largely limited to what was in their training data. If something new happens or if they need to learn a new fact, how do we enable that? Enter Graphiti, a new approach to LLM memory that offers a fresh take on this problem. Graphiti (from the team at Zep) provides a real-time, editable memory layer for LLM-driven agents, built on knowledge graphs. In essence, it lets an AI agent autonomously build and update a knowledge graph of facts and relationships as it interacts, while preserving temporal context. Unlike a static database or a long list of chat history, this memory is structured and dynamic – nodes and edges that can evolve when the world changes or when the AI learns something new.

What makes this noteworthy is the level of control and context it gives. Graphiti’s knowledge graph isn’t a fixed corpus; it’s grown and pruned on the fly, meaning the AI’s understanding of a domain can update in real time. For example, if a user’s preferences change or a new entity appears in conversation, the graph can be adjusted immediately to reflect that. It’s “temporally aware” memory – aware of history and change – rather than a dumb log. In practical terms, this is like giving the AI a notepad where it can write down new facts or revise old ones as it learns, and organize those notes logically. The memory becomes editable: both the AI (and potentially developers/users) can correct or insert knowledge. This capability is a significant step in addressing the limitation that “an LLM can only do this out of the box for knowledge it was trained on.” In other words, without something like Graphiti, an LLM is largely stuck with pre-trained knowledge and perhaps some retrieved documents. With Graphiti, we inch closer to an AI that can learn continuously.
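
To give a feel for what “temporally aware, editable” memory means, here is a toy sketch in Python. It is not the actual Graphiti API – just an illustration of the core idea: facts carry validity intervals, and asserting new information closes out the old fact rather than erasing it, so the history of what was believed when is preserved.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str
    valid_from: datetime
    valid_to: datetime | None = None  # None means "still believed true"

@dataclass
class TemporalGraphMemory:
    facts: list[Fact] = field(default_factory=list)

    def assert_fact(self, subject: str, relation: str, obj: str) -> None:
        now = datetime.now()
        # Close out any conflicting fact instead of deleting it: history is preserved.
        for f in self.facts:
            if f.subject == subject and f.relation == relation and f.valid_to is None:
                f.valid_to = now
        self.facts.append(Fact(subject, relation, obj, valid_from=now))

    def current(self, subject: str) -> list[Fact]:
        # What the memory believes about this subject right now.
        return [f for f in self.facts if f.subject == subject and f.valid_to is None]

memory = TemporalGraphMemory()
memory.assert_fact("user:alice", "prefers_channel", "email")
memory.assert_fact("user:alice", "prefers_channel", "slack")  # preference changed mid-conversation
print([(f.relation, f.obj) for f in memory.current("user:alice")])  # only the slack fact is current
```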

Synthesizing New Knowledge

This development ties into the broader conversation about whether LLMs can synthesize new knowledge. Some argue that LLMs, given the right prompting or fine-tuning, can indeed combine bits of known information to produce novel insights. (In a previous article questioning whether LLMs can synthesize new knowledge, I demonstrated that an LLM conditioned with hypothetical scenarios could reason out new implications – but also noted the model’s limits when stepping outside its training data.) Others argue that they can’t without knowledge graphs (see Ryan Casey’s article on Probabilistic Knowledge Synthesis). The key challenge is out-of-sample knowledge: how to handle facts or concepts the model didn’t originally train on. Traditionally, there are a few approaches: one is retrieval (RAG) – pulling in external info as needed – and another is fine-tuning the model on new data. Each has downsides: RAG can struggle with truly synthesizing or updating knowledge structures, and fine-tuning is resource-intensive and can cause a model to “forget” old knowledge if not done carefully.

Graphiti’s real-time memory offers a complementary third path. Instead of storing new knowledge in the model’s weights, it stores it in an external knowledge graph that the model can query. It’s like having a living encyclopedia or a persistent memory bank the AI can reference and update at will. This can work hand-in-hand with RAG (indeed Graphiti is described as memory for a “post-RAG agentic world” in some circles). The AI can retrieve from its Graphiti graph just as it would from a vector database, but now the information retrieved is structured knowledge that the AI itself helped build. That might help the AI not only recall facts but also see connections between them, supporting deeper synthesis.
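
A minimal sketch of that retrieve-then-generate loop is below. The hand-rolled fact list and relevance filter stand in for a real knowledge graph and its query layer, and the LLM call itself is omitted since provider APIs vary – the point is simply that structured memory gets serialized into the prompt, and anything the model infers can be written back.

```python
# Toy "knowledge graph" contents: (subject, relation, object) triples the agent
# has accumulated. A real system would query Graphiti or another graph store.
facts = [
    ("acme_corp", "headquartered_in", "Austin"),
    ("acme_corp", "ceo", "Dana Li"),
    ("Dana Li", "previous_company", "Initech"),
]

def retrieve(query: str, k: int = 3):
    # Toy relevance filter: keep triples whose terms appear in the query.
    q = query.lower()
    hits = [f for f in facts if any(part.lower().replace("_", " ") in q for part in f)]
    return hits[:k]

def build_prompt(query: str) -> str:
    # Serialize the retrieved structured facts into the context window.
    context = "\n".join(f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in retrieve(query))
    return f"Known facts:\n{context}\n\nQuestion: {query}\nAnswer using only the facts above."

prompt = build_prompt("Who is the CEO of acme corp and where is it headquartered?")
print(prompt)
# The prompt would then be sent to the LLM; any new facts the model infers can be
# written back into the graph, closing the learn-and-recall loop.
```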

Lamini Memory-Tuning

On the other side of the spectrum, we have approaches like Lamini’s memory-tuning. Lamini’s researchers recently showed you can equip an LLM with a Massive Mixture of Memory Experts (MoME) – essentially fine-tuning many small “expert” model components on factual data, so that the LLM can pull in specific facts when needed. In their work, they report storing millions of discrete facts in a network of memory experts that are “retrieved dynamically” during generation. This is a bit like the model having thousands of mini-extensions, each an expert on a particular piece of knowledge, which it can consult as needed. It’s an internal approach to giving the model new knowledge – baking the facts into the model via fine-tuned adapters (e.g., LoRA modules). One advantage is that the model, with these memory experts, can operate without querying an external database – the knowledge is inside its neural network. A downside is that updating this knowledge means more training (to update the experts), and the process to add or correct one fact is not as straightforward as editing a graph database.
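
To illustrate the mechanism rather than Lamini’s actual code, here is a conceptual NumPy sketch: each “memory expert” is a LoRA-style low-rank correction to a frozen base weight, and only the expert retrieved for the current query gets added into the layer at generation time. The expert library, keyword router, and random weights are all placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 64, 4                      # hidden size, LoRA rank
W_base = rng.normal(size=(d, d))  # frozen base weight matrix of one layer

# Library of "memory experts": each is a low-rank (A, B) correction that would,
# in the real setup, have been fine-tuned to encode one fact or cluster of facts.
experts = {
    "capital_of_france": (rng.normal(size=(r, d)), 0.01 * rng.normal(size=(d, r))),
    "acme_founding_year": (rng.normal(size=(r, d)), 0.01 * rng.normal(size=(d, r))),
}

def select_expert(query: str) -> str:
    # Toy router: keyword match. The real system routes via learned retrieval.
    return "capital_of_france" if "france" in query.lower() else "acme_founding_year"

def effective_weight(query: str) -> np.ndarray:
    A, B = experts[select_expert(query)]
    return W_base + B @ A         # only the retrieved expert modifies the layer

h = rng.normal(size=d)            # some hidden activation passing through the layer
out = effective_weight("What is the capital of France?") @ h
print(out.shape)                  # (64,) – same layer, with fact-specific behavior mixed in
```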

Memory Engineering

So we have two frontier approaches: external dynamic memory (Graphiti) vs. internal memory engineering (Lamini). They aren’t mutually exclusive; in fact, they might be used together in a robust system. But philosophically, they reflect different bets on how LLMs will handle new knowledge. Graphiti’s camp might say: keep a flexible, editable memory outside the model where human developers and the AI can jointly maintain an accurate knowledge base – this avoids retraining and lets the AI’s “mind” consult an external knowledge store much like we consult notes or the web. Lamini’s camp might respond: ultimately, for fluid reasoning, it’s powerful to have the knowledge within the model where it can be deeply integrated into the neural reasoning process – and if we can do that without catastrophic forgetting (using modular adapters and careful tuning), the model itself can hold a growing corpus of facts in its “weights.” Notably, my previous analysis on knowledge synthesis leaned toward the view that fine-tuning an LLM (perhaps with techniques like Lamini’s) could allow it to extend a knowledge base and that “LLMs alone are enough to synthesize new knowledge” when augmented in this way. Graphiti’s existence suggests an alternate view: maybe LLMs plus structured memory are the path to synthesizing knowledge safely and transparently.

For an intelligent but busy observer, the upshot is this: AI systems are learning how to remember and evolve. Real-time memory graphs like Graphiti give AI a kind of working memory and knowledge vault that can grow with each interaction, while memory-tuning methods give AI a way to internalize knowledge without retraining from scratch. Both push the envelope of AI’s ability to learn new things on the fly, moving us closer to LLMs that aren’t stuck in 2023 or 2024 forever, but can truly keep up with a changing world.

Planetary Computation and Freedom Cities

Planetary Computation

Switching gears from AI to the world at large, there’s a big-picture concept gaining traction in tech philosophy: Planetary Computation. Coined and explored by thinkers like Benjamin Bratton, this idea starts with a simple observation – our planet is increasingly enveloped in a computational layer. “The Earth is in the process of growing a planetary-scale technostructure of computation — an almost inconceivably vast and complex interlocking system of sensors, satellites, cables, communications protocols and software.” In other words, all our digital systems, from the internet backbone to smart cities to billions of devices, form a planetary-scale stack of technology. Bratton argues this is not just a technical fact but a philosophical shift: as computing and networking permeate everything, our old ideas of geopolitics, sovereignty, and governance must adapt to this new reality. We’re moving toward a world where cities, nations, clouds, and data are layers in one integrated system – the Stack, as Bratton calls it.

Freedom Cities

Now, how does this connect to startup cities and “freedom cities”? These terms refer to a nascent movement of building new cities (or city-like zones) from scratch, often led by private entities or special coalitions, with the aim of rethinking governance and infrastructure. They are essentially experiments in sovereign urbanism, frequently driven by tech entrepreneurs or political outsiders who feel constrained by legacy regulations. Trump’s proposal to create 10 “Freedom Cities” in the U.S. – pitched as futuristic hubs on federal land – brought the idea into mainstream discussion in 2023. The vision is bold: imagine cities “free from certain federal laws,” where cutting-edge industries (think anti-aging biotech, nuclear tech, AI development) can operate without the usual regulatory friction. Advocates see these zones as sandboxes for innovation and economic growth, while critics worry they circumvent safety and democratic accountability. Either way, what’s happening is a push to reorganize physical governance around a new principle – often termed digital or networked sovereignty.

Many of these startup city projects explicitly draw inspiration from the tech world’s ethos and even its organizational structures. They borrow from ideas like charter cities (cities with their own charter and laws, independent of their host nation’s usual rules) and network states (Balaji Srinivasan’s concept of cloud communities forming new jurisdictions). In fact, the Freedom Cities concept “draws heavily from… semi-autonomous urban zones governed by distinct legal frameworks”, and its intellectual roots are tied to “so-called ‘network states’ advanced by Balaji Srinivasan.” What does it mean to base a city on “digital sovereignty”? It means the city’s identity and governance might originate from a network (say, an online community or a consortium of global investors) rather than from the traditional nation-state it’s geographically in. For example, Próspera in Honduras is a real-life startup city that operates under special legal allowances – it’s backed by international venture capital (including Silicon Valley figures like Peter Thiel) and effectively functions as a tech-driven city-state within a host country. Its leaders are now involved in the Freedom Cities Coalition pushing for similar zones elsewhere. These are cities that see themselves less as subordinate municipal governments and more as autonomous nodes plugged into the global network of capital, talent, and data.

Governance Divorced From Geography

From the lens of Planetary Computation, this is fascinating. If the planet is one big computational system, these new cities are like startup nodes on that planetary grid – consciously designed to interface with global flows of information and money, rather than just local surroundings. They often tout smart infrastructure, digital governance (e.g. e-residency, blockchain-based services), and seamless integration with global markets. In essence, they treat cities as platforms. Just as an internet platform can serve users worldwide, a startup city aspires to host residents and companies from anywhere, under a new governance model that competes with legacy nation-states. This reframes what a city is: not just a place on a map, but a service provider in the cloud of Planetary Computation. We can see the early signs of this reframing in language used by proponents – they talk about cities in terms of software analogies (operating systems for living, upgradable governance, etc.) and emphasize choice: if you don’t like your country’s rules, perhaps in the future you “download” a new city’s app and relocate to a jurisdiction that runs on a different stack of code (laws). It’s a radical departure from the idea that governance is tied to geography alone.

Of course, this trend raises big questions and contrasts. Some view freedom/startup cities as utopian experiments that could unlock human potential; others see them as “a radical blueprint to create tech-driven city-states that challenge the authority of the nation-state itself.” In other words, are they the next Silicon Valley-esque innovation zones, or are they secessionist enclaves for elites? The truth might be a bit of both. But stepping back: the very fact that such cities are being seriously proposed (and in a few cases, built) signals that the planetary stack is shifting. Power is becoming more distributed and tied to technological capability. A city can now be conceived as a node in a network first, and a physical place second. And as Bratton and others note, once you think in terms of planetary-scale systems, you start to entertain governance models beyond the nation-state. These startup cities plug into that idea: they leverage the planetary computation layer (global internet, global finance, global talent mobility) to bypass or “route around” traditional governance like packets in a network finding an alternate path.

In summary, Planetary Computation provides the context that our whole planet is wired up in new ways, and startup/freedom cities are a manifestation of that context – attempts to architect new nodes in the global network with their own rules. It reframes cities as nodes in a planetary stack: each city (especially these new ones) is not an island but part of a connected system, potentially as interdependent and information-rich as servers on the internet. This paradigm sees cities less as purely geographic communities and more as integral components of a worldwide computational governance network. For an innovator or investor, it means opportunities to build new “operating systems” for society at the city scale. For policymakers, it raises challenges about jurisdiction and sovereignty in an era when location matters less than connection. The Idea Frontier here is that the future of cities may lie at the intersection of physical urban planning and digital network design – a frontier where building a city starts to look like building out a node on the planetary web, complete with its own bespoke governance protocols.

Conclusion

From AI agents that intelligently choose their tools, to LLMs that can grow their knowledge in real-time, to cities being reconceived as networked platforms, we’re witnessing a common theme: complex systems finding new ways to organize and adapt. In AI, the move is toward modularity and orchestration – enabling many specialized pieces (tools or agents) to work in concert, hinting at more general intelligence via collaboration. In AI memory, we see a drive to break past static limits – through external graphs or internal experts – giving models a way to continuously learn and update their worldview. And in the realm of society and technology, Planetary Computation and startup cities show how even our physical world is reorganizing around digital connectivity, treating information flows and governance innovation as the new city-building blocks. The through-line is clear: the frontier of innovation lies in connecting the dots – whether between AI tools, knowledge and memory, or cities and the global stack. As busy leaders and thinkers, understanding these patterns helps us see the bigger picture. The Idea Frontier is expansive and dynamic, but by distilling these major shifts, we get a glimpse of the new paradigms forming at the edges – where intelligent systems, be they silicon or urban, push toward greater integration, adaptability, and autonomy.

Sources

  1. https://achan2013.medium.com/how-tool-complexity-impacts-ai-agents-selection-accuracy-a3b6280ddce5
  2. https://aman.ai/primers/ai/RAG/
  3. https://seanfalconer.medium.com/the-future-of-ai-agents-is-event-driven-9e25124060d6
  4. https://chiefmartec.com/2025/01/ai-agents-are-the-new-ipaas-and-the-next-frontier-of-intense-competition-in-digital-ops-orchestration/
  5. https://www.reddit.com/r/LLMDevs/comments/1f8u0xk/graphiti_llmpowered_temporal_knowledge_graphs/
  6. https://ar5iv.org/pdf/2406.17642
  7. https://skillenai.com/2024/09/26/can-llms-synthesize-new-knowledge/
  8. https://www.wired.com/story/startup-cities-donald-trump-legislation/
  9. https://www.linkedin.com/pulse/rise-freedom-cities-robert-muggah-hubgf

By Jared Rand

Jared Rand is a data scientist specializing in natural language processing. He also has an MBA and is a serial entrepreneur. He is a Principal NLP Data Scientist at Everstream Analytics and founder of Skillenai. Connect with Jared on LinkedIn.
