Building an artificial brain to power the metaverse

The first ideas that would later become the intelligent environment Atlas appeared in 2014. We were discussing the monumental success of virtual worlds in gaming, centering on three aspects. The worlds were modular, the experiences collaborative, and the possibilities infinite.

How might we apply these principles to simulating real life?

Originally, this seemed like an excellent idea for a game or virtual reality platform; many have attempted it, with varying success. The prospect was stirring enough that we discussed it again and again, trying to pin down what parameters would make such a complex virtual simulation possible.

We envisioned a vast collaborative network of developers and users populating an interconnected virtual world, with fully compatible objects created on the fly and usable in any world. A network comparable to the Internet itself, in three dimensions. Such a simulation would require a unique 3D engine designed specifically for the task, so we set out to build that engine, which we named Gaia, after Mother Earth.

We quickly realized that fully modular worlds would require a fully modular engine. The implications went deep: modular editor windows and tools, and therefore modular parameters within the engine itself. Modularity at that depth was unheard of in game engines; it is really a property of the code itself.
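To make that concrete, here is a rough, hypothetical sketch of what we mean by modular parameters. The names and structure are illustrative only, not Gaia's actual code: every engine setting is registered as a named entry that editor tools and scripts can discover and change at runtime.

```python
# Hypothetical sketch of a modular parameter registry; names and structure
# are illustrative and not taken from the Gaia/Atlas codebase.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class Parameter:
    name: str          # e.g. "physics.gravity"
    value: Any
    on_change: List[Callable[[Any], None]] = field(default_factory=list)


class ParameterRegistry:
    """Engine-wide registry that editor tools can discover and modify."""

    def __init__(self) -> None:
        self._params: Dict[str, Parameter] = {}

    def register(self, name: str, value: Any) -> None:
        self._params[name] = Parameter(name, value)

    def set(self, name: str, value: Any) -> None:
        param = self._params[name]
        param.value = value
        for callback in param.on_change:   # notify any tool listening
            callback(value)

    def get(self, name: str) -> Any:
        return self._params[name].value


# A module (say, a physics plugin) registers its parameters at load time,
# and any editor window can list and tweak them without hard-coded UI.
registry = ParameterRegistry()
registry.register("physics.gravity", -9.81)
registry.set("physics.gravity", -3.7)  # e.g. switch a world to Mars gravity
```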

The lack of a proper editing interface posed a significant challenge. We wanted users of all experience levels to have as much customizability as possible. So we went to work on a Natural Language system that would let users interact with the engine directly.
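As an illustration only, and far simpler than anything we actually built, a natural-language editing layer can be thought of as a mapping from parsed intent to engine calls:

```python
# Toy example of natural-language editing, not the actual Gaia/Atlas system.
import re


def handle_command(text: str, params: dict) -> str:
    """Parse a simple editing command and apply it to the world's parameters."""
    match = re.match(r"set (?P<name>[\w.]+) to (?P<value>[-\d.]+)", text.lower())
    if match:
        params[match["name"]] = float(match["value"])
        return f"{match['name']} set to {match['value']}"
    return "Sorry, I did not understand that."


world = {"physics.gravity": -9.81}
print(handle_command("set physics.gravity to -1.62", world))  # lunar gravity
```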

This, in turn, created a new problem: to modify the world with language, the objects in the world needed a certain standardization, abstracting them beyond mere objects. We created the Concept Container, a wrapper that allows anything to be described and interacted with inside the virtual worlds.
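As a hypothetical sketch (the field names are ours, for illustration, not the actual Atlas data model), a Concept Container pairs an underlying object with a language-facing description and a set of actions, so that everything in the world exposes the same interface:

```python
# Hypothetical sketch of a "Concept Container"; illustrative only.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict


@dataclass
class ConceptContainer:
    name: str                                   # how language refers to it
    payload: Any                                # the wrapped engine object
    properties: Dict[str, Any] = field(default_factory=dict)
    actions: Dict[str, Callable[..., Any]] = field(default_factory=dict)

    def describe(self) -> str:
        props = ", ".join(f"{k}={v}" for k, v in self.properties.items())
        return f"{self.name}: {props}"

    def act(self, verb: str, **kwargs) -> Any:
        return self.actions[verb](self.payload, **kwargs)


# Any object, from a door mesh to a bank account, gets the same interface.
door = ConceptContainer(
    name="door",
    payload={"angle": 0.0},
    properties={"material": "oak", "locked": False},
    actions={"open": lambda obj, degrees=90.0: obj.update(angle=degrees) or obj},
)
print(door.describe())
door.act("open", degrees=45.0)
```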

By 2017, we had done it. The system for complex simulation with natural language interaction was designed (though at that point far from fully built). But a new idea had slowly permeated the entire process. Language and simulation are deep-rooted cognitive processes that make up our everyday intelligence, are they not?

With two years of hard work behind us, we ended up with more questions than answers. We moved from designing and building a simulation engine to testing and pushing the boundaries of its possibilities. This engine would be smart, very smart. Smart enough to understand descriptions of how banks work and how molecules interact, and to communicate the results of simulations back to the user. In thinking up applications for the engine well beyond games and virtual reality, we realized the full potential of the system we had devised.

We slogged on, chipping away at the bedrock, trying to find the diamond that was Artificial Intelligence. For three years, the completed VR engine was set aside, and all work was devoted to AI. We refined the language and refined the Containers to fit the language. We read for hundreds of hours, attempting to take in the full conversation surrounding AI. The work we did to push the technology forward strained the entire team and our families.

If we dug a little deeper into language, a little further into AI, maybe the product would reach its full potential. As 2019 ended, we completed a viable version.

The unit of AI in the system we built is the Agent: a piece of code tasked with searching or interacting with the simulated environment, moving through it and gathering the information relevant to its goal while discarding the rest.
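Sketched with an invented interface rather than Atlas's real one, the idea looks roughly like this: an Agent carries a goal, walks the simulated environment, and keeps only the observations relevant to that goal.

```python
# Invented interface for illustration; not the actual Atlas Agent API.
from typing import Any, Callable, Iterable, List


class Agent:
    """Walks a simulated environment, keeping observations relevant to a goal."""

    def __init__(self, goal: str, is_relevant: Callable[[Any], bool]) -> None:
        self.goal = goal
        self.is_relevant = is_relevant
        self.findings: List[Any] = []

    def explore(self, environment: Iterable[Any]) -> List[Any]:
        for observation in environment:
            if self.is_relevant(observation):   # keep only what serves the goal
                self.findings.append(observation)
        return self.findings


# Example: an agent searching a toy "world" of concept descriptions.
world = ["door: oak, unlocked", "account: balance 120", "molecule: H2O"]
agent = Agent(goal="find financial concepts",
              is_relevant=lambda obs: "account" in obs or "balance" in obs)
print(agent.explore(world))
```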

The Agents became the user interface for accessing, interacting with, modifying, and reading data from the simulation. The two core features of the Gaia engine, simulation and language, remained the same, but the environment was new. Instead of Virtual Worlds, we now spoke of Contexts and Domains; instead of objects and actors, of Concepts and Agents. The name Gaia was scrapped, and the engine was renamed Atlas, after the Titan holding up the world.

As the primary Agent, Atlas is a localized brain for your computer. It learns about concepts you describe to it and simulates their interaction. It has no data-driven intelligence, no cloud computing. It is fully compatible with any AI and any code. It’s swift and scales indefinitely.

It is intelligence for software.

S. Tricha
