
Podcast Recommendation: Ben Goertzel on TWIML

This is a quick recommendation. I recently watched Ben Goertzel, the CEO of SingularityNET, on the TWIML Podcast. It's a wild ride, galloping through more topics than you would believe could fit into an hour. And there is something interesting and insightful to learn about each of them.

Here is the podcast on YouTube:

The episode also introduced me to Ben Goertzel. I hope he wouldn't mind the comparison, but I'm reminded of the fantasy trope of tinkering Goblins building crazy complicated machines – part engineers, part pragmatic magicians.

Note: When Ben talks about the things he builds, the density of jargon goes up a lot. I don't understand everything I'm putting in the list and I might have made mistakes characterizing some of it; I tried to stick as closely as possible to what he says. Also, don't look at this and decide not to listen to the podcast; there is a lot of stuff in there that is easier to follow and really interesting/thought-provoking.

Here is a list of things he mentions building:

  • AGI: Ben seems to be pretty confident his team might be the first to get to AGI – within the next couple of years. It's quite the claim.
  • The hybrid dialogue engine for the Sophia robot: Ben is among the people responsible for the continuously updated AI architecture of the robot Sophia by Hanson Robotics. It uses a mix of OpenCog, a rule-based chat system and GPT-3, plus some system that allows the various subsystems to have a shared notion of what has been and is going on.
  • Star-Trek-y sounding AI stuff: Ben says he is currently developing "hybrid systems that put together neural net symbolic logic systems and evolutionary learning systems, so sort of neurosymbolic evolutionary systems". This is opaque jargon to me, but as far as I understand it, he wants to combine various existing techniques around complex neural networks and use LLMs (like GPT) as an interpretative layer that helps us make sense of and interact with those complex systems. He also wants to mathematically show that symbolic and neural approaches are really just expressions of the same higher-order thing. There is every chance that I'm mixing together things here that aren't supposed to be mixed together. I'll try to understand this better over the next couple of weeks/months.
  • Generative music models: He is apparently working on generative music models, saying he has nothing as good as MusicLM yet but achieves similar results with less training data. He also wants to make them more creative by "using beam search and making up some mathematical measure of what's interesting? Like you use mutual information or relative entropy or some information theoretic surprisingness measure and then specifically try to milk a continuation out of MusicLM which is a valid continuation with reasonably high probability but also has a high level of surprisingness value, right?" I think I get the drift of that, but to me it's a bit Star-Trek-y again. (I put my rough guess at what such a re-ranking could look like in the first sketch after this list.)
  • Scalable infrastructure & a dedicated AGI board: He (his company) developed a new programming language, MeTTa (Meta Type Talk), in OpenCog Hyperon, as well as a new distributed knowledge graph, to scale up their earlier experiments. There is also a dedicated AGI board being developed in cooperation with a company in Florida. The board apparently features a chip for hypervector manipulations that Ben Goertzel designed for large-scale graph pattern matching, and a deep learning chip, all wired together very tightly with the CPU on the same board. I have no way of knowing if this is spectacular or not. He sounds very happy about it.
  • Developing more sophisticated math for large-scale graph manipulation: This is apparently so they can run the graphs over GPU server farms for massive scaling.
  • Building the real Truth GPT: At one point he says they are following a couple more approaches in parallel, and then describes one of them like this: "one approach is just taking LLMs, extracting all the knowledge from an LLM into a distributed logical knowledge base, doing some symbolic logical inference on there and see if you then get something LLM-like but that can compare its other instances now with its other instances in the past and that can reason about whether its various thoughts and statements are consistent logically coherent with each other, right? And that's not AGI but that would be the real truth GPT." (The second sketch after this list is my toy interpretation of that pipeline.)
  • Building virtual worlds with little agents in Minecraft: He wants the OpenCog Hyperon system to run a simulated little society of agents in Minecraft. It sounds a bit like the Smallville paper, but he seems to be a bit more ambitious, e.g. he hopes his agents will develop their own language.
  • Infrastructure for a global, decentralized AGI roll-out: This is about the idea of launching AGI in a way that is accessible to all humans everywhere at once – and in a way that prevents governments from shutting it down. (Let's ignore whether that is a good idea for now.) SingularityNET apparently has "100 server farms sitting in different countries and pieces of the network underlying this running in all these different places. And these all coordinate together without any one central controller because using blockchain for decentralized coordination."
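
Since the beam-search/surprisingness idea from the music bullet is the one I can almost picture, here is a minimal Python sketch of how such a re-ranking could work under my assumptions: keep only continuations the model still rates as reasonably probable, then prefer the one whose token statistics diverge most (by relative entropy) from the prefix. All names, thresholds and the choice of a crude per-token KL divergence as the "surprisingness measure" are my own guesses, not anything Ben spells out.

```python
import math
from collections import Counter
from typing import Dict, List, Tuple

def token_distribution(tokens: List[int]) -> Dict[int, float]:
    """Empirical distribution over the tokens of a sequence."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {tok: c / total for tok, c in counts.items()}

def relative_entropy(p: Dict[int, float], q: Dict[int, float], eps: float = 1e-9) -> float:
    """KL divergence D(p || q), smoothed so tokens unseen in q don't blow up."""
    return sum(pv * math.log(pv / q.get(tok, eps)) for tok, pv in p.items())

def pick_surprising_continuation(
    prefix: List[int],
    candidates: List[Tuple[List[int], float]],  # (continuation tokens, model log-prob)
    min_logprob: float = -50.0,                 # plausibility threshold, chosen arbitrarily
) -> List[int]:
    """Among continuations the model still finds reasonably probable,
    pick the one whose token statistics diverge most from the prefix."""
    prefix_dist = token_distribution(prefix)
    plausible = [(toks, lp) for toks, lp in candidates if lp >= min_logprob]
    if not plausible:
        # Nothing passes the plausibility bar: fall back to the most probable candidate.
        return max(candidates, key=lambda c: c[1])[0]
    return max(
        plausible,
        key=lambda c: relative_entropy(token_distribution(c[0]), prefix_dist),
    )[0]

# The candidates would come from a beam search over some autoregressive music
# model; here they are just made-up token sequences with made-up log-probs.
candidates = [([1, 1, 2, 1], -12.3), ([4, 7, 9, 11], -31.0), ([1, 2, 1, 2], -9.8)]
print(pick_surprising_continuation(prefix=[1, 2, 1, 1], candidates=candidates))
```

In a real beam search the plausibility filter and the surprise score would be applied at every expansion step, not just once at the end; I kept it to a single re-ranking pass to keep the sketch short.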

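And to make the "real truth GPT" quote slightly less abstract, here is a toy sketch of the general shape I think he is describing: pull statements out of a language model into an explicit knowledge base and then run a (here extremely crude) consistency check over them. The Fact and KnowledgeBase classes and the stubbed-out extraction function are entirely my own illustration and have nothing to do with OpenCog Hyperon's actual machinery.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass(frozen=True)
class Fact:
    """A single statement in subject/predicate/object form."""
    subject: str
    predicate: str
    obj: str
    negated: bool = False

def extract_facts_from_llm(prompt: str) -> List[Fact]:
    """Placeholder for the LLM extraction step. In reality this would prompt a
    model and parse its answers; here it returns hand-written triples so the
    consistency check below has something to chew on."""
    return [
        Fact("water", "boils_at_celsius", "100"),
        Fact("water", "boils_at_celsius", "100", negated=True),  # deliberate contradiction
        Fact("earth", "orbits", "sun"),
    ]

class KnowledgeBase:
    def __init__(self) -> None:
        self.facts: Set[Fact] = set()

    def add(self, fact: Fact) -> None:
        self.facts.add(fact)

    def contradictions(self) -> List[Tuple[Fact, Fact]]:
        """Very crude 'logical coherence' check: a statement and its negation
        both being present. Real symbolic inference would go much further."""
        clashes = []
        for f in self.facts:
            if not f.negated:
                flipped = Fact(f.subject, f.predicate, f.obj, negated=True)
                if flipped in self.facts:
                    clashes.append((f, flipped))
        return clashes

kb = KnowledgeBase()
for fact in extract_facts_from_llm("Tell me facts about the world."):
    kb.add(fact)

for a, b in kb.contradictions():
    print("inconsistent:", a, "vs", b)
```

The interesting (and hard) part, which the sketch completely skips, is the inference step in between: deriving new statements from the extracted ones so that contradictions show up even when they are not stated verbatim.
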
For context

I feel I should also mention some things I learned when doing a bit of research on who this Ben Goertzel person actually is. As mentioned, he is the CEO of SingularityNET, an AI research company. In the podcast, I mostly perceived Ben as a researcher and enthusiast. Some kind of multi-talented AI geek genius, tinkering in his labs.

By now I've learned that he is quite an established persona in the "AI scene" and that his company isn't a tinkerer's project but a serious business that has Ben's ideas and views at its core. To illustrate: this PR video by SingularityNET's COO, Janet Adams, covers a lot of what Ben talks about in the podcast, only in the form of a concise, self-confident boast about three trends to come:

The three trends she identifies are:

  • Rise of generative models: Successful music models and trading models (time series prediction and the development of trading strategies). Ben did not mention generative trading models in the podcast, but the music models seem to be a thing of his. Though it's hard to say whether the music models he talks about in the podcast are the same models Janet Adams talks about in the video.
  • Return of AI to complex neuro-cognitive architecture: She says the great focus on narrow AI is over, and that generative AI gives us greater control over complex architectures and allows us to layer hybrid models atop neural networks, using generative AI as understanders and verbalizers of the meaning of those models, so we are seeing a return to vastly more complex, more powerful AI. This jibes more or less 1:1 with what Ben was talking about in the podcast.
  • Rise of neuro-symbolic AI: She says that for the first time "the deep techy mathematicians come together with the deep neural network practitioners to create all new all more powerful artificial intelligence techniques, bringing neuro symbolic, bringing dynamic knowledge graphs, bringing advanced probabilistic logic, to bring reasoning, to bring intelligence to these models."

Now, it doesn't make Ben Goertzel a bad man that he talks about and seemingly believes in what his company does. But I think it is important to realize when a tech CEO speaks about things that are at the core and center of their company.

I also looked at Ben's Twitter. My current impression: within the confines of the AI scene, he is quite the public figure. From what I have perceived of his public persona so far, he seems to be some kind of opinion leader in a movement that is very strongly rooted in the open-source community. Please take this with a grain of salt; I'm not very familiar with the multitude of subcultures in the larger AI community. I'm new to all of this. Suffice it to say that Ben seems to belong in some ideological corner or another. It's probably worth listening to other perspectives as well.