11 Comments

Welcome back! I loved this post so much.

1. Hope your revival of Twitter goes well! Let me know what it's like to be back.

2. Really great treatment of the hallucination problem. I, for one, didn't know that the definitions of hallucination were so broad! And it was interesting to hear how some are espousing the benefits of hallucinations, too.

Author · Apr 24

Thanks for your kind feedback, Zan. Glad you loved it.

Returning to Twitter? Actually, it was my best decision. Still the best place to discuss everything AI-related.


It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Author · Apr 24

It sounds like this approach is making significant strides in understanding consciousness. The real-world application with the Darwin automata is especially interesting. Do you think this theory will soon lead to breakthroughs in creating machines with higher-order consciousness? It's exciting to consider the possibilities!


The "philosophical zombie" robots based on large multi-modal models will probably be here in the next few years. But they won't be biologically conscious like humans. In a very real sense, they won't be any more conscious than a chess program.

The theory and experimental method that the Darwin automata are based on are the way to a conscious machine. In my opinion, machines with "just" primary consciousness will be hard enough, and are more than a hundred years away. Machines with fully developed language (higher-order consciousness) are even further in the future.


Nat is back!

I was surprised to see there's someone quantifying and keeping track of the hallucination rate of each LLM. Curious!
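For anyone wondering what quantifying a hallucination rate might look like in practice, here is a minimal sketch (my own illustration, not any particular tracker's actual method). It assumes a hypothetical `is_supported` factual-consistency judge, such as an NLI entailment model, passed in by the caller:

```python
# Minimal sketch of tracking a per-model hallucination rate:
# check each output against its source document with some
# factual-consistency judge, and report the unsupported fraction.
# `is_supported` is a hypothetical stand-in for that judge.

from typing import Callable, List


def hallucination_rate(
    outputs: List[str],
    sources: List[str],
    is_supported: Callable[[str, str], bool],
) -> float:
    """Fraction of outputs not supported by their source documents."""
    if not outputs:
        return 0.0
    unsupported = sum(
        1 for out, src in zip(outputs, sources) if not is_supported(out, src)
    )
    return unsupported / len(outputs)


# Example usage, with a judge you supply:
# rate = hallucination_rate(summaries, documents, my_nli_judge)
```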

And yes, in creative areas where facts and empirical accuracy aren't that important, hallucinations can definitely be seen as a feature that nudges LLMs out of their somewhat bland "comfort zone."

Fascinating stuff.

Author · Apr 24

Thanks for the valuable feedback, Daniel. I agree with your POV here. I think we're just scratching the surface.

Apr 23 · Liked by Nat

Wow! I had never heard of WebSim before. I thought hallucination was purely a bad thing, but now I see its benefits. It should be allowed, at least to some degree, but we need to keep the balance.

Author · Apr 24

Thanks for reading, Sergio.

Apr 23 · Liked by Nat

Missed you, Nat! Thanks for this fantastic piece!

Author · Apr 24

Thanks, Ann.
