13 Comments
May 22, 2023

Fantastic! It is thorough, organized, and articulates complex concepts in a digestible manner, making it accessible to a broad audience. Additionally, your personal insights and reflections bring depth and relatability to your piece.

author

Thanks for your kind feedback, dear David. Appreciate it.

Very clear writing and a good review of an interesting book! I have an immense amount of respect for Jeff Hawkins--both as an entrepreneur and, even more significantly for me, as a neuroscientist who has pushed the field and used his own resources to help us all have a better understanding of the brain.

I find the Thousand Brains Theory of Intelligence quite compelling and intuitive; Hawkins argues for it cogently (and you do a good job summarizing those arguments) and I'm particularly glad that he's written a book that is accessible for a non-expert audience.

My priors on AGI are that we don't need to understand intelligence to create intelligent machines. As you said, the advancements in LLMs have come not by taking the lessons of neuroscience, but by creating a more generalizable architecture. I guess I'd say that the claim that we must fully understand intelligence in order to create AGI is a bit like the Hard Problem of Consciousness--an interesting line of debate, one that may be settled, hopefully in my lifetime, but which for now is mostly the domain of speculation and evidence that is more correlative than anything else.

author

Thank you for your kind words! I’m delighted that you enjoyed my review of A Thousand Brains. Jeff Hawkins is indeed a fascinating figure and his work on the Thousand Brains Theory of Intelligence is impressive. It’s a compelling and intuitive explanation of how the brain works and I’m eager to see how it evolves in the future.

I concur that we don’t need to fully comprehend intelligence to create intelligent machines. As you mentioned, advancements in LLMs have come from creating a more generalizable architecture, rather than taking lessons from neuroscience. This is an important point that is often overlooked in the AGI debate.

The Hard Problem of Consciousness is also intriguing and may never be fully resolved. Nonetheless, it’s crucial to continue exploring this question and I’m optimistic that we’ll make progress in the future.

Thank you again for your comment! It was a pleasure hearing from you.

🙏

Very well written post, Nat - and very interesting thoughts! I agree that even if we could make true AGI, it would have to learn about the world in ways very similar to how the rest of us learn about it. Did you see the movie Chappie - where a suddenly AGI-enabled robot tries to grow up very fast in a tough environment (essentially Pinocchio as a robot)?

author

Dear Jannick, thank you for your feedback. It means a lot to me. I did see that movie 🙂 By the way, have you seen the TV show Humans? It’s a wonderful drama about sentient synthetic robots.

Not yet - I will check it out 😎

How different is this book from his previous one "On Intelligence"?

author

Thanks for the comment.

In his previous book, On Intelligence, Jeff Hawkins explored the idea of learning and prediction. He called this idea the "memory-prediction framework" and wrote about the implications of thinking about the brain in this way. He argued that by studying how the neocortex makes predictions, we would be able to unravel how the neocortex works.

Today, Hawkins no longer uses the phrase "the memory-prediction framework." Instead, he describes the same idea by saying that the neocortex learns a model of the world and makes predictions based on that model.

What about connecting multiple LLMs together? Each one feeding inputs to some of the others and then being fed outputs from others. Sort of an interconnected network. I feel the whole thing would then come "alive".
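
As a rough illustration of the idea (not something the post itself proposes), here is a minimal Python sketch of a few LLM "modules" wired into a feedback loop. The `query_llm` function is a hypothetical stub standing in for whatever real LLM API you would call, and the node names and topology are made up for the example:

```python
# Minimal sketch: a handful of LLM "modules" feeding each other's outputs
# back in as inputs. query_llm is a hypothetical stub, not a real API.
from typing import Dict, List

def query_llm(name: str, prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. any chat-completion API)."""
    return f"[{name} responding to: {prompt[:40]}...]"

class Node:
    def __init__(self, name: str, peers: List[str]):
        self.name = name
        self.peers = peers        # names of nodes whose output feeds this one
        self.last_output = ""

def run_network(nodes: Dict[str, Node], seed: str, rounds: int = 3) -> None:
    # Round 0: every module responds to the same seed prompt.
    for node in nodes.values():
        node.last_output = query_llm(node.name, seed)
    # Later rounds: each module responds to its peers' previous outputs,
    # updated simultaneously so every node sees the same round's state.
    for _ in range(rounds):
        fresh = {
            node.name: query_llm(
                node.name,
                "\n".join(nodes[p].last_output for p in node.peers),
            )
            for node in nodes.values()
        }
        for name, out in fresh.items():
            nodes[name].last_output = out

# A small, arbitrary topology: A listens to B and C, B to A, C to A and B.
nodes = {
    "A": Node("A", peers=["B", "C"]),
    "B": Node("B", peers=["A"]),
    "C": Node("C", peers=["A", "B"]),
}
run_network(nodes, seed="Describe what you perceive.")
for node in nodes.values():
    print(node.name, "->", node.last_output)
```

In practice each node could be a different model, or the same model given a different role prompt; whether feedback loops like this would produce anything "alive" is, of course, an open question.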

I feel the idea of building 'consciousness' is key to AGI. And I also believe 'consciousness' is an emergent phenomenon, which means many of these building blocks have to come together and interact in some way for this 'consciousness' to emerge.

Hence, I think one LLM module or some other module cannot bring this about on its own. We need multiple interacting pieces here.

author

The concept of machine consciousness is something we're not ready to deal with; at this stage we can only speculate. Should machine consciousness come to fruition, it could raise ethical, philosophical, and practical issues that humanity has never before grappled with. The ethical implications, such as the rights and responsibilities of a conscious machine, could revolutionize our understanding of agency and morality. It's too early to talk about it. Remember, our society wasn't ready for cryptocurrencies, we weren't ready for Web3, and obviously we're not ready to seriously discuss machine consciousness. I wrote a note about it a few minutes ago: https://substack.com/profile/23714646-nat/note/c-16438674?utm_source=notes-share-action

Very well written piece, Nat.

I recently came across an article by Yann LeCun, who has similar thoughts: merely learning all the text humans have written is not enough to create an AGI; we need to mimic the brain. https://www.noemamag.com/ai-and-the-limits-of-language/
