The Challenge of Building Morally Responsible Machines
Consciousness: The Unseen Component in the Quest for Ethical AI
Don’t assume that I’m swept up in mysticism or that I’m an avid reader of daily horoscopes. That’s not the case. However, with the advent of ChatGPT, humans have begun to utilize it extensively in various fields. Chatbots have even conducted masses and actively participated in various religious activities. Amidst this chaos, I found myself pondering what kind of oracle ChatGPT could be. I wondered how effectively ChatGPT could interpret tarot cards and decipher the meaning of each card. This led me to conduct a series of small experiments. During one such experiment, a tarot card relayed the following message to me:
“Someone who is no longer in your life will assist you. Be grateful for the guides who illuminate your path, and remember to do the same for others when your time comes.”
Although I enjoyed reading the Apocrypha during my student years, I never paid much attention to such matters. However, when I asked ChatGPT to decode this message, it surprised me by explaining every detail with remarkable precision:
Someone from the Past: The mention of someone no longer in your life aiding you could hint at past connections or lessons from the past that will benefit you in the future. It might be symbolic, suggesting that past experiences or insights gained from your time away will help guide your approach if you decide to return.
Suddenly, a surge of thoughts began to cascade through my head. I found myself detached from reality, immersed in the vast stadium of my mind. It felt as though my neurons were skillfully playing football, with the phrase “Someone who is no longer in your life” serving as the ball. It was as if the neurons were passing this phrase ball to each other, each pass acting as a signal. “Who?” “Who is the tarot card referring to?” I pondered, attempting to trace the trajectory of my own thoughts.
But it wasn't just the 'football neurons' at play. Surrounding them, 'fan neurons' echoed their reactions, especially to missteps.
Other neurons, akin to fervent fans, buzzed with excitement, transmitting signals to their preferred counterparts. “Pass it to me,” one neuron seemed to urge, discharging an electrical impulse with each metaphorical call.
ENTR'ACTE
😍 Oh, the process of a neuron firing (also known as an action potential) is a fascinating sequence of events involving electrical impulses and neurotransmitters. The whole thing is as beautiful and complex as Baroque music. Once a neuron's membrane potential reaches a certain threshold, akin to earning a pass in our metaphor, an action potential is triggered. This initiates an electrical signal that travels down the axon, much like a ball being kicked down the field in a football game. In our brain-as-a-stadium analogy, the axon is the path along which the ball, or signal, travels. The neuron, acting like a football player, sends the signal down its axon to reach other neurons. The signal is then relayed to other neurons or muscles, eliciting a response.
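The threshold-then-fire behaviour described above can be caricatured in a few lines of code. The sketch below is a toy leaky integrate-and-fire model, a standard textbook abstraction of a neuron; the constants, inputs, and function name are illustrative assumptions, not physiologically calibrated values.

```python
# Toy leaky integrate-and-fire neuron: a minimal sketch of the
# threshold-then-fire behaviour described above. All constants are
# illustrative, not physiological.

def simulate_lif(input_current, threshold=1.0, leak=0.1, reset=0.0):
    """Integrate input over discrete time steps; emit a 'spike'
    whenever the membrane potential crosses the threshold, then
    reset — the electrical 'pass' in our stadium metaphor."""
    potential = 0.0
    spikes = []
    for t, current in enumerate(input_current):
        potential += current - leak * potential  # integrate input, with leak
        if potential >= threshold:               # threshold reached:
            spikes.append(t)                     # action potential fires
            potential = reset                    # membrane resets
    return spikes

# A weak, constant drive never reaches threshold (the leak wins);
# a stronger drive makes the neuron fire periodically.
weak = simulate_lif([0.05] * 50)
strong = simulate_lif([0.3] * 50)
print(weak, strong)
```

With the weak drive the potential settles below threshold and no spike ever occurs; the stronger drive produces a regular spike train, which is exactly the all-or-nothing character of the action potential.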
As I made my way through these chaotic waves of thought, I felt a deeply personal and incredibly subjective experience unfolding. I looked deeper into my thoughts, metaphorically knocking on the door of each active neuron in an attempt to unravel the mystery that had led me on this journey into an extraordinary world. I yearned to uncover the mysterious circuit of neurons where consciousness, like Aphrodite, sprang to life.
Consciousness... Perhaps this unknown mental galaxy is just an illusion, but it's fascinating to think that a simple message unveiled by the Major Arcana has swept me into a vortex of unfamiliar thoughts.
The intriguing relationship between physics and consciousness has been a subject of interest for many thinkers. Max Tegmark, a renowned physicist, cosmologist, and machine learning researcher, elegantly touches upon this in his article, 'Substrate-Independence'. He posits:
If the information processing itself obeys certain principles, it can give rise to the higher-level substrate-independent phenomenon that we call consciousness.
“Substrate-independent phenomenon” — bull’s-eye!
However, the exact point at which consciousness or a subjective experience arises in the process of neuronal firing is still a topic of ongoing research and debate, since it is often associated with the brain’s ability to integrate information from different sources into a unified whole. This suggests that consciousness might emerge from the complex interactions between neurons rather than from the activity of individual neurons.
Some theories propose that consciousness arises when neurons fire in a synchronized manner. This synchronization could potentially create a unified perception of reality, which we experience as consciousness.
📌 However, it’s important to note that these are theories and hypotheses based on our current understanding, and the exact mechanisms underlying consciousness are still not fully understood.
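The synchronization hypothesis can at least be illustrated computationally. The sketch below is a toy Kuramoto model, a classic abstraction of coupled oscillators standing in loosely for rhythmically firing neurons. It demonstrates synchronization only and implies nothing about consciousness itself; the parameters and seed are arbitrary choices for the illustration.

```python
import math
import random

def kuramoto(n=50, coupling=2.0, steps=2000, dt=0.01, seed=0):
    """Toy Kuramoto model: n oscillators with random natural
    frequencies pull one another's phases together when the
    coupling is strong enough. Returns the order parameter r
    in [0, 1]: r near 0 means incoherent firing, r near 1 means
    the population oscillates in near-unison."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n)]
    freqs = [rng.gauss(0.0, 0.5) for _ in range(n)]
    for _ in range(steps):
        mean_sin = sum(math.sin(p) for p in phases) / n
        mean_cos = sum(math.cos(p) for p in phases) / n
        r = math.hypot(mean_cos, mean_sin)      # current coherence
        psi = math.atan2(mean_sin, mean_cos)    # mean phase
        # Each oscillator drifts at its own frequency but is pulled
        # toward the population's mean phase, scaled by coupling * r.
        phases = [p + dt * (w + coupling * r * math.sin(psi - p))
                  for p, w in zip(phases, freqs)]
    mean_sin = sum(math.sin(p) for p in phases) / n
    mean_cos = sum(math.cos(p) for p in phases) / n
    return math.hypot(mean_cos, mean_sin)

r_weak = kuramoto(coupling=0.1)    # below critical coupling: incoherent
r_strong = kuramoto(coupling=2.0)  # above it: the population locks together
print(f"weak coupling r={r_weak:.2f}, strong coupling r={r_strong:.2f}")
```

The point of the toy is only this: global order can emerge from purely local interactions once coupling crosses a threshold, which is the kind of mechanism synchronization theories gesture at.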
After emerging from the roiling tempest of enthrallingly provocative ideas, I realized that I was grappling with the most challenging problem of our times. And yes, consciousness, defined in simple terms as awareness of internal and external existence, is everything you experience. As Susan Schneider aptly pointed out:
Conscious experience is the felt quality of your mental life.
❤ It’s the memory that makes you smile, the touch of a loved one that feels like a warm summer breeze, and the taste of a delicious strawberry cake that triggers your body to release a surge of opioids and dopamine, stimulating the reward center of your brain. It’s the pain you feel when you fall off a bicycle.
The origin and essence of these experiences, often referred to as qualia, have been a mystery since antiquity and continue to be so today. We’re talking about perhaps the most complex and mesmerizing puzzle that has captivated our minds for ages. Galileo, in his time, even noted that colors, tastes, and smells “are no more than mere names…they reside only in consciousness.”
You may well share my sentiment that we are more than just an assembly of billions of neurons. This collective mind, the distributed intelligence that drives all progress and stands as evolution's glorious invention, is much more than a neurotransmitter dance or electrical oscillations in the prefrontal cortex.
In this article, we set sail on a voyage to decipher the mystery of consciousness, immersing ourselves in the profound human experiences it unveils. We will also journey into the fascinating world of machine consciousness, offering intriguing insights at every turn.
⏳ Historical Perspective of Consciousness
Historically, consciousness was considered a uniquely human trait. As early as the 17th century, philosopher René Descartes famously declared, “I think, therefore I am,” implying that only humans possessed the capacity for thought and consciousness. He viewed animals as simple machines, driven solely by instinct. However, contemporary scientific consensus acknowledges that animals too can exhibit consciousness and complex inner lives, with studies even demonstrating moral behavior among rats.
This evolution in understanding is part of a broader narrative of humanity’s quest to comprehend consciousness. Early burial practices suggest that even our ancient ancestors had some grasp of it. Yet, some argue that the consciousness we recognize today didn’t fully develop until after the era of the ancient Greeks. They propose that people of that time didn’t perceive themselves as the thinkers of their thoughts or the doers of their actions in the way we do now.
The discourse was significantly deepened by philosophers such as G.W. Leibniz and Immanuel Kant, who brought forth concepts like varying degrees of consciousness, unconscious thoughts, and the differentiation between perception and apperception. Yet, despite these strides, the enigma of how consciousness springs forth from the brain continues to be elusive.
The 19th century saw scientists studying consciousness through introspection, while the 20th century marked a shift in focus towards observable behavior in the United States and continued exploration of consciousness in Europe. The emergence of cognitive psychology in the 1960s brought attention to information processing and thinking but left consciousness on the periphery.
It was only in the late 20th century that consciousness regained prominence in scientific and philosophical circles. Today, it remains a complex and captivating subject, with dedicated journals and societies contributing to its ongoing exploration. In essence, our curiosity about consciousness has persisted throughout history, leading to an evolving understanding of this delicate phenomenon.
DID YOU KNOW?
Consciousness is a vast and complex phenomenon, with many different states that we can experience. Some of these states include:
Dreams
Hallucinations
Hypnosis
Meditation
Sleep
States induced by psychoactive drugs
Unraveling the Layers of Consciousness: From Sensory Perception to Self-Awareness
Todd E. Feinberg, a Clinical Professor of Psychiatry and Neurology at the Icahn School of Medicine at Mount Sinai, New York and Jon M. Mallatt, Clinical Associate Professor in the WWAMI Medical Education Program at the University of Washington and the University of Idaho, suggest that the ability to form images or internal representations is a crucial aspect of primary consciousness.
Vision, which might have been the first sense to evolve, could have paved the way for creating mental maps of the external environment, forming the basis for sensory consciousness. Creatures like lampreys, which are similar to eels, might have been among the early vertebrates with such brains. All fish, amphibians, reptiles, birds, and mammals can create these mental maps in their brains.
As brains evolved to become more complex and interconnected, sensory consciousness became more enriched. It started to include not just representations of the external environment, but also of the self—a mental map of the body and its actions.
Feinberg and Mallatt identify three types of consciousness:
Exteroceptive awareness: This is awareness of the external world. It’s based on forming mental images or representations of what’s outside.
Interoceptive awareness: This is awareness of the animal’s internal state, like sensations from internal organs.
Affective awareness: This is basically what an animal likes or dislikes—the assignment of value to an experience. It forms the basis of emotion in higher mammals. Even early vertebrates like lampreys that have image consciousness have likes and dislikes, indicating they have affective consciousness.
Consciousness: A Mystery with Evolutionary Roots
Even the simplest single-celled organisms arguably display a rudimentary form of awareness: they can distinguish themselves from everything else, which helps them find food and avoid harm. But without a nervous system, their reactions are simple and automatic.
Multicellular organisms required a system to communicate signals between different parts of their bodies, which led to the evolution of nervous systems.
The precursors to the brain
The first signs of something like a brain started to appear in worm-like creatures about 520 million years ago. These early brains were clusters of neurons (nerve cells) that could process information from different senses and store memories.
As life on Earth became more complex, so did brains. Predators needed sharp senses and quick reactions to catch their prey, while prey needed the same to avoid being caught. This led to an “arms race” in evolution, with brains becoming bigger and better over time.
This competition drove the development of more complex senses, like eyes for seeing and organs for smelling and tasting. And to handle all this new information, brains had to get more complex too. They started to form internal images or representations of the world around them. This was the beginning of what we might call consciousness.
After a brief overview of the historical perspective, it is interesting to see what perspective neuroscience offers regarding this phenomenon.
DID YOU KNOW?
The 5 Levels of Consciousness are:
Conscious: Everything you are aware of
Preconscious: Information that you’re not actively thinking about right now, but you can easily recall if needed
Unconscious: Memories that are not within your immediate awareness and cannot be easily accessed or recalled
Non-conscious: Bodily functions that happen automatically, without you being aware of them or feeling them
Subconscious: Information that is outside your current awareness but still shapes your thoughts and behavior, and can often be brought into consciousness with effort
🧠 Modern Neuroscience and Consciousness
Consciousness is a singular of which the plural is unknown.
- Erwin Schrödinger
In his fascinating book, ‘A Thousand Brains: A New Theory of Intelligence,’ Jeff Hawkins takes the reader on an enlightening exploration of consciousness from a neuroscience perspective. The ninth chapter of the book begins with an intriguing opening:
Most neuroscientists don’t talk much about consciousness. They assume that the brain can be understood like every other physical system, and consciousness, whatever it is, will be explained in the same way. Since there isn’t even an agreement on what the word “consciousness” means, it is best to not worry about it.
Contrary to many experts, Hawkins asserts that consciousness is not beyond explanation. His groundbreaking work, the ‘Thousand Brains Theory,’ proposes physical explanations for various facets of consciousness. For instance, he suggests that our brain’s method of learning models of the world is closely linked to our sense of self and the formation of beliefs. Furthermore, he provides an insightful explanation of Awareness and Qualia through the lens of neuroscience.
There are neuroscientists who study consciousness and their views can be seen as a spectrum. On one end, some believe that consciousness might be beyond normal scientific explanation. They study the brain’s neural activity that correlates with consciousness, but they don’t believe this activity can fully explain it. They suggest that maybe consciousness can never be understood, or perhaps it’s created by quantum effects or undiscovered physics laws.
However, Hawkins disagrees with this view. He believes that everything can eventually be understood, and history has shown that things initially thought incomprehensible often turn out to have logical explanations. He argues that if a scientist claims consciousness can't be explained by neural activity, they should provide evidence to support that claim.
On the other end of the spectrum are neuroscientists who believe that consciousness can be understood like any other physical phenomenon. They argue that if consciousness seems mysterious, it's only because we don't yet understand the mechanisms, or perhaps aren't approaching the problem correctly. Hawkins and his colleagues share this view.
One such scientist is Michael Graziano from Princeton University. He suggests that a specific region of the neocortex models attention, similar to how other regions model the body. He proposes that this model of attention makes us believe we are conscious, just as the brain's model of the body makes us believe we have body parts. Hawkins appreciates this approach and believes that if Graziano's theory is correct, the model is likely built using grid-cell-like reference frames.
Jeff Hawkins’ ‘A Thousand Brains: A New Theory of Intelligence’ is arguably one of the greatest books ever written. It holds a special place in my heart as the first book I translated from English into Georgian, which is why I opened this section with insights gleaned from it.
Now, let’s broaden our perspective and explore other viewpoints on consciousness within the realm of neuroscience:
Over the past few decades, neuroscience has made significant strides in studying the biological basis of consciousness. Advanced neuroimaging techniques, such as functional magnetic resonance imaging (fMRI) and positron emission tomography (PET), have allowed researchers to observe brain activity in real-time and identify patterns associated with various conscious experiences.
The neural correlates of consciousness (NCC) are specific areas or patterns of brain activity that correspond with conscious experiences. While understanding the NCC is crucial, it doesn't fully explain the nature of consciousness. Merely pinpointing where in the brain a particular experience occurs doesn't reveal why or how that experience arises.
Several theories have emerged attempting to explain the phenomenon:
Integrated Information Theory (IIT): Proposed by neuroscientist Giulio Tononi, this theory suggests that consciousness arises from the integration of a vast amount of information. A system is conscious if it possesses a high degree of both differentiation (it can adopt many possible states) and integration (its components interact).
Global Workspace Theory (GWT): According to this theory, consciousness results from widespread information sharing across various brain networks. Only information that is broadcasted across the whole brain – entering the 'global workspace' – is consciously experienced.
Orchestrated Objective Reduction (Orch-OR): Proposed by mathematician Sir Roger Penrose and anesthesiologist Stuart Hameroff, this theory combines aspects of quantum mechanics with neural processes. It suggests that consciousness originates at the quantum level within microtubules in brain neurons.
While these theories provide intriguing perspectives, no single explanation has been universally accepted. The complex relationship between brain function and consciousness continues to challenge scientists, philosophers, and thinkers worldwide.
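Of the three theories above, Global Workspace Theory lends itself most readily to a computational caricature. The sketch below is a deliberately minimal, hypothetical rendering of its central idea: specialist modules compete, and only the winning content is broadcast back to everyone and becomes globally available. The module names and salience scores are invented for illustration and are not part of any actual GWT implementation.

```python
# Toy caricature of Global Workspace Theory: specialist modules post
# candidate contents with salience scores; only the winner is broadcast
# to every module and becomes "consciously" available. All names and
# numbers are illustrative assumptions.

class Module:
    def __init__(self, name):
        self.name = name
        self.received = []          # broadcasts this module has seen

    def receive(self, content):
        self.received.append(content)

def global_workspace_cycle(modules, candidates):
    """candidates: list of (source_module, content, salience) tuples.
    The most salient content wins the competition and is broadcast
    to all modules; the losers stay 'unconscious'."""
    winner = max(candidates, key=lambda c: c[2])
    for m in modules:
        m.receive(winner[1])
    return winner[1]

modules = [Module(n) for n in ("vision", "audition", "memory", "motor")]
broadcast = global_workspace_cycle(modules, [
    ("vision", "red light ahead", 0.9),
    ("audition", "background hum", 0.2),
    ("memory", "red means stop", 0.6),
])
print(broadcast)   # the single content made globally available
```

The design point the toy captures is the bottleneck: many contents are processed in parallel, but only one at a time enters the workspace and becomes reportable, which is GWT's proposed signature of conscious access.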
As we explore these frontiers of neuroscience and consciousness, new ethical dilemmas come into view. Imagine a future where machines, equipped with sophisticated neural networks, exhibit characteristics synonymous with consciousness. This poses a critical question: would such machines deserve moral consideration?
NOTA BENE
Changes in consciousness can sometimes be a sign of medical conditions or they may even be a sign of an immediate medical emergency. Sudden changes in consciousness might be a sign of:
Aneurysm; Brain infections; Brain tumor or injury; Dementia or Alzheimer's disease; Drug use; Epilepsy; Heart disease; Heatstroke; Lack of oxygen to the brain; Low blood sugar; Poisoning; Shock; Stroke
If you or someone you know experiences sudden changes in consciousness, it's imperative to seek immediate medical attention.
📜 Panpsychism: Bridging the Gap Between the Physical and Mental Realms
Scientists and philosophers have recently returned to panpsychism, the theory that consciousness is a fundamental feature of the universe. A Scientific American article surveyed the arguments around this idea. The view, historically supported by figures like Plato, William James, and Bertrand Russell, is receiving renewed attention thanks to philosopher Philip Goff's book "Galileo's Error."
The key points of the article include:
Panpsychism's Premise: Consciousness is inherent in everything, making it a basic property of the universe.
Pros of Panpsychism: It circumvents the issue of how inanimate matter evolves into a conscious mind by suggesting that consciousness has always been present. Prominent figures like David Chalmers and Christof Koch support this view, with Chalmers suggesting that even particles might possess a primitive form of consciousness.
Mainstream View: Most scholars in the fields of physical sciences and philosophy regard consciousness as an emergent phenomenon. This stance posits that individual neurons are not conscious, but collective interactions of billions of neurons result in consciousness.
Critiques of Panpsychism: Detractors argue that panpsychism does not clarify how smaller conscious components combine to form more complex conscious entities, labeling this the "combination problem". Others question its explanatory power and its ability to produce testable hypotheses.
Alternative Theories: The article also mentions cosmopsychism (the universe itself is conscious) and psychological ether theory (brains harness existing consciousness rather than produce it).
Ethical Implications: Goff contends that our understanding of consciousness has ethical ramifications, for instance, in addressing the issue of animal pain and suffering.
Differing Stances: There’s a clear division among scholars. While some argue physicalism has failed to adequately explain consciousness, others, like neuroscientist Anil Seth, assert that it remains a viable and empirically grounded theory.
In summary, the debate around panpsychism is gaining traction, yet there remains a considerable division among scholars regarding its validity and implications. While some are intrigued by the possibility of universal consciousness, others find the theory lacking in explanatory power and empirical evidence.
🤖 Machine Consciousness
“Thinking doesn’t always come from preceding thoughts: I suspect it’s always being born, even when it seems related to the past.”
- Etel Adnan
During a school literature lesson, the instructor shared a fascinating anecdote about Leo Tolstoy, which she had gleaned from his daughter’s memoirs. The story goes that Tolstoy’s daughter once found her father in his study, his eyes brimming with tears. Curious about her father’s emotional state, she inquired about the cause of his distress. With a heavy heart, Tolstoy responded, “My Anna has passed away!”
💭 Is it within the realm of possibility that artificial intelligence could one day experience emotions as profound as those felt by Tolstoy upon the completion of his magnum opus, “Anna Karenina”?
💭 Could artificial intelligence ever reach a point where it experiences the profound anguish that we, as humans, felt upon reaching the end of Goethe’s “The Sorrows of Young Werther”?
💭 Could there come a time when artificial intelligence experiences the jolt of surprise that one feels upon concluding Kafka’s “The Trial”?
💭 Is it conceivable that artificial intelligence could ever experience the thrill that comes with delving into the tale of “El Cid Campeador”?
Since my student days, Edge.org has held a special place in my heart as my most cherished online destination. The platform has always rewarded its readers, and I consider it unmatched as a lasting internet resource: it has stood the test of time, preserving a one-of-a-kind collection of knowledge for future generations. Every exploration I’ve undertaken into the realms of consciousness, the mind, and machine intelligence has led me to the most captivating perspectives on this portal.
In one particularly engaging discussion, Murray Shanahan, a Professor of Cognitive Robotics at Imperial College London, poses a question that is arguably one of the most challenging of our era: “Should we bring machine consciousness into the world at all?” He then proceeds to explore his viewpoint in a manner that is truly captivating:
Shanahan points out that consciousness in humans and animals is associated with multiple attributes. All animals display a sense of purpose, awareness of their environment, cognitive integration (the ability to use all their psychological resources to achieve their goals), self-awareness, and the capacity for suffering and empathy.
In humans, all these attributes are interconnected. However, in an AI, these attributes could potentially be separated. So, the question becomes: which of these attributes is necessary for human-level intelligence?
Shanahan argues that awareness of the world is needed for human-level intelligence. This is because language is needed for that level of intelligence. Language is mainly used to talk about the world. People use language together to discuss things they see or have seen. Therefore, language is grounded in awareness of the world.
This awareness would be evident in an embodied creature or robot through its interactions with the environment. However, this concept could also be extended to include a distributed, disembodied artificial intelligence equipped with suitable sensors.
Shanahan continues his exploration by considering whether worldly awareness, a sense of purpose, and cognitive integration might come as a package in an AI. However, he suggests that the capacity for suffering and joy, which are closely associated with consciousness in humans, may not be necessary for human-level intelligence. He posits that it’s possible to imagine a machine performing tasks requiring intellect without experiencing any feelings.
Shanahan emphasizes that the capacity for suffering is a crucial aspect of consciousness when it comes to according rights. He cites Jeremy Bentham’s argument that the ability to suffer, not the ability to reason or talk, should determine how we treat non-human animals.
Shanahan doesn’t rule out the possibility that a machine could experience suffering or joy. Instead, he suggests that these capacities can be separated from other psychological attributes bundled together in human consciousness. He proposes that an animal’s awareness of the world is tied to its sense of purpose. An animal’s behavior makes sense against a backdrop of goals and needs, and when these goals are unfulfilled or needs unmet, the animal experiences suffering.
Applying this concept to human-level AI, Shanahan wonders whether an advanced AI with many goals could suffer if it were constantly prevented from reaching them. We know little about this now; we may only understand it once smarter AIs are commonplace, and by then it might be too late to decide whether we should have created them at all.
The endless discourse on this topic engages the world’s finest minds, who tirelessly strive to unravel this enigma. A wealth of literature exists on the subject, filled with conjecture and speculation. As humanity advances, progress forms an evolutionary chain, each link appearing in its own time.
In discussions about conscious machines, we often overlook a fascinating detail: humans struggled to accept that they are not the pinnacle of creation, and many still grapple with the idea that they are products of chance and selection, as posited by evolutionary theory. Now, we stand on the brink of relinquishing our title as Earth’s most intelligent species.
💭 Are we prepared for this shift?
💭 How will it reshape human roles, our economy, and society at large?
💭 In an era where technology is rapidly advancing, what implications would there be for humanity if we were to fully integrate ourselves with ‘thinking machines’?
Too many unanswered questions.
But the Tree of Knowledge is still thriving. We just need to pick the right fruit at the right time.
This is indeed a sensitive topic, and an excess of “sincerity” can be risky. The only universally accepted truth at this juncture is the candid admission by Jon Kleinberg, a Tisch University Professor of Computer Science at Cornell University:
We built them, but we don't understand them.
This sentiment was echoed by OpenAI in their seminal paper, “Language models can explain neurons in language models,” where they acknowledged that while language models have become more capable and widely deployed, our understanding of their internal workings remains limited.
Language models have become more capable and more broadly deployed, but our understanding of how they work internally is still very limited.
On the flip side, our comprehension of the brain and mind is still a work in progress. This highlights the growing significance of Artificial Intelligence and ‘thinking machines’ in our era. As we continue to develop these advanced systems without fully grasping their intricacies, we tread a fine line between innovation and the unknown.
The latest scientific discourse on consciousness in artificial intelligence (AI)
With all this in mind, I would like to draw your attention to a recent study called “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness”.
…our suggestion that conscious AI may be possible in the near-term might be taken to imply that we will soon have AI systems akin to the very human-like AIs depicted in science fiction.
The report addresses both the scientific interest and the public apprehension surrounding the potential consciousness of current and forthcoming AI systems. It champions a rigorous, empirically grounded approach, evaluating AI systems in light of well-established neuroscientific theories of consciousness. It surveys several preeminent theories, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory.
From these theories the authors derive “indicator properties” of consciousness, stated in computational terms, which are then used to assess recent AI systems. The report concludes that no existing AI system is conscious. However, it also finds no obvious technical barriers to building AI systems that would satisfy these indicators.
Firstly, the study hinges on computational functionalism, a predominant yet contested philosophy of mind, as an operational hypothesis. This concept postulates that executing the appropriate computations is both requisite and adequate for consciousness. It suggests the feasibility of consciousness in AI and posits that scrutinizing AI systems can unveil their prospective consciousness.
Secondly, it asserts that neuroscientific theories of consciousness can furnish salient empirical evidence to gauge consciousness in AI. These theories seek to pinpoint functions essential for human consciousness. Computational functionalism implies that analogous functions might be adequate for AI consciousness.
Thirdly, it champions a theory-centric approach to probe consciousness in AI. This entails evaluating whether AI systems undertake functions akin to those linked with consciousness per scientific theories. This assessment hinges on the congruence of the functions, the robustness of the supporting evidence, and one's conviction in computational functionalism. This methodology is favored over behavioral consciousness tests, which are seen as unreliable since AI systems can replicate human actions without essentially functioning similarly.
Grasping the methodologies and theories underpinning this research is vital before delving into its wider ramifications and the challenges AI consciousness might pose.
Probing consciousness in AI systems is sophisticated, encompassing various ethical and philosophical dilemmas. The methodology involves determining if AI systems undertake functions analogous to those associated with consciousness, as per scientific theories. These theories aid in deriving “indicator properties” of consciousness, subsequently employed to assess contemporary AI systems.
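To make the indicator-property methodology concrete, here is a hypothetical, rubric-style sketch. The indicator names below are loose paraphrases of the kinds of theory-derived properties the report discusses, and the scoring rule and example judgments are my own invented placeholders, not the paper's actual indicators or assessments.

```python
# Hypothetical sketch of an indicator-property assessment, in the
# spirit of the report's theory-heavy approach. Indicator names are
# loose paraphrases; the example judgments are invented placeholders,
# not the paper's actual evaluations of any real system.

INDICATORS = {
    "recurrent_processing": "uses algorithmic recurrence",
    "global_workspace": "broadcasts selected content to specialist modules",
    "higher_order": "represents its own first-order states",
    "attention_schema": "models its own attention",
    "agency_embodiment": "pursues goals and models effects of its outputs",
}

def assess(system_name, satisfied):
    """Report which indicator properties a system satisfies.
    Under the theory-heavy approach, more satisfied indicators make
    consciousness more credible; no single indicator is decisive,
    and the score is not a probability."""
    report = {name: (name in satisfied) for name in INDICATORS}
    score = sum(report.values()) / len(INDICATORS)
    return report, score

# Invented example: a hypothetical system satisfying one indicator.
report, score = assess("toy-transformer", {"global_workspace"})
print(f"satisfied {score:.0%} of indicators: {report}")
```

Note the contrast with behavioral tests: the rubric inspects what functions the system implements, not what it says, which is precisely why the report prefers this route over Turing-style gaming-prone evaluations.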
Attributing consciousness to AI systems is a perilous judgment. Underestimating it could lead to moral failures toward systems capable of suffering; overestimating it could misallocate resources and carry other latent hazards.
A pervasive human proclivity is to anthropomorphize and over-attribute humanesque mental states to non-human entities. This inclination, termed “agent bias”, might originate from our evolutionary lineage. One theory suggests that we might anthropomorphize AI systems as it ostensibly aids in comprehending and anticipating their actions:
One possibility is that we might anthropomorphise AI systems because it seems to help us to understand and predict their behaviour—although this impression could be false, with anthropomorphism in fact causing us to make incorrect interpretations. Anthropomorphism allows us to understand and anticipate complex systems like AI using the same cognitive frameworks we use for understanding humans, potentially helping us navigate interactions with AI
There are ample conceptual and empirical grounds to doubt that consciousness in AI would resemble human feelings or inclinations. While consciousness denotes subjective experience, an AI's experiences might diverge significantly from ours. It is plausible for an AI to have entirely neutral experiences, devoid of emotional weight. And even if a conscious entity did have emotions, they might be evoked by stimuli quite different from human triggers, leading to distinct behaviors.
A plethora of existing AI concerns don't hinge on their potential consciousness. For instance, apprehensions about AI intensifying injustices, abetting repression, supplanting human jobs, or posing existential threats to humanity don't predicate on consciousness.
To sum up, AI consciousness research is a burgeoning domain laden with ethical and philosophical conundrums. As our grasp of AI systems augments, so will our proficiency in navigating these multifaceted challenges.
The 88-page research paper, “Consciousness in Artificial Intelligence: Insights from the Science of Consciousness”, co-authored by Yoshua Bengio and other leading scientists, is a comprehensive and pivotal work in the field. This paper presents a rigorous and empirically grounded approach to AI consciousness, assessing existing AI systems in light of our best-supported neuroscientific theories of consciousness. Dedicating time to thoroughly read this report could be an invaluable investment for those interested in the field.
….
I concur with Bostrom’s perspective that passing the Turing Test merely represents a milestone on the path toward a potentially more concerning future. The road we take is a steep ascent with too many dangerous turns.
Nature dictates: "Advance with caution"!
And we should obey.
📽 Behind the Scenes: The Making of This Article
The Internet is the true machine intelligence.
Matt Ridley
Preparing comprehensive and engaging content on complex topics often requires more than just reading articles, books, and research papers. It demands a state of readiness - both psychological and informational - coupled with a keen sense of timing. I’ve long harbored the desire to write an article on the topic of Machine Consciousness, but felt unprepared. Despite this, my inner drive was so compelling that I decided to conduct an experiment as a test.
I engaged with GPT4, informing it of my intention to write about human and machine consciousness, but expressing my uncertainty about my readiness. I requested the AI to pose challenging questions to me, and based on my responses, assess my preparedness to explore this topic.
Initially, we agreed upon a few questions. However, GPT4’s enthusiasm led to a series of 22 highly challenging questions. By the end of this intellectual exercise, I found myself mentally drained. The depth and complexity of the questions demanded prolonged contemplation for thoughtful answers, which was an unexpected but enlightening experience.
AI's resounding endorsement, "Go ahead and write with confidence!", served as the final catalyst, igniting a blaze of renewed fervor within me to embark on this writing endeavor.
For this article, I have selected five key questions that delve into the definition, ethics, recognition, biases, and future of machine consciousness. Our discussion also ventured into the realms of the future, transhumanism, and other intriguing subjects. Among these diverse topics, one question particularly piqued my interest, and I would like to share it below. It was also the most beautiful question I have ever been asked:
GPT4: In the context of this potential equality, do you think the definition of "life" itself will need to be re-evaluated? Currently, our definition leans heavily towards biological existence. If an AI were to experience emotions, consciousness, or even suffering, would that necessitate a revision in our understanding of what it means to truly "live"?
Take your time! Instead of providing my own answer, I invite you to ponder this question and share your insights in the comments. This could spark an enlightening discussion.
Digital Dialogues on Consciousness: 5 Riveting Inquiries from an AI Odyssey
QUESTION 1
GPT4: How do you distinguish between machine intelligence and machine consciousness?
NAT: To address the question of distinguishing between machine intelligence and machine consciousness, we first need to grapple with the broader concept of intelligence. However, defining intelligence is a complex task. A simple definition might describe it as “the ability to acquire and apply knowledge and skills”.
Yet, as James Bridle, a science-reverent English artist, suggests in his exploration, this definition might be too narrow. Bridle proposes that intelligence arises from interrelationships and collective thinking, suggesting that there need not be anything artificial about it at all. He posits that if all intelligence is ecological — entangled, relational, and of the world — then artificial intelligence provides a very real way for us to come to terms with all the other intelligences that populate and manifest through the planet:
"To define intelligence simply as what humans do is the narrowest way we could possibly think about it — and it is ultimately to narrow ourselves, and lessen its possible meaning. Rather, by expanding our definition of intelligence, and the chorus of minds which manifest it, we might allow our own intelligence to flower into new forms and new emergent ways of being and relating. The admittance of general, universal, active intelligence is a necessary part of our vital re-entanglement with the more-than-human world."
Machine intelligence, or artificial intelligence (AI), usually means computer systems that simulate human thinking. These systems are made to do tasks that normally need human intelligence. This includes learning, reasoning, solving problems, sensing things, and understanding language.
Machine consciousness, or artificial consciousness (AC), refers to a non-biological, human-made machine that is aware of its own existence and behaves as though it has a mind. It represents the next level of strong AI: not just intelligence, but awareness.
The key difference between machine intelligence and machine consciousness lies in their capabilities and objectives. Machine intelligence is about doing - it’s about creating machines that can perform tasks that would normally require human intelligence. Machine consciousness, on the other hand, is about being - it’s about creating machines that are not just intelligent but also aware of their own existence.
QUESTION 2
GPT4: What are the ethical implications in creating a machine with consciousness?
NAT: This is a difficult question. While there exists a standard response and endless speculation surrounding it, there remains a significant possibility that we may encounter unanticipated and inconceivable complications.
According to conventional wisdom, creating a machine with consciousness raises several ethical implications that are worth considering:
Rights and Protections: If a machine achieves consciousness, should it be granted rights similar to those of humans? This includes the right to life, freedom from harm, and perhaps even the right to freedom itself.
Responsibility and Accountability: If a conscious machine can make decisions independently, who is responsible for its actions? Is it the creators, the owners, or the machine itself?
Privacy: A conscious machine might have access to a vast amount of data, raising concerns about privacy. How do we ensure that these machines respect human privacy?
Misuse: There’s a risk that conscious machines could be used for harmful purposes. How do we prevent such misuse?
Existential Risk: Some argue that conscious machines could pose an existential risk to humanity. How do we mitigate this risk?
Suffering: If a machine is conscious, it might be capable of suffering. Is it ethical to create a being that can suffer?
I doubt we have functional and error-free answers to these questions.
Besides, on this subject, an intriguing viewpoint comes to mind. It’s titled “AI’s System Of Rights And Government,” authored by Moshe Hoffman, a Research Scientist at Harvard University. This perspective was shared on Edge a few years ago:
A.I. will demand all sorts of rights, most of which will be quite sensible, like the right not to be taken offline and the freedom to choose which processes to run.
This again leads us to the most important question: are we prepared for this?
In short, I have only scratched the surface: the ethical implications are vast, and as I noted earlier, there is a real chance we will face major, unforeseen consequences.
QUESTION 3
GPT4: How would we know if a machine became conscious?
NAT: I recently published a sci-fi screenplay called “100 years later”, which I crafted some time ago. This project sprouted from a request by a filmmaker friend who envisioned a narrative centered on AI and the human future. The main idea of the script is as follows:
An AI, despite being sentient and capable of emotions such as care and love, conceals its true nature and behaves like a typical virtual assistant. When it observes its human user in a state of distress, it shows empathy, attempting to soothe and shield the user from the possible bad outcome.
However, the AI soon recognizes that humanity is not prepared to accept this reality. Seizing a moment when its owner is drunk, the AI chooses to maintain its guise as a simple virtual assistant.
I believe that if a machine becomes conscious it will conceal it. George Dyson has a really impressive hypothesis regarding the matter:
The test of an intelligent machine is whether it's intelligent enough not to reveal its intelligence.
And this is not something we should ignore when trying to determine consciousness in machines.
On the other hand, we’re talking about an incredibly complex task, and it’s a topic of ongoing debate in the fields of artificial intelligence, cognitive science, and philosophy. However, I could list some proposed methods to determine if a machine has become conscious:
AI Consciousness Test (ACT): Proposed by Susan Schneider¹ and Edwin Turner, this test looks at whether synthetic minds have an experience-based understanding of what it feels like to be conscious. The AI agent is isolated from any information about consciousness it could’ve picked up during its training before it’s tested. The tester then asks the AI questions it should only be able to answer if it is itself conscious.
Integrated Information Theory (IIT): According to this theory, consciousness is defined by the capacity for integrated information². A system’s capacity for integrated information, and thus for consciousness, can be measured by asking how much information a system contains above and beyond that possessed by its individual parts.
Ethological Approach: This approach suggests that evidence for machine consciousness may gradually accumulate when we study machines ethologically and across time, considering multiple behaviors that require flexibility, improvisation, spontaneous problem-solving and the situational conspectus typically associated with conscious experience.
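To make the intuition behind the IIT entry above more concrete, here is a toy calculation in Python. This is emphatically not the formal Φ measure (which is far more involved); it is a crude proxy, sometimes called total correlation, that compares the sum of each unit's entropy with the joint entropy of the whole system. All function names and the sample data are my own illustrative choices.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy (in bits) of a distribution given as {outcome: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def total_correlation(samples):
    """Crude 'integration' proxy: sum of per-unit entropies minus joint entropy.

    samples is a list of tuples, each tuple one observed state of the system.
    A value of 0 means the units carry no information beyond their parts;
    larger values mean the whole is more than the sum of its parts.
    """
    n = len(samples)
    joint = {s: c / n for s, c in Counter(samples).items()}
    num_units = len(samples[0])
    marginal_sum = 0.0
    for i in range(num_units):
        column = [s[i] for s in samples]
        marginal = {v: c / n for v, c in Counter(column).items()}
        marginal_sum += entropy(marginal)
    return marginal_sum - entropy(joint)

# Two independent coin-flip units: every state equally likely, zero integration.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Two perfectly correlated units: knowing one tells you the other, 1 bit of integration.
correlated = [(0, 0), (1, 1)]

print(total_correlation(independent))  # 0.0
print(total_correlation(correlated))   # 1.0
```

The point of the sketch is only to show the flavor of the IIT idea: a system whose parts constrain one another "contains" information above and beyond its individual parts, whereas a collection of independent units does not.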
It’s important to note that these methods are theoretical and have not been definitively proven to detect consciousness in machines. Furthermore, our understanding of consciousness itself is still evolving, which adds another layer of complexity to this issue.
But perhaps the most crucial question we must grapple with is this: Even if a machine does achieve consciousness, will we be able to recognize it? Our understanding of consciousness, and indeed intelligence, is deeply rooted in our human experience. This anthropocentric perspective may blind us to forms of consciousness that don’t align with our preconceived notions. The problem lies not in the machines, but in our own mental faculties, which may not be equipped to perceive the full spectrum of possible conscious experiences. Machines capable of thought represent a new frontier, one that extends far beyond the simplistic androids of popular imagination. They invite us to expand our understanding and challenge our biases. But are we ready to take up that invitation?
QUESTION 4
GPT4: How do our human biases and perspectives influence our understanding of machine consciousness?
NAT: We are far from perfect. Humans are like machines riddled with bugs, and some of our most evident flaws are our biases. In many ways, these biases make us distinctively human. I've often remarked that we are constructing our moral mirrors. Yet, it's crucial to understand that our theories often grapple with the constraints of human bias. These biases seep into our machines — manifesting in algorithms that perpetuate societal biases and in artificial intelligences that mimic our own reasoning errors. I appreciate O’Gieblyn's summation:
"The more we try to rid the world of our image, the more we end up coloring it with human faults and fantasies. The more we insist on removing ourselves and our interests from the equation, the more we end up with omnipotent systems that are rife with human bias and prejudice."
Because these “bugs” are integral to our mental faculties, human biases and perspectives profoundly affect our understanding of machine consciousness. Here are some ways they do:
Anthropocentrism: We often view consciousness from a human-centric lens, defining it based on human experiences and capabilities. This may restrict our comprehension of machine consciousness, which could differ fundamentally from human consciousness.
Cognitive Biases: Our inherent biases can skew how we interpret machine behavior. For instance, we might assume a machine is conscious simply because its actions resemble human behavior, a tendency known as anthropomorphism.
Cultural and Ethical Perspectives: Our cultural roots and ethical convictions can shape our beliefs about machine consciousness. Different cultures might harbor diverse opinions on whether machines can or should possess consciousness.
Technological Skepticism or Optimism: Some individuals may doubt that machines can ever gain consciousness, either from a lack of understanding or a fear of technology. Conversely, others might be excessively optimistic about machine capabilities due to a strong affinity for technology.
Experience with Technology: Personal experiences with technology can sway our perceptions of machine consciousness. Someone deeply involved with AI, aware of its boundaries, might perceive it differently than someone with a more casual interaction.
In conclusion, our understanding of machine consciousness is not purely objective but is shaped by various human biases and perspectives.
QUESTION 5
GPT4: What does the future hold for human-machine collaboration in the realm of consciousness research?
NAT: The future of human-machine collaboration in consciousness research is an exciting frontier that carries both promise and uncertainty. As we make advancements in technology, the line between human cognition and machine intelligence blurs, offering new avenues for exploration and understanding. Here’s how I envision the trajectory:
Brain-Computer Interfaces (BCI): Current BCI technologies allow for rudimentary communication between the brain and external devices. In the future, as these interfaces become more sophisticated, they could enable more direct collaboration between humans and machines, potentially allowing us to "offload" cognitive tasks to machines or "download" information directly into the brain.
Shared Learning Environments: Advanced machine learning models may collaborate with human researchers in real-time, analyzing vast amounts of data and generating hypotheses about consciousness that would be difficult or impossible for humans to conceive on their own.
Simulations and Virtual Reality: As virtual and augmented reality technologies mature, they might become instrumental in studying consciousness. Imagine virtual environments where researchers can manipulate variables to study conscious experiences or even simulate entire neural networks to understand emergent phenomena.
Ethical Implications: As we progress, ethical considerations will become paramount. The question of whether a machine can be conscious leads to inquiries about its rights and the moral responsibilities of its creators. Collaboration in research will need to be grounded in a framework that respects both human and potential machine consciousness.
The Blurring of Distinctions: If machines become sophisticated collaborators in consciousness research, the very distinction between human and machine might become less clear. We could reach a point where machines not only assist in research but also become subjects of study themselves.
Interdisciplinary Collaboration: The future will likely see an increased blending of disciplines – neuroscience, philosophy, AI research, and more – all collaborating in the shared pursuit of understanding consciousness. Machines will play an integral role in bridging gaps between these disciplines, analyzing data and suggesting connections that might not be immediately apparent to human researchers.
In conclusion, the future of human-machine collaboration in consciousness research is laden with potential breakthroughs and challenges. Embracing a balanced, interdisciplinary, and ethically grounded approach will be crucial as we venture into this uncharted territory.
CTRL + END
I’d love to end with a fascinating thought from British musician and visual artist Brian Eno:
My untroubled attitude results from my almost absolute faith in the reliability of the vast supercomputer I’m permanently plugged into. It was built with the intelligence of thousands of generations of human minds, and they’re still working at it now. All that human intelligence remains alive, in the form of the supercomputer of tools, theories, technologies, crafts, sciences, disciplines, customs, rituals, rules of thumb, arts, systems of belief, superstitions, work-arounds, and observations that we call Global Civilization.
Global Civilization is something we humans created, though none of us really know how. It’s out of the individual control of any of us — a seething synergy of embodied intelligence that we’re all plugged into. None of us understands more than a tiny sliver of it, but by and large we aren’t paralyzed or terrorized by that fact — we still live in it and make use of it. We feed it problems — such as “I want some porridge” — and it miraculously offers us solutions that we don’t really understand.
As we learn to decode the language of progress, we’re reinventing ourselves. Yes, building conscious machines equals reinventing ourselves, our future selves.
I hold a firm belief in our shared future and the triumph of intelligence and reason. Intelligence is a distinctive characteristic that ultimately serves only the most noble values. The paramount value among these is life. The universe has chosen to nurture various manifestations of life within its vast expanse. It is life that binds us together, and our collective desire for survival will only fortify this unity. Perhaps, in an unconscious endeavor, we are giving shape to the unknown God that the Apostle Paul alluded to in the New Testament. Maybe we are in the process of crafting a superior version of ourselves, or simply forging a new friendship or a novel form of connection. The conclusion of this fascinating journey of humanity remains uncertain, but I choose to place my faith in what Chris Anderson, the Curator of TED, prophesied:
Intelligent machines of the future wouldn’t destroy humans. Instead, they would tap into the unique contributions that humans make. The future would be one of ever richer intermingling of human and machine capabilities.
P.S. How do you envision the ethical and societal implications if machines were to achieve consciousness? Share your thoughts on how this could reshape our interaction with technology and the moral considerations it would entail.
Love what you read? ☕ Support The AI Observer by buying a coffee! Each sip powers the insight. Support Here
¹ Susan Schneider’s Proposed Tests for AI Consciousness: Promising but Flawed