Using our definition of consciousness (the ability to experience and remember pain and pleasure) gives us a new perspective on many topics. I wanted to take a look at some ways it could be used, and what that would ultimately imply about ways we could approach future questions and research on consciousness.
A key assumption of the simulation problem is that improvements in computation and simulation will eventually lead to artificial consciousness. With a definition of consciousness that’s not dependent on complexity or information processing of any kind, it should be clear that this assumption fails. It’s possible that any intelligent society will eventually create incredibly detailed simulations that are indistinguishable from reality, but that doesn’t mean that they’ll be able to understand or create consciousnesses to populate those simulations.
The trends that are used to construct the simulation problem are based on improvements in technology we understand. But currently we don’t know enough about consciousness to say whether it’s even possible to create a single artificial consciousness, so we don’t have a single data point to start to draw a trend from. It may be trivial to create vast numbers of artificial consciousnesses, or it may be essentially impossible; we simply don’t have enough data to say either way.
Here’s a hypothetical scenario that explores one way a simulation problem could be constructed that we could say would be physically possible, and what that would say about how we view our world.
Fermi Paradox
Our definition of consciousness helps to resolve the Fermi paradox by tackling two assumptions about consciousness. One assumption is that consciousness is the result of complexity, or the inevitable result of increasing intelligence. If we think that life will be common in the universe due to the large number of potential planets it could start on, then that assumption leads us to believe that some significant fraction of those planets will eventually have life we could communicate with, life that isn’t just intelligent, but has some kind of human-like cognition. But if consciousness as we experience it is something physical and fundamental about the universe, and what’s important to us is the way our brains make use of it, then that has very different implications.
If consciousness is defined by feedback (pain and pleasure) and memory, then we can probably conclude that there are lots of conscious experiences in the universe, but that doesn’t mean there’s life that’s making use of them like we do. It’s also a good guess that consciousness confers some large evolutionary advantage, at least based on its importance to us. But this advantage is likely related to complex learning or information processing, so it’s something that wouldn’t be a factor for simple life. From this perspective humans were very lucky to have evolved a nervous system that was both capable of complex cognition and also could interact with consciousness in a useful and efficient way. It’s possible that there are many other ways to develop human-like intelligence and learning that are based on physical systems that can’t usefully interact with consciousness. That would create a huge evolutionary stumbling block to our level of intelligence. Depending on how many different variations life and nervous systems can take in the universe, it could mean that most life in the universe doesn’t have any kind of realistic evolutionary path towards a kind of conscious cognition we could communicate with.
Another similar possibility is that there are other ways to achieve consciousness that are physically different from the human method. Or that there are different ways to use consciousness to achieve complex cognition, but that these ways are so unlike what we know that they’re only possible in very different environments. They’re potentially so different that even if we observed other conscious and intelligent life, we wouldn’t be able to recognize it or its communications. For example, there could be complex, conscious life in our own sun, but it might exist in an environment or on timescales that are so foreign to us we can’t even see it.
It’s possible that consciousness is only useful quite far down an evolutionary niche and that most life in the universe will just happen to never encounter it because most life ends up on a different evolutionary landscape than us. Or there could be many ways to evolve consciousness, but they’re all so different that recognizing other life that’s conscious could be incredibly difficult. In either case the fundamental nature of consciousness and the way we’ve evolved to benefit from it mean that we’re unlikely to find other species similar to us that we could easily communicate with.
Hard Problem of Consciousness
Once we give up the idea of a non-physical consciousness, we also have to give up the idea that there’s something special about the problem of consciousness. What the idea of a “hard problem of consciousness” is trying to get at is that it seems like it might be nearly impossible to ever really understand consciousness. We might understand some things about it, like how attention is focused or how behavior is controlled, but never the real root cause. The flaw I see with this is that it doesn’t seem to be a problem unique to consciousness. We could ask about any phenomenon in nature, “why does this exist instead of nothing?” Why does matter warp space to create the force of gravity instead of not doing that? Why is there a weak nuclear force? Why are particles described by quantum mechanics instead of classical mechanics? All these questions are essentially asking why our reality is the way it happens to be. And given that we’re inside that reality, it seems hard or even impossible to answer those questions. We certainly haven’t been able to answer them for any of the other fundamental facts about our universe. It only seems fair to include consciousness as part of that broader problem (why is everything the way it is, instead of some other way?) rather than making the question of why consciousness exists into a special kind of conundrum.
What questions should we ask about consciousness?
Once we have a definition of consciousness, it allows us to ask lots of interesting questions about what’s possible. For example, is consciousness permanent? In some sense it must be, because to have an identity and memory and meaning it has to last for some amount of time. We accept that consciousness probably goes away when we die, so it doesn’t last forever, but we can ask questions about how permanent consciousness is during our lives. For example:
- What happens to consciousness when we’re asleep?
- What would happen to consciousness if we replaced parts of our brain?
The second question is the basis for many philosophical thought experiments. If we replaced the neurons in my brain one by one with artificial neurons that worked the same, would I still be me? Would I still feel like me? Someday this probably won’t be a thought experiment anymore; eventually we’ll be able to replace a neuron with an artificial substitute, and hopefully by then we’ll have a better idea of the answer to the first question.
The first question might make us think about things differently though. When I ask what happens to consciousness when we’re asleep, I’m really asking how consciousness works in humans. It’s possible that consciousness would work differently in other animals or in hypothetical aliens, but we want to know how consciousness exists in us. There are some things that we say exist because of what they’re made of. Things like people or chairs or atoms, we say they continue to exist if all their parts are there in the right spots. Then there are other kinds of things that only exist on top of an underlying structure of some sort, for example, waves in the water or a magnetic field or information on a disk. If you put a wave in a bucket, it’s not a wave anymore. The water still exists, but the wave’s gone. Same thing with information: if we reordered all the bits on a hard drive, the drive and the metal and electrons that made it up would still be there, but the information would be gone.
When we go to sleep, is consciousness like water or like a wave? If it’s like water, it’s still there when we’re asleep, it’s just not active in the same way as when we’re awake. Water can do useful stuff, and when we wake up we start doing that useful stuff again, and there’s been an unbroken chain of continuity the entire time. However if consciousness is like a wave, then it goes away, and in a sense we go away, when we fall asleep every night. It would be like if a part of our brain was a bucket, the water would still be there, but it would be still, the consciousness would be gone. In the morning, some things happen in our brain and it starts to make some ripples in the water and our consciousness, and us, are back. The ripples are the same kinds of patterns as yesterday and they work in the same way they’ve always worked, but they’re not really the same waves as yesterday, they’re just in the same kinds of patterns.
If consciousness does not merely stop working when we’re asleep, but is just gone, then that means we should have a very different perspective on philosophical questions of identity.
Artificial Intelligence
If the goal of artificial intelligence (which I would prefer to call “machine intelligence”) is to create human-like cognition and eventually exceed human cognition, then what role does consciousness play in that? Our definition would suggest that consciousness isn’t the inevitable result of complexity or increasing intelligence, and instead that it’s a key component that allows for human-like cognition. This is supported by the conclusions of philosophical arguments like Searle’s Chinese Room, and also gives a perspective on why consciousness could be useful.
So, if we want to create machines with human-like cognition, that would mean we need to either simulate or replicate consciousness. If we don’t know how consciousness works, that means we can’t replicate it, so we’d have to simulate its functions and benefits instead. But if consciousness is very efficient or allows for fundamentally different kinds of solutions, then that may be incredibly hard and/or require massive computing resources. Even if it’s possible, it still implies that we’d be basing the consciousness effects on the way evolution has discovered to interact with consciousness in humans. This means not just copying the capabilities, but also the strengths and weaknesses of human cognition. On one hand this might be a safer path to superintelligence, because it means we’d be better equipped to recognize and understand what’s happening and better able to interact with any superintelligent agents. On the other hand it means that the human problems we’re facing now on a global scale are likely to be replicated in some form in machine agents if they achieve human-like cognition.
Another goal might be to not make human-like cognition and instead try to achieve machine intelligence and machine learning capabilities far beyond our own without trying to solve consciousness. Personally, this seems like a much safer approach to superintelligence (or super cognition) because it means that the artificial agents wouldn’t be conscious, which would eliminate a lot of difficult ethical problems. It would also mean that the machines wouldn’t understand the world through meaning and feedback like pain and pleasure, like we do. This would fundamentally cut them off from many evolutionary paths of improvement that are difficult to predict and could therefore be quite dangerous.
Ultimately the most optimistic perspective might be that humans, with our consciousness-based cognition, might not be able to understand or discover conscious processes. We might need machine intelligence that greatly exceeds our own to be able to eventually understand ourselves. This would imply that we might see two different paths to superintelligence: a first, non-human-like growth of superintelligence, and then, once that allows us to discover more about consciousness, a second kind of human-like machine cognition. It’s this second kind of machine that carries the most ethical and existential concerns, and the possibility that we might not be able to get there without significant advances in understanding ourselves and the universe would probably be a very good thing.
Neural Correlates of Consciousness
Trying to discover the physical causes of consciousness, or the neural correlates of consciousness, has traditionally involved examining the brain to try to discover relevant structures or processes that would hold clues to where consciousness comes from. But there are other possibilities for exploring the causes of consciousness, and even though it may seem somewhat ridiculous at first, I think quantum mechanics is actually a good starting point. Here are a few reasons, given our definition of consciousness, that make it worth considering:
- Quantum mechanics relates to everything at its most basic level. If we accept that consciousness is a physical process, then we should conclude that quantum mechanics should likely be able to explain consciousness, even if it isn’t a unique cause of the phenomenon. We should conclude this the same way we accept that quantum mechanics can explain how a car works by looking at the particle-level interactions in the chemical reactions powering it. Or, more specifically, it might be like trying to understand how a computer works without taking into account the quantum mechanics that allow modern transistors to work. Especially at the scale of neurons and individual neurotransmitters, at the very least we should expect there’s a chance we have to take quantum mechanics into account.
- By focusing just on feedback as a key component of consciousness, we can hypothesize that the physical cause of it must be able to exist in complementary states. If we look at conscious experiences like color or sound, they seem very analog. They exist along a continuum, and even from ‘inside’ the subjective experience we can’t ‘see’ any discrete steps or parts to the experience. Pain and pleasure by comparison seem much more binary, they’re either on or off, and so we might first want to look at physical phenomena that can only exist in two distinct states.
- There’s the problem of defining what counts as an observation to cause a quantum wavefunction collapse. An observation by conscious experience is one possible answer to this question, and this possibility hasn’t been tested experimentally yet. What would that experiment look like?
In the double slit experiment we see a diffraction pattern, but that pattern goes away when “which way” information is observed. What counts as being observed is unclear, especially in more complex versions of the experiment. But it seems possible that we could test if a conscious experience would count. For example, imagine this setup:
- A simple double slit experiment, with detectors in place that can transmit ‘which way’ information but are not always enabled to do so
- Whether the information is transmitted to an observer or not is determined randomly, without human intervention or knowledge, and there’s no record of the choice.
- The information is transmitted to a single observer, say into a set of goggles or something similar. Given the above, we should expect the diffraction pattern to disappear when ‘which way’ information is transmitted.
- The observer is also set up to receive transcranial magnetic stimulation so that when pulsed, their conscious experience of sight is interrupted, and the pulse is timed to coincide with the transmission of the ‘which way’ information.
This would result in the information being ‘observed’ by a classical mechanical system (a person), but not consciously. The way that the information was being captured and transmitted could be tested to ensure that information wasn’t ‘leaking’ out of the system somehow. And researchers could even try to see if the observer experienced blindsight related to the signal they didn’t consciously see. Other variations might involve using touch or sound signals for ‘which way’ information and transmitting to observers after they’ve fallen asleep or while under local or general anesthesia.
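The contrast the experiment is looking for can be illustrated with a toy calculation. This is only a sketch of standard double-slit physics, not a simulation of the proposed setup: when ‘which way’ information is available, path probabilities add and the fringes vanish; when it isn’t, path amplitudes add and a cos² interference pattern appears. The slit separation, wavelength, and screen distance here are arbitrary illustrative values.

```python
import numpy as np

def two_slit_intensity(x, d=1e-4, wl=5e-7, L=1.0, observed=False):
    """Relative far-field intensity at screen position x (small-angle approx.).

    d: slit separation (m), wl: wavelength (m), L: slit-to-screen distance (m).
    """
    # The phase difference between the two paths is 2*pi*d*x/(wl*L);
    # 'phase' is half of that, so the fringe pattern is cos^2(phase).
    phase = np.pi * d * x / (wl * L)
    if observed:
        # 'Which way' information available: probabilities add, no fringes.
        return np.ones_like(x)
    # No 'which way' information: amplitudes add, producing cos^2 fringes.
    return 2 * np.cos(phase) ** 2

x = np.linspace(-0.01, 0.01, 201)            # screen positions (m)
fringes = two_slit_intensity(x)              # varies between 0 and 2
flat = two_slit_intensity(x, observed=True)  # uniform: pattern destroyed
```

The proposed TMS variation asks, in effect, which of these two curves appears when the information reaches a person’s visual system but never becomes a conscious experience.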
The chances of positive results probably aren’t high, but these kinds of experiments are within our ability and level of understanding now, and if nothing else they can rule out avenues of research.