Learning: Efficiently encoding responses to stimuli

I want to distinguish between learning and memory. These two ideas are often used interchangeably. For example, if I’m learning to hit a golf ball, we might talk about muscle “memory,” and if I memorize a fact, we might say I’ve “learned” it. In neuroscience, what we typically call memory is often referred to as declarative or explicit memory, and what is typically called learning is known as procedural or implicit memory. There are three arguments I’ve heard for lumping learning and memory together like this:

  • They both involve changes in the past that affect actions in the future.
  • We know the brain can make long-term changes to how it’s wired by altering the connections neurons make with each other. This could underlie both learning and memory, and we don’t know of any other mechanism for making those kinds of long-term changes.
  • In many situations, learning and memory seem to overlap or happen at the same time, and it can be difficult to tell them apart.

I don’t find these arguments convincing. Let me respond quickly to each, and then I’ll lay out the reasons that we should treat learning and memory as explicitly different:

  • The past changing the future is how all of physics works, and we wouldn’t want to say that steel learns to be a bridge even though it goes through changes in the past that affect how it acts in the future. Nor would we want to say that a valley helps rain learn to be a river, or that a valley is the memory of old rivers, at least not in the sense of being literally the same as a person’s memories of the past.
  • We know that neurons can change how they’re connected, and as far as we can tell, this process is likely responsible for most learning. But there are a lot of things happening in our brains that we don’t understand at all—we have no idea how they’re caused or what effects they may have. Until we can show that changing a neuron’s connections leads directly to a conscious memory being forgotten or a new one being formed, we shouldn’t assume that’s the process responsible for memories. And even if memory rests on the same basic process as learning, the ways the two are expressed are so different that it would still make sense to call them different things.
  • The fact that two things interact in complicated ways some of the time, but not always, seems like a good argument for giving them different names so we can distinguish them in exactly those situations. Sometimes we can’t tell them apart, but that’s likely due to our own lack of understanding or our inability to perceive the underlying mechanisms, and using confusing names probably isn’t helping.

So, what are the reasons to clearly separate learning and memory as different things?

  • Learning can happen without us being aware of it and gets stronger through repeated practice. It can be recalled without actively paying attention to it or trying, and it leads to predictable responses to stimuli.
  • Memory requires conscious awareness both to form and to recall. It’s seemingly formed instantaneously and can degrade with repeated exposure or even repeated recall. The responses to recalling memories aren’t predictable; for example, the memory itself can change.

In short, at every stage of the process—formation, strengthening or weakening over time, recall, and the resulting behavior—memory and learning appear to be completely different. They’re so different that it would be like calling arms and legs the same thing or walking and grasping the same thing. We might categorize them under some high-level heading like “limbs,” but we wouldn’t call all four of our limbs “arms” and then differentiate them as “floor arms” and “upper arms” the way we do with implicit/explicit memory. The labels we use and the way we categorize them should be clear and logical. This is mostly a critique of the formal or scientific definitions, but even in everyday usage, I think the ideas would be clearer and more useful if we defined the boundaries of what we call learning and memory more clearly. Memory appears to be closely tied to consciousness. That’s a topic that we still have a lot to learn about, however, so for now I’m going to limit myself to trying to carefully define what learning is.

We got lucky that machine learning was called that because so far, it’s captured the idea of learning really well. The things that machine learning is good at are the same kinds of things humans get good at through learning. Learning works great for things like walking, recognizing images and faces, and language translation. There are lots of situations where we want to perform a certain kind of action because of a certain kind of stimulus. For example, we want to go somewhere, so we walk. The combination of the stimulus to move plus the feedback from our feet causes a series of actions and reactions. Neurons fire to move our muscles and we end up placing one foot in front of the other. We’re not consciously remembering how to walk. Instead, the wiring of our brains and bodies has changed over years of doing the same kind of thing over and over again. That’s what learning is—responses to certain situations encoded in our brains through repeated exposure. It’s connecting responses to stimuli, whether that’s moving our mouth to produce speech, walking, or knowing many kinds of information.
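To make that concrete, here’s a minimal sketch in Python of learning in exactly this sense: a toy model whose “wiring” (just two numbers, nothing like real neurons) gets nudged a little on every repetition until the right response to a stimulus comes out automatically. The hidden rule and all the numbers are invented for illustration.

```python
# Toy sketch: learning as repeated stimulus-response adjustment.
# The "wiring" (w, b) is nudged slightly on each exposure rather than
# storing any individual experience.

import random

# Hypothetical training pairs: stimulus -> desired response,
# generated from a hidden rule (response = 2 * stimulus + 1).
examples = [(x, 2 * x + 1) for x in range(-10, 11)]

w, b = 0.0, 0.0        # untrained wiring
learning_rate = 0.01

for _ in range(2000):                # repetition strengthens the encoding
    x, target = random.choice(examples)
    response = w * x + b             # the current automatic response
    error = response - target
    w -= learning_rate * error * x   # small nudges, no stored episodes
    b -= learning_rate * error

print(f"response to stimulus 7: {w * 7 + b:.2f} (the rule gives 15)")
```

Nothing in the loop records any single experience; the behavior lives entirely in how repeated exposure reshaped w and b.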

Memory often plays a key role while we’re learning complex topics, but it’s something that interacts with the process of learning, not learning itself. At some point in our lives, we were told that 4 x 5 = 20. The first time we heard that or saw it on a chalkboard, we didn’t immediately learn it—new long-term connections in our brains weren’t immediately formed—but we could remember it. Later, if asked what 4 x 5 was, we could consciously remember the experience and say “20.” We probably had to go through that process many times, recalling the memory over and over again. Or perhaps we recalled it a few times, then forgot and had to be told or figure out the answer again, and then recalled that new memory over and over again. Eventually, though, we’d been asked what 4 x 5 is so many times that we didn’t have to consciously remember one of those experiences—we could just give the answer right away without thinking. At that point our brain had learned to give 20 as the response to 4 x 5 without effort, the same way that we don’t have to think about where our nose is, or how to say our name, or any of the other simple or complex tasks we can do without thinking about them. We’ve done them so many times that the response is automatic.
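This is obviously not a neuroscience model, but the handoff the paragraph describes can be caricatured in a few lines of code: a fact starts as an episodic memory that must be deliberately recalled, and after enough repetitions it gets promoted to an automatic response. The threshold and the data structures are my own invention for illustration.

```python
# A caricature (not a neural model): deliberate recall becomes an
# automatic response after enough repetitions.

AUTOMATIC_AFTER = 5          # arbitrary number of recalls before promotion

episodic = {"4 x 5": "20"}   # experiences we can consciously recall
automatic = {}               # learned, effortless responses
recalls = {}

def answer(prompt):
    if prompt in automatic:
        return automatic[prompt], "instant"
    value = episodic[prompt]                  # slow, deliberate recall
    recalls[prompt] = recalls.get(prompt, 0) + 1
    if recalls[prompt] >= AUTOMATIC_AFTER:
        automatic[prompt] = value             # encoded through repetition
    return value, "recalled"

for _ in range(6):
    print(answer("4 x 5"))   # five deliberate recalls, then an instant one
```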

One example I use to differentiate what kinds of things we can learn to do and what kinds of things we need memory to do is to look at sleepwalking people. Someone who’s sleepwalking isn’t conscious. They can’t remember what they’re doing and seemingly can’t consciously recall anything either. But they can do surprisingly complex tasks, as long as those tasks are ones they’ve done many times before and there are no new surprises. People can walk, talk, and even cook or drive while sleepwalking. That’s kind of rare, but in a way many people might already know what it’s like to “sleep drive.” I’m sure many of us have had the experience of driving some route we’ve covered many times before when our minds start to wander and, sometime later, we suddenly start paying attention and realize we don’t remember how we got where we are. Our brains had learned how to process all the stimuli and feedback necessary to do the basic tasks of driving without us having to consciously pay attention or think about it at all. I imagine this is very much what sleepwalking must be like, except without the daydreaming at the same time.

But learning isn’t the same as being programmed or hardwired to do a specific task. If you learn how to catch a ball, you can catch it even if it’s thrown higher or faster than before. So learning isn’t just a simple one-to-one matchup of one set of stimuli with one set of responses. If an engineer built a robot and programmed it to do the exact same thing over and over again, we wouldn’t say the robot had learned anything. Also, learning doesn’t have to be perfect. If you learned to hit a golf ball and had 100 perfect swings in a row but then completely screwed up the 101st, no one would say you forgot how to golf or that you’d never really learned how to swing a club. The definition of learning has to capture these two ideas: it can be extrapolated to new but similar situations, and it doesn’t have to produce a perfect response every time.
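The contrast with the programmed robot can be shown directly. In the sketch below (all numbers invented), a lookup table holds exact stimulus-response pairs and fails on anything new, while a rule fitted to the same noisy examples extrapolates to an unseen throw, imperfectly but usefully.

```python
# Hard-wired lookup vs. a fitted rule, using made-up practice data.

speeds = [1.0, 2.0, 3.0]      # ball speeds seen in practice
timings = [2.1, 3.9, 6.2]     # noisy "correct" catch timings

hardwired = dict(zip(speeds, timings))   # the programmed robot
print(hardwired.get(2.5))                # None: never seen, no response

# Fit timing ~ a * speed by least squares: one pattern, not rote pairs.
a = sum(s * t for s, t in zip(speeds, timings)) / sum(s * s for s in speeds)
print(round(a * 2.5, 2))                 # a usable, imperfect extrapolation
```

Note that the fitted rule doesn’t even reproduce the practice throws exactly, which matches the second idea: learning extrapolates, and it doesn’t have to be perfect.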

These two ideas remind me of another concept from computer science—data compression, which works by finding patterns or repetitions in data and using them to store the data in a more compact way. The way our brains allow us to learn to do things is a lot like a physical version of data compression. When I learned to catch a ball, my brain didn’t dedicate one set of neurons to catching the ball when it was thrown high and another to when it was thrown low, or one set for a basketball and another for a baseball—it reused most of the same neurons for all those situations because the stimuli and responses were very similar. Instead of being hardwired for every different variation of ball and every different way it could be thrown, all those sets of wiring were compressed into one overlapping network that could reuse patterns and take advantage of repetition to be more compact. This is incredibly useful because all the interactions we ever have in the world come with a ready-made set of patterns and limitations that make them easy to compress: the laws of physics.
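The analogy can be checked against actual compression. Using Python’s standard zlib module (the byte strings here are arbitrary examples), data full of repeated patterns shrinks dramatically, while patternless data barely compresses at all:

```python
import os
import zlib

patterned = b"throw catch " * 1000          # heavy repetition to exploit
patternless = os.urandom(len(patterned))    # random bytes, no structure

print(len(patterned))                   # 12000 bytes originally
print(len(zlib.compress(patterned)))    # tiny: patterns stored once, reused
print(len(zlib.compress(patternless)))  # ~12000: nothing to reuse
```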

We never have to worry about learning to catch balls that suddenly change direction or speed mid-flight. Gravity is always going to be the same, and so will things like the ground being solid and our bodies being a certain size. These are all limitations that make learning easier because they are rules that make the compression easier. But this also means that the rules—the laws of physics and facts about us and our planet—are encoded in the way we learn. If one bucket of water is twice as big as another, we’ll have to lift twice as hard to pick it up, and if it’s three times as big, we’ve already learned, or encoded, the kind of effort we’d need to lift it. Which is just to say that when we observe patterns, we can make predictions. Learning is using the physical facts of the world around us to compress our responses to stimuli, and it results in us being able to predict what’s likely the appropriate response to a new situation we’ve never seen before. As long as it’s similar to something we’ve experienced before, we can extrapolate a good response. And the more experience we have, the better our prediction will be.
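The bucket example is really one tiny rule doing the work of infinitely many memorized cases. A sketch, with the effort number invented:

```python
# One compressed rule replaces a separate memory for every bucket size.

EFFORT_PER_UNIT = 10.0   # hypothetical effort to lift one unit of water

def predicted_effort(relative_size):
    # The linear pattern (twice as big -> twice as hard) is the encoding.
    return EFFORT_PER_UNIT * relative_size

print(predicted_effort(2))    # 20.0: lift twice as hard
print(predicted_effort(3))    # 30.0: already "known" without trying it
print(predicted_effort(2.5))  # 25.0: even sizes we've never encountered
```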

This isn’t just limited to throwing balls and lifting buckets of water. There are certain rules of logic and physics that limit just about all of our interactions in the world. If you’ve known someone for a long time, you can probably predict how they’ll finish their sentences without even thinking about it. Math follows rules as well, so we don’t need to learn the answer to every multiplication problem—we can learn the process for getting the right answer to any class of question. The same goes for everything from music to video games. If there’s some structure to the stimulus, our brains can find efficient ways to store the patterns that will produce the right response in similar situations. It might take some trial and error, and we won’t always perform perfectly, but that’s what we’ve always known learning to be. Thinking about learning as efficiently matching responses to stimuli helps explain why learning has the limitations it does and what kinds of things are easy to learn.
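The multiplication point works the same way in code: instead of storing every product, we can encode one procedure (here, plain repeated addition, chosen just for illustration) that generates the right response for any problem in the class.

```python
# One general procedure covers infinitely many stimulus-response pairs.

def multiply(a, b):
    total = 0
    for _ in range(abs(b)):   # repeated addition, the grade-school idea
        total += a
    return total if b >= 0 else -total

print(multiply(4, 5))      # 20, with no stored fact for "4 x 5"
print(multiply(123, 46))   # 5658, a pair we never memorized
```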

If we want to understand our own behavior, it makes sense to break it down into small, understandable, and logical chunks. For a long time, we’ve thought of learning and memory as two kinds of the same thing, and we’ve used learning to mean a lot of different things that don’t actually have much in common. This just makes it difficult to think about and understand ourselves, and it skews the way we think about new areas of research like machine learning and AI. Instead, we can understand learning as a physical form of data compression, and this gives us a better way to understand how we interact with the world. Defining learning as “efficiently encoding responses to stimuli” isn’t based on our historical usage of the word; instead, it comes from looking at the things we call learning and finding the underlying logic that ties them together. It’s logically consistent, and it isn’t limited to just human experience or current ideas of how our brains work. This definition doesn’t have to become the new standard to be worthwhile. It can still be a useful way to think about ourselves and how we interact with the world. For example, we could describe the process of evolution as a species learning to efficiently fit into its environment by influencing the way the next generation reacts to the stimuli they’re likely to encounter. And it gives us a new way to think about emotions, which we’ll look at next.