Join our expert Alexa Ruel and our guests Kelly-Anne Moffa and Mo Carr to discuss the paper “Interplay of Approximate Planning Strategies” by Quentin Huys et al. During our discussion, we touch on whether the human brain works like a computer, why people avoid large negative outcomes even when those outcomes could lead to even larger positive ones, and more!
Article PDF
Episode Transcript
Jamie Moffa: Welcome to In Plain English, a podcast where we discuss scientific research in terms that are accessible to everyone, not just the experts. I’m your host, Jamie Moffa. Before we get started with today’s episode, a few reminders. You can download the paper for each episode at inplainenglishpod.org by clicking on the episodes link in the main menu. You can also listen to previous episodes there. We believe in open access science for all, so the papers we choose will always be free for you to download. If you have a question or comment about a previous article, you can submit it under the Continue the Conversation tab. In future episodes, we will begin by reading and responding to some of the questions and comments that you send in. If you are interested in being a guest or presenter for a future episode, you can click on the Become a Guest tab on the website. You can also reach out to us on Facebook or Twitter @plainenglishsci, that’s P-L-A-I-N-E-N-G-L-I-S-H-S-C-I. To listen to this podcast, you can find us on Google Podcasts, Spotify, SoundCloud, or wherever you listen to podcasts. With that out of the way, on to today’s paper.
[Transition music]
JM: On today’s episode, we will be talking about decision making, and the strategies we use to simplify complex choices. Our expert presenting today’s paper, titled “The Interplay of Approximate Planning Strategies,” is Alexa Ruel. Alexa, would you like to introduce yourself?
Alexa Ruel: Yeah, thank you. My name is Alexa. I’m a fourth year PhD student studying decision making at Concordia University in Montreal, Canada. And I study decision making by examining the mechanisms people engage in when deciding, and how these change across the lifespan. So, broadly speaking, I’m interested in the planning, the actual decision making, and the choices people make.
JM: Joining Alexa for this discussion are our two guests for this episode, Mo Carr and Kelly-Anne Moffa. Mo, Kelly, would you each like to briefly introduce yourselves?
Mo Carr: Sure. My name is Mo. I have a degree in music education, so I come from a music background, that kind of thing.
Kelly-Anne Moffa: And my name is Kelly-Anne. I am a public policy student at the University of Southern California.
JM: Awesome. So without further ado, Alexa, take it away.
AR: Awesome. So I guess I maybe should start with a little background about why I chose the paper. First things first, I think it goes without saying that it’s a topic that interests me. It’s also something that’s highly complex. So I think it really brings itself nicely to a discussion like this where we’re trying to bring it down to simple terms and easy concepts to understand and explain.
And the topic itself is planning. So the idea that any decision you make or any kind of sequences of decisions that you would make involves planning beforehand, and that in the everyday, in our everyday world, this can be very complex and therefore it has not been studied in kind of a very realistic environment. They talk about in the beginning of the paper how planning has been studied for a long, long time using heuristics and focusing on specific heuristics, which are kind of like people’s shortcuts or people’s way of taking something complex and making it easier for them to understand. But not many people have studied it in the way they’re doing it in this paper, which is kind of like have people do a task and then just see what they do and see if we can try to explain that with mathematical models that relate to these heuristics we’ve studied before. So kind of looking at it in a more real world like environment.
Okay, so a brief summary of the paper. Again, it’s motivated by this idea that human planning, so planning ahead in order to determine what your sequence of decisions should be, is something that’s highly complex and therefore not often studied. So that’s the motivation for this paper, and what they’re interested in looking at is which heuristics or which strategies people use when they’re forced to engage with this type of planning task with very little guidance. So these participants were not told what to do or what strategy to engage in; they were just told, do as best as you can, try to get as many points as you can in this task.
And the task they’re presented with is kind of abstract, in the sense that you’re presented with a screen with many boxes on it. And you have to figure out which box leads to which other box, so there would be some type of indicator, like the current box lighting up, showing where you are. And each time you transition, you would either get a positive number of points or a negative number of points, and you would keep navigating through it in that way. And at one point they would stop you and you’d start over again, so you get several trials at getting as many points as you can.
So that was kind of the setup here, and they were interested in figuring out, based on the heuristics that we know people use, what are they going to do? So this is a good point to talk about the heuristics that we do know humans use when planning. And the first one they mentioned in the paper is suboptimal pruning, they also called it hacking, which is essentially the idea that when most humans are presented with a large immediate loss, so you’d get a large number of negative points, this tends to turn most people off and they will go away from this option and try not to explore in that direction again.
The second choice, or second strategy rather, that they talk about is chunking. So this would be similar to the strategy a lot of us use when we’re given a phone number: you divide it into chunks in order to have something more memorizable or more approachable, instead of having to memorize something more complex. In this case, chunking refers to taking this complex task and dividing it into chunks in order to make it easier for the participants to represent.
And the third strategy they talk about in the introduction, so these heuristics that have been previously studied, but on their own, is essentially memorization, where the idea here is that you’ll reflect on past sequences that you’ve done, what the outcome was, so was that a good sequence of decisions I made, was it bad, how many points did I get, and simply commit those that were more positive to memory in order to be able to repeat them again in the future.
So those are the tasks, sorry, I keep saying tasks, the strategies they talk about, the heuristics that have been previously studied either by them or other colleagues. And now they’re throwing people at this complex task and essentially it’s just more free form. So as opposed to studying one of the three, they’re going to look at how participants are approaching this more complex planning task and seeing, do all three of these come into play? Do they kind of combine? Are people just really bad at this? What happens?
And I think it’s really cool because what they find is that in the end, humans are remarkably optimal in how they plan to make decisions in these complex environments. They approach this by fitting models to the behavior they’re seeing. So in other words, people do the task, they hit buttons, and then they get points or they don’t get so many points depending on which buttons they press and which boxes they’re exploring. And a model is just a big fancy equation, in some sense, that they’re going to try to see whether it makes sense of, or fits, the data. So people made a given sequence of decisions based on what they planned ahead. Does a given model match that? Does the model predict this pattern or does it not? And based on how well your model fits, you can conclude that if your model was about a given strategy, and it fits well, participants must have been engaging in that strategy. And that’s essentially what they do: they take the models for those three strategies they introduce and try to see, does it fit?
And their main conclusion is that participants seem to be doing a combination of all three. So they break it down in the paper, going over each of the strategies and explaining how it came out in participants’ behavior when planning ahead in this really complex task.
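As a rough illustration of what “fitting a model to choices” can look like, here is a minimal sketch under simplifying assumptions, not the authors’ actual models: a candidate strategy assigns a value to each option on each trial, a softmax choice rule turns those values into probabilities, and the fit is how likely the observed choices are under that rule. The data and the grid-search fitting are made up for illustration.

```python
# Illustrative sketch of fitting a strategy model to observed choices (not the authors' code).
import math

def log_likelihood(values_per_trial, choices, beta):
    """Log-likelihood of the observed choices under a softmax with inverse temperature beta."""
    total = 0.0
    for values, choice in zip(values_per_trial, choices):
        exps = [math.exp(beta * v) for v in values]
        p_choice = exps[choice] / sum(exps)   # probability the model gives to the chosen option
        total += math.log(p_choice)
    return total

# Toy data: on each trial the hypothetical strategy values two options; 1 = chose option B.
values_per_trial = [[0.0, 2.0], [1.0, -1.0], [0.5, 0.5]]
choices = [1, 0, 1]

# Fit beta by a simple grid search; a better-fitting model yields a higher log-likelihood.
best_beta = max((b / 10 for b in range(1, 51)),
                key=lambda b: log_likelihood(values_per_trial, choices, b))
print(best_beta, log_likelihood(values_per_trial, choices, best_beta))
```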
JM: So do you all have any questions sort of on that introductory part of the paper?
MC: No, so far, I think you’re summarizing beautifully and I’m even understanding more of what’s going on just with the summarization.
KM: Yeah, same. Most of my questions from the introduction were just explaining the different types of decision-making and ways that we break down information. So the way Alexa is explaining it right now is really good because it answers my questions that I had on the introduction part of the paper.
AR: Awesome. Yeah, I think the most complex part of understanding these papers is just you have to chunk their paper in some sense, is break it down and be like, okay, this, they’re just trying to get this message across, right? And then you keep going. And in some sense as well, kind of like people would actually do in the task is you had—kind of have to go through it more than once, you’re going to get more out of it the more you go through it.
But yeah, so the basic idea of the introduction is these heuristics, these kind of strategies people engage in intuitively to take something complex and make it more simple. And most of the time, these strategies or heuristics are triggered. So it would kind of be like you’re in a situation in which, well, the easiest way to memorize a really long string of numbers, for instance, would probably be chunking. So you’re kind of put in a situation where that’s the strategy or heuristic that’s naturally going to be elicited in participants. But in this one, they were kind of like, what happens if we just give people this task in which they have to plan ahead, what are they going to do? Like are they going to use all these heuristics, are they going to focus in on a few? And when they do that, are they actually getting closer to being optimal and doing all this or are they moving away from being optimal?
MC: That’s a good point. I just had a really quick anecdote about chunking. So I had a friend once who memorized 400 digits of pi, which I thought was fucking ridiculous. I was like, how did you even do that? He’s like, well, I just memorized 40 phone numbers, like a 10 digit phone number and just did it. I was like, okay, you’re insane, but like go off.
KM: I still feel like memorizing 40 phone numbers would be very difficult. [Laughter] I don’t think I could do it, and then say them in the right order.
AR: Yeah. So coming back to like, so that was their, the motivation for the study rather, they’re like, okay, people do really cool and maybe or seemingly or somewhat optimal heuristics when they’re kind of prompted to do so. But what happens when you throw something more complex at them and you’re like, go, like just plan ahead and get as many points as you can. And this is why I thought it was super interesting because that’s closer to what the real world is like, right? No one’s going to be like, okay, and now I will present you with 400 digits and you’ll remember them and tell them back to me. It’s more like plan the next two years of your life and there’s all these factors to consider. That’s a really scary hypothetical, but nevertheless, it’s not as straightforward as just having one heuristic you can rely on.
It’s more likely you have all these possibilities, and it’s like, well, first, it’s just inherently interesting which heuristics people are going to choose. And then once you’ve decided which heuristics you’re going to use to plan ahead, is that actually optimal, or do we just suck as human beings at planning ahead?
So yeah, they give participants this task: it’s a screen, you’ve got these boxes displayed on it, and you have to plan ahead and navigate through these boxes on the screen. So it’s not particularly engaging and it’s highly demanding. So hopefully participants were well paid for this, but yes, you’re presented with this task and you’re navigating through it. And they found all these really cool strategies people used. So although in the task the boxes were displayed in kind of an ellipse or circle shape on the screen, you can lay out the transitions between the boxes like a tree. So you would start somewhere, and then each box would have two boxes it can lead to, and each of those has two more, and two more. So you end up with this tree with one option at the top and many, many options at the bottom.
They started looking at, well, what strategies do people seem to be engaging in? And they modeled each of these strategies or each of these heuristics and then tried to figure out, well, does the model that would explain someone engaging in this heuristic actually fit the data, right? If it fits well, then that’s probably part of what they’re doing. If it doesn’t fit well at all, then it’s likely that that’s not what participants are doing or a model is wrong, right? In this case, they had previous work to show that their models are likely right or ways to verify that their models are likely right. So it’s more about seeing like what are participants doing?
And one of the first things that they found that I thought was super cool is that subjects kind of solved it hierarchically. So if you think of the task in this tree configuration, they’re actually going to solve it by exploring different options in one sequence and then doing that sequence again in order to create fragments, right? Which would kind of be another way of saying chunking. You’re breaking up this tree into smaller pieces to try to figure out what is leading to what, right? So in the decision-making literature, you would say that they’re exploiting a given path, right? They’re doing it again and again and again, as opposed to exploring, which would be, oh, I’m going to do one given path on the first trial and then a completely different path on the second, which is clearly going to be a lot harder to figure out and to keep in memory or use in the future, compared to exploiting.
So that was the first thing I was like, well, that seems really smart. And as I’m going through this, I don’t know about you guys, but I was always like, would I have, would I have done that? Or would I have done like, like, does that make sense because I’m reading it? Or does that make sense and I would have done the same thing. I don’t know what you guys think about that.
MC: Right. Yeah. I think it’s really interesting because to me, like, I come sort of from a science background as well, my father does science, so does my sister. So like the way you’re describing it almost sounds like somebody doing, obviously an experiment, like a scientific experiment. You got to control as many variables as you can, and then like change one. So like you go all the way down this one, you’re like, okay, that’s cool. I know how that ends. So you go all the way down except for the last one. And then you may take that one little change and see how it differs, like how your result differs. So I thought that was like, in my science brain, totally makes sense because you want to keep as much data as, like—limit the variables as much as possible so you can come up with the best solution, I guess.
KM: Yeah. I mean, I have a question about just the experimental design. Are they, can they see, so they’re navigating through the boxes in a circle? Can they see the payoffs that come like from navigating to each circle or to each square?
MC: That’s a really good question.
KM: Like all at once? So then if so, what I would have done is I would have kind of drawn it into that tree and then worked backwards from what end square has the highest payoff and how do I get there? But if they don’t know that and you just know at each decision what you gain, then that strategy makes sense where you just keep doing the same one and then like change the end a little bit and then change a different part a little bit.
MC: That’s actually a really good question. Or is it just like, you move from one and you kind of see that and you have to memorize what that move was and so on and so forth. Yeah. So from going from one box to next, you have to memorize what happens there or is it already on the screen? That’s a really good question. I like—
KM: Yeah. Because if we’re thinking about planning—
MC: Yeah.
KM: —at least in my mind, when I think of planning, it would be planning the best route through the, these sequence of boxes and how do you get where you need to go? And in the paper, just that screen looks confusing, even though, yeah, it’s just like six boxes in a semi—or in a circle.
MC: Right.
KM: Like, right, if you can only go to certain boxes, like you have to plan out a route.
MC: Yeah. But if you don’t know what this is, what that value is going to be, it becomes a lot harder—
KM: Then it’s both memorization and planning for the next time you do it by remembering what you’ve done in the past.
AR: Yeah. I think what’s really cool, and you guys are touching on it, is a really cool concept in decision making: it goes hand in hand with learning. If you’re making a decision, hopefully an informed decision, then you’ve learned something in the past that helps you make the decision, right? If I ask, which beverage is more caffeinated, orange juice or coffee, you can answer because you’ve learned sometime in the past that coffee has more caffeine than orange juice, right? So when I’m like, decide which one you’re drinking if you want to feel the most alert, well, the learning’s already happened, right? In these tasks, and a lot of decision making tasks like this one, the learning has to happen then and there. Some experiments have it all in one block or chunk, haha, in which you would start the participants off knowing nothing and then see how quickly they learn, how quickly they show that they’ve kind of mastered an understanding of whatever you’re presenting them with.
In this case, participants were actually trained ahead of time to understand which box leads to which box and what the outcome would be of each of those transitions. So you learn everything ahead of time, and then at the actual test phase, they present you with the same kind of layout, the same boxes, but now they’re like, okay, plan for the most points starting here. And then they give people some time to think, think, think, and then go, and then they start hitting buttons. And then, okay, now you’re going to start here at a different box, plan, okay, go.
So the learning had already occurred, and therefore, I mean, you would test the learning to make sure they’ve actually learned properly, but once you’ve done the learning stuff, then you can actually just focus on the planning and see what participants are doing, like how they are navigating through this complex environment. And just so we actually mention it: in this task, there are six boxes, and each of them transitions to two of the other boxes, and you have to learn which transition is a good one, which one’s bad, and which box leads to which, so there’s a lot of stuff to learn here. And that’s why it gets so interesting when you start participants off at a certain box and you’re like, go. Because there are so many things that have to happen here in order for participants to be able to do this successfully.
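To picture the setup, here is a toy sketch of a task with that structure: six boxes, each leading to two others, with points attached to each transition. The specific transitions and point values are invented for illustration, not the ones used in the paper, and the exhaustive planner is just there to show what “planning ahead from a given start box” means.

```python
# Toy sketch of the task structure: six boxes, each leading to two others, with a
# point value attached to each transition. Layout and values are illustrative only.
from itertools import product

# transitions[box] = [(next_box, points), (next_box, points)]
transitions = {
    1: [(2, 20), (3, -20)],
    2: [(4, -70), (5, 20)],
    3: [(5, 140), (6, -20)],
    4: [(1, 20), (6, -70)],
    5: [(6, 20), (1, -20)],
    6: [(2, 140), (3, 20)],
}

def best_plan(start, depth):
    """Exhaustively evaluate every sequence of `depth` moves and return the best one."""
    best = (float("-inf"), None)
    for moves in product([0, 1], repeat=depth):   # 0 = first exit, 1 = second exit
        box, total, path = start, 0, [start]
        for m in moves:
            box, points = transitions[box][m]
            total += points
            path.append(box)
        best = max(best, (total, path))
    return best

print(best_plan(start=1, depth=3))   # with these toy values: (140, [1, 3, 5, 6])
```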
JM: Right. And it’s also important to mention that they could start the participants off at any of the boxes during this testing phase, so it’s not like they were always starting at one place. So they could start at box one, which leads to box like three and five, or they could start at box six, which leads to two completely other boxes.
AR: Yeah, which makes this like even more complex, even harder for me to be like, would I have, would I have done that? Like how would I have approached this task? And I think it’s really cool that they have these like general patterns come out. Kind of like, all the human beings we tested on this did similar things. And the conclusion was that it was overall quite optimal. And saying that like, you know, just this first finding of like people were able to fragment and then exploit these fragments in order to kind of get that understanding, or get that planning in their—clearly in their mind, I think that’s super cool on its own, given the environment and how like these boxes are just gray squares on a black screen, like that’s really cool on its own.
KM: I have a question. Were there—or are there, in the field of decision making—are there other strategies that aren’t mentioned in this paper that perhaps like you find cool or think they should have explored?
AR: That’s a really good question. There’s so many strategies, like these—so what they’re focusing on here is strategies or heuristics for planning, which happens before the decision making, right? So if you kind of look at it, the decision making process from beginning to end, there’s actually many things happening, right? So you’ve got learning way in the beginning, which we’d already talked about. And then if you’re then presented with an environment that involves some type of decision with some type of complexity, there’s planning. And that’s what they’re focusing on here, is like before you’ve actually done anything, you’re just sitting there thinking. How are you thinking about this? What are you doing? And then if I’m like, ok now do it, or show me what you’re thinking about so I can actually get a measurement of your planning, that’s what they’re kind of looking at here. And I think they cover these planning strategies pretty well. It’s not again my domain of expertise, but I think in terms of my understanding of it, they’re covering kind of the most common ones, especially in this type of task where you have several decisions to make, and there’s kind of different options or different choices that are optimal, some that are not as optimal. And then once you’re done the planning, you actually enter kind of the decision making phase. So now you’re deciding—like you’ve planned what you’re going to do, and now you’re moving through these decisions. And then there’s different ways of moving through the choices you’re making. And that’s what I typically focus on.
And then after, or kind of like once you’ve decided, then there’s the actual choice you make, which is in this case, clicking the button, or like it’s like locking in your answer. In terms of my like dorky orange juice and caffeine or coffee example, it would be like, actually being like, oh, I’m going to have a cup of coffee, right? Deciding is the stuff that happens before that when you’re like, hmm, okay, well, which one meets my goal of being most alert, okay, thinking, blah, blah, blah, and that would be decision making. If you have many decisions to make in order to make a choice, it doesn’t really apply in this example, but then you have planning in addition to that.
So to answer your question, I think they’re doing a really good job in covering the strategies or the heuristics here. There’s a whole bunch of other strategies we talk about when we talk about decision making that are kind of not planning, it’s not the actual choice people make, and that’s a whole other ballgame. And I think the main one that would apply here would be like a goal directed decision making strategy or a model based strategy where you need to have a representation of what you’re dealing with in order to kind of navigate through it in that goal directed way.
The next thing I thought was really interesting is that they were able to, in this paper, get an idea of how these different strategies differed at an individual level. It’s a little bit beyond my expertise or understanding here, but I think it’s really cool to nevertheless see that, you know, participants were using these fragments to plan optimally, or almost optimally as they say in the paper, but this also varied on an individual level. So people weren’t necessarily creating the same fragments, but they were all doing something equally optimal, right? So it’s kind of like overall, the group of humans did really well. Everyone did it a little bit differently, but everyone did it really well regardless of their strategy. So that for me like restored hope in a lot of humans, I was like, cool, look, we’re all different, we’re all doing things a little bit differently, but we’re all doing pretty well. And I thought that was really interesting.
MC: Yeah, that was cool, like every student, like there are different ways of teaching, you know, there’s oral, there’s visual, there’s tactile, stuff like that. So like everybody’s got a different way of doing something. That doesn’t make one thing wrong and one thing better, but as long as everybody’s, you know, getting it. And yeah, it just kind of here, kind of proves that like everyone’s probably going to do it a little differently. One way is not better than the other so long as it works for them, which I think is pretty freaking cool.
AR: Yeah, I think it’s awesome. I think a lot of times, like especially when I think of cognition or like most people think of cognition, it’s like, cognition is like something everyone should do roughly similarly, like thinking hard about a specific problem, like it should be, you know, it’s kind of universal. Everyone thinks hard, everyone should do it roughly the same, but there’s these huge individual differences when you look at the data. And I think that’s really cool is that even something that is kind of very human is to think hard and to solve, like to have the ability to solve these types of problems that there’s still a lot of variability, and that is super cool.
MC: Yeah, everyone’s got a brain and we all just use it slightly differently.
AR: Exactly.
MC: I did have one question about verbiage—not verbiage—just so they use a word memo—memoization, which I’ll grant, they’re all British, so they might just be pronouncing memorization wrong, but I have a feeling it’s not quite right. So do you by chance know what memoization means?
AR: I have no clue. My slightly dyslexic brain just read that as memorization, like, didn’t even second guess it. Now that you’re pointing it out, I’m reading the word and I’m like, that’s not spelled right. [Laughter] Like that’s not how I use it.
JM: I actually just looked it up. It turns out that while it is a distinct word from memorization, it’s originally a computer processing concept: storing a saved result so it can be reused for the same search. So when it’s applied to humans, it actually ends up meaning something similar to memorization, which is to say, if you’re asked to do the same task again, you’re just going to do the same thing that you did last time, so long as it worked out more or less okay.
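For anyone curious what that looks like in code, here is a minimal sketch of memoization in Python. The path-scoring function and the values are made up for illustration; the point is just the store-once, reuse-later pattern Jamie describes.

```python
# Memoization in the computing sense: store the result of a computation the first time
# you do it, and just reuse it when the same input comes up again.
cache = {}

step_values = {"A": 20, "B": -70, "C": 140}   # made-up values for the example

def path_value(path):
    """Pretend this is the expensive part: evaluating how good a planned path is."""
    if path not in cache:
        print(f"computing value of {path}...")      # only happens the first time
        cache[path] = sum(step_values[step] for step in path)
    return cache[path]

print(path_value(("A", "C")))   # computed and stored
print(path_value(("A", "C")))   # retrieved from the cache, no recomputation
```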
AR: That makes total sense to me. It’s like, it’s like, you know, when you go to a website and it like gathers cookies, it’s like brain cookies.
[Laughter]
AR: It’s like caching. It’s like if you’re taking how a computer works and it saves cookies or it caches data, it’s like your brain, it’s like brain cookies or brain data caching.
MC: Interesting.
KM: So we’re robots basically.
MC: But are we?
AR: Ooh, that’s a really interesting question. Really interesting question. So for a long time, and this is currently still debated in decision making, because a lot of what we do is, we take human behavior and we take this super complicated mathematical model that’s like, this is how the human mind should work, as in, if we’re performing optimally, this is what we should get. Then you apply it, right? So there’s kind of this merge between all this computer lingo and brain lingo in a lot of cognitive domains. And for a long time, because the models fit kind of well, and it kind of made sense, right, to be like, ooh, brain cookies and your brain is like caching data, people thought that the brain was kind of like a computer and that you could think of it as a computer. But we’re finding out more and more that our human brain is really not rational all the time, and not optimal. Even when you stop and you think about it on the experimenter side, like, oh, people should do this, right, because this makes the most sense, when you actually run these experiments, people don’t do that at all, because they’re considering things that the computer can’t consider. And therefore there’s kind of this slight overlap between a brain and a computer, but also huge discrepancies between the two.
KM: I really, like, that’s something that I love thinking about, because it overlaps a lot in like behavioral economics, which I have somewhat studied and learned about. And one of my favorite things is talking about, there’s a study that was done where people gave like undergrads, mugs on like the first day of class, and like would bring you coffee in subsequent classes, and then they would ask you like a couple classes in like, oh, can I buy the mug back from you? And people would value it way higher than just like that same mug like in the bookstore, because they had like attachments to it, they had like, they don’t want to give it up because it’s special to them now. And like, so in a lot of fields, like it happens in economics and obviously like in other fields like decision making, I would think as well, you think people are going to think rationally, but then they don’t. And I mean, it makes perfect sense, like think about your favorite sweater or your favorite coffee mug or stuffed animal or whatever, like that means way more to you than what it’s actually like monetarily worth, you’re not going to give it up. So yeah, we’re not, we’re not robots to answer that question that I posed earlier.
MC: Also, who wants to buy a used mug anyway, that seems a little weird. [Laughter] Just throwing that out there.
AR: I mean, if you go on eBay, I’m sure you can buy tons of very famous used mugs.
MC: But see, that’s different though, because that has other attachment value, I guess, because somebody famous used it. I mean, I don’t really get it, but like, go off, people who buy stuff on eBay like that, I don’t know. Also, it just fascinates me that they’ve come up with a mathematical equation that represents the brain and how we think. Like, that’s just wild to me.
AR: It’s like many equations, right? It’s kind of like, okay, well, let’s describe this in a mathematical function. Instead of just y equals ax plus b, it’s a lot more than that, right? But it’s essentially that, just expanded; there’s a whole bunch of stuff happening. They take a long time to come up with, and then to validate and test. Most people just use models that other people came up with.
MC: Building on it, there’s, there’s a tie in here somewhere just building on what other people learned.
AR: Yeah, no, for sure. I think it’s, it’s super cool though, when you think about it, like, if you do get this fit between, or you’re able to explain human decision making or human behavior with a, with some type of model, then you can actually start seeing like, okay, if this, you know, if this complex human behavior can be explained at least to some degree with this mathematical model, then if I do this, if I have humans do this slightly different, then does it fit the model or can I start to describe what’s actually happening, right? And this is really interesting from a cognitive perspective, because a lot of times like, people do things, like they’re planning in this task, and we don’t really know what’s happening. So if you have no way of like quantifying it, it’s really hard to look at like 600 trials of this data and be like, oh, yes, I can see this participant is engaging in this strategy. Like, no, you have no idea. So you have the computer figure that out for you. And that’s really cool to be able to do that.
JM: Do you—have people done experiments like this where they ask the participants to say what strategies they were using, and then how well does what people think that they’re doing match up with what the equation thinks that they’re doing?
AR: So sometimes you do want to do it. Sometimes you don’t. And the reason for that is because sometimes it does correlate with what the model is saying people are doing, they’re actually aware of what they were doing. But other times people are totally unaware, or they think they were doing something, and they’ll tell you something like, oh, yeah, well, I was trying to chunk up the task or I was trying to do this in order to make it easier to represent. When you look at their data, you’re like, well, that was not successful. That’s not what you actually did. Maybe you intended to do that, but that’s not what you actually did. And a lot of this, it’s cool on the experimenter’s end to be like, oh, yeah, okay, people did actually report that. But it’s often not something we include in papers because each person’s description will be slightly different. It’s very hard to kind of have a clear description of everyone’s description. So it’s just easier to be like, well, you know, their data fit this. And then if it does match anecdotally, then you’re like, oh, cool, it did match. But depending on the task, depending on the complexity of the task, sometimes people are really good at describing it, and sometimes they’re very, very, very bad at it.
I wanted to go back to this, the brain cookies. And what I thought was really cool is that it kind of—they describe it very, very computationally in that specific aspect of the paper.
So they describe it as they computed the probability of someone visiting or repeating a fragment over time. So it’d be like, how often did you do this thing, and how did that vary over time? And I thought that was very interesting to think about defining it in that way. Because it’s like, to me, it was just, it’s just memory, but it’s defined in a way that’s very computational. What did you guys think of, I don’t know about you, but halfway through the paper, they talk about a Chinese restaurant process?
KM: I was going to ask you about that. I was so confused.
MC: What does that mean? [Laughter] What does that even mean?
AR: So that, I thought, was super cool. I actually read just those three words many times. I was like, am I reading this right? Like, why are we talking about a Chinese restaurant all of a sudden? And it took me a minute to stop laughing and then be like, OK, what does this actually mean? Because it’s clearly important.
MC: Yeah. As a person who currently works in a restaurant, I was just sitting there like, what are you trying to say? What is going on here? I still have no idea.
AR: Okay. So the outcome of the process is a distribution of probabilities. In other words, it’s just a bunch of numbers in a certain distribution. But the idea is that this distribution is not just coming from one past computation or one past concept, it’s coming from two. And again, I’m not really sure how the Chinese restaurant fits into this, but what I understood is that you’re taking into consideration the frequency of past occurrences of a certain thing. So if you’re thinking of those fragments, you would take into consideration the probability of that fragment in the past, while also considering the baseline probability of that same fragment over the entire course of the task. So you’re not just considering the past; you’re considering the past but also the overall picture. That was my understanding of it, and I’m trying to make it simpler for both me and you guys to hopefully understand this.
KM: So is it like you know that when you’re flipping a coin, tails is going to come up 50% of the time. You know that overall, that’s like if you have a regular coin, that’s it. But like in your past 10 flips, it’s landed on tails eight out of 10 times. So like that impacts your decision.
AR: That—I think, if I understand this correctly, I think that’d be a great example because it’s kind of like biasing towards past frequencies. So like if you’re getting a lot of tails in a given coin flip series and then someone asks you like, what are you going to bet on next? You’re probably going to bet on tails, right? Even if you have 50-50 chance of getting heads and tails, you’re like, oh, I’ve been getting a lot of tail lately, so maybe I’ll just keep going with that one. And that’s kind of what they’re talking about here in terms of these fragments that people were exploiting, is that they applied this Chinese restaurant process model and they kept, so they kept adding models in this paper. So they started with one, kind of like thinking about it as the simplest of your very complex equations, right? It would be like a very complex version of y equals ax plus b and like, okay, that explains some degree what participants are doing and there’s individual differences, but it seems to be explaining a little bit. And then they considered another strategy or another heuristic and they added that model to the first one and kept piling it on. And then when they got to this like memorization, they added this model to it, which is this Chinese restaurant process model, and that explained more so what participants were doing. And it was specific to kind of their memorization or repeating of past sequences.
JM: I was trying to look up why it’s called a Chinese restaurant process. It’s based on a thought experiment somebody proposed about seating people at a Chinese restaurant: the distribution of people selecting tables at a restaurant based on how many people are already sitting at a particular table.
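For the curious, here is a tiny sketch of that seating process in Python. This is the generic, textbook version of the Chinese restaurant process, not the specific model fitted in the paper, and the alpha value and customer count are arbitrary.

```python
# Generic Chinese restaurant process sampler, just to illustrate the seating metaphor:
# each new customer joins an existing table with probability proportional to how many
# people already sit there, or starts a new table with probability proportional to alpha.
import random

def seat_customers(n_customers, alpha=1.0, seed=0):
    random.seed(seed)
    tables = []                                  # tables[k] = number of people at table k
    for _ in range(n_customers):
        weights = tables + [alpha]               # existing tables, plus "open a new table"
        choice = random.choices(range(len(weights)), weights=weights)[0]
        if choice == len(tables):
            tables.append(1)                     # start a new table
        else:
            tables[choice] += 1                  # join an already-popular table
    return tables

print(seat_customers(20))   # typically a few big tables and a few small ones
```

The "rich get richer" flavor of this, where busy tables attract even more people, is what maps onto the fragments in the paper: sequences you have reused often become even more likely to be reused again.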
AR: I think Kelly’s description of—her description of the coin flips was a really good one. You have kind of this baseline distribution that you know that you’re going to get either side of the coin about 50% of the time. But if the past instances have shown you that one seems to be more likely than the other, so you’ve gotten more instances of tails than heads, if someone asks you, well, what do you think is going to happen next? You might be more likely to say tails again, given that that’s happened in the past. So you’re combining your experience of past instances with this baseline understanding and that’s going to kind of guide your next choice. And that’s what they’re using here as a model to explain kind of this memory concept. So people are looking at, well, what have I done in the past? What are the options overall? And they’re kind of weighing both to my understanding when it comes to this, like, which option is best? Which one should I actually commit to memory? And how important is that path relative to my memory of other paths or sequences I’ve done?
Yeah. So by the end, when you get to this model, they’re talking at one point about the baseline; they now have this long model, where is it? Baseline plus restricted fragmentation plus stochastic memoization. That is essentially just them saying, well, we started with one model, and it did pretty well. We added this other thing to the mix and it explained even more, like, it fit our data better. Then we added this Chinese restaurant model and that worked even better. So that’s how they were able to conclude that participants are engaging in a combination of these three, because each time they added more to their model, they compared it to the previous models they tested. And this gets at what, in a lot of these computational fields and in a lot of the work I do, we call goodness of fit, which is kind of what it sounds like: how well does my model fit? You get some type of numerical value that you can then compare to how well the other model fits, and you can be like, well, this got a better score than that, so this one’s better. And that’s essentially what they’re doing here. They get to that point where they have a bunch of models added together, which fits the data nicely. And they’re like, well, it keeps fitting better and better, so it outperformed our other models, and therefore that’s probably what participants are doing. And that’s how they’re able to conclude that it’s a combination of the fragmentation, this memorization process, and maybe chunking and all of that; it’s a combination of strategies that participants relied on.
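To make "goodness of fit" concrete, here is one common way such scores get computed: a criterion that rewards a higher likelihood but penalizes extra parameters, so piling more components onto the model only wins if they genuinely explain the choices better. The numbers below are made up, and the paper's actual model-comparison procedure is more involved than this sketch.

```python
# One common way to turn "how well does my model fit?" into a single comparable number.
# The Bayesian Information Criterion shown here is illustrative, not the paper's exact criterion.
import math

def bic(log_likelihood, n_params, n_observations):
    # Lower is better: a higher likelihood lowers the score, extra parameters raise it.
    return n_params * math.log(n_observations) - 2 * log_likelihood

# Hypothetical fits to 600 choices (made-up numbers).
models = {
    "baseline":                               bic(-400.0, 2, 600),
    "baseline + fragmentation":               bic(-360.0, 4, 600),
    "baseline + fragmentation + memoization": bic(-330.0, 6, 600),
}
for name, score in sorted(models.items(), key=lambda kv: kv[1]):
    print(f"{name}: BIC = {score:.1f}")
```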
KM: Can you think of a real life example where there would be a planning process, where a person would be doing something, I guess, similar to the experiment? So they have to plan. I’m asking you, Alexa, to theorize how they would go about using all these different planning techniques.
AR: The thing that’s interesting is I don’t know how aware people are of which heuristics they’re using. Like, when you have one problem, like coming back to memorizing, what was it, the first 400 digits of pi? Something like that. You’re clearly thinking about chunking it when you create your chunks, because it’s such a defined problem. As the problem becomes more complex, there’s more happening and you’re less aware of what you’re doing, which comes back to: if you’re asking participants, it might not be great in this specific case. And therefore, I forgot your question as I was talking.
KM: So just if there was maybe an experiment that was testing people making real life decisions, as opposed to just moving between boxes on a screen.
AR: Yeah, I don’t think there is yet. I think this is the closest we’ve come, which is either really cool or really sad, depending on how you see it. But it’s something that’s so complex. There’s so many things to consider that if you bring this into the real world, you’re just adding more complexity to it. But I can think of at least one example in which I would see this somewhat applying. Let’s say you’re in a new city, or you’re in Montreal or wherever. You’re in a city and you’re trying to get somewhere. So your goal in this instance would not be to get as many points as you can. It would be to get to your destination, which is home or work or wherever you’re trying to go, and you have multiple public transit systems you can use. You can take the Metro, the bus, you can walk, you can take a cab; you have all these options and you can combine them, right? You don’t have to take the underground all the way there. You can do it for a few stops, come up, take a cab, walk, bike, and then that’s the most efficient way to get there given traffic and whatever. So that would be an example in which people, like, we do that every, well, every day, maybe you don’t plan between all the different transit systems every day, but it’s something you have the ability to do and that people do quite well. So I think that would be a really good real world example to say, well, what would this look like? I think that is an instance in which we do this, right? You have experience of which system is fastest, right? It gives you an idea of reward based on the path you’re taking. You can think of each transition or each step you’re making towards your end goal as something you can represent as a reward. So is this a good or a bad thing? Obviously if I choose to take the Metro between my first and second stop, that’s going to be much more rewarding because it’s much faster than if I decide to crawl there, which is really negative. That’s going to be really long and maybe painful. So you can think of how each of these transitions in the task could be similar to transitions between steps moving towards a goal when you’re taking public transit.
Oh, one thing I thought was really cool. They have a really small paragraph in the paper where they examined the correlation between verbal intelligence. I know—
MC: Yeah, something about IQs and how it didn’t really, their verbal IQ doesn’t—be it higher or lower, it doesn’t really affect which strategy they used.
AR: Yeah. So I thought that was cool. I don’t know much about this, but they were looking at the correlation, or the potential relationship, between subjects’ tendency to decompose the task into larger chunks and verbal intelligence. So the length of your chunks is apparently thought to be somewhat related to verbal intelligence, or verbal IQ. And then they report the statistics. And this is what surprised me: they conclude that there’s no significant link, right?
MC: And my understanding was something like, they suggested that the longer the fragments, the higher the IQ, and I think they found that that was not necessarily the case.
AR: Yeah, I think it’s cool to think that there would be a relationship in the first place or kind of interesting or puzzling to me.
JM: So what is verbal IQ specifically, like as a component of overall IQ?

AR: Verbal intelligence is your ability to analyze information and solve problems using language-based reasoning. So, I mean, that would translate to, at least my understanding of this would be, when you’re thinking about something, there’s like a voice in your head kind of reasoning through it in words. That would be some type of example of verbal intelligence. So you’re reasoning through it, kind of thinking about or planning using words, maybe not out loud during the task, but there’s some type of internal verbal dialogue that would help you navigate or plan. And they suspected that that would have a relationship to the length of the fragments people were creating in the task.
I think it’s just interesting to think that people would actually have some type of dialogue or like something verbal happening there. To me, it would seem much more conceptual or spatial maybe.
JM: I think to me actually that makes a lot of sense, because I am terrible at spatial problems and I’m worried that I would not do well at this, not because of making decisions optimally or not, but because I’d get confused about which things in the circle go to which other things, because my mind doesn’t work like that. So I’m thinking I totally would solve this verbally, by talking through the path that worked out well for me before. I think they had said that participants tried to end on the two-to-one transition, or whatever one gave them the most points, and so they would try to work toward that. And so I feel like I would be in my head talking through, like, okay, in order to get to two and one, I have to go blah, blah, blah, blah, blah. And the longer you can reason through the steps from where you start to getting to the two-to-one fragment, I think they were thinking, correlated with verbal IQ, because you can plan out that route longer, and it gets harder every step you go away. But it turned out not to be the case. I’m really curious about why that’s not the case.
AR: That’s really interesting, because I am, as you can see, struggling for words. I’m not a verbal person. Things happen in my mind, and then I try to speak, and I’m literally generating fragments and it’s hard to understand me. So I don’t know if I would have done this task thinking through it verbally. Is it really verbal sentences or words? Is it actually verbal in my mind, or is it more conceptual? Because I’m definitely more spatial. My spatial reasoning is on point, but the verbal part’s not great. So kind of the opposite of what Jamie just described. And if most people approached it in that way, maybe there was no correlation, but if you have half the people doing either thing, then you’ve got nothing overall as well. That would be a cool question to ask people: did you talk to yourself as you completed this task? What were you telling yourself?
KM: I just always have a voice in my head going and talking about like, what are you doing? What are you doing next? So maybe my verbal IQ score would be really high because there’s just always someone in there going like, what are we doing? What are we doing now?
JM: I think that kind of goes back to people using slightly different processes to get to the same solution. Maybe there wasn’t a correlation because some percentage of people were thinking about it verbally, and if you were able to do a sub-analysis of those people, it might be significant. But overall, not everybody was solving it that way. And so it wouldn’t matter for all of the spatial people whether or not they had good verbal IQ, because that’s just not a paradigm they’re using.
AR: Yeah, it’d be interesting to see if it relates to any kind of other ability. It’s the type of thing that you’re like, okay, if you’re good at planning, it’s always been interesting to me is, if you’re good at planning, what other thing are you also good at? Like what does that come hand in hand with?
MC: Yeah, what’s like, how is that going to apply to other skill sets?
AR: Yeah, because what I find really interesting, and this might be a little bit of a tangent, but in terms of like, not the planning strategies, but the decision making strategies, that has a tight correlation or relationship with working memory, at least in young adults. In older adults, we seem to kind of lose that relationship and we’re not sure why.
MC: So what’s, sorry, what’s your like, your age range for young adult, older adults?
AR: Typically we define young adults as 18 to 28. Yeah, which is a tight range, and, you know, which means that as you complete your PhD, you kind of slowly exit the young adult range. For older adults, you’re looking at 65 and older.
KM: But I was going to say, I went to a happy hour the other day, that was for like young professionals, which I’m assuming is probably similar to young adults, and there was a much larger age range than what you just said. I was by far, I think the youngest person there, and I’m 22.
AR: Yeah, and I mean, the young adult range I just described is almost purely for these cognitive tasks, because we’re trying to capture the peak of your brain development and all the abilities that come with it. Not to say that at 28 it just immediately declines and it’s downhill from there; it’s just that, based on individual differences, things start to plateau, et cetera. So we’re trying to capture that.
Oh, the last thing I wanted to kind of mention, and then ask you guys about, I think we’ve covered pretty much the rest here, is this idea they introduced in the introduction, and it comes out in the paper as well, is people’s avoidance of a negative outcome that presents itself early on.
MC: Yeah, the whole, like, if your second choice is going to be a big deficit, you’re not going to take, like, you’re just going to totally cut that one out and keep going in that one path.
AR: Yeah, so that’s their, what is it called, hacking or suboptimal pruning. Which to me, like, if they called it suboptimal pruning, then clearly it’s not ideal; maybe there’s something adaptive about it, but there’s also something clearly suboptimal about the strategy where, like they say, large immediate losses at branch points led subjects to eliminate the future outcomes on later branches. So you get this big loss off the bat, and then any possible positive outcome from a subsequent branch that would have come off that big negative branch at the beginning, you just ignore, because you didn’t like starting with the negative outcome. So you ignore that whole possibility and you opt for the other option, which seems better. And what I thought was really cool is that later on, they mention how this is probably Pavlovian, which essentially just means it’s reflexive. So it doesn’t matter how big the difference between the negative one and the less negative or positive one is, you still avoid the worst outcome.
And I was thinking, like, why would we do that? On one hand, it seems like, OK, if you’re starting off negative, maybe, you know, we’re averse to negative outcomes, right? We don’t like bad news or bad things happening to us, and we go with the positive. But what if there’s something more positive down that branch, and we’re just like, no, I don’t want that? Like, what did you guys think of that, specifically in this context where there could be something better further down that branch, or in another task? Like, if we think of this generally, as opposed to specific to this paper, I can see it as both adaptive, but also really not great as a strategy.
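As a toy illustration of the pruning idea Alexa describes, and not the paper's task or model, here is a sketch where a planner that refuses to look past a large immediate loss ends up with a worse total than a full search. The tree and point values are invented.

```python
# Toy illustration of pruning: the planner refuses to evaluate anything behind a large
# immediate loss, even though the best total reward actually lies behind it.
tree = {
    "start": [("left", -70), ("right", 20)],     # big immediate loss on the left branch
    "left":  [("L1", 140), ("L2", 20)],          # ...but the best outcome hides behind it
    "right": [("R1", 20), ("R2", -20)],
}

def best_total(node, reward_so_far, prune_below=None):
    children = tree.get(node)
    if not children:
        return reward_so_far
    totals = []
    for child, reward in children:
        if prune_below is not None and reward <= prune_below:
            continue                              # pruning: don't even look down this branch
        totals.append(best_total(child, reward_so_far + reward, prune_below))
    return max(totals) if totals else reward_so_far

print(best_total("start", 0))                     # full search finds 70 (-70 then +140)
print(best_total("start", 0, prune_below=-50))    # pruned search settles for 40 (+20 then +20)
```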
MC: Did they know that there was a potential for, like, a bigger, better thing down the negative path? Or was it like they hit the big negative and were like, nope, and immediately, like, that’s my first question.
AR: Yeah. So I’m not sure participants did a whole bunch of that in the planning task here, because they would have learned about all the different transitions before engaging in the planning part. So you’ve kind of, you’ve done the learning section of the task in which I’m assuming you learn that if there were to be kind of a better, more positive outcome after that really bad one at the beginning, that you would have learned it and then you would have gone down that route just the same when you’re planning. But in a task where, like, you’re not, like, say just in the learning part, isn’t that a weird thing to do when you’re learning something?
KM: Yeah, I don’t know, especially if it doesn’t have, like, real-world effects. Like, why wouldn’t you explore it?
JM: I’ve interacted with a lot of people who study, like, reward. And when you do reward studies in people, people’s brains are really good at generalizing reward and punishment signals to things that are super abstract. They’ve, like, found that even when people are doing tasks like this that aren’t associated with any real-world consequence, there is still a negative hit to getting, like, minus points. Even though the points don’t mean anything, it still feels bad to do. And so people might, like, have a strong aversion to that because their brain reacts as though they’re losing something real. And that also, like, it works the other way, that we will be very motivated to get points that don’t mean anything.
AR: What you’re touching on here is the basis for a lot of our studies. In a lot of the experiments we do in the lab, you’re not doing as many decisions as you’re doing in the experiment that’s presented here, but you’re doing kind of one or two decisions again and again, like, many, many trials. And by many, many, I mean, like, four to six hundred times, maybe eight hundred times. And we’ll break that up into blocks. Yeah, the wide eyes are a normal reaction to that number. But every time you go through a trial, you’re trying to do it in a way that’s going to give you the green points on the screen. And you can program a task in which it’s, like, good, you got 10 points, and, you know, the 10’s green and there’s an exclamation point.
And you can do the same task or a similar task in which, instead of saying 10 points if they got it right, you put, like, two cents, right? You’re actually making it into money as opposed to something more abstract, like points, where the points are going to be converted to money at the end of the game anyways. But when we look at the electrophysiology, so just people’s brain reactions in terms of these electrical signals, which are really, really time-precise, you can look at exactly what the brain was doing when people got this feedback, and you see exactly the same thing whether it’s points or money. And then you see the same thing for negative outcomes: you see this opposite pattern when you have a negative reward on the screen. So like, you got no money, you didn’t do it well, and it’s in red text and whatever. People have a huge response to that too, and it’s exactly the opposite.
So all this to say, I forget what we were talking about, but I got dragged into this idea of reward. I think it’s super interesting that the brain responds similarly to something that’s more abstract, like, oh, you got points, good job, versus something that’s more concrete and more useful to us in everyday life, like money, like you got $2 for answering that right.
KM: Yeah, I think we were talking about this because we were wondering why people didn’t explore the big negative in the learning process. Did they explore down that branch at all, even if it meant taking a big negative?
AR: Yeah, I think I would say that yes, and when you think about it, I know there is some research about why we don’t like negative outcomes. Evolutionarily speaking, it makes sense, right? You’re not aiming to get the bad stuff. If you’re told, hey, there are going to be positive points somewhere, try to find them, and right off the bat you get something completely opposite to that goal, you’re maybe going to explore it, but it’s not going to be the first thing you go to, because it’s not aligned with your goal. So if you’ve made a first choice and the first feedback you get is really negative, you’re going to try to move away from that quickly. Not meaning that you wouldn’t ever go back and explore it again, or try to see, okay, I’ve checked out everything else, what might be here, but it’s not the first thing you’ll go to. I just think it’s a really cool finding to have in these types of planning tasks, too, that people won’t plan to go down that route unless the points or positive reward in the branches after that really bad negative one outweigh the negative by a substantial amount. It has to counteract the effect not only numerically but also emotionally, so the person can say, okay, I got a negative reward off the bat, but I got all this other positive stuff, and now I feel okay about what my goal was and how I’m attaining it.
JM: There’s another concept called temporal discounting that is related to this, and it just means that if something is going to happen later, either further down a decision tree or later in time, people tend to care about it less. And the example that ties these two together in my mind, about why people won’t take a big negative even when they know there’s a big payout at the end, is the exact same question as why won’t people do anything about climate change? We know that the big payout at the end is that people get to still be here and live, and live comfortably. But psychologically speaking, there’s a large negative at the beginning that makes people want to find all these other fiddly ways around it. And ultimately, you either have to make that choice or it’s game over.
KM: Yeah, with discounting, how I’ve learned about it is definitely in terms of climate change, especially when you’re trying to figure out a price on carbon, so either a carbon tax or market-based solutions like cap-and-trade programs to decrease carbon dioxide emissions. You’re trying to put a price on carbon, trying to figure out the cost of this thing that no one’s paying for right now. And a lot of that has to do with determining the effects that will happen in the future and the cost of those. And because they’re happening in the future, people put a discount rate on them. It’s kind of like an interest rate, I guess, but the value of something in the future is much less than the value of it today. That can be applied to a lot of different economic concepts, but the way I’ve learned about it is in terms of pricing carbon.
AR: So in terms of decision making, and this is where you see the influence of decision making on economics and economics on decision making, we talk about temporal discounting paradigms. You’ll have participants do these really simple tasks where it’s like, I can give you $5 today, or $10 tomorrow. And then you have people choose which one they want, and they’ll click. Depending on how you program it, you can have the task adapt over time, or just have people do various comparisons. And there’s a clear preference: if I’m telling you, I’ll give you $20 today, or $21 in two years, you’re going to take the $20 today, because that future amount has been discounted by so much that why would you choose it? And then there are these interesting findings in different populations, people with addiction, people with different psychopathologies, and what they do based on brain chemistry and whatever, but that’s a whole other tangent. That’s one type of task.
And I think that definitely applies here, right? There’s definitely an aspect of discounting, because you’re thinking about making or planning decisions over time. You’re thinking about what you did in the past, so you’ve got your memory aspect, and you’ve got your future, and as you make your choices or plan ahead, what happened at the beginning will be discounted. But in the moment, that negative you’re getting right off the bat is really, really intense. So all these things come together in these types of tasks, and it’s very interesting to think about.
Fun fact: these discounting tasks also apply to a lot of situations that, say, healthcare professionals would face, in which, you know, you’re a doctor and a treatment can save two patients with a certain probability or kill one patient with a certain probability, and you have discounting there too. It’s not temporal in the same sense, but those concepts are there as well. So discounting is something you see in a lot of fields; it’s very broad as a concept.
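[Editor’s note: For readers who want to see the arithmetic behind Alexa’s $20-today-versus-$21-in-two-years example, here is a minimal sketch in Python. The exponential discount function and the 15% annual rate are illustrative assumptions, not values from the paper; the human literature more often uses hyperbolic discounting, but the exponential form keeps the arithmetic simple.]

def discounted_value(amount, delay_years, annual_rate=0.15):
    """Exponential discounting: what a delayed reward is worth today.
    The 15% annual rate is an arbitrary illustration."""
    return amount / ((1 + annual_rate) ** delay_years)

# The choice from the discussion: $20 today versus $21 in two years.
today = discounted_value(20, 0)   # 20.00 (no delay, no discount)
later = discounted_value(21, 2)   # about 15.88 at a 15% annual rate
print(f"$20 now is worth ${today:.2f} today; $21 in two years is worth ${later:.2f} today")
# Because the delayed $21 is discounted below $20, most people take the money now.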
But I think it’s really cool that their main conclusion is that humans are more optimal than you would maybe think they would be in such a complex situation, that a lot of factors come into play there, and that even when it’s not clearly instructed, people do remarkably well.
KM: No, I was just going to try and relate it back to trading computational cost for performance. I thought that would be a really cool point to touch on, talking about kind of optimality and how humans get there and how we try to do it in like the least cost way to ourselves. We don’t want to do the computations every time we’re going through the steps. So I thought that would be a cool thing to talk about as we like talk about the conclusions of the paper.
AR: Yeah, absolutely. What I think is really cool specifically in this task is that it’s essentially a cost-benefit analysis, right? How much effort am I willing to put into a specific task or computation based on how much I’m getting out of it? So there’s that cost-benefit analysis, like, is it worth it to think hard? And then there’s another aspect, which is where all these heuristics come into play: am I actually capable of representing this entire environment in my mind and planning through it without using any strategies or shortcuts? Probably not, right? And that’s when these heuristics come in. They’re not only titrating this cost-benefit analysis, they’re also helping you get around this limitation, in the sense that my brain is not a computer. At one point in the paper, they talk about how many combinations or computations you’d have to do if you were to map this whole thing out in your mind and think through each one, and it’s something ridiculous. What people are actually computing with all these heuristics is something like 10 combinations, which is much more manageable than, I forget, something like 200 or some really large number when you actually do the math. So these heuristics seem to not only be working in your favor, bringing your benefits closer to the amount of effort you’re putting into the task, but also counteracting the fact that you couldn’t possibly do 200 combinations or computations here.
MC: No, that would be far, far, far too many for one person to do in one sitting. Couldn’t do it.
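[Editor’s note: One of the shortcut strategies discussed in this line of work is pruning, which means abandoning a branch of the decision tree as soon as it hits a large loss. As an illustration of why that makes planning tractable, here is a minimal sketch in Python. The branching factor, depth, reward values, and pruning threshold are made-up numbers, not the parameters used in the paper.]

import random

BRANCHING = 2      # choices per step (hypothetical)
DEPTH = 6          # planning horizon (hypothetical)
PRUNE_BELOW = -70  # abandon a branch after a loss this large (hypothetical)

random.seed(0)

def build_tree(depth):
    """Each branch holds an immediate reward and its child branches."""
    if depth == 0:
        return []
    return [(random.choice([-140, -20, 20, 140]), build_tree(depth - 1))
            for _ in range(BRANCHING)]

def best_value(tree, prune, counter):
    """Best achievable sum of rewards; counter[0] tallies branches evaluated."""
    best = float("-inf")
    for reward, children in tree:
        counter[0] += 1
        if prune and reward <= PRUNE_BELOW:
            best = max(best, reward)  # pruning: give up on this branch early
            continue
        future = best_value(children, prune, counter) if children else 0
        best = max(best, reward + future)
    return best

tree = build_tree(DEPTH)
full, pruned = [0], [0]
v_full = best_value(tree, prune=False, counter=full)
v_pruned = best_value(tree, prune=True, counter=pruned)
print(f"exhaustive search: best total {v_full}, branches evaluated {full[0]}")
print(f"with pruning:      best total {v_pruned}, branches evaluated {pruned[0]}")
# Pruning evaluates far fewer branches, at the risk of missing a large
# gain hidden behind an early large loss, which is the trade-off the
# discussion keeps coming back to.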
JM: And also, given that there’s a time pressure imposed on this, what it makes me think of is that in some settings you might be able to think through more of those options. For example, if you’re an internal medicine doctor on a floor where you’re managing a lot of people’s care, but everybody is more or less stable, and you have a lot of time to think through a lot of very complex decisions, you might be more inclined to use different strategies or think through things in longer form, and actually walk down mentally each of these different decision trees and figure out which one is going to be the most optimal. If you’re an emergency medicine doctor, or if you’re an EMT and you work on an ambulance, you don’t have time. You can’t think through all of your decision trees, because your patient is dying right now.
MC: Nah you gotta make decisions like right this second.
JM: This is a very real-life example where all of these decision-making processes and shortcuts come into play. You obviously want to avoid, early on, decisions that will be very costly, either costly in terms of time, costly in terms of your patient’s health, obviously, or costly in terms of resources, all these things. And you also want to do things that you’ve done in the past, because you’ll be very quick at them and you kind of know how they go. A lot of people in emergency medicine or other urgent settings have protocols of if this, then this, and you don’t have to think about it. It’s cached, it’s that memoized brain cookie.
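[Editor’s note: “Memoized” borrows a term from programming, where the result of an expensive computation is cached and reused instead of being recomputed. A minimal sketch in Python, in which evaluate_plan is a made-up stand-in for slow deliberation rather than anything from the paper:]

from functools import lru_cache
import time

@lru_cache(maxsize=None)
def evaluate_plan(situation):
    """Stand-in for slow, effortful deliberation about a situation."""
    time.sleep(0.5)  # pretend this takes real thought
    return f"protocol for {situation}"

evaluate_plan("chest pain")  # slow the first time: we actually deliberate
evaluate_plan("chest pain")  # instant afterwards: the cached answer is reused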
AR: I think what’s cool is that all these complexities, at least to me, are what makes studying decision-making so interesting. If we can find a way to get this to work in the lab, then we can start understanding what people are actually doing, like, what are the cognitive processes you’re engaging in when you’re looking at that list or the protocol that’s been established, where maybe there’s some decision-making involved or maybe there’s none. How does that decision-making differ from decision-making when there’s more freedom in it? What type of strategies are you engaging in? It’s a huge field. It’s very complex. I think the fact that this paper has a very straightforward task, and it’s still somewhat distant from the real world, shows you that it’s hard to study decision-making and planning. But it makes it so interesting when you find this small piece of evidence and you’re like, oh, that means something. We can use this. We understood how this human mind, this black box sometimes, works, and that’s amazing.
MC: Yeah, the human mind is so freaking cool. The more I learn about it, the more I realize I don’t know, you know?
JM: Then I think it would be cool to kind of end with either the coolest thing you took away from this discussion or something you’re really, really excited to see from this general field of research in the future.
MC: I’m really curious to see, because this paper was published, what, in 2015, I think? I think it would be really interesting to see where it has come in the intervening seven years. You know, what have they built upon and how much more can we understand about this?
AR: Hopefully a lot, because it’s a substantial amount of time, but it’s such a complex subject that sometimes the increments we move forward in are really small. But sometimes there are these big strides: you move forward slowly, slowly, and then, based on all these little pieces of information, someone figures out, boom, something big, right? And this paper is in a big journal, so clearly this was something huge.
MC: My sister, who is a scientist, knows, like, this is a big deal, this is a big journal.
AR: I think the thing that’s interesting to me, and what I’d like to see or am curious about, and it’s something I’ve been thinking about relative to my research a lot lately, is how changes in cognition across the lifespan affect all of this planning. A lot of my work right now is about how decision-making strategies change across the lifespan. And we’re finding these really cool changes, and it kind of comes down to: as we age, there’s this cognitive decline, so how does that impact our general cognition, and how do those changes in general cognition change the way we make decisions? Does it only impact the decisions we’re making, or does it also impact the way we plan? As we age, is decision-making hard because we just start to struggle with decision-making itself, or do we also struggle with planning? And the same thing for children. You’re in this period of development, your cognition is improving over time, so you see changes in the opposite direction, improving from childhood towards adolescence, and you see these big changes in decision-making strategies and how people engage in them and why, and the neural activity, la, la, la. But does that also translate to differences in planning, or is planning something that everyone does similarly across the lifespan? So that’s something I’m starting to investigate now: does it change, or does it stay the same? Does it relate to abilities like working memory, the way decision-making does, and all that wonderful stuff?
KM: I think for me the most interesting lesson I took out of this discussion was that people can approach planning and decision-making in very different ways and still reach optimal outcomes, or almost optimal outcomes. I just find that really fascinating, because I know how I approach things, even if I don’t know the phrases and terms and scientific ways my brain is working, and I know that people approach things differently. It’s cool to see data put to everyone’s different ways of thinking and to see that they all lead to optimal outcomes, which is just a very impressive feature of humans.
JM: Yeah, absolutely. Well, thank you all so much for coming on this podcast. This was such a fascinating discussion. Yeah, I really appreciate all of the time and effort and work you all put into this.
KM: Thank you for having us.
MC: Yeah, so I was going to say thanks for having us, Jamie. It’s been really interesting.
AR: It’s been really fun. I think it’s really cool to talk about these topics with people who are outside of my field too, because the ideas that come up, the questions that come up are always different and kind of gets me thinking too, as opposed to talking to the people who know a lot about it. The questions are drastically different, and gets me thinking about kind of where this is going and where the kind of general audience or general interest is in kind of understanding this better.
JM: Thank you all again for joining us on our third episode of In Plain English. We’ve been talking about the paper, “Interplay of Approximate Planning Strategies,” with our expert, Alexa Ruel, and our guests, Mo Carr and Kelly-Anne Moffa. You can download the paper for free on our website at inplainenglishpod.org. And once again, you can follow us on Twitter @plainenglishsci, that’s P-L-A-I-N-E-N-G-L-I-S-H-S-C-I, or find us on Facebook at facebook.com/plainenglishsci. And you can find our podcasts on Google Podcasts, Spotify, SoundCloud, or wherever you listen to your podcasts. We’ll see you next time for another paper presented In Plain English.
[Outro music]
Hi Jamie,
I really enjoyed hearing Kelly-Anne at the discussion table with you on this episode.
I wasn’t able to listen to the end tonight, due to another commitment, but I gain something from every program you do. The title on this one (Are Humans Rational) was provocative. My understanding, bottom line, is that decision-making also involves the heart and irrational emotion, plus the whole body for that matter. So the process is not confined to strictly cognitive function. But God bless brain researchers. Something can always be learned.