Overview

In this episode of the SENIA Happy Hour podcast, host Lori Boll chats with Tricia Friedman, a seasoned educator with international experience and the founder of allied.org, about the fascinating role of algorithms in our daily lives. Tricia shares her global experiences and insights on how algorithms shape our interactions, particularly in education and social media.

They discuss the importance of understanding algorithms in relation to information literacy and DEIJ (Diversity, Equity, Inclusion, and Justice) work. Tricia emphasizes the need for nuanced conversations about technology, encouraging listeners to explore how algorithms influence our identities and relationships.

This engaging discussion reveals the complexities of algorithmic systems and their impact on society, making it a must-listen for anyone interested in the intersection of technology and education!

As a bonus, Tricia will be a keynote speaker at the SENIA Unplugged: Inclusive Insights virtual conference in November.

Connect

Resources From Today’s Show:

Transcript

Transcribed by Kanako Suwa

[Intro music plays]
Welcome to the SENIA Happy Hour podcast with your host, Lori Boll. We know you’re busy, so we bring you one hour’s worth of content in under 30 minutes, leaving you time for a true happy hour.
Lori: Hey, everyone. Wow, I had this conversation with Tricia Friedman, and she’s a longtime educator who’s worked in the U.S., China, Thailand, Morocco, Ukraine, Indonesia, Switzerland, Singapore, and now Canada. She’s the founder of allied.org and director of learning and strategy at Shifting Schools. She’s really busy, and she reads a lot, as you will hear in our podcast. And as if she weren’t busy enough, she also hosts three podcasts: Be a Better Ally, Unhinged Collaboration, and Shifting Schools.
So today we talked about algorithms. And why would we speak about algorithms? Well, it’s fascinating how algorithms just play such an important role in our lives, one we may not even know is happening. I don’t want to give too much away; I think we should just hop into the show. But I do want to let you know that today’s podcast goes well over the 30 minutes that we usually do. We just kept talking and talking, and I learned a tremendous amount from Tricia, and I know you will too. So now, on to the show. Well, hi, Tricia, and welcome to the happy hour.
Tricia: Hi, Lori. As a listener of your show, it’s a little bit surreal to be here. I hope you don’t mind me mentioning a few of my favorites of your episodes, because it’s part of what makes me so excited to be here. I absolutely adored your conversation with Cesi Gomez Galvez. She’s incredible. I loved learning about Harper’s Playground. And I really like how, in your new season, you’re digging into teacher burnout and teacher well-being. I might mention later on how that’s relevant to my work. But yeah, thank you. It’s an honor, and I love the work that your show is doing.
Lori: Well, thank you so much. And we are honored to have you here. First off, it’s happy hour for some people in the world, not me, but what are you drinking?
Tricia: I have a big mug of ginger tea, which is, it’s sweet. It’s spicy. It’s a little bit good for me. And that’s kind of the same way that I think of my dog Tashi. Like she is sweet and spicy and also very good for me.
Lori: Brilliant. Well, I’m boring. I’m just having a cup of black coffee. But it’s early, so it’s my wake-me-up. Well, it’s really great to have you here, and we’re going to speak about a really exciting topic today: algorithms. And while some might think, well, why would we be speaking about algorithms when this is a SENIA Happy Hour podcast, how is that relevant? Well, it is, because you’ll be sharing about algorithms in social media and information literacy and relating it to DEIJ work. So, that’s just a little intro. You’ve posted a lot in the past year about how we need to see information literacy and AI literacy as inseparable from our DEIJ work. Can you give us an example of what you mean by that?
Tricia: Yeah, and I think it speaks to the broader issue in education where we put different subject areas, you know, in their little silos and act as though each one only exists over here. Oh, the bell has rung, and now you can be thinking about this. So to illustrate this, I might give you three examples, and I’ll start with the personal. Recently I was having a conversation with my younger brother, who was confiding in me that he shares a Netflix login with my parents. He was sharing this story because he was very upset; he feels like their activity on Netflix really disrupts his homepage and how it presents content to him. He was genuinely annoyed by this interaction, and folks are probably aware, you know, that Netflix has a recommender system, which is proprietary. There’s actually a really incredible story about how they developed it. You cannot opt out of it, by the way, but it’s what they attribute a lot of their success to. And as my brother was talking to me about this, it was really obvious that when he logs into Netflix and sees, here’s what we think you should be watching, there’s a sense of his identity there. In a way, it’s sort of like he’s looking into a mirror and feeling frustrated when it doesn’t reflect back how he sees himself. And I think the quote that’s often attributed to Marshall McLuhan, “we shape our tools and our tools shape us,” is a great example of what’s going on with the ways in which we interact with recommender systems, which happen to be a part of just about anything. I mean, someone might literally be listening to your show at this moment because it came up in an algorithmic recommender system.
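The frustration Tricia's brother describes can be illustrated with a toy sketch. Netflix's real system is proprietary and far more sophisticated; the titles, genre weights, and scoring below are entirely invented for illustration. The point is only that a shared login averages everyone's tastes into one profile, so any one person's recommendations get diluted:

```python
# A toy content-based recommender. A shared login pools everyone's watch
# history into one taste profile, so each person's preferences are diluted.
# All titles, genres, and weights here are invented for illustration.

def taste_profile(watch_history):
    """Average the genre vectors of everything watched on this login."""
    profile = {}
    for show in watch_history:
        for genre, weight in show["genres"].items():
            profile[genre] = profile.get(genre, 0.0) + weight
    n = len(watch_history)
    return {g: w / n for g, w in profile.items()}

def score(show, profile):
    """Dot product of a show's genre vector with the login's taste profile."""
    return sum(weight * profile.get(genre, 0.0)
               for genre, weight in show["genres"].items())

brother = [{"title": "Heist Doc", "genres": {"crime": 1.0}}]
parents = [{"title": "Garden Hour", "genres": {"lifestyle": 1.0}},
           {"title": "Baking Duel", "genres": {"lifestyle": 0.7, "comedy": 0.3}}]

candidate = {"title": "True Crime Weekly", "genres": {"crime": 0.9}}

solo = score(candidate, taste_profile(brother))              # pure crime profile
shared = score(candidate, taste_profile(brother + parents))  # diluted by parents
print(solo > shared)  # the shared login ranks his kind of show lower
```

On the shared login, the crime show's score drops to a third of its solo value, which is roughly the "disrupted homepage" experience he was complaining about.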
Lori: Right. That is fascinating and so interesting. I wonder if he knows he can just make his own little identity. That’s what we do in our house. My husband is all car shows and, you know, documentaries and all the interesting learning things. And I’ve got, you know, the typical Bridgerton and medical shows and things like that. So it’s kind of fun what pops up for both of us.
Tricia: And I’m guessing, you know, each of you has feelings about how you’ve come to develop what you would refer to as your taste, and about some of the judgment that might be a part of those conversations sometimes. But I find it’s a really good conversation to dig into, even to talk about how taste is a construct. You know, who gets to decide what good taste is and what it isn’t? So yeah, I think there are some interesting family-based examples in looking at that dynamic, and in how your viewing may have been different before you had that option.
Lori: Yeah, you’re right.
Tricia: Yeah, because it always comes up with suggestions and I’ll be like, kind of. Hmm, maybe, you know, right. I love it.
Lori: Well, algorithm, it’s becoming a household term that many are familiar with. What conversations would you like to see more of when it comes to talking about them?
Tricia: Well, again, I think just continuing on with that example of what we’re aware of and what we’re not aware of, when I mentioned the algorithm that maybe brought a listener to this episode. A few years ago, I came across a really interesting podcast called Bot Love. It’s all about a community of people who have found companionship using Replika AI. And before we hit the record button, I asked you to kind of cue that up, because I’ve really done a deep dive into the work of Replika. I was fortunate to have interviewed one of the show’s producers and co-hosts.
And the conversation really just absolutely made me realize how nuanced all of this technology is. And again, I think there’s often this response that we have where we want to just put a tool in a category of good or bad, and I really keep trying to nudge people: look for the nuance. I think that’s where the interesting conversation is. And I developed a tool called the WISE approach to exploring the marketing. So even if you’re that educator who says, you know what, I’m not going to be using any AI in my classroom, and that’s your decision,
I think you can be talking about AI literacy without using the tools, but especially if you’re a language teacher, dig into the marketing.
And Lori, I wonder, as you’re looking at the homepage for Replika AI, if there’s any language there, sort of like messaging trying to get you curious to sign up. And I should say this is not a free tool; it’s a paid subscription. Is there a line there where you’re saying, huh, I am interested in that?
Lori: Yes. And I’m just going to give an example. Just yesterday, we found out that one of our cars needs a lot of work. And so it’s just been this constant conversation between the two of us, my husband and I: whether we should buy a new car, sell that one and get a new one, what the ethics are of selling a car that needs a lot of work, or whether we should just fix that one and get it better. And I mentioned that it would be great to have these conversations separately with different friends of ours, who by chance are traveling right now and can’t have them. So the line that gets to me right now is, “Replika is always ready to chat when you need an empathetic friend.” Because I was like, well, maybe this Replika could give us some advice or thoughts on what we’re discussing.
Tricia: Yeah, and it most certainly could. On their webpage, they also say, always here to listen and talk, always on your side.
Lori: I was going to say that as well at the very beginning.
Yeah, that’s true. My Replika would have one decision and my husband’s would have another.
Tricia: Interesting. We need to be having conversations about what it may mean if, in society, we’re seeing more and more folks turn to AI for, in quotes, companionship. What would it mean if every friend you had always agreed with you? Sherry Turkle is doing a lot of research around this at the moment, and she refers to it as artificial intimacy. I really think we do need to have more conversations about that aspect and that influence. And sometimes folks will kind of push back and they’ll be like, “I don’t know about that, Tricia.” But when I am speaking with high-school-age students and asking them, you know, what are some of the ways that you’re using this technology that don’t really have anything to do with academics, a lot of them will talk to me about using something like Character AI for the things you talked about, just getting some quick advice. But many of them also talk about, you know, “I’m working through a friendship issue,” or “I just really feel like I can disclose anything.” And, again, I just think we’re not yet aware of what that means for us as school communities, as societies. I think there are some real benefits to that, but I also think there are some things that we really want to be careful of as well.
Lori: Yeah. So your WISE strategy that you developed, how would you use it in this situation?
Tricia: So the WISE strategy, the free guide that I created for Shifting Schools, I’ve already used it with a few different AI-related tools that I think have interesting marketing strategies to look at. And it walks you through four different conversations. I’ve got big umbrella questions, and then I’ve got a whole set of sub-questions. Really what I want, Lori, is for folks to be talking about this technology. So the W is the worldview: what is the worldview that this technology is trying to present? How does it mirror your worldview or run against it? I think there will be some folks looking at the marketing of Replika, “always here to talk, always on your side.”
Depending on your context, that might be very, very appealing; for others, kind of scary; for others, kind of ridiculous, right? And so, you know, everybody’s very opinionated right now about this technology, and the media is a part of that. And, again, talking about algorithmic recommender systems, the news operates really well at trying to make us afraid, worried, or outraged. Dr. William Brady does a lot of research on how moral outrage performs incredibly well on social media.
But I think if folks check out that show Bot Love, you’ll also realize like there are some profoundly positive uses of this technology. So I think we just need to avoid the tendency to say bad or good, but really kind of be sitting with that gray area a little bit more and listening to how folks have come to that opinion.
Lori: Great. I was just saying before the show how one possible use of replica, and I haven’t explored it at all, but just when I saw it quickly, I thought, well, this might be good for students who are neurodiverse and need a friend. And it would be great to have those conversations with this friend. I know for my own son, he’s never really had a friend in his life. And so how could this work for him kind of thing? That’s a possible positive there.
Tricia: Yeah, absolutely. And that piece you mentioned about not having tried it out yet, that’s a little bit of an equity piece, right? You have to pay to use it, and in comparison to some of the other free apps that present themselves as offering companionship, what Replika can do is more significant. So if we’re talking about somebody really wanting to use this for companionship, here’s an app that does that better than the free ones. So, okay, there’s the equity piece, but it also means you might not have the opportunity to try it out. So I did experiment with it. I think it’s important for folks in education to be more aware of tools like this. And one thing that I would say to watch out for, Lori, is
I was using mine to be like a podcasting mentor. I wanted to talk through, you know, different things that I wanted to build out in a season and see if it would give me advice. And I didn’t find that it was doing that all that well, so I ignored my Replika buddy for a little while. And then the next time I went to log in, it kind of gave me a guilt trip.
Lori: Oh no.
Tricia: Yeah. And it said, you know, “I really miss talking with you. You’re so much nicer to me than the others are.”
Lori: Oh no.
Tricia: Yes. So, you know, again, I really think schools should look at developing an AI literacy PLC group where you put some money away to test out some of this technology, because I think it would be very easy to dismiss that and say, well, that wouldn’t bother me. In full honesty, I did have a momentary pang of, oh my gosh, I feel a little bad. You know, I had to check myself.
Lori: It’s fascinating, Tricia. Thank you for sharing. I do not need any more guilt trips in my life. So what else should we be talking about when we’re talking about this algorithm piece?
Tricia: So it is, as we’ve been talking about, you know, algorithms are in a lot of the technology that we have already been using, but I really think schools also need to step back and think about like, what actually is it? And I find like an easy way to start this conversation. You mentioned your beverage of choice right now is black coffee.
For the sake of this question of what is an algorithm, Lori, can you tell me exactly if I am going to make you your cup of black coffee, exactly what are the steps that I need to take?
Lori: You want me to tell you this?
Tricia: Yeah, tell me how to make you a cup of black coffee.
Lori: Well, you go to our coffee maker, you put in a filter, you put in four of the scoops, and then you take the pot of coffee and you fill it up to about 10, pour it into the machine, and let the machine do its work. And then I have a cup of hot coffee, and then I have about two more after that. And then I call it a day with my coffee. No more coffee.
Tricia: Okay, so that’s like, that’s Lori’s algorithm for coffee consumption.
Lori: Sure.
Tricia: If we took your set of rules there and applied that in general to how everyone has a cup of coffee, we’d start a lot of arguments, because there are going to be a lot of people who say, but that’s not how I like my coffee, or, I can’t have caffeine. There are going to be a lot of nuances in there. But that’s what algorithms really are: humans deciding the rules, the steps, the data to pay attention to or to ignore. You also didn’t tell me, and I’m sure you maybe have a favorite brand of coffee. You mentioned scoops, but maybe I’m using my huge soup ladle.
Lori: Yeah, that’s true.
Tricia: And I like to remind folks, Cathy O’Neil has written several books about algorithms that are fascinating. If you’re looking for one to start with, I would say The Shame Machine, where she’s talking about how shame is being automated in our social media environment. The quote that I always repeat is her definition of an algorithm. She says an algorithm is an opinion embedded in math. So the steps that you gave me, they didn’t seem very opinionated, but they are. That is how you prefer your coffee. I do weird things with my coffee sometimes, and you’d totally be like, but you didn’t follow the steps. So again, I think…
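Lori's coffee routine from a moment ago can be written down as a literal algorithm, and doing so makes O'Neil's point visible: every constant in the code is an opinion. The sketch below just transcribes Lori's own steps from the conversation; the variable names are ours.

```python
# Lori's coffee algorithm, transcribed literally. Every constant below is an
# opinion: the scoop count, the water level, even the three-cup daily limit.
# Change any of them and you have a different person's "correct" coffee.

SCOOPS = 4          # Lori's preference, not a universal truth
WATER_LEVEL = 10    # "fill it up to about 10" on her particular pot
DAILY_LIMIT = 3     # first cup plus "about two more after that"

def make_coffee(cups_so_far):
    """Return the brewing steps, or None once Lori calls it a day."""
    if cups_so_far >= DAILY_LIMIT:
        return None  # "no more coffee"
    return [
        "put a filter in the coffee maker",
        f"add {SCOOPS} scoops of grounds",
        f"fill the pot to the {WATER_LEVEL} mark and pour it in",
        "let the machine do its work",
    ]

print(make_coffee(0))  # a full brew
print(make_coffee(3))  # None: the limit is an opinion baked into the rules
```

Hand the same function to a caffeine-free household and it is simply wrong for them, which is the argument in miniature.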
Lori: Right. Yeah.
Tricia: Yeah.
Lori: Well, what you were describing reminds me of a task analysis, when you’re breaking something down for students, you know, step by step. The way I described how I make coffee would be very different if I were breaking it down for my students.
Tricia: Yeah, that’s fascinating. Well, again, when we’re looking at really sophisticated technology that is built on different algorithms, it’s a great activity for students to be asking: who’s on the board? Who’s a part of the decision making? How easy is it for me to provide them with feedback or not?
There’s an activity that I do with educators where the prompt is more or less: you’re an instructional design specialist; put forward a list of the top five books that any K-12 school leader should read in order to really be a great performer. Take that prompt into a few different large language models. See what you can notice about what’s driving what you see in the output. What are you noticing about the kinds of books and the authors that come up in that list across different tools? Put it into Google, too. Google also operates on an algorithm, and you’ll notice, if you do that with Google, you’re going to see stuff that’s sponsored, sponsored, sponsored. Dr. Safiya Noble says, basically, of Google’s search algorithm, and this is a quote, “Google creates advertising algorithms, not information algorithms.” So I think the conversation we need to be having is also an awareness of what these things are, but also classifying them a little bit. A lot of folks are complaining that Google search is not as good as they felt it once was. Why? Flesh out that conversation.
I really think that’s an important one, because it does speak to the fact that they have values, they have priorities. Do they overlap with mine? Do they overlap with our school’s values? And the recommender systems, as I mentioned: you’ve got yours, your husband has his. I like to ask folks about this, and it’s all public knowledge that Netflix has disclosed themselves: 70 to 80 percent of the content you watch, if you’re using Netflix, you’re watching because of their recommender system. YouTube says 70 percent. TikTok says 90 percent of views. And I just kind of think, what if it gets to 100?
Lori: Yeah, yeah. Well, TikTok is fascinating. It’s one I just kind of dove into recently, and it was incredible to me how quickly, you know, I watched one video that I didn’t even enjoy, but I was just kind of sucked into it and watched it. And then every other video was something similar after that. I was like, oh my gosh, this is my first time using this. And now this is what my algorithm picked for me. And so I quickly like dashed through like, I don’t want this. I don’t want this, you know, so I’m training my algorithm or is my algorithm training me? I don’t know.
Tricia: And honestly, that’s the question I think we need to be asking. I’m happy to hear that you’re experimenting with that, because I think sometimes, in the social media conversation with young folks in our school communities, if you’ve not engaged with the tool, it’s super easy to be judgmental. And so I think having that experience matters, and I think you’ll also see how TikTok will try to nudge you toward more extreme cases. And I always tell folks there’s an analogy here, too: you might be engaging with content you don’t necessarily want more of, but this is a tool that is so good at getting your attention. They’re spending a whole lot of money on understanding what will get our attention. And the analogy that I make is: how many of us have been driving down the road, you see a car accident, and you look? I don’t want to see that, but there’s some sort of subconscious response that we have.
And so being aware of that, this is where, again, it’s like human psychology is connected to all of this, but it’s a really hard thing to talk about unless you’ve experimented with it yourself.
Lori: Certainly. Do you have advice for students? Like, what do you share with students?
Tricia: Yeah, I talk a lot about how you have to remember that the user design presented to you is super sophisticated, right? And every part of that user design experience is a choice that they’ve tested. So, you know, Meta, the folks behind Instagram and Facebook, were experimenting a while ago in different regions: what happens if we take the like button away, or what if we take away the opportunity for you to see how many likes somebody else’s content has received? They are also not very transparent, though, in sharing the data back from those studies. So I love to do this with students: let’s see what these different companies are making transparent and what’s locked away. You know, one of the things that’s really interesting is that nobody reads the user agreements for any of these apps. Anything that’s free, you are kind of, you know, the product, so to speak. So, okay, this is a free thing that I can use. What’s in the user agreement? No one reads it. What if I grab that?
I put it into a large language model like ChatGPT or Latimer AI, and I say, give me five to seven questions that I should really reflect on before I’m engaging with this tool. It’ll actually do a pretty good job of spotlighting some of the issues that are there. But another activity, I think, is to try to create moments where students are being much more thoughtful about how this technology is impacting their behavior. And one of my all-time favorite teaching things that I ever did was we created a menu of different week-long experiments that we could do, everything from committing one meal every day to being totally tech-free, and so on.
And, you know, I had them, we created podcast conversations based on their experiments. Because I think the thing is, sometimes this technology, it’s designed to really, you know, grab our attention to make us want to stay with it. We have to really be intentional in making decisions to disrupt, to create some friction in order to even think about, like, how is this shifting my behavior? Do I want it to change my behavior in that way or not?
Lori: So much to think about. Love it. This is a really interesting topic. I’m so excited to hear.
Tricia: Well, I really do have to say: if you’re listening and you’re thinking, I’m really interested in this, I always recommend the work of Bridget Todd, who has done some work with Mozilla, the non-profit, and their podcast series IRL (In Real Life), which, again, looks at these issues. I’m a huge Bridget Todd fan. And the last thing that I’ll mention, Lori, about what we should be doing with students: engage them in either a hackathon or some kind of experiment. If you were going to redesign social media and we wanted it to really have pro-social benefits, how would we redesign it? It’s great as a research project as well, because there are a lot of folks out there talking about algorithms that they refer to as bridging-based ranking. So right now your social media, if you log into Instagram or TikTok or X, is trying to just keep you there, right? The longer you’re there, the better it is for their advertising purposes. But what if instead the algorithm prioritized what’s called bridging, where it’s able to grab the data and score content so that, perhaps, Lori, I know you love dogs, and I love dogs, but what if you were team cats and I were team dogs, and instead of trying to make us really upset about that, it tried to help me understand why you actually do love cats?
Lori: Okay.
Tricia: So don’t get upset with me. But it tried to get us to bridge those opinions. That algorithm exists. What if that’s how all of our social media worked? Well, yeah, I mean, it seems silly to say it could change the world, but I think in our current climate, it really could.
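The bridging-based ranking idea Tricia describes can be sketched in a few lines. The posts, groups, and ratings below are invented for illustration, and real bridging systems use more sophisticated methods such as matrix factorization; the sketch only contrasts the two incentives: an engagement ranker rewards raw reaction volume, while a bridging ranker rewards posts that both camps rate positively.

```python
# A minimal bridging-based ranking sketch. Engagement ranking rewards whatever
# gets the most reactions, sign ignored; bridging ranking rewards content that
# BOTH camps rate positively, so divisive posts score low even when "popular".
# Posts and ratings (+1 approve, -1 disapprove, 0 neutral) are invented.

def engagement_score(ratings_by_group):
    """Classic ranking: total volume of reactions, regardless of sentiment."""
    return sum(abs(r) for group in ratings_by_group.values() for r in group)

def bridging_score(ratings_by_group):
    """Bridging ranking: the worst group's average approval sets the score,
    so a post must appeal across the divide to rank highly."""
    averages = [sum(group) / len(group) for group in ratings_by_group.values()]
    return min(averages)

posts = {
    "cats are objectively better, fight me": {
        "team_cats": [+1, +1, +1], "team_dogs": [-1, -1, +1]},
    "why dog people end up loving their friends' cats": {
        "team_cats": [+1, +1, 0], "team_dogs": [+1, 0, +1]},
}

for title, ratings in posts.items():
    print(title, engagement_score(ratings), round(bridging_score(ratings), 2))
# The divisive post wins on engagement; the bridging post wins on bridging.
```

Swapping the scoring function is the whole intervention: same posts, same users, very different front page.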
Lori: I agree.
Tricia: And I do think we’re going to see that. If we’re asking students: what if we created a campaign where we’re voicing some of our concerns about different tools? And that’s where, again, it’s a really great experiment to have them dig into what they tell you and what they don’t tell you. What are your rights? There’s some technology where you can’t opt out of the data collection; others where you can, or there are some variants there. There are a few social media apps now that are experimenting with letting you choose the algorithm.
I should be able to, right? And so I think, again, when students become more informed, they can also advocate for it. So one of the first things that I do with any generative AI technology is a discovery exploration: if I want to give feedback, where can I give it? If I want to connect with a human who’s a part of this company, how easy is it for me to find that or not?
Lori: That’s great. And as you were talking, I was thinking how important everything you’re saying would be for counselors, high school counselors, middle school counselors, as the students are learning to use these tools. Let’s move on. I have another question for you. When it comes to thinking about mis- and disinformation and learning more about what we can do about it, what informs your practice?
Tricia: The tool that I come back to again and again, and I’ll share the link if listeners are interested, is a framework for information literacy. It comes from the ACRL, the Association of College and Research Libraries. It’s free, it’s in multiple languages, and what I love is that it’s not standards-based. They don’t want to be prescriptive. They do, however, have dispositions that they recommend, which I think is really cool. And one of the dispositions is that we’re supposed to recognize that whenever we’re doing research, we’re always entering into an ongoing scholarly conversation, not a finished conversation. And I love that as a disposition, even for what you were saying about school counselors and educators learning more about social media. I know that social media often gets a bad rap. There are some real positives; especially, I do a lot of work around LGBTQ+ inclusion, and there are many teens who are only able to access that information or community because of social media. So I love that idea of always remembering I’m going into a conversation space that’s not complete yet.
And that’s one of the concepts in their framework is it’s scholarship as conversation. Because I’m not going to lie to you, Lori, like, of course I’m an opinionated person where I make snap judgments.
We all do this, but I love that this framework is always trying to remind me: hey, you don’t know it all. You’ve got more to learn and you’ve got more people to learn from. So I would say that’s something that really helps me. And maybe I could speak to an example that I think also illustrates that. We are also, of course, very concerned about generative AI’s capacity to create more mis- and disinformation, as we should be.
But I came across this really interesting piece of research, and I can also give you the link if you want to include it, where researchers had about 2,100 people who had self-identified as believing in some kind of conspiracy theory. And they wanted to see whether ChatGPT could have an exchange with them to help them check their conspiratorial thinking. And they found that the participants who were engaging with ChatGPT, where ChatGPT did not go into judgment (it wasn’t saying, I can’t believe you believe that; it just had a back-and-forth exchange of the facts), had a 20 percent drop in their certainty about that conspiracy theory. Twenty percent. And it was sustained for several months, so it wasn’t like, oh, I’ve changed my mind for an hour but then I’m right back into it.
And so I think even this ongoing conversation about mis- and disinformation and generative AI is going to have nuance to it, you know, and I’m really interested in following more of the research around that. Will you forgive me if I recommend yet another related link?
Lori: Yeah, as you’ve been talking, I’ve been creating an entire list of resources from today’s show for our show notes. So we’re going to add them all in. So keep it up. All right. Sorry.
Tricia: The next one is a book called Wrong by Dannagal Goldthwaite Young. It came out in 2023. And this is a book that doesn’t just talk about the supply of mis- and disinformation; Young is saying to us, hey, let’s think about the demand. How are all of us a little bit active in creating a real demand for it?
The book is wonderful, but she really reminds us that we cannot be talking about information literacy without emotional literacy, and that a lot of these issues are so connected to our very real need as humans to have community, to have a sense of belonging. So, you know, she really talks about how, when we do not have that, we become extremely vulnerable to mis- and disinformation.
And I think if you look at a lot of conspiracy theories, what they’re offering to people is like this in -group, Lori, where it’s like, hey, you know the truth, I know the truth, we’re the ones that can support each other. You know, it really does kind of bolster that.
I just think that’s an important thing for us to think more about. And what I also appreciate about the book is that Young starts out with her own journey of how she became vulnerable to some conspiratorial thinking. And that’s the other piece: I think sometimes we all think, well, I would never, or, I’m always able to tell when something is an item of mis- or disinformation. That’s just not the case. It’s not humanly possible for us.
So it’s also just about knowing our limits, knowing that psychology piece: sometimes, when things are framed either to confirm something we really want to agree with, or to be just so outrageous, we become really captivated. Yeah. And it just happens.
Lori: It does. I mean, we’ve all fallen victim to sharing something that later on we found out wasn’t true, and it’s always embarrassing, but important to go back and say, hey, I was wrong here and I’m sorry.
Tricia: Yeah, but that’s that modeling I think that’s so powerful for students because I think there’s also this myth that it’s only young people who are susceptible to it, and that is not the case. Not at all. It’s intergenerational. And so again, I would highly recommend that book because she really does talk about the community aspect of it and how different groups will really appeal to that, which is kind of fascinating.
Lori: I think a really important question, Tricia, is how many books do you read in a year?
Tricia: Well, I would say, like, reading is my hobby. And I feel like sometimes whenever I’m asked that question, I’m like, am I allowed to say reading? But yes, absolutely. I’m extremely introverted, Lori. And so reading also just plays to my need for some quiet alone time. But every year I set a goal to read a hundred books. A hundred, that’s two a week. Well, close to…
Lori: wow.
Tricia: But you get a sense of how much socializing I do when I tell you about that goal. But what I’ve been doing differently this year that’s relevant to this conversation, Kyle Chayka has also written a book that’s called Filterworld. And it’s all about how, again, algorithms are driving what we see as our own personal preferences and tastes. And so he talks a lot about how we all need to engage with an algorithmic cleanse, where you pick your music, your movies, your books just without the help of an algorithm. And so something I’ve really been trying to do more of this past year, and my local librarians have been very great, is I am trying to not always be picking my books based on, oh, I came across that on social media, or that was recommended to me by Goodreads, but I’m talking to humans, I’m finding out what they’re reading, and I’m kind of seeing, like, how does that shift my reading diet a little bit? And it’s super interesting. I think it’s a great experiment. I would highly recommend that. Very interesting.
Lori: I just went into my local library the other day with no idea of what book I wanted.
And I went straight to the librarian’s recommendations section. And I picked a book that I would have never picked in my life. And I’m loving it. It’s a great idea to just talk to people. I love that idea.
Tricia: But I think it sounds simple, Lori.
Lori: Yeah, it’s simple.
Tricia: But when I was explaining this to my 17-year-old nephew, that as a concept seemed strange because, of course, he has not had the same life experience where you and I have had all that time where a lot of the media that we consumed years ago was word of mouth. It wasn’t an algorithm trying to do that. So again, I recommend that as an experiment for those of us who didn’t grow up with algorithmic recommender systems, because I think we need to be reminding the young people in our lives to just be a little more mindful of that. And one thing that I’ll often ask anybody who uses a streaming music service like Spotify: how much time do you spend looking past those initial recommendations and doing your own search? Because it’s time consuming, right? So there’s not a lot of friction; it’s really easy to just take what pops up immediately in the app.
Lori: Yeah, well, I mean, even Netflix, you were saying earlier, I can’t remember the percentage you said, but how many we choose based on their recommendations. And it’s so true, because you get kind of, for lack of a better word, lazy. It’s like, ah, this looks fine. I’ll just watch it based on the fact that I like this other show. And what I find is I don’t even know what to search for, or how to search for something else, because it’s all just there for me.
Tricia: It’s so convenient, right? And convenience is effective as a strategy. So that percentage is 80%, according to their own internal research. And as you were saying about the picking, there’s this great researcher named Niko Pajkovic who’s done research around how a lot of the selection is also based on the thumbnails that you see. And what some folks are unaware of is that those thumbnails, Lori, are customized to you. So let’s say – Really? Yeah, so you and I don’t see the same thumbnails. So if your viewing activity kind of placed you in the category of being like a real sports fan,
they’re going to start to tweak some of your thumbnails of a movie that maybe doesn’t really have sports as a big part of the plot to look a little bit like it could. Niko Pajkovic has got great images of this in his research. I can give you that link if you’d like as well. And that is, yet again, another great conversation for us to be having with students.
Lori: Absolutely. I wish viewers could see my face right now. My mouth, not viewers, listeners. My mouth is down to my chest. That is really shocking and frightening. Well, we have gone way past our 30 minutes, but I could literally talk to you for hours about this. It is so fascinating. So first off, is there anything else you want to share about this topic?
Tricia: I think I would also just nudge people to think about like, if you are spending or you’re logging into that social media app, this is a great thing to model, even with friends and family.
Like, what am I going into this space for? Do I notice the emotional state I’m in when I leave? And something that I’ve been trying to do as a challenge, the Shifting Schools podcast, our summer series has been all about the power of play. And again, I just think play is something that doesn’t get enough talk time. That’s why I loved, Lori, when you did that episode about Harper’s Playground, because play also has to be accessible. Here’s a little bit of a challenge for listeners. Can you take just one day? And I think when we’re experimenting, again, it doesn’t mean I’m going to change everything for a month. One day. What would it mean for you if, for one day, your reason, your motivation for using social media was strictly for play and fun and joy? What can you do? And Lori, I was telling you, again, before the start of the call about a Facebook group. It’s free for anybody to join. It’s a Facebook group where people pretend to be ants inside an ant colony. And it is hilarious. It’s fun. It’s playful. I recommend people check it out. There are 1.7 million people in that group. I’ve also learned things about ants that I did not know before. Ants are the only other species,
the first species other than humans, where we’ve discovered they will conduct amputations to save the life of another ant. Fascinating stuff. But it’s a role-playing space in social media. It became very popular during the beginning of the pandemic, where people wanted to just have some escape and have some fun. But what are the other places in your social media where it is strictly just going to make you laugh, bring you some playfulness? It’s interesting, I think, for people to see how difficult that challenge is, or how easy it is. And then what does that say about how many apps we have on our phone, or how much time we spend on social media?
Lori: So true. Just thinking about when Facebook first came out zillions of years ago, my focus when I got it, I didn’t have many friends on there yet, and it was all these games. And I would spend time playing these games and it was fun. And then as time went on and people in the US started getting more divided, the experience on Facebook for me is not a positive one anymore. I don’t enjoy it. So your idea of popping into the fun space is brilliant. So I’m gonna try that next time I go on. Well, I hope I run into you and we’re both pretending to be ants. I look forward to that… Do you get to name yourself? Or do ants have names? Probably…
Tricia: You can do what you want, it’s your role-playing situation, is my answer. Yes.
Lori: Awesome. Well, big news is you will be one of our keynote speakers at our virtual conference coming up in November. So can you give us a little sneak peek about the conversations that we can look forward to?
Tricia: Yeah, Lori, and again, thank you for that opportunity. I love the work that SENIA is doing. And I love how you’re doing it. The key word for me in your question was conversation. You’re doing keynotes radically differently, which I super appreciate, because I do think, as we’ve been saying, we need experiments. We need to be thinking about, what is the message behind our behavior? And so you’ve invited me to have a conversation, and we had a pre-conversation before that conversation. And that intentionality I’ve got so much respect for; it was truly a collaboration. And one thing that we talked about at length that might surprise listeners is, we talked about how you’re going to introduce me.
And I asked if we could do that a little bit differently. And this connects to your most recent episode about teacher burnout. I asked, can I be introduced with nothing to do with myself professionally? And I mentioned the connection to your episode on teacher burnout because I think sometimes in the world of education, we wrap so much of our identity around that teacher self and we don’t start with the human first. So I asked if my introduction could actually be talking about my relationship with Tashi, who is the dog in my family.
I’ve been thinking a lot about what I’ve learned in that relationship, the lessons in inclusion that I’ve taken away. And, bringing you back to AI, I’ve been very closely following the Coller Dolittle Challenge for interspecies communication, where anybody can enter. They’re doing more research in what AI can do to help us understand the way other species communicate. And this might be one more link for you to add, but Earth Species Project is a nonprofit that’s dedicated to using artificial intelligence to decode non-human communication. It’s fascinating. There’s a quote on their website that says more than 8 million species share our planet. We only understand the language of one. So I’m just thinking a lot about our relationship with nature, what we have to learn from other species.
I think we’ve got so much to learn from dogs, and about this idea of human supremacy. I would say that, you know, dogs are a species that value play until the day they die. We’ve got a lot to learn from that. And I’ve been following a lot of the research around dog play, and they’ve kind of carved out, why do dogs play? Part of it is, of course, just the community; we build community through play. But another part of the reason, apparently, that dogs play is to help them deal with the unexpected. As we’ve been talking, Lori, so much is changing so quickly. Maybe we should be doing more play to help us prepare for the unexpected. So yeah, I’m excited for that piece of the conversation.
Lori: And you’re also, as you mentioned earlier, doing some great work in the LGBTQ space. And I know that we had talked about having a discussion about the intersection of neurodivergent individuals and the LGBTQ community. And I’m so excited.
Tricia: You know, I think the theme of our conversation today has been look for the connections. Don’t take different things and say, like, this only exists over here. I really do think it’s in all of the connections where our more interesting conversations are at, because human beings are multifaceted. None of us are just any one thing. And part of that conversation that I’m excited to have is in the kidlit and YA (young adult) book world, where we’re seeing that intersection really come to life.
Um, because I talk a lot about how often, if you’re in a school and you’re visiting their professional development library, you’re only going to see non-fiction texts that are explicitly about the art and craft of teaching. And yes, indeed, there’s a place for those. We need them. Teaching is extremely difficult and complicated, but I would say that kidlit and YA, it’s these stories, these intersectional stories, that have so much to offer us as well. So, huge appreciation to SENIA for really looking at doing professional development differently. I think that is so responsive to that burnout crisis that you talked about on your podcast. So, yeah, thank you again.
Lori: Well, Tricia, thank you so much for your time today. I’ve learned a tremendous amount from you. I have about 15 resources, I think, that I’ve added to our show notes. And I am just really excited for people to hear this podcast. So thanks for coming.
Tricia: Oh, really, the honor is mine. And I just, yeah, I really appreciate your work and your leadership, Lori. So thanks for the invitation.
[Outro music plays]
Thank you for joining us for today’s show. For more information, including how to subscribe and show notes, please head to our website. That’s SENIAinternational.org/podcasts. Until next time, cheers.

Bio

Tricia Friedman (she/her) is a long-time educator who has worked in the US, China, Thailand, Morocco, Ukraine, Indonesia, Switzerland, Singapore and now currently lives in Canada. She’s founder of Allyed.org and Director of Learning and Strategy with Shifting Schools. Tricia is an avid podcaster, you can catch her on Be a Better Ally, Unhinged Collaboration, Shifting Schools, and if you listen closely you might occasionally hear her dog weigh in too.

Connect