S4 - Episode 15
Data Masters Podcast
Released June 13, 2025
Runtime: 44m41s

Amplifying Community Voices Through Data with Russell Stevens of MIT Center for Constructive Communication

Russell Stevens
Head of Strategy and Development at MIT Center for Constructive Communication

Data doesn’t just reflect communities — it can help shape more inclusive, constructive conversations. We’re joined by Russell Stevens, Head of Strategy and Development at the MIT Center for Constructive Communication, to explore how qualitative data, storytelling and ethical AI design can bridge gaps in society and decision-making. Russell shares how his team moved beyond traditional media analytics to build a platform that amplifies unheard voices through small-group conversations. He explains why trust, local context and human-in-the-loop sense-making are critical for turning narratives into actionable insight.


In this episode, Russell Stevens, Head of Strategy and Development at the MIT Center for Constructive Communication, joins us to explore how small-group conversations and ethical AI uncover deeper insights. He shares how empowering communities to share and interpret their own stories drives more meaningful data and better decision-making.

Key Takeaways:

(03:50) MIT used early language models to analyze media data and gauge public opinion.

(09:27) Ingesting more data only amplified the loudest voices, not diverse perspectives.

(18:31) Peer-led conversations in Newark schools surfaced stories that would never have been told to adults.

(24:55) Like Tamr, the process uses humans to guide AI through iterative coding.

(30:40) Radical transparency and consent are core to ethical data use.

(37:07) Hearing humanity in others is essential to overcoming division.

(41:32) Replacing people with AI personas is a dystopian shortcut the team rejects.

(43:12) Without humanity, all you’re left with is empty thematic summaries.

Resources Mentioned:

MIT Center for Constructive Communication website

Cortico

Russell: [00:00:00] If two humans can't agree on how to code a piece of language or a story, you can never get the machine to code it reliably, right? And there are plenty of situations where two people or three people can't agree on how we would code the story that someone shared. Well, at that point in time, if you don't have a human in charge, steering the wheel here, you're actually not gonna have a good outcome.

Anthony: Welcome back to Data Masters, the podcast where we explore the cutting edge of data. Today we're thrilled to have Russell Stevens [00:01:00] with us. Russell is a strategist and entrepreneur at the forefront of using technology and data to foster more constructive communication and bridge societal divides. He's the Head of Strategy and Development at the MIT Center for Constructive Communication

and a co-founder of Cortico, a nonprofit building systems to elevate unheard community voices. Now, some of you may be thinking: constructive communication, societal divides, how does this connect to the world of data analytics and business intelligence? I assure you it's extraordinarily relevant. We often talk about structured data, the kind that is neatly organized into databases,

but today I wanted to venture into a territory we haven't explored that much on Data Masters: the realm of [00:02:00] unstructured, qualitative data. And this is precisely the kind of data that Russell is seeking out: rich insights from community conversations. If you think about it, this is the kind of data that businesses should strive to understand, the nuanced feedback and unspoken needs that are often ignored or perhaps never even gathered.

You may turn to traditional methods like focus groups, and Russell will speak to this, I'm sure, because he's used similar methods. But, you know, focus groups are expensive, they're logistically challenging, and there are in fact biases in them as well. So today we'll be diving deep into how Russell and his teams are pioneering new ways to gather, understand, and utilize this vital qualitative data [00:03:00] to create a healthier public sphere, and how their approaches can offer powerful lessons for anyone looking to truly understand their customers, their stakeholders, or their communities.

Russell, welcome to Data Masters.

Russell: I'm really happy to be here and talk about what we're doing and some of our common interests.

Anthony: So I alluded to this a little bit in the introduction. You've seen quite an evolution at Cortico as you've worked through finding the right sources and types of data and data-gathering techniques. You initially started working with social media data; then you pivoted towards lived experiences from small-group conversations.

Maybe share a little bit about your journey in terms of trying to find these new, innovative sources of data.

Russell: It's a great question, Anthony, that really gets to the heart of why we do this work. We started at MIT as the Laboratory for Social Machines in 2014. We were essentially building hammers looking for nails, and the hammers that we were building were [00:04:00] machines that were using what we would, at the time, consider to be puny language models: word2vec and, ultimately, BERT. Basically, machines that could ingest, organize, and analyze media data. And you could think of that media data as digital media that was the byproduct of everyday life in this country: Twitter and other social media data that we could get our hands on. We had more access to Twitter data than anyone other than Twitter and the Library of Congress, because Twitter helped get our group started at the Media Lab, so we had the full fire hose. We were ingesting news media, and we ended up ingesting talk radio. No one had ever done that before: twenty-four-seven, a hundred fifty stations, because we could. And we thought that by ingesting, organizing, and analyzing this, we could use it as a way of gauging public opinion, right? And insights about people, what they were thinking and how they were living their lives in communities.

The [00:05:00] nail we pointed that at in 2015 is an obvious one, which is the 2016 election. So we basically built a machine, called the Electome, that was built to really try to understand what the conversation about that election was in the wild, in the world, unprompted, right? As opposed to public opinion surveys or focus groups, we were not offering prompts; we just wanted to pick up organically what people were saying. And we were working with the Washington Post and CNN, the Wall Street Journal, to develop analytics that they included in their stories.

We worked with the Commission on Presidential Debates to provide suggested questions to moderators during the debates. We really probably built the most sophisticated machine for analyzing that sort of data that was around in 2016. And even though it wasn't meant to be predictive, so we weren't doing this to predict who was gonna win or lose, we did have what we would consider to be near-perfect information, again using just [00:06:00] media data that we were ingesting and structuring. We missed the story like everybody else did in 2016, right? Like, we had great information, but we did not understand how dislocated and disconnected people were feeling from the system and the body politic and from each other, such that many people just voted to, as they admitted, blow up the system. In the aftermath of that election, we did a lot of soul searching. And this is, by the way, before Twitter got sort of toxic; at the time, Twitter was considered a reasonable approximation of public opinion, not perfect, but not so bad. Even with decent, even great, Twitter information, we realized that relatively few people were tweeting about what they thought about the election, right? And those who were, were typically gonna be the loudest voices anyway. Fewer people were calling into talk radio; even fewer were getting quoted in news articles. And if we wanted to really understand what [00:07:00] people were experiencing in communities, and how that was affecting their perception of politics or of each other, there was really only one thing to do, and it was extremely counterintuitive for a lab based at the Media Lab, building, basically, language models. We had to go talk with people directly. And really, you can trace the evolution of what we've done to the aftermath of that campaign, where we said there's no replacement for actually asking people directly to share their experiences and their stories. So really, our story is that since 2016, and into 2017 when we created Cortico, a nonprofit that's affiliated with our group at the Media Lab, you could think of it as a deployment arm for the Media Lab, we built a conversation platform that allows us to talk to human beings at scale, right? And these are done through small-group conversations; if you were to look in a window at one of these conversations, and we can get into this later, it looks like a focus group. The question that we asked ourselves is: could we [00:08:00] hold small-group, human-to-human conversations at scale, and use technology to make sense of them at scale? Because that's really been the problem with focus groups historically: great information, but almost too much unstructured data to be able to use in creating insights, making decisions, and giving people an opportunity to express themselves systematically. So we have built, and spent eight years building and testing, a conversation platform that allows universities, schools, and increasingly organizations, even corporations, to bring this platform in to really understand what life is like in their own community.

Anthony: What I love about that evolution is you started, I think, where many listeners almost certainly would intuitively start, and we have this bias embedded in us, which is big data: more data is better. So, you know, if a sample of Twitter data is [00:09:00] good, then all of Twitter must be better. And if a few talk radio stations are interesting, all of them would be even better.

And if you have all of that, almost by definition there's this feeling that if we just collect more data, we will get better insights. And I don't wanna put words in your mouth, but would you agree with the sentiment that

Russell: Yeah.

Anthony: it was like more data wasn't better; better data is better. But I don't wanna put words in your mouth. Is that fair?

Russell: We thought our superpower was our ability to collect everything. As I said, my partner is Deb Roy; he's the principal investigator of our group at MIT and the CEO of Cortico. We thought our superpower was the ability to sort of have the fire hose and to ingest 150 talk radio stations, or more if we wanted to.

Right. And we knew how to do that; we had built the machine to be able to do that. But here's the problem: you could ingest all of that, but if only the same people are talking across all those platforms, right, and they tend to be the [00:10:00] loudest voices, and those loudest voices get heard on every platform, so Twitter, Reddit, Facebook, those are the people, and not the same people but the same type of people, who are calling in to talk radio, who are getting quoted, and who, by the way, are also going to town hall meetings and school committee meetings, getting themselves heard, overheard in some ways, all across this country, then it doesn't matter how much volume of data you're picking up; those are the people who are really using those platforms. Then those platforms aren't gonna tell you how the average person, the unheard person

Anthony: Right.

Russell: is living life in those communities. So what we realized was we were picking up the loudest voices; we were almost creating our own echo chamber. We were picking up the voices that were being heard already. By ingesting everything, we were still biased towards the most extreme, the loudest voices.

Anthony: Perversely, by increasing the volume of data, you were actually amplifying the bias that you had. [00:11:00] So I think this is a great insight. Maybe share some of the other challenges associated with this type of qualitative data. I think, again, most listeners have an intuition about structured data.

If I think about what orders have come into my business, I can think about a database table full of orders, or all of the addresses of my customers, these sorts of things. These are the very normal or traditional structured sources of data. But here you are thinking about these unstructured, nuanced, personal kinds of data.

We've talked about one bias, which is that volume may not actually equate to truth, but what are some of the other challenges associated with collecting, interpreting, and responsibly representing these narratives?

Russell: So the best way to think about our platform is that there are challenges associated with each stage of it, and I can walk through those. It's a classic sort of input-processing-output. The input is holding conversations, and [00:12:00] there's a set of challenges in actually holding conversations among five or six humans synchronously. Processing is analyzing those conversations, making sense of them.

And the output side is sharing those conversations and those voices back into the community. So when you think about just holding conversations, you have to think about who's important within the community you're trying to understand. And when we say important, again, it's not just the loudest voices; it's thinking about the unheard voices in that community: who doesn't have time to show up to a school committee meeting, who isn't gonna pick up the phone to answer a telemarketer's or a public opinion researcher's call, who isn't going to tweet or make their voice heard otherwise.

There's prep work in that. But holding conversations: originally, when we created the platform, it had to be in person. So we stuck a recording device on a round table among five or six people, held the conversation, and then uploaded it. Actually, the pandemic helped us in that regard.

We moved everything to [00:13:00] Zoom, and it's a lot easier to get six adults or six young adults on a Zoom at the same time than in the same place. And Zoom does quite a good job of separating audio files and things like that, so it's quite good as an input device. And we've developed a mobile app, with a sort of FaceTime interface, to make it even easier to collect conversations.

Voila, congratulations, Anthony: you've just held 50 to 75 conversations in the community you're interested in, your company, your municipality, your neighborhood. You now have 50 to a hundred hours, maybe more, of audio and transcript to analyze. And anyone who's looked at that kind of conversational data knows it's impossible for a single human being to make sense of. And once you start bringing multiple people into it, you have issues of intercoder reliability: they have to be coding the same kinds of things in the same way. So you spend a lot of time building intercoder reliability. And lo and behold, we get to the processing stage. We were developing, [00:14:00] as I said, puny language models to deal with this data for a while, and then ChatGPT 3.5 came along, and that kind of changed everything for us. These models are quite good, when they're tuned and in the hands of humans, at helping humans make sense of lots and lots of conversations. And so I'd say the biggest breakthrough we've had is being able to take that unstructured data and turn it into data that decision-makers and policy-makers, but also people within communities, can use to understand the other people within their communities. So what we've done is build a set of AI sense-making tools that help analysts, sense-makers, analyze these conversations and derive meaning. We've been working on this for three or four years at this point and have a very good sense of what the AI is good at and what it isn't; we can go a little bit more into that later. So that's the sense-making piece. And then the output piece is: how do we share these voices back into these communities?

It's a critical piece. We do not [00:15:00] want these voices and these stories to go straight into a black box and up to decision-makers, right? It's really critical for us that they get shared back into these communities, and we've built a number of tools that allow people to hear each other's voices, and for people to see that the people in power who are making decisions have heard them. There are challenges at each step of the way. I would tell you, though, one thing that is really important for us when it comes to understanding what the limitations and challenges are: we do not push all of this into the AI and say, make sense of it. A big part of our process, both in the collection of the conversations and beyond, is that when we train facilitators, we don't facilitate the conversations ourselves; we get people from those communities to facilitate them, right? So they know what's going on in the room; they can read between the lines. We get people from the communities involved in the [00:16:00] sense-making of the conversations. We have built tools for people from those communities to use in helping understand what's going on, because we think they have the local context and the sense of nuance. And again, this is unstructured data that we're structuring, but if two humans can't agree on how to code a piece of language or a story, you can never get the machine to code it reliably, right? And there are plenty of situations where two or three people can't agree on how we would code the story that someone shared. Well, at that point, if you don't have a human in charge, steering the wheel here, you're actually not gonna have a good outcome.
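
To make the intercoder-reliability idea concrete for data-minded listeners: agreement between human coders is conventionally measured with a chance-corrected statistic such as Cohen's kappa. A minimal, self-contained sketch in Python, with invented labels rather than Cortico's actual codes:

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Agreement between two coders on the same items, corrected for the
    agreement you'd expect by chance given each coder's label frequencies."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    labels = set(coder_a) | set(coder_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Two humans coding the same ten highlights (invented labels).
a = ["housing", "safety", "housing", "food", "safety",
     "housing", "food", "safety", "housing", "food"]
b = ["housing", "safety", "food", "food", "safety",
     "housing", "food", "housing", "housing", "food"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # ~0.70
```

Teams typically keep refining the codebook until kappa is comfortably high (0.8 is a common rule of thumb) before trusting anyone, human or machine, to code the rest of a corpus alone.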

Anthony: So I want to come back to this question of the human in the loop and how you engage humans working with the AI. But before we go there, you said something which I think might be intuitive to you, but maybe not

to anyone else, which is: these are not [00:17:00] people sitting alone on a Zoom call by themselves, reading off a set of questions. You said two, I think, really maybe counterintuitive things that I wanna push on. One is that there's a group of people discussing a topic, and the second is that there's a facilitator who's a member of the group; that's not a paid facilitator from Cortico or from the government or whatever, but someone from the group.

Talk me through that a little bit. Again, I think that's not what one would expect.

Russell: Sure. I think that's because the models that have been around for decades have said that only professionally trained facilitators can run a focus group or an ethnography in the way the client needs it to be run. And we understand that; there are certainly times you will never get past that need, if you're in a very tense situation where there are people on both sides. There are a few things about our model that distinguish it. One is that, in the ideal state, the people in the room, or on the Zoom, actually know each [00:18:00] other, or are familiar with each other, or share a passion or an interest or an experience, right?

So we don't cast these groups classically, like focus groups, where you have the lax bro with his hat on backwards and the young Latina and the older Black man. We actually want people who kind of know each other, who are comfortable enough with each other to share, right? Because we know, and we've seen, that they'll be much more vulnerable, much more authentic, when they're in a room with people who they feel are in their same kind of tribe. And that includes the facilitator. We've had this remarkable experience with a group down in Newark, New Jersey, the Opportunity Youth Network, which works with youth between 18 and 24 who neither work nor go to school. There's a terrible problem with absenteeism in the Newark school system that this group wants to advocate around. So what we did is train their youth advisory board, young people, to then train [00:19:00] high school students to facilitate these groups. The high school students had four-to-six-person conversations with other high school students in their own school, and it created stories that never would've been told.

It prompted stories that never would've been told had a grownup been the facilitator, or somebody from the outside world. The only way you get that kind of story sharing is when you've created a space where people are comfortable sharing with each other. So that's one thing. Two is that we really get people to share stories and experiences over opinions and facts. And we have some research coming out soon that shows that when you hear someone else's experience or story, you're much more likely to have empathy and trust and respect for that person, as opposed to digging in against their facts.

So we encourage people to share long, meandering, emotionally vulnerable stories in a way that focus groups [00:20:00] don't, because they want prompt-response. They wanna structure the data as much as possible: prompt, response, another prompt. And what we want is the best possible articulation of someone's experience, right?

So we orient everything towards telling stories, because we're confident that the machine will help us make sense of it; we're confident that our AI tools will help us structure this. The third piece is the human element of wanting people from these communities to be involved not only in the facilitation of the conversations, but in the sense-making, the interpretation, of the conversations, because, again, they're gonna read between the lines in a way that even the best are not gonna be able to really understand at that micro level what's going on in that conversation.

Anthony: [00:21:00] So it seems to me that the key idea there is trust: you're creating a safe, vulnerable space where people trust each other, and what that's eliciting is not opinions and ratings, which is, again, the thing we traditionally think of. And I just wanna underscore a point you made: that's not done because it's eliciting the best data; it's being done

'cause it's the easiest to code.

Russell: That's exactly right.

Anthony: Right, so the big idea here is: get people in a safe space where they have [00:22:00] a sense of trust, have them share the stories. But that then of course leaves the obvious problem, which you alluded to and which I'd now love you to dig into: great, now we have a whole bunch of stories. How do we code them? And again, just to bring these two ideas together: you are obviously using AI, but not blindly. It's not like you're opening a ChatGPT session and saying, could you summarize this? Very much the opposite. So maybe talk about the pairing of the AI capabilities with the human capabilities. What are you asking the AI to do?

What are you asking the human to do? What can we learn from that?

Russell: Exactly right. And as I said, that was the breakthrough for the platform, right? Because before then we were sort of decentralizing the sense-making, but it was a very manual process, and there were very few clients who were willing to sign up for that. So once the AI came into it, we knew we had an ability [00:23:00] to radically decrease the amount of time we'd be asking anyone to spend analyzing it. So I can walk you through the process. It starts with the humans, right? We don't take the raw transcript or audio, put it into the AI, and say: pull out the themes. We actually ask humans to do a couple of the first steps.

One is to go through and make highlights. So the conversation, the entity so to speak, becomes a set of highlights. And a highlight could be the entirety of one of my answers to you, or it could be three or four segments, depending on what someone thought was interesting. It's almost like when people highlight things in Medium, right?

So we're pulling out highlights, and a conversation gets sort of split into a bunch of highlights, and we actually ask the humans who are involved in the process to do that. We are working on AI that will give them a head start, and we've prototyped that, but there's a human involved in saying: these are the most salient pieces of this conversation. And then we start the AI sense-making process by [00:24:00] then saying: and here seem to be some key themes that come out of this, right? So we're seeding some themes and topics into the AI so that it can start to ruminate and really start to understand, as it scrapes through the rest of these conversations, what's going on.

So the first step is highlights and some themes, but you then have to come up with a codebook. Anyone who's done this kind of how-do-you-turn-unstructured-data-into-structured-data work knows you need some sort of codebook that creates consistency across the analysis, so if you've got 50 conversations, you can be reasonably confident that the tagging and the coding of those conversations stays the same all the way through, no matter who's doing it. As I said, there's an intercoder reliability problem when you have multiple humans. You don't have an intercoder reliability problem when you have the AI, but the AI needs to be accurate, right? So we go through this iterative process. And again, from what I know about Tamr, that's sort of how Tamr started: this iterative human-in-the-loop where [00:25:00] the humans are directing the machines towards disambiguating things and so forth. That's a little bit of what this process is like.

What does the codebook look like? Here's how I would code things. And then you can basically hit a toggle and ask the machine: how would you code this? And if you're satisfied that the machine is tagging things, coding things, in the way that you want, that it's recognizing this topic is actually about affordable housing insecurity or food insecurity, right, and you think the machine is gonna code it the way you would, you'll turn that on and basically tell the machine: yes, now code the rest of the conversation, or the rest of the conversations, in that way.
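
The codebook-then-toggle loop Russell describes maps naturally onto a small amount of code. A minimal sketch, assuming a generic chat-completion call; `ask_llm` is a stand-in you would wire to whatever model API you use, and the codebook is hypothetical, not Cortico's:

```python
CODEBOOK = {
    "housing_insecurity": "Speaker describes unstable, unaffordable, or unsafe housing.",
    "food_insecurity": "Speaker describes difficulty affording or accessing food.",
    "feeling_unheard": "Speaker feels local institutions are not listening.",
}

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion API call here")

def code_highlight(highlight: str) -> str:
    """Ask the model to apply the human-written codebook to one highlight."""
    rubric = "\n".join(f"- {code}: {rule}" for code, rule in CODEBOOK.items())
    return ask_llm(
        "Apply exactly one code from this codebook to the highlight below.\n"
        f"Codebook:\n{rubric}\n\nHighlight: {highlight}\n"
        "Answer with the code name only."
    ).strip()

def calibrate(hand_coded: list[tuple[str, str]]) -> float:
    """The 'toggle' step: compare machine codes against human codes on a
    hand-coded sample before letting the model loose on the full corpus."""
    hits = sum(code_highlight(text) == code for text, code in hand_coded)
    return hits / len(hand_coded)
```

The point of `calibrate` is exactly the toggle he mentions: the machine codes nothing at scale until its agreement with the humans on a sample satisfies the people steering the process.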

Anthony: So would it be fair to say that, rather than blindly turning this over to the machine to do the work, you're letting the human bootstrap it and then amplifying the human's work by turning over the rote, boring part? The interesting [00:26:00] part is finding the insights, and then the boring part is finding the insights again.

Like, you found it once, okay, great, maybe twice or three times, and then finally the AI is like: yeah, yeah, I get it, here are the 72 other instances of the same insight. And it's the 72 that are the interesting thing, not the first three.

Russell: Exactly. And then you also have this situation: you get halfway through the corpus and you realize, oh, there's another theme, there are another three topics, right? As a human, which I've had to do, you'd have to go back and recode everything again. If you've got the machine involved, the recoding becomes a lot easier; it kind of keeps up with the lagging coding.

So we've got AI-assisted highlight tagging, AI-assisted codebook creation, and then the third stage of the AI is summarization: okay, take a look at what you've coded now and help me summarize what the key findings are. Again, there are plenty of tools and technologies out there where you can just [00:27:00] push the transcript into the AI and go straight to the end, to the themes. We just think, in the kinds of things we get ourselves involved with, right, which is making good policy decisions, making selections of local officials using community input, that the community needs to be involved.

Humans need to be involved in this process to make sure that it's being steered in the right direction. So we don't do push-button AI.
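
The summarization stage can be grounded the same way: rather than pushing raw transcripts at a model, you summarize only highlights that already carry human-approved codes. A sketch along those lines (the data shapes are hypothetical, not Cortico's pipeline):

```python
from collections import defaultdict

def group_by_code(coded: list[tuple[str, str]]) -> dict[str, list[str]]:
    """coded = [(highlight_text, approved_code), ...] -> code -> highlights."""
    themes: dict[str, list[str]] = defaultdict(list)
    for text, code in coded:
        themes[code].append(text)
    return themes

def summary_prompt(themes: dict[str, list[str]]) -> str:
    """Build a prompt that forces every finding to cite a coded highlight,
    so the summary traces back to something a human approved."""
    sections = []
    for code, highlights in sorted(themes.items(), key=lambda kv: -len(kv[1])):
        quoted = "\n".join(f'  "{h}"' for h in highlights)
        sections.append(f"Theme: {code} ({len(highlights)} highlights)\n{quoted}")
    return (
        "Summarize the key findings per theme. Quote at least one highlight "
        "per finding, and do not introduce themes beyond these:\n\n"
        + "\n\n".join(sections)
    )
```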

Anthony: Implicit in that statement, and again I wanna be careful, I don't wanna put words in your mouth, it feels like you're not just making a statement of efficiency or efficacy; you're making an ethical statement as well. You're saying: whether it's possible or not, it's not right.

So, and admittedly I'm going a little off piste here, one would hope that ethics are an important consideration in business. There's reason to suggest that may not be the case, but [00:28:00] in any case, it's certainly important to you, and it probably should be important to businesses.

Talk a little bit about how you think about this. Your mission is about elevating underheard and underrepresented groups. There's an implicit value judgment in that, and it does involve ethical considerations. And I do think many businesses think about wanting to hear from their least profitable customers, and how can I make them more profitable, or their least heard segments, these sorts of things. So there's a direct correlation here. But talk about the ethical considerations and how you think about the challenges associated with them.

Russell: Sure, yeah. There are several layers of ethical considerations, and some are quite practical in a sense. We've had a number of organizations over the last eight years, specifically companies, come to us and say: we'd like to bring this platform inside. And the first thing you're confronted with is that there are different stakes.

Like, we did a project in Boston around the last mayoral election, lots of conversations in communities, and you can't get [00:29:00] fired from being a citizen of Boston if you say something bad about Mayor Wu, right, or candidate Wu. I mean, you can get yourself in trouble if you admit to a crime, but basically you're not putting your life or your profession in danger. But you could get yourself into some hot water at work

if you say something in a conversation about management or about other employees. So there was a set of things that we needed to work through that had to do with voice identity. I know your voice right now; we've spoken several times. If I hear you out of context three days from now, and I only hear your voice, I'm gonna peg it to Anthony, right?

So in smaller communities, people's voices are their identities. So we've created some voice-morphing technology that allows you to retain control of your identity until you want to give it up. That's an ethical principle, right? I wanna be in control of my voice; these are biometrics that are being shared. And there are other things, like redaction, and what happens if somebody says something actionable in one of those rooms. So there's a set of [00:30:00] protections that we want the participant to have, and that's a key piece of the ethics around this. The person who's sharing their story is sharing a piece of themselves with somebody who's gonna listen, and somebody, hopefully, who it matters to. How do we give that person as much control and protection as possible all the way through the process? So there's that layer; it's almost like a technological or methodological layer. Then, sort of at a macro level, our core principle is nothing about us without us, right? We are working with communities, and corporations are communities, municipalities are communities, neighborhoods are communities, schools and universities too. They've gotta be partners with us in this process.

If we go at this in a very deterministic, top-down way, where we own the data and we'll decide what to do with it, there are just too many ways that this goes off the rails. So we start with really radical transparency: people know exactly how their stories are gonna be used, who's gonna hear them, what [00:31:00] the attendant outcomes are. And, as you can imagine, there's a consent framework that we start at the beginning of every conversation. I knew in this conversation, this podcast, that my voice was gonna be recorded and be public, right, for whoever wants to listen. That's not the case in a lot of our situations; we need to understand what the context is and get everyone's consent.

So, transparency. We also insist that the community, really the people who are involved in the conversations, owns the insights. Now, it gets a little trickier in a company, and that's one of the things we're working through. But when we do a project with a municipality, the city of Boston doesn't own all those voices and all those stories; the community organizations we've partnered with along the way have ownership and control of those.

And the other piece is just designing for dignity. I would say, honestly, every tool we build has to pass that test: does this make the person who shared their story feel heard and respected, or does it make them feel like a data point? [00:32:00] That really fundamentally shapes how we design everything, from our conversation guides to our analysis tools.

Anthony: Again, going back to this theme of trust, I would think that part of what this creates is a sense that it's safe and comfortable to share their stories, which is what allows you to then gather the real insights that aren't being obscured by the normal biases that we get in the more traditional mechanisms of doing this.

If you don't mind: you shared the example at the beginning of the 2016 election, and that's obviously a very real-world experience that almost everyone, certainly Americans, but even people globally, experienced, and you changed your approach thereafter. Maybe walk through, in a very practical sense, an example of this in action, such that a listener might think to themselves: all right, I get it, I can see how this works. And in,

I can see how this works. And in,

Russell: Sure.

Anthony: in reality, we've talked a lot at a sort of theoretical level. I'm [00:33:00] curious to make it really tactical.

Russell: So we've worked extensively with the city of Durham in North Carolina, and the city was facing significant community tensions; traditional public forums weren't working. What we found over the years is that the bad behavior that used to exist only on social media, or when mediated through devices where you were anonymous, is now seeping into city hall meetings.

And, frankly, we've had a whole bunch of CEOs tell us: we don't wanna do open-mic night and town hall meetings anymore, because that toxicity is now even seeping into in-person settings. And that was a pretty big sea change over the last five years, these behaviors coming in.

But Durham was having problems with traditional public forums. They were becoming shouting matches, and again, the usual loudest voices kept showing up. So what we did was facilitate conversations across different neighborhoods and demographics, focusing on their lived experiences rather than their policy positions.

And we included them in the [00:34:00] sense-making process, just like I've described. They discovered that while people disagreed on solutions, they shared remarkably similar concerns about community safety, economic opportunity, and really just feeling heard by local government. So the city ended up using these insights to really completely redesign their community engagement approach.

Instead of adversarial town halls, they started hosting smaller, story-based conversations. Policy proposals began including specific language addressing the experiences that we surfaced, actually pulling these voices into those proposals. And really, most importantly, I'd say that the residents started seeing their neighbors as people with legitimate concerns rather than just political opponents.

The city manager told us it was the first time in years that they'd had productive dialogue across boundaries, across economic and political boundaries. And the best way to think about [00:35:00] the perfect deployment is that there's a vertical use case.

It's quite easy to identify: there's someone who needs to make a decision or make policy, and they wanna hear from the community. So what we offer is a better, cheaper, but really more qualitatively robust way of pulling stories that really impact people's lives, and there's a channel for that. But there's also a lateral use case: people within a community can hear each other's voices, and the voices from other tribes, right? Even if those tribes aren't in conversation with you in the moment, what we can do is play clips from one conversation into another conversation, across the boundary. And through amplification with media partners, people just start to hear stories from people who believe differently than they do politically, but who share similar experiences. The City of Durham is one example, but we've had a number of others where you've had both this sort of lateral and vertical output [00:36:00] as a result.

Anthony: I love that example, because I think the intuition that, as a city government, what one should do to understand one's constituents better is to hold a town hall feels very natural. And I think the same could be said for a business: well, we need to understand our employees, so we're gonna have a town hall; or we wanna understand our

customers better, so let's bring a few in, and in fact I've even participated in these as a customer, and we'll do a town hall. But the thing you point out, which I think is counterintuitive, is that these experiences are necessarily adversarial. I'm being talked to, or I am talking to you, and that naturally puts the opposite side of that conversation on the defensive.

And again, at the risk of putting words in your mouth, your idea is that if we create a safe space where people can share their stories, the insight you gain, as in the example you provided with Durham, is that the kinds of challenges that [00:37:00] people have, despite their differences, are actually quite common, and that then bridges the conversation.

Is that a fair way of saying it?

Russell: That is fair. And you sort of summed up our overall philosophy, which is that hearing the humanity in others is necessary for democracy to function, for schools to function, universities, communities, organizations, workplaces, right? You need to be able to hear the experience of others, and the humanity in those experiences, in order to get past the boundaries that separate you. And if you really wanna get to the heart of the problem with social media and the silos it's created, the echo chambers, whatever term you want to use, it's that it's made it too easy for us to dehumanize others.

They're not in

Anthony: front of us, right? I don't see them, so I can call them names, I can think of them as other people. But when you hear that the other people actually have similar experiences to you in your day-to-day life, [00:38:00] then you've created at least a bit of the potential for a bond, or a little bit of understanding.

Russell: Right. And some people you're just not gonna be able to connect with, and that's okay. But the idea that we bring into this work is that if we can surface voices and experiences that you wouldn't typically hear, then we have the ability to maybe empathize and create a better, stronger sense of humanity across

Anthony: And then fundamentally get better signal out of the data, to your commentary about 2016. The insight there is that if you listen to the voices that are willing to speak on social media, or for that matter willing to go to a town hall, willing to stand up, you're gonna hear a view, but you may not hear a broad view.

So, if you don't mind, and you in a very literal sense are on the cutting edge of this work, not least, of course, by being at MIT: casting your eye forward into the future, you make this point about GPT-3.5 and large language [00:39:00] models being a transformative technical capability that's underpinned a lot of the work you're doing.

Maybe share where you think this is all going. And,

Russell: Yeah.

Anthony: how are we gonna, and maybe to put a bit of a sharp point on the question: there's a view that the future is all of us sitting behind screens, interacting entirely virtually, having friends and girlfriends and boyfriends, significant others, that are AI, such that human-to-human interactions

become less, not more. Admittedly a dystopian casting of the future. Talk me off a ledge.

Russell: Yeah. Right. Well, let me actually talk you towards the ledge first, and then

I'll try to talk you off of it. We did some work years ago, this is before GPT really surfaced, where one of our students was interested in public opinion, using a lot of network analysis and other techniques.

He was able to basically use someone's media diet. We'd [00:40:00] basically say: Anthony, where do you go for your news? What do you listen to? What do you watch? He would get all of that and then predict, accurately as it turns out, what your responses would be to survey questions, right?
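
That media-diet experiment is easy to caricature in code. A toy sketch with scikit-learn; the outlets, features, and labels are invented, and the actual student work used network analysis and far richer data:

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each dict is one person's (invented) media diet; the label is their answer
# to a single survey question.
diets = [
    {"talk_radio": 1, "cable_news_a": 1},
    {"cable_news_b": 1, "newspaper": 1},
    {"talk_radio": 1},
    {"newspaper": 1, "cable_news_b": 1},
]
answers = ["agree", "disagree", "agree", "disagree"]

vec = DictVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(diets), answers)
print(clf.predict(vec.transform([{"talk_radio": 1, "newspaper": 1}])))
```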

So now you're in this world where you don't even need to do the survey. You can just take someone's media diet and then predict what their responses are. Okay, interesting; not really scary, because, what the hell, it's just public opinion, right? Now we're in a situation where you can create personas of people who may have an opinion, and you could simulate focus groups. And we actually had people who were working on that, not to replace what we do, but rather, for example, one of our students was exploring the idea of a small-group conversation amongst a group of people who know each other.

Right? Because one of the issues with our model is that there's homogeneity within that group. We try to create heterogeneity across a [00:41:00] collection of groups, but they're not groups of people who are disagreeing with each other. So it's easy to get into a conversation that is a bit of an echo chamber. It's safe, it's trusting, but it's an echo chamber. Say you got halfway through one of those conversations, and you had an AI listening in, and it created a persona of the viewpoint that's missing from this conversation. That AI participates in the second half or the last third of that conversation, bringing dissonant points of view into the conversation as prompts for these people to discuss.
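
The student experiment Russell sketches, an AI drafting the missing viewpoint as a facilitation prompt, needs little more than a carefully framed request. A hypothetical sketch, again with `ask_llm` standing in for any chat-completion call; this is not Cortico's implementation:

```python
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your chat-completion API call here")

def missing_viewpoint(transcript_so_far: str) -> str:
    """Name the absent perspective and voice it respectfully, for a human
    facilitator to read aloud rather than for the AI to join as a peer."""
    return ask_llm(
        "Below is the first half of a small-group conversation in which the "
        "participants largely agree. Identify one perspective that is absent, "
        "then write a short, respectful first-person statement of that "
        "perspective for the facilitator to offer as a discussion prompt.\n\n"
        + transcript_so_far
    )
```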

Totally cool. But then you start to say: do we need the people to begin with? If we can create these personas, why do we have to go through all the expense of bringing these people together and having a conversation? And so that's the dystopian view: that we get good enough at creating AI personas, using media diets or whatever the other prompts are, to create Anthony's persona in a focus group. And do we really need Anthony, or do we just need his agent to [00:42:00] participate in that? And, as you can imagine, I strongly believe that's not the way to go, because we've done enough of these.

We've had, at Cortico, over 250 deployments of the system out in the world, enough to know that what you get back is unpredictable. The best conversations are the ones where people are sharing things that maybe they've never shared before, where you get stories that you wouldn't get but for the interactions of the humans in the room.

And those could be verbal or nonverbal interactions. We track the health of a conversation, so we know: this conversation is just the two of us, it's two nodes, and we're going back and forth, and my node would be much bigger than yours because I'm doing a lot more of the talking. But if we had five people in the room, what does that kind of conversation flow look like? And we know that when people are actually in conversations where they're asking each other questions, or getting each other to expand their stories, you get [00:43:00] things that you're just not gonna be able to get from a bunch of AI agents sitting around.

You may get a nice distilled summary, but you're not gonna hear the humanity in others in that case.
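
Russell's two-nodes picture is essentially a small interaction graph, and conversation-health metrics like the ones he alludes to are easy to prototype. A toy sketch with networkx; the turn data is invented, and Cortico's actual metrics aren't public:

```python
import networkx as nx

# (speaker, seconds_of_talk) in the order the turns occurred (hypothetical).
turns = [("Anthony", 40), ("Russell", 180), ("Anthony", 25), ("Russell", 200)]

G = nx.DiGraph()
for speaker, secs in turns:
    if speaker not in G:
        G.add_node(speaker, talk_time=0)
    G.nodes[speaker]["talk_time"] += secs
for (a, _), (b, _) in zip(turns, turns[1:]):  # who follows whom
    if a != b:
        w = G.get_edge_data(a, b, default={"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

total = sum(d["talk_time"] for _, d in G.nodes(data=True))
for speaker, d in G.nodes(data=True):
    print(f"{speaker}: {d['talk_time'] / total:.0%} of talk time")  # node "size"
```

A balanced five-person conversation shows similar node sizes and many reciprocal question-asking edges; one dominated by a single voice looks like the lopsided two-node example above.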

Anthony: Almost by definition.

Russell: Almost by definition, right. And if you go back to our philosophy, that hearing the humanity in others is necessary for all these institutions to function, you just can't remove the humanity.

Because otherwise it's just thematic summaries. And has anyone ever done anything because they read a really good thematic summary?

Anthony: Yes, no, exactly. And I think what I appreciate about this is the level of nuance and detail that you're bringing to these qualitative information sources. And I think that's really the insight for Data Masters listeners to take away from this: that focus groups, and gathering and extracting unstructured, qualitative data, isn't a matter of just sending out a survey, doing a town hall,

a focus group, [00:44:00] but is difficult and complicated work with nuance and strategy. And many of the strategies you just described here, how to pair humans with AI, how to turn it into a conversation, how to create a safe space and a trusting space, I think these are all insights that listeners can take away and start thinking about applying to their work.

So Russell, I appreciate you making time and joining us on Data Masters.

Russell: Thank you, Anthony. I enjoyed it.

Subscribe to the Data Masters podcast series

Apple Podcasts
Spotify
Amazon