Staying Alive S02E04: The Rise of AI in Newsrooms with Gavin Adamson
Running time: 26:49

SPEAKERS
Gavin Adamson, Winston Sih

Winston Sih 00:06
Artificial intelligence: it's all over the headlines, and its use in almost every industry seems to be top of mind. In education, it's about ethics and academic integrity. AI has changed the way technology serves us in our daily lives. Think about Amazon Alexa, Siri or Google Assistant, or in Gmail: I start writing a sentence and it predictively takes the words out of my mouth. It's a scary thought, if you think of it. Now turn to journalism: in Japan, there are experiments with AI-powered news anchors. And here in Canada, Rogers Sports & Media is experimenting with a radio station powered and hosted by a computer. The future is really here. But there's incredible opportunity too. How can artificial intelligence help us in our quest to tell better, more representative stories? The kinds of enterprising reporting that take time and research? Can we ethically deploy technology to help us work smarter? Is that even possible? I'm Winston Sih, and we head into the wild west of AI this week on Staying Alive.

Artificial intelligence is top of mind in almost every industry, including Canadian journalism. AI has the ability to predictively cut cameras, write scripts, edit video, and publish material when it thinks your audience is most engaged. But just because it can doesn't necessarily mean it should. But "if you can't fight 'em, join 'em," others say. So how do you do so with ethics in mind? Joining me now to discuss is Gavin Adamson, Associate Professor at Toronto Metropolitan University. He's a journalist who has reported on everything from business to sports, and has most recently researched the complex and ever-changing world of artificial intelligence. Gavin, thanks for coming on.

Gavin Adamson 02:07
Thanks for having me, Winston, I'm happy to be here.

Winston Sih 02:10
Before we dive into the complex issues of artificial intelligence in Canadian journalism specifically, I think it'd be a good idea to go back and talk about why it's such a big topic. What aspects of journalism are threatened by AI? And should we be threatened or worried about the capabilities of this technology replacing our jobs? Is that even a ridiculous thought?

Gavin Adamson 02:36
No, it's not a ridiculous thought. I think, like all technology businesses, and let's be clear, most newsrooms are run by corporations in Canada, they're interested in efficiencies. And that's usually where technology comes in. And so wherever a business can use a tool to make something work quicker, faster, and frankly use fewer people, it will. So there are some reasons to be concerned. On the other hand, we know that the business of journalism is struggling a little bit with being able to bring in enough income to hire enough reporters to do good work. To the extent that some of the work can be done by AI... it's always a double-edged sword there.

Winston Sih 03:21
Absolutely, and we've seen different operations, different organizations experiment with AI already. AI, of course, has been in parts of Asia, and we've seen stations in Japan test AI-powered news anchors, where they're not even real people. In Canada, Rogers Sports & Media is experimenting with a radio station that's entirely generated by AI, predicting what songs to play. I think it's fair to say, if you can't fight 'em, join 'em, as I said earlier. Like every other piece of technology we have, whether we like it or not, it's here to stay.
What are some of the questions that come up in your work as a professor, and in how we teach journalism today, especially in the presence of all these tools, like ChatGPT?

Gavin Adamson 04:10
I'll start with my experience as a journalism teacher. Actually, as you know, we teach intro video journalism, and it's not a concern in that class, from the perspective that it's very hard to do AI on video. That class is very much focused on finding fresh sources, which is something AI just can't do. And even when you talk about ChatGPT, it is not something that can be current enough. I mean, it's trained on a whole bunch of data, but it's all in the past. It's not going to be able to tell you the news of the day. In my other class, business journalism, it really comes into focus, because organizations like Bloomberg have spent a great amount of time and effort in building AI for writing business news. Business news is very structured, you know? How profitable is a company? What does its net income look like from quarter to quarter? All that stuff is so easily accessed by machines that the focus for the students becomes, "Okay, so what are analysts going to say about that kind of data?" We focus on that part. From a greater kind of university perspective, if you don't mind me going there, since November, when ChatGPT was launched, there have been so many notes from administration, from the university, contemplating how we should be thinking about ChatGPT, specifically in the context of writing exams or writing essays. My colleague Murtaza Haider, out of TRSM, made a great point. He said, kind of like you off the top, "You just kind of got to join them." Let the students use it, be aware that the students are going to use it, talk about the limitations of ChatGPT, the kinds of mistakes that it can lead you into, and there are lots of them. Or you could fight against it and try to stop it from happening. Again, I think the implication is you cannot. So you might as well work with the system, so to speak.

Winston Sih 06:07
So we've seen academic policies be modified in the last year. I know at Toronto Metropolitan University, syllabi now have a section on the use of artificial intelligence in assignments. But there's also incredible opportunity where it can take in large datasets and find enterprising story ideas that may range across underrepresented communities. So there's a really positive benefit there as well. So Gavin, where do you see opportunity for ChatGPT and other tools to serve Canadian journalism in a really positive and meaningful way?

Gavin Adamson 06:44
Well, one of the obvious ones is this notion of aggregation. This is something that ChatGPT is really good at. So you drop a news story in there and say, "Please give me a synopsis of this news story in 250 words." It can do that kind of work, really quick synopses, in a very clear way. There have been plenty of newsrooms that are experimenting in other ways. There's a newsroom that's been experimenting, this is down in the U.S., with a software called AI For Reporters, and this gets to that point that we brought up at the top of your show. Look, there are fewer journalists doing really crucial journalism work at the local government level. So do you have someone there when decisions are being made at the local council?
Newsrooms are trying to develop ways for AI to sift through, as you said, large chunks of data and try to recreate the kind of notes that a news reporter, a political news reporter, might make looking at, say, the transcript of a council meeting. The kind of thing that an AI can do really well is say who spoke, who spoke how much, who was the lead speaker, and try to identify quotable bits of text within that transcript. Leaving ChatGPT aside, that's a very specific kind of AI, an analytic approach, to find an efficiency in tough, labor-intensive reporterly work.

Winston Sih 08:24
So there's a real practical element to what you just described. We've spoken a little about the ethics of how to use AI in journalism, and of course, the many opportunities that come along with it. Where are the holes? Where are we missing the mark? I simply threw my name into ChatGPT recently, and I would say only 70% of it was correct, which is probably on the generous end of many biographies. It's obviously not perfect and constantly changing.

Gavin Adamson 08:56
Yeah, I just want to get into that notion that it hallucinates. It basically makes things up. And that's the nature of predictive text. All of our handheld devices, our phones, have a very rudimentary kind of predictive text tool, as we know. And if you play with that, you can see just how off base it gets very quickly, if you choose just random words that the predictive text in your messaging app might suggest. ChatGPT is way more sophisticated than that, but it still just makes stuff up based on pure statistics. And that gets to the nature of the ethical issues. You don't really know where the errors are occurring. It's kind of a black box. LLMs, large language models, are trained on so much data that it's not really clear where it comes from. It just happens. It's actually not funny, the kinds of errors it can introduce and the kinds of biases that it can introduce; they get to be tricky. A couple of years ago, Timnit Gebru was working at Google. She was one of their lead AI ethics researchers. She was doing a lot of great work on the challenges with these large language models, and frankly, the kinds of biases and even racism and sexism that were being introduced. That's one of the major problems. The other one, which I kind of hinted at, is that there's just a lack of transparency. Google won't tell us. OpenAI won't tell us. This is the very nature of these technologies: this stuff is proprietary. These businesses put a lot of money into building it, and they say, "Well, we don't want to tell you exactly how it works." And frankly, sometimes they don't know exactly how it works in the end anyway. So you've got inherent problems built into the production of AI that come out on the other end when the users are interacting with it.

Winston Sih 11:01
And then I want to add accountability as well. When we look at self-driving cars, if the car gets into an accident and hits somebody, who is responsible for that collision? It's the same thing. If AI says something defamatory or extremely irresponsible, who is responsible for that information? It's now a computer, so how do you keep these different systems accountable? And from there, I also want to pivot to your research, because you've done some really fantastic and fascinating work into how to use AI to break down, but also contextualize, journalism. And that's something that many news consumers don't necessarily do critically or instinctively.
Can you speak a little bit more about that space?

Gavin Adamson 11:48
I will, yeah. This notion of fairness, transparency, and accountability, FTA, is something that's applied to the whole area of AI research. So, is the tool fair? Is it transparent? Is someone accountable? Is it accountable to anyone? But the work that I'm actually doing, along with my colleague Asmaa Malik, is turning that notion of FTA onto journalism, using AI. So what we're trying to do is build a tool that essentially would identify all of the human sources that a reporter spoke to in the process of doing the reporting. It would identify them, count them, and categorize them. Now, we don't know exactly what that's going to look like, whether it's a score or kind of a color code. But the whole idea is to make open the very process of journalism. And let's be clear: as much as we're talking about people not knowing how AI works, your average person doesn't know what a journalist does. The first notion that a journalist takes on is, "Who am I going to speak to? Who am I speaking to? Why am I speaking to this person? Who else am I going to talk to?" This is the elemental step in journalism, and so we want to make that obvious to the reader in a way that hopefully builds trust and opens up a conversation about why we speak to the police so much, or maybe even questions why we speak to the police so much. Why do we talk to reporters? Why don't we talk to individuals on the street more frequently? But the whole idea is to have this very simple, almost gestural rule of thumb that readers can look at, and that journalists can also look at when they're doing the work, to say, "Am I doing good work?" So that's what we're doing. We're trying to build an AI tool that does that, and shows that.

Winston Sih 13:45
It's almost like a pulse check, where journalists can use the tool to make sure that they're thinking and asking the right questions along the way. But at the same time, for consumers of news, maybe if we open the curtains and show people some of these questions, maybe we build better engagement, and that in turn strengthens the trust between consumers and journalists. Would you agree with that?

Gavin Adamson 14:10
That's precisely it. We hope it's a trust-building tool. The newsroom we're working with is in a province where there's a large Indigenous population. We're trying to speak with First Nations and Métis communities in that province. We want to show the results to them and ask them, "Does this help you understand the work that journalists are doing? Does this help you ask the right kinds of questions? Does that seem fair to you? Does it seem like a way that will build trust?" And we want to ask the same kinds of questions of the reporters and editors, and that's the work we're heading into this summer and into the fall.

Winston Sih 14:46
And I think that's another interesting point, too: AI really does rely on the input of diverse data. And if we're not doing that, then we're not building these tools to be representative of our true communities. And so there's an inherent responsibility to engage with different communities, going out to marginalized populations, interacting with them, and having those datasets input into AI as well. Is that something that is being thought of in the process by many of these organizations that are developing these tools?

Gavin Adamson 15:22
I would say almost the very opposite.
I don't think they think about it at all. I don't think newsrooms think about it very much, still. Listen, I've got kind of a terrible story. So the reporter, which is kind of a tech, you know, a B2B tech title, published a story, it was in the winter I think, about how OpenAI, which is the unit that built ChatGPT, had farmed out the identification of racism, sexism, and misogyny to Kenyans. So you have this situation where, yeah, at least they acknowledged that the dataset needed to be combed through so that you could take racial epithets out of it. But they farmed this out to a Black community in Kenya and, of course, paid them poorly. As far as I know, OpenAI did not answer any questions when the reporter actually published that story. I haven't seen any follow-up on that. That's a terrible decision. And you have that kind of typical corporate decision where you're trying to do something cheaply and end up harming communities. And I think that's the kind of thing that we need to call out, as journalists ourselves, trying to understand the impact of technology not only on our industry, but on communities that, frankly, need more care.

Winston Sih 16:54
Absolutely. How do we shed this simplistic fear that artificial intelligence is just going to replace our jobs? Because I think if we have these simple conversations and you say "AI," people think, "Well, AI is just going to take everyone's jobs. It's going to change the world." And it will change the world, in many ways, for the better and for the worse. How do we collectively think about the deeper issues of AI to encourage better engagement, so we're responsibly developing these tools with the right questions in mind?

Gavin Adamson 17:27
Yeah, I mean, I want to be clear: I'm not a technophobe, but I don't love technology either. It just always cuts both ways. The way I do it is I contextualize it in old technology. So what was good and what was bad about the advent of the telephone? What's good and bad about the advent of television? And the internet? And you start to see that there's some great communications progress, but there's also damage. So that's how you have to think about any new technology, including all the different kinds of AI out there. So it's very complex. Think about social media, for example. There's a lot of hate about social media, and particularly TikTok nowadays. I have two girls at home, and they use TikTok a lot, and they're always on their phones. And I know that there are some things on social media that are harmful. You know, there's this whole TikTok meme called "that girl" that just pushes so many beauty products and terrible approaches to what's beautiful and what's not, and this kind of thing, and I worry about what they see. But I also know that they're communicating with their friends and also learning in that experience on TikTok, in the way they use it and other social media. So I try not to be super judgmental about it, in the same way, when I look back at, you know, watching too many hours of TV when I was a kid, that my parents judged me for. Try to understand it in a way that is meaningful to you, and try to understand how it cuts both ways, specifically for you.

Winston Sih 18:59
So it goes back to the ethical and responsible use of these tools. It applies to every industry, including journalism. And so we've got your tool that you're developing in your research. What other trends or tools do you see when it comes to how we teach journalism?
And how artificial intelligence is being used? Are there any innovative projects being worked on now? Where do you think things need to shift?

Gavin Adamson 19:27
Boy, that's a tough question. I mean, I mentioned some off the top: whenever you can take structured data of any kind, whether it's sports numbers or financial numbers, I think you're just going to see more and more of that, right? Where I think we need to be careful is in trying to use AI to make news judgments. I think it's going to be very hard to teach AI to do that kind of thing anyway, but in an interesting way, I think AI also makes us think, as journalists, about how much our own internal systems and processes are kind of like an artificial intelligence. When Donald Trump was in the middle of his presidency, he would say outrageous things, and he was doing it deliberately to get attention so that journalists would turn their microphones towards him, to basically just own the air. Trump at the time was literally saying things like two plus two equals five, and we were spreading that misinformation for him. And so I think about what journalism can learn from AI about being rote and not thinking enough. That's actually what I think about more than anything these days. I think we can learn about ourselves as journalists by thinking about where AI goes wrong.

Winston Sih 20:47
And the scary part is, when that information is put out there by former President Trump, for example, it gets put into the echo chamber. We go back to these ethical questions, because people will read this and, whether it's true or not, it will be taken in as the truth because it's online somewhere. And perhaps that gets fed into an AI engine and gets spat out, and because AI is telling you, people will think, again, that that's real. So we go back into this almost vicious cycle, where we're questioning what is true and what is not, and that's where we really need to be careful.

Gavin Adamson 21:23
Exactly. The distribution of news is one area, going back many years already, that has basically become something that AI owns. Why does a story land in your feed? Again, it's sort of a black box; we don't know how. We know that, generally, your reading habits inform the way the algorithm spits out content to you. The details aren't specific, and you're exactly right, we don't know how that happens. It can be dangerous; you can be fed misinformation. We've already talked about how ChatGPT literally makes stuff up. Again, there's no accountability around where it goes wrong if you get fed that misinformation, right? That is exactly the kind of thing we need to be thinking about, because it's already happened. Can I tell you a story about my research? I did a research study in 2016 about how news was distributed through Twitter and Facebook from Canadian newsrooms, and one of the things I had to say in my study, as a limitation, was that I didn't know, as a researcher, how the algorithms of both Twitter and Facebook, specifically Facebook at the time, were driving the readership of certain kinds of articles over others. I just couldn't account for it, and I had to put that down as a limitation in my research. I did that research, well, that was like seven years ago, and we still don't know! We still don't know exactly how those decisions are being made by machines.

Winston Sih 22:55
I think of so many basic but important, fundamental questions that we still can't figure out.
We see this back and forth between Elon Musk and the CBC over how to just label their Twitter account, whether it's government-funded or publicly funded. We can't make a decision about that. How do we attack these much more complex, but basic and fundamental, questions about these tools? We also know that people are often trapped in their echo chambers and only read and consume what they want to read and consume, and that these tools learn that and continue to feed you the data that you're looking for. And so that poses a danger in itself that drives further polarization.

Gavin Adamson 23:37
You're exactly right. This notion of the echo chamber has been talked about so much for five to seven years. And there's research that points one way or the other. I think heavy news users do tend to read across the spectrum. But I think it's at the margins: people who don't read a lot of news might not realize just how specific the kind of news they're getting is, and how, in fact, it might be filled with some kind of misinformation. So I think the margins are where it becomes very dangerous. It's not good for democracy, in whatever way you want to think of that, and it's not good for educating voters. Think about this: you're a deliberate person who wants to consume political news ahead of your local elections, and you don't know what kind of information has been verified. It has become increasingly problematic. There is some experimentation in AI, by the way, that is pulling together local news sources to try to identify facts that are undisputed. So that's one of the areas that some newsrooms are looking into. Newsrooms are interested in this, establishing the ground truth of facts, and AI can do that because it can read a whole bunch of different news articles and say, "Okay, this is confirmed, that's confirmed, that's confirmed..." So that's the kind of AI we need to think more about. We have to be transparent with the reader about how that's being done and what the original sources were, in a way that isn't going to bog down the reader with yet more data that makes it challenging to read quickly.

Winston Sih 25:18
Absolutely agree. I think that is, like you said, about transparency, but also about making sure that we are presenting information in a responsible way that gives people a proper opportunity to think about all sorts of topics in a balanced manner. I think there's incredible opportunity there in how journalists engage with AI, but also how news consumers engage with AI, and we know that this continues to evolve. It seems like there's a new topic and discussion every day, and it's so relevant. So Gavin, I appreciate your candid insights on this really complex issue that I know is ever evolving. Thanks so much for joining us on the show.

Gavin Adamson 26:01
Thank you so much, Winston, I really enjoyed this.

Winston Sih 26:04
Gavin Adamson is a journalist and associate professor at The Creative School at Toronto Metropolitan University. Next week, we're diving into public engagement in journalism. What's contributing to apathy toward journalism? What's being done, and where do we go from here? That's next on Staying Alive. I'm Winston Sih. Thanks again for tuning in.