New from me: Unlike with academics and reporters, you can’t check when ChatGPT’s telling the truth

Over at The Conversation. Please do check it out.

Looking at the comments (protip: never look at the comments), I think it’s important to clarify that my point isn’t that you can trust journalists or academics because they’re academics or journalists. It’s that the method that they follow gives you, the reader, the ability to verify (or refute) their work. Think their sources are dodgy? Is the evidence presented one-sided? By all means, doubt away!

I’m leaving aside that readers’ own assessment skills may not be the greatest and can lead them down paranoid rabbit holes, but the point stands. The reader can get their assessment wrong, just as the researcher can. That, however, is not a fault of the process but of the reader’s interpretation of it. Applied correctly or not, what matters is that the presentation of evidence in a particular (scientific) manner is what provides the grounds for critique.

One of the biggest challenges we face in dealing with machine learning and artificial intelligence is that understanding it fully requires that we think about what knowledge is. These are discussions that most people in both the hard and social sciences tend not to be very comfortable with. But just as monetary economists need to understand what money is to do their work, if we’re going to thoughtfully incorporate machine-learning processes into our lives, we’re going to have to understand the different ways we can create knowledge. This very much includes considering how knowledge created via the scientific method (show your work, outline your method, test against reality) differs fundamentally from knowledge created statistically by autocomplete functions, which analyze not the world but words about the world. One key difference, as I discuss here, is that unlike science-based knowledge, autocomplete knowledge contains no method within itself to confirm, or deny, its own authenticity.
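To make “autocomplete knowledge” concrete, here is a deliberately tiny sketch, not how ChatGPT actually works, just a hypothetical bigram model with a made-up corpus. It predicts the next word purely from word frequencies in its training text, and nothing in it can check whether the continuation it produces is true:

```python
from collections import Counter, defaultdict

# Invented training text for illustration: the false sentence appears
# more often than the true one.
corpus = (
    "the moon orbits the earth . "
    "the moon is made of cheese . "
    "the moon is made of cheese . "
    "the moon is made of rock . "
).split()

# Count which word follows which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def autocomplete(word):
    """Return the statistically most likely next word -- true or not."""
    return following[word].most_common(1)[0][0]

# The model "knows" only word frequencies, not the world: because the
# false continuation is more common in the text, it wins.
print(autocomplete("of"))  # -> 'cheese'
```

The point of the toy: the model’s only notion of a “good” answer is statistical frequency in its source text. There is no step anywhere in the procedure where the output is tested against reality.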

That’s a huge problem that cannot be wished away.

Here’s a photo of two fearsome predators in their natural environment.
