My latest for CIGI. I’m usually pretty good at writing quickly, but it’s taken me a couple of months to figure out how to think about ChatGPT. It’s a longer piece, but I’m quite happy with it, so please do check it out.
Briefly, though, I think we have to keep our eyes focused on how these technologies actually work: by statistical analysis of existing data, in this case words and texts. The piece focuses less on this specific technology than on the mindset behind it, what José van Dijck calls dataism, an ideology that effectively replaces theorizing with an unearned faith in raw data and correlations. ChatGPT is simply its most recent manifestation. As van Dijck recognized way back in 2014, dataism enjoys significant buy-in across society, and that buy-in has only gotten stronger since. The fascination with ChatGPT is driven by the same impulse that has bureaucrats believing AI can administer social welfare or regulate immigration.
I include some policy recommendations: tech should complement, not replace, human activity; companies like OpenAI must be stopped from running what are effectively unethical, uncontrolled experiments on the public; and we need to expand data rights beyond personal information to cover everyone (i.e., pretty much everyone online) whose words have been weaponized in the form of ChatGPT.
But the big one is this: we need to stop thinking that data, engineers and data scientists will save us. It might sound hyperbolic to say that dataism is replacing scientific thought, but the two are very different. Science in all its guises pursues understanding, while dataism identifies statistical correlations and calls them knowledge. The difference between the two is the difference between the scientist and the technician.
A change in belief is easy to call for but hard to accomplish, especially when there’s a lot of money to be made in dataism. But identifying the problem is a necessary first step.
Here’s a picture of Cooper, our community ambassador from down the street.