Comment of the day, courtesy of Eevee on Mastodon:
remarkable to watch the curve of computing go from “it will do exactly, precisely what you ask of it” to “here’s a few heuristics for less well-defined problems” to “self-driving is good enough, give us billions of dollars” to “we put autocomplete on our search engine to generate a whole fictional website about what you’re looking for but we don’t really know why”
As a description of the general trajectory of mainstream Silicon Valley marketing pitches, that’s pretty much it. As for how we got to the point where search-engine companies think it’s a good idea to put a bullshit-generating autocomplete machine between the user and the information the user’s looking for, I see it as the result of three mutually reinforcing forces: ideological, economic and political.
Ideological: This descent into the mystification of computing is driven by dataism. If you believe that everything can be represented by fundamentally neutral data, and that all you need to explain and predict everything is enough data and enough computing power, then you’re going to believe in crazy things like Artificial General Intelligence, and you’re going to think that Large Language Models produce valid knowledge. Of course, since data is never neutral, and the world in all its complexity will never be completely reducible to digital data, you’ll also end up (re)inventing phrenology: data is always partial, and we always end up injecting our biases into knowledge creation, intentionally or not.
Economic: In a dataist world, bullshit like self-driving and Spicy Autocomplete for search is more valuable as a marketing tool than as a functioning technology. Fuelled by dataism, people have shown themselves more than willing to believe that words spewed by a computer are more trustworthy than words linked more directly to actual people. It’s why companies go to such lengths to position themselves as labour-light tech companies, even when it’s the behind-the-scenes labour that makes it all go.
It’s why instructors who would incinerate an academic-paper mill if given the chance think it’s completely fine to use a paper generated via statistical probabilities as a starting point for their students’ education. Chatbot-generated academic papers are empty calories built on correlations. In contrast, papers produced by people working for paper mills may be low-quality work that, like chatbot papers, can be passed off by a student as their own. But they’re at least created through a recognizable knowledge-creation process.
Both are awful products, but in my years as a student and teacher, I’ve never heard of any class dissecting a paper-mill-generated paper. So why are some teachers using crappy chatbot-created statistical word amalgamations as starting points for discussion, and not paper-mill papers? The difference is the technology: computers are seen as more authoritative than actual people, and so they get a pass.
Also economic, of course: from a search perspective, if you can keep users from actually visiting the sites you’re plundering for your Autocomplete Answers, you’ll be able to hoard more of that sweet, sweet ad revenue.
Political: If only there were an institution capable of placing binding controls on how tech companies operate, to keep them from experimenting on the general public for the sake of their own bottom line. As I’ve noted elsewhere, if drug companies engaged in the kind of reckless behaviour that OpenAI, Microsoft and Google have – not just in search, but in so many other areas (hiya, Google Street View) – they’d be facing massive fines and possibly even jail time. Unfortunately, the libertarian belief that tech and the internet are special and should be subject to minimal regulation remains a potent force. Because you wouldn’t want to thwart innovation.
Mutually reinforcing
And that’s the problem. Each of these forces reinforces the others. The people running these companies aren’t just in it for the money; they’re in it for the revolution, no matter how stupid that revolution actually is (see: bitcoin, blockchain, web3). Doing things differently would require them to work against not only their perceived economic interests, but also the ideological belief that the world actually is like a giant computer, and that if we only had enough data and a big enough computer, we could crack this nut. That more and more people are embracing dataism as an ideology only compounds the problem.
And when you add an unhealthy dose of libertarianism to this economic and ideological mix, it becomes harder for society to defend itself against these companies’ reckless actions.
Combine the three – dataism, economic self-interest and libertarianism – and you end up with an industry that seems likely to continue pushing bravely toward a dataist future. It’s why, regardless of what happens to Bing and ChatGPT, tech utopianism is likely to persist. Lucky us.
Here’s a picture of a fancy cat:
