New from me: Truth-Agnostic Chatbots Show the Need for a Search Alternative

Over at CIGI. Is it a problem that search engine companies, whose only job is to return information that people can trust and use, have hitched their wagon to a technology that produces falsehoods?

Yes. Yes it is. If companies won’t take their internet-cataloguing responsibilities seriously, we need to reconsider whether we should leave search responsibilities to the private sector:

Exactly how reckless are these companies being? Think about it in terms of how a search tool usually functions. When a user inputs a search term, Google (or Bing) serves up a series of links deemed to be relevant to the user. Although its algorithm remains a black box, Google Search is based in part on the assumption that the number of links that refer to a specific webpage can serve as a proxy for its authoritativeness. …

Now, consider what it means to put a generative AI chatbot on top of this format. As people, myself included, have pointed out in the three months since OpenAI unleashed ChatGPT on an unprepared world, generative AI has a tendency to generate falsehoods. This is because it is merely a complex auto-complete machine. The text that a GPT (generative pre-trained transformer) generates — to call what it produces answers is to insult actual thought — is created by the GPT’s calculations of what the next word is likely to be, based on the texts on which the model was “trained,” itself a process dependent on underpaid, behind-the-scenes workers, often labouring in horrific circumstances.

That it’s a machine for creating what can only really be called bullshit (following the definition of American moral philosopher Harry Frankfurt: speech produced with no regard for whether it is true) has become comically clear in the past several days, with Bing’s GPT producing text that is petulant, threatening, whiny and argumentative, and not at all helpful in serving up the world’s knowledge.

Inserting these chatbots into search introduces an enormous degree of uncertainty and unreliability. It’s tantamount to placing a BS-creation machine between the user and the search results. Google and Microsoft are well aware of how unreliable this tech is. While Google’s gaffe has received most of the attention, Bing has also generated its own share of howlers. And both companies explicitly warn their users that they cannot necessarily trust the output that they, as businesses, are serving them.

It’s audacious: People depend on search engines to find information they can use. Now, these companies are telling users that they can’t necessarily trust the information that they provide. These are not the actions of companies that care about supporting the healthy knowledge ecosystems all societies need to survive and thrive. …

Corporate search’s ChatGPT-driven embrace of generative AI may have exhilarated Microsoft and embarrassed Google, but the rest of us should take the opportunity to reconsider the costs of our information ecosystem. We have entrusted the world’s information to companies that have little regard for the essential service they’re supposed to provide.

Check out the whole piece over at CIGI.

