Wrote this up on Twitter (y’all know what I’m talking about, so no need to call it something else), reposting it here. Basically calling for critics to engage with the past decade (and longer) of literature on platform governance, especially regarding platform power, platforms as infrastructure and platform non-neutrality.
I read critiques of C-18 that blame the government for Meta’s *choice* to block Canadian media and it’s immediately clear that too many critics have not engaged with a decade of platform governance literature.
Including how platforms’ goal is to create monopolies. Their goal is to become indispensable infrastructure. Their goal is to attack democratic governments’ ability to regulate (US and EU excluded). All of which are in play here and which complicate the gov’t-bad, Meta-good POV.
On platform neutrality: in reality, both intermediaries and news media create value. This value is social *and* economic. They are all part of our information ecosystem. Platforms are not neutral. Their actions (algorithms, advertising monopoly) affect others.
Meta and Google have not been good stewards of this ecosystem, be it their ad-driven search degradation or fomenting an actual genocide (Facebook, Rohingya). It’s not insane to require them to pay to support the ecosystem into which they’ve inserted themselves.
It’s disingenuous to argue that by blocking Canadian media Meta’s just following the law, making themselves a non-news intermediary. They have spent the past decade selling themselves as indispensable media infrastructure. Remember the (falsehood-driven) Pivot to Video?
What’s happening: these companies spent a decade positioning themselves as indispensable media infrastructure. (And infrastructure that shouldn’t be regulated. Because the Internet.) Tempted by the platforms, including by their lies (e.g., pivot to video), media jumped on board.
Google and Meta spent money on individual deals w media cos to forestall regulation. Why? B/c mandatory bargaining, eg as found in C-18, would remove their power over media companies. They could no longer decide who gets what money and on what terms.
Now, having established their position as essential media infrastructure, they’re cashing in their chips and attempting to tell us who’s boss.
C-18, BTW, is an advance over the Australian model, where (secret) negotiation can happen outside their Code, at the platforms’ discretion. C-18 is fairer and more democratic.
This isn’t about money, or paying a “fair” amount. These very profitable companies will throw around money if they feel like it. This is about power. Specifically, platform power to refuse to be governed by democratic parliaments.
The principle at stake isn’t free information flows. We know (again the scholarship has moved on from early 2000s internet libertarianism) that information quality matters as much as the flow of this information. Free flows don’t matter much if the flows are garbage.
To be clear, for all its flaws, the regulatory coverage and transparency aspects of the bill are things to be celebrated. Transparency and accountability are good, democratic values. These are what Meta and Google, and their defenders, are stomping on. It’s bizarre.
If anything, the government erred in taking too soft a line with the platforms, leaving it up to them to negotiate deals. We now know that this sign of respect for their autonomy was seen as an attack. The gov’t should have been more hard-nosed from the beginning.
This fight, again, is not about money or fairness. It’s about platforms rejecting the right of governments to regulate platforms. Which means that this was a fight that we were going to have one way or another. Cold comfort for Canadian media, but there you are.
Which means that the only way out is through. Governments were always going to have to take on platform power. The best that can be said now is that governments should stop operating under the illusion that these platforms are in any way neutral. They’re anti-democratic rivals.
If you’re an instructor, we’d be more than happy to talk with your class about our book and the issues it discusses. I can be reached at bhaggart at brocku dot ca.
The New Knowledge unpacks the transformative implications of the rising centrality of the control of knowledge – particularly data and intellectual property – for the exercise of economic, social and political power. Put another way, no matter what field you’re working in, or what you do, pretty much every policy and activity has a data and IP – a knowledge – component. We’re negotiating trade agreements that are no longer about physical trade, but regulating knowledge flows (chapter 3) – agreements that are now more about regulating global production/value chains than international trade (also chapter 3). Companies that previously would have been seen as lowly IT providers are inserting themselves (and being welcomed) into all parts of society based on their ability to collect and analyze data (chapters 5 and 6). Property relations are being redefined, with the Internet of Things placing de facto control over connected devices with the supplier, not the nominal owner (chapter 7). The state, meanwhile, is as enamored of data and algorithms as everyone else, and is as ready to buy into its magical properties as anyone.
Understanding this transformation is a vital necessity for everyone. And while there are many who have been working on data and IP issues for a long time – many of whom we cite in this book – many people (including academics and policymakers) are coming across them for the first time. This book is for all those who want to break through the mysteriousness and opacity that often accompanies discussions of data and IP, which has only gotten worse in the current Artificial Intelligence mania.
And so we have two chapters devoted to demystifying knowledge (chapter 2) and data (chapter 4), written for those who would rather not wade through the deliberately murky musings of dense French philosophers. It’s our hope that these chapters will help to inoculate readers against those who ask us to trust in the data and the algorithm – to trust in AI, say – as if these were something magical and above human biases and fallibilities. The promise of AI, of data, of algorithms, is the promise of neutral knowledge unsullied by human prejudice, ignorance and bias. It’s a false promise, as scholars have known for decades, and as we highlight here. The reality is, you can never escape people. Once you have a firm grasp of what data is, that it’s people all the way down, it’s hard to take the claims of AI evangelists seriously.
The Power of Belief
The theme of belief in data, IP and algorithms recurs throughout the book. We argue that the defining characteristic of our current knowledge-driven society isn’t technological, but rather the belief that commodified knowledge – data and IP – is a superior form of knowledge. As we unpack in Chapter 5, this belief, and the privileging of commodified knowledge (and the people who claim mastery over its collection and analysis), is only partially related to global digitization and the internet. It is this belief that convinces urban-development experts, say, that a search engine and advertising company has the capacity to plan and build a city. Or that we should listen to computer scientists who claim to have invented god, and not just a glorified autocomplete machine.
Power and Control
The ability to control these socially valuable forms of knowledge (or, rather, forms of knowledge believed to be socially valuable) has allowed certain actors – namely the large (mostly) American “platforms” – to play dominant roles throughout the economy and society. Want to understand what Meta and Google are up to in the C-18 tug-of-war? Our book outlines how these companies are systematically working to make themselves central de facto governors over our lives, whether as two-sided markets or as standards-setters (chapter 6).
Nothing New
The belief that the challenges posed by AI, by algorithms, by platforms, by the internet are all new, and require novel responses, is the biggest obstacle to addressing them. Shoshana Zuboff put forward her book The Age of Surveillance Capitalism as an attempt to fill what she called a void, a “tabula rasa” that required novel responses to “unprecedented” challenges.
Not only does Zuboff do a disservice by effectively erasing the myriad scholars who have been working for decades on the specific issues she identifies, but she’s flat-out wrong in claiming that these are unprecedented challenges.
Theory is the scaffolding of an argument. In The New Knowledge, our scaffolding was provided by three International Political Economy scholars in particular: Susan Strange, Robert Cox and Karl Polanyi (chapters 2 and 3). I won’t go into the details here (read the book!), but importantly, all three of them made their primary theoretical contribution before the mainstreaming of the internet (1980s for Strange and Cox, 1940s(!) for Polanyi), and were not primarily focused on data, intellectual property or related issues. The mark of a good theory is its applicability beyond the situation or era in which it was first proposed, and these three more than meet that mark. Most importantly, our successful application of these three theorists to what we call the knowledge-driven society suggests that if existing theories can be used to understand our current moment, existing policy solutions are also available to us. No need to reinvent the wheel.
(This isn’t to say that we simply apply their work unchanged. We do propose reinterpretations of all three, but in a way that solves puzzles posed by their own formulations while maintaining the essence of their own theories. That is our theoretical contribution to our understanding of International Political Economy.)
That said, we also draw on the work of two other, more contemporary, theorists, José van Dijck (for her concept of dataism) and Evgeny Morozov (for his concept of technological solutionism), who better than most understand the ideological orientation of our current age.
Knowledge Feudalism, Digital Economic Nationalism and … a third option
As interested as we are in theory, and as important as it is to describe and understand our current moment, we also have a strong, pragmatic interest in policy, a byproduct of our pre-academia years spent working in various capacities for the Canadian federal government. We identify two dominant policy approaches to the knowledge-driven society. The first is knowledge feudalism. This is the dominant player’s strategy: if you already control significant amounts of economically and socially valuable knowledge (and the means to disseminate and analyze it), then you’ll want to ensure that others pay to play. This is the strategy of the US and its companies (e.g., Meta, Google).
Challengers, seeking access to such knowledge, will be more open to cooperation, more-open data and IP flows, and state intervention (the state being the only actor capable of going toe-to-toe with US and Chinese corporate champions). We call this Digital Economic Nationalism. And while it’s seen as more benign than US Knowledge Feudalism, its end goal remains domination of others. This is one reason why, in the book, we are relatively critical of the European Union’s General Data Protection Regulation (GDPR): it may have some good points to it, but it’s designed around European interests and European values, not those of other countries, to whom the EU presents itself as a “regulatory superpower” (chapter 8).
While we acknowledge that Digital Economic Nationalism is a logical response to knowledge feudalism (and shouldn’t be seen as akin to mercantilist protectionism (chapter 3)), it fails to get rid of the oppressive power dynamics created by the knowledge/data/IP-driven society. And so, in Chapter 9 and the Conclusion, we propose a policy of decommodification. Drawing on concepts of data justice, group (as opposed to individual) privacy, Indigenous data sovereignty, the practical work on Barcelona’s smart city and Karl Polanyi’s analysis of fictitious commodities (data and IP being fictitious commodities), we argue that data and knowledge commodified by intellectual property laws must be seen first and foremost not as commodities to be repurposed at the whims of others. This approach to data mirrors how other fictitious commodities are treated. Labour and environmental regulations, and bankruptcy laws, are all designed to limit the marketization of things that are essential to human and social functioning. That’s the direction we need to head with data and IP laws. (And AI laws, which are functionally equivalent to data laws.)
Instead, the context within which data and knowledge is generated must be respected, with the people who serve as the source of this knowledge, and who will be acted upon using this knowledge, given control over how their data and knowledge will be used. Almost all problems in the data- and knowledge-driven economy are the result of knowledge appropriated away from these individuals and groups: it’s not a problem when a user sends their data to Google Maps to get from Point A to Point B; it is a problem when that data is sold to insurance companies and ends up being used to increase an identified group’s insurance rates, or deny them insurance altogether.
A note on ChatGPT and clarity of thought
Readers will note that, in contrast to the avalanche of words that has accompanied the November 2022 release of ChatGPT, the book is somewhat light on mentions of ChatGPT, large language models and “artificial intelligence.” We would invite the reader to see this as a feature, not a bug. As we note in the book, “artificial intelligence” is a term with many different sets of meanings. It’s also a term that current overuse has degraded to the point of meaninglessness. When writing the book, we made a point of avoiding the term, preferring instead to focus on precise concepts, such as “predictive algorithms” or “algorithmic regulation.” We intentionally burrowed down to the fundamental concepts at play, namely data, knowledge, algorithms, and the perceptions of these. It is our hope that focusing clearly on these underlying concepts – which are as applicable to the AI debate in 2023 as they were in 2022 – will help readers cut through the AI froth, to focus on the ideas and concepts that truly matter.
With all that out of the way, here’s the Run the Jewels song that served as the inspiration for this post’s title.
Interrupting my July vacation to highlight some points regarding Meta and Google’s high-stakes game of chicken with the Canadian government over Bill C-18.
1. This is not about money. It’s about power. I see that Michael Geist is arguing that Meta’s and Google’s decision to remove Canadian news providers is based on economic considerations, that “news links” (what most people just call “news”) aren’t economically valuable enough to justify their allowing Canadian news services on their networks. And similarly, that “economic circumstances” have changed, and these companies don’t have as much ready cash as they did in the good ol’ days.
The tell that Google and Meta’s actions aren’t being driven by economic considerations is that Google has run this play before. Not in Australia, but in Spain, which Google News abandoned in 2014, when the government passed a law requiring news aggregators to pay a licensing fee for posting headline snippets. They only returned last year, following a modification of the law, which allows … wait for it … “media outlets to negotiate directly with the tech giant.”
Google and Meta aren’t objecting to the price, or a free and open internet. They simply don’t want to be told what to do.
This is a fight over structural power, and who gets to exercise it: tech macrointermediaries (a more accurate term than “platform”) or democratically elected governments. The term “structural power” – which refers to the ability to set the rules and norms under which others operate – doesn’t show up in these debates very often, but it’s key to understanding these clashes between platforms and governments.
And this isn’t just any fight over structural power. As the foundational International Political Economy scholar Susan Strange recognized, the power to control the legitimation, creation, dissemination and use of knowledge is a fundamental form of power in society. In other words, this is a fight about the very principle that sovereign, democratically elected governments have the legitimate right to pass and enforce laws regulating activities within their territory.
In a nutshell, Google and Meta – two unaccountable, foreign monopolies – want to retain for themselves the right to determine what Canadians are able to access over their (monopolistic) networks. In an interview with Jesse Brown, Google’s head of communications, Lauren Skelly, gave a nice example of what this structural power looks like in practice, noting that the company would be blocking Canadian sites based not on whether a company had registered with the CRTC, but on “Google’s own determination.”
This should be old hat by now, but it bears repeating: This is what platforms/macrointermediaries do. They set rules governing our actions. All else being equal, they will go along with things (voluntary codes of conduct, captive oversight boards) that don’t challenge their structural power, and resist any attempts by governments to adopt rules that they don’t agree with.
That this is a battle for structural power also clarifies that regardless of the pros and cons of Bill C-18 or any imagined alternative, these companies would have actively resisted pretty much any Canadian legislation that directly challenged their business model, which is based on low/no-cost access to material produced by others. Lots of people have been calling for effective regulation of platforms/macrointermediaries; that is, regulation that changes their behaviour. Any effective regulation would have triggered this type of tantrum.
2. “Value” is a two-way street. One of the ways to see Bill C-18 is as a long-needed corrective to the idea that openness itself is an unmitigated good, rather than as a two- or many-sided relationship in which all parties contribute something necessary for the creation of social value. The idea of a “link tax” as a pejorative captures the dynamics of the “unmitigated good” position: not only that the companies shouldn’t have to pay for linking to material they didn’t create, but that such a tax would serve as an unjustifiable restriction on the spread of knowledge. (Not true: As Paris Marx has noted, requirements elsewhere that search engines pay for use have failed to “break the internet.”)
While perhaps defensible in the early 2000s, the spread of misinformation and hate speech has shown the unmitigated good position to be near-sighted and naïve, while the collapse of the journalism industry serves as a reminder that the quality of content matters as much as the ability to share content. By requiring a form of revenue sharing between macrointermediaries and essential information sources, Bill C-18 can be seen as an attempt to recognize the two-way nature of the information-creation and -dissemination relationship.
(To be clear, Bill C-18 does not enact a tax (a word that has a specific meaning) on links. The use of the phrase “link tax” by people in this debate is wholly polemical and, IMO, should be avoided by those interested in serious debate.)
3. Before you judge the law too harshly, think through the politics. Even those who support the idea of government regulation have complained that Bill C-18 is a sop to Canada’s other media monopolists that misses a lot at the heart of the crisis in Canadian media. Which, fair enough. (Dwayne Winseck’s analysis of the pros and cons of the bill holds up pretty well even after a year.)
But then I started thinking about why the government may have chosen this approach rather than either a wholesale reform of the broader media sector or more fundamental regulation challenging the macrointermediary for-profit business model that lies at the heart of the problem. (Granted, this is a government that does not explain well even its good ideas, so I’m inferring here.) And when you think through the politics of the situation, Bill C-18 makes a lot more sense; i.e., it’s not a completely bananaheaded law.
First off, tackling foreign macrointermediaries/platforms (in C-11, C-18 and the upcoming online harms legislation) and Canada’s homegrown media monopolies would be like fighting a war on two fronts. To suggest it is to immediately see its folly.
Second, and related, the reality is that controversial legislation needs allies. Rupert Murdoch and his poisonous empire rightly deserve all our scorn. But the reality is, Australia’s legislation doesn’t get anywhere without the backing of a powerful lobby, which in their case was Murdoch. In Canada, it’s the large media companies. Communication and law professors may have right, and an armada of Twitter followers, on their side, but that’s not nearly enough to push through an ambitious policy agenda. You go to policy war with the allies you have.
Third, Canada, like Australia, is a small player. One of the lessons of Natasha Tusikov’s invaluable book, Chokepoints: Global Private Regulation on the Internet, is that the power to shape the actions of these large macrointermediaries is largely the purview of the biggest states, namely the United States and the European Union. The European Union’s market power allows it to reshape these companies’ activities — that is, they can exert structural power over Meta and Google.
Not so smaller countries. I’m not sure that Canada or Australia could pull off more than they have here. And that they’re even trying to hold these titans accountable is laudable enough that we should try to understand their logic, even if in the case of Canada (as noted), they may not be as forthcoming as they should be.
If negotiating a multi-million-dollar payment among media players is anathema to these companies, imagine their reaction if Canada, or Australia, had tried to, as Cory Doctorow recommends, crack down on the surveillance-based advertising that is their lifeblood. I 100% agree this needs to happen, and even that it would be worth the sacrifice. But there’s no world in which it wouldn’t trigger a nuclear reaction from our macrointermediary overlords.
The best should not be the enemy of the good. These companies need to be subject to democratic domestic regulation. A plan that leaves platform power relatively intact while leaving it to private actors to negotiate payments? I totally get it. Does it solve all our problems? No. Did it do enough? Not at all — Paris Marx has a nice rundown of things that the government should also be doing, while Winseck’s piece, cited above, lays out a policy agenda I find hard to disagree with. But: it gets some money to media companies while trying not to pick too big a fight. In other words, it wasn’t a crazy idea, no matter what the Monday-morning quarterbacks are saying.
4. Remember net neutrality? I’m still waiting for someone to stand up for the principle that online service providers should not discriminate against different types of content. To wit: following the principle of net neutrality, it should be illegal for search engines and social media platforms to discriminate against Canadian news providers. This should be part of the government’s next move, in Sandy Garossino’s words, to “Hit. Them. Harder.” This is about structural power. Canada needs to ensure the health of its information ecosystem: this is an existential requirement. Right now, Meta and Google are threatening its health.
5. This is just a preview of what’s to come. I still find it hard to believe that these companies, and their hangers-on, are fighting so hard against what are, in the big picture, very minor pieces of legislation. Countries need a healthy information ecosystem to survive: that content-dissemination companies, whose own behaviours have been degrading said ecosystem, should pay to support that should be a no-brainer. Ditto payments to support Canadian culture, which have been a normal part of Canadian society for decades.
As for the fear-mongering over requiring these companies to promote Canadian culture in their search results? Pass me my smelling salts! I’d be a lot more concerned if these companies didn’t already shape their search and recommendation algorithms to suit their own commercial interests, and if the government were asking for anything more than what has been common practice, again, for decades.
The real fight is still on the horizon. The regulation of online harms is going to be bruising. And while it touches any number of third rails, especially free expression, thinking about it as a contest over structural power – whether democratic governments or unaccountable foreign corporations should set the rules under which we live – can help clarify things.
As Facebook and Google’s attempts to hold Canadian media and Canadians hostage against the Canadian government show, these companies already – and vengefully – restrict freedom of expression on their platforms. The question before us isn’t, should expression be restricted online? It already and always is. And even the question of what the rules should be is, in a sense, secondary to the larger issue of, who should be allowed to set them, and with what degree of accountability? From where I sit, the Canadian government is a hell of a lot more accountable, and responsible, than these two would-be information monopolists.
That’s enough for me. I’m going to write a post for the release of our book (out next week! DO NOT BUY THE EBOOK – it’ll be available as an Open Access publication), but other than that I’ll see y’all in a couple of months. Don’t drop any heavy legislation while I’m AFK.
Here’s a picture of Niagara Falls, which isn’t a metaphor for anything. It’s just pretty, and pretty majestic.
Not really a catchy title, is it? Still, I think it’s worth highlighting the federal Privacy Commissioner’s report, published on May 30, which found, “with some exceptions, that the measures implemented by the government during the pandemic complied with relevant privacy laws and were necessary and proportional in response to the unprecedented public health crisis.” After all, it’s not every day that we hear that a government actually did a pretty good job in protecting citizens’ privacy.
That this is largely a good-news story may account for why it doesn’t seem to have been discussed widely beyond this National Post article: a bit of a disappointment, given the harsh criticisms the government endured at the time for tracking Canadians’ mobility through cell phone tower data as a pandemic-mitigation measure.
Anyways, the picture that emerges from the report is of a government that, on the policy side of things, operated with a relatively strong interest in protecting Canadians’ privacy, including such policies as the Covid Alert app and the program for using cell phone tower data, provided by Telus, to see if Canadians were following quarantine orders. Note that the Privacy Commissioner not only found the government to be in compliance with the Privacy Act (which pretty much every critic had granted), but that “the government’s response to the pandemic … was also necessary and proportional considering the unprecedented health crisis”: a higher standard, as the Privacy Commissioner also notes.
Instead, the government came in for the strongest criticism for its communication of its plans; i.e., that they didn’t adequately communicate to Canadians that they were doing a good job.
I don’t know about you, but as a Canadian, I’ll take the problem of “use your words better” over bad government policy every day of the week.
This report is, I think, important for two reasons. First, it further highlights just how delusional and paranoid the people behind the Ottawa Occupation (which was ramping up just as worries about cell phone tower surveillance for pandemic-mitigation reasons were coming to light) were. To the extent that such government actions provided the pretext for their unlawful occupation of Canada’s capital, they were flat-out wrong. That Pierre Poilievre owes his ascension to the head of the Conservative Party to the tumult of the Ottawa Occupation and its paranoid delusions should give us all pause.
Second, it highlights the need for a much more nuanced discussion of privacy and surveillance. In particular, I think we need to keep in mind that not all surveillance is created equal. As the Privacy Commissioner’s comments on necessity and proportionality during a pandemic highlight, surveillance conducted by health authorities during a pandemic for the purposes of stopping the pandemic is in no way equivalent to surveillance undertaken by police, or by security services for national security, or by surveillance-capitalist companies who want to commodify and sell your data.
All too often, these distinctions get muddled together when we focus on privacy as a single thing, rather than something that can be positive or negative depending on the context. One of the case studies in my upcoming book (with Natasha Tusikov) is the failed Covid Alert app. One of the things that characterized its development was its focus on privacy above all else. It was designed to minimize data collection, a weird choice during a pandemic, when surveillance of disease transmission could save lives. This choice was made in large part because of dual non-health imperatives: because it was being run on infrastructure provided by surveillance capitalists, who have shown they cannot be trusted with our data, and undertaken against a general backdrop of suspicion of any government surveillance, with no distinction drawn between surveillance conducted by security services and health officials. There’s a lot to unpack there (download the open-access version when it’s released in July!). In any case, a failure to distinguish between good and bad surveillance, if you will, certainly didn’t help.
Keeping this basic point in mind — that the value of privacy is contextual — can help us think more clearly about the challenges posed by privacy and surveillance, and hopefully to jump at fewer shadows in the future.
Over at CIGI. Is it a problem that search engine companies, whose only job is to return information that people can trust and use, have hitched their wagon to a technology that produces falsehoods?
Yes. Yes it is. If companies won’t take their internet-cataloguing responsibilities seriously, we need to reconsider whether we should leave search responsibilities to the private sector:
Exactly how reckless are these companies being? Think about it in terms of how a search tool usually functions. When a user inputs a search term, Google (or Bing) serves up a series of links deemed to be relevant to the user. Although its algorithm remains a black box, Google Search is based in part on the assumption that the number of links that refer to a specific webpage can serve as a proxy for its authoritativeness. …
Now, consider what it means to put a generative AI chatbot on top of this format. As people, myself included, have pointed out in the three months since OpenAI unleashed ChatGPT on an unprepared world, generative AI has a tendency to generate falsehoods. This is because it is merely a complex auto-complete machine. The text that a GPT (generative pretrained transformer) generates — to call what it produces answers is to insult actual thought — is created by the GPT’s calculations of what the next word is likely to be, based on the texts on which the model was “trained” – a training process that is itself the product of underpaid, behind-the-scenes workers, often labouring in horrific circumstances.
That it’s a machine for creating what can only really be called bullshit (following the definition of American moral philosopher Harry Frankfurt: speech produced with no regard as to whether it is true or not) has become comedically clear in the past several days, with Bing’s GPT producing text that is petulant, threatening, whiny and argumentative, and not at all helpful in serving up the world’s knowledge.
Inserting these chatbots into search introduces an enormous degree of uncertainty and unreliability. It’s tantamount to placing a BS-creation machine between the user and the search results. Google and Microsoft are well aware of how unreliable this tech is. While Google’s gaffe has received most of the attention, Bing has also generated its own share of howlers. And both companies explicitly warn their users that they cannot necessarily trust the output that they, as businesses, are serving them.
It’s audacious: People depend on search engines to find information they can use. Now, these companies are telling users that they can’t necessarily trust the information that they provide. These are not the actions of companies that care about supporting the healthy knowledge ecosystems all societies need to survive and thrive. …
Corporate search’s ChatGPT-driven embrace of generative AI may have exhilarated Microsoft and embarrassed Google, but the rest of us should take the opportunity to reconsider the costs of our information ecosystem. We have entrusted the world’s information to companies that have little regard for the essential service they’re supposed to provide.