Artificial Intelligence: Friend or Foe?

Michael Albert and Arash Kolahi | ZNet

In a hypothetical race to claim the mantle of biggest threat to humanity, nuclear war, ecological catastrophe, rising authoritarianism, and new pandemics are still well in front of the pack. But, look there, way back but coming on fast. Is that AI? Is it a friend rushing forward to help us, or another foe rushing forward to bury us?

As a point of departure for this essay, in their recent Op Ed in The New York Times Noam Chomsky and two of his academic colleagues—Ian Roberts, a linguistics professor at the University of Cambridge, and Jeffrey Watumull, a philosopher who is also the director of artificial intelligence at a tech company—tell us that “however useful these [AI] programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects….”

They continue: “Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility.”

Readers might take these comments to mean that current AI so differs from how humans communicate that predictions that AI will displace humans in any but a few minor domains are hype. The new chatbots, painters, programmers, robots, and what all are impressive engineering projects but nothing to get overly agitated about. Current AI handles language in ways very far from what now allows humans to use language as well as we do. More, current AIs' neural networks and large language models are encoded with "ineradicable defects" that prevent the AIs from using language and thinking remotely as well as people. The Op Ed's reasoning feels like that of a scientist hearing talk about a perpetual motion machine that is going to revolutionize everything. The scientist has theories that tell her a perpetual motion machine is impossible. The scientist therefore says the hubbub about some company offering one is hype. More, the scientist knows the hubbub can't be true even without a glance at what the offered machine is in fact doing. It may look like perpetual motion, but it can't be, so it isn't. But what if the scientist is right that it is not perpetual motion, but the machine is nonetheless rapidly gaining users and doing harm, with much more harm to come?

Chomsky, Roberts, and Watumull say humans use language as adroitly as we do because we have in our minds a human language faculty that includes certain properties. If we didn’t have that, or if our faculty wasn’t as restrictive as it is, then we would be more like birds or bees, dogs or chimps, but not like ourselves. More, one surefire way we can know that another language-using system doesn’t have a language faculty with our language faculty’s features is if it can do just as well with a totally made up nonhuman language as it can do with a specifically human language like English or Japanese. The Op Ed argues that the modern chatbots are of just that sort. It deduces that they cannot be linguistically competent in the same ways that humans are linguistically competent.

Applied more broadly, the argument is that humans have a language faculty, a visual faculty, and what we might call an explanatory faculty that provide the means by which we converse, see, and develop explanations. These faculties permit us a rich range of abilities. As a condition of doing so, however, they also impose limits on other conceivable abilities. In contrast, current AIs do just as well with languages that humans can't possibly use as with ones we can use. This reveals that they have nothing remotely like the innate human language faculty since, if they had that, it would rule out the nonhuman languages. But does this mean AIs cannot, in principle, achieve competence as broad, deep, and even as creative as ours because they do not have faculties with the particular restrictive properties that our faculties have? Does it mean that whatever they do when they speak sentences, when they describe things in their visual field, or when they offer explanations for events we ask them about—not to mention when they pass the bar exam in the 90th percentile or compose sad or happy, reggae or rock songs to order—they not only aren't doing what humans do, but also can't achieve outcomes of the quality humans achieve?

If the Op Ed said current AIs don't have features like ours, so they can't do things the way we do things, that would be fine. In that case, it could be true that AIs can't do things as well as we do them, but it could also be true that for many types of exams, SATs and Bar Exams, for example, they can outperform the vast majority of the population. What happens tomorrow with GPT 4 and in a few months with GPT 5, or in a year or two with GPT 6 and 7, much less later with GPT 10? What if, as seems to be the case, current AIs have different features than humans do, but those different features let them do many things we do, differently than we do them but as well as or even better than we do them?

The logical problem with the Op Ed is that it seems to assume that only human methods can, in many cases, attain human-level results. The practical problem is that the Op Ed may cause many people to think that nothing very important is going on or even could be going on, without even examining what is in fact going on. But what if something very important is going on? And if so, does it matter?

Insofar as the Op Ed addresses the question "is contemporary AI intelligent in the same way humans are intelligent," the authors' answer is no, and in this they are surely right. That the authors then emphasize that they "fear that the most popular and fashionable strain of AI—machine learning—will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge" is also fair. Likewise, it is true that when current programs pass the Turing test, if they haven't already done so, it won't mean that they think and talk the same way we do, or that how they passed the test will tell us anything about how we converse or think. But their passing the test will tell us that we can no longer hear or read their words and from that alone distinguish their thoughts and words from our thoughts and words. But will this matter?

Chomsky, Roberts, and Watumull's essay seems to imply that AI's methodological difference from human faculties means that what AI programs can do will be severely limited compared to what humans can do. The authors acknowledge that what AI can do may be minimally useful (or misused), but they add that nothing much is going on comparable to human intelligence or creativity. Cognitive science is not advancing and may be set back. AIs can soundly outplay every human over a chessboard. Yes, but so what? These dismissals are fair enough, but does the fact that current AI generates text, pictures, software, counseling, medical care, exam answers, or whatever else by a different path than the one humans take to very similar outputs mean that current AI didn't arrive there at all? Does the fact that current AI functions differently than we do necessarily mean, in particular, that it cannot attain linguistic results like those we attain? Does an AI being able to understand nonhuman languages necessarily indicate that the AI cannot exceed human capacities in human languages, or in other areas?

Programs able to do information-based linguistic tasks are very different, we believe, than tractors able to lift more weight than humans, or hand calculators able to handle numbers better than humans. This is partly because AI may take various tasks away from humans. In cases of onerous, unappealing tasks, this could be socially beneficial, supposing we fairly apportion the remaining work. But what about when capitalist priorities impose escalating unemployment? That OpenAI and other capitalist AI firms exploit cheap overseas labor to label pictures for AI visual training ought not come as a surprise. But perhaps just as socially important, what about the psychological implications of AI growth?

As machines became better able to lift for us, humans became less able to lift. As machines became better able to perform mathematical calculations for us, humans became less able to perform mathematical calculations. Having lost some personal capacity or inclination to lift or to calculate was no big deal. The benefits outweighed the deficits. Even programs that literally trounce the best human players at chess, go, video games, and poker (though the programs do not play the way humans do) had only a fleeting psychological effect. Humans still do those very human things. Humans even learn from studying the games the programs play—though not enough to get anywhere near as good as the programs. But what happens if AI becomes able to write letters better than humans, write essays better, compose music better, plan agendas better, write software better, produce images better, answer questions better, construct films better, design buildings better, teach better, converse better, and perhaps even provide elderly care, child care, medical diagnoses, and even mental health counseling better—or, in each case, forget about the programs getting better than us, what happens when programs function well enough to be profitable replacements for having people do such things?

This isn't solely about increased unemployment with all its devastating consequences. That is worrisome enough, but an important part of what makes humans human is to engage in creative work. Will the realm of available creative work be narrowed by AI so that only a few geniuses will be able to do it once AI is doing most writing, therapy, composing, agenda setting, etc.? Is it wrong to think that, in that case, being pushed aside from such work could leave humans less human?

The Op Ed argues that AI now does and maybe always will do human-identified things fundamentally differently than humans do them. But does that imply, as we think many Times readers will think it does, that AIs won't do such things as well as or even better than most or perhaps even all humans? Will AIs be able to put simulated human emotion and all-important human authenticity into the songs and paintings they make? Maybe not, but even if we ignore the possibility of AIs being explicitly used for ill, don't the above observations raise highly consequential and even urgent questions? Should we be pursuing AI at our current breakneck pace?

Of course, when AIs are used to deceive and manipulate, to commit fraud, to spy, to hack, and to kill, among other nefarious possibilities, so much the worse. Not to mention what happens if AIs become autonomous with those anti-social agendas. Even without watching professors tell of AIs already passing graduate-level examinations, even without watching programmers tell of AIs already outputting code faster and more accurately than they and their programmer friends can, and even without watching AIs already audibly converse with their engineers about anything at all, including even their "feelings" and "motives", it ought to be clear that AI can have very powerful social implications even as its methods shed zero light on how humans function.

Another observation of the Times Op Ed is that AIs of the current sort have nothing like a human moral faculty. True, but does that imply they cannot have morally guided results? We would bet, instead, that AI programs can and in many cases already do incorporate moral rules and norms. That is why poor populations are being exploited financially and psychologically to label countless examples of porn as porn—exploitative immorality in service of what, morality or just phony propriety? The problem is, who determines what AI-embedded moral codes will promote and hinder? In current AIs, such a code will either be programmed in or learned by training on human examples. If programmed in, who will decide its content? If learned from examples, who will choose the examples? So the issue isn’t that AI inevitably has no morality. The issue is that AI can have bad morality and perpetuate biases such as racism, sexism, or classism learned from either programmers or training examples.
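
To make that distinction concrete, here is a minimal sketch, in Python, of the two routes just described: a moral code that is programmed in directly versus one taken from human-labeled examples. It is only our illustration, not anyone's actual system; the rule list, the labels, and the toy "learned" filter are all hypothetical stand-ins.

```python
# Illustrative sketch only: two ways a "moral code" can end up inside an AI system.

# Route 1: programmed in. A programmer decides the rules directly.
BLOCKED_PHRASES = {"targeted harassment", "scam pitch"}  # hypothetical hand-written rules

def rule_based_filter(text: str) -> bool:
    """Return True if the text passes the hand-written rules."""
    return not any(phrase in text.lower() for phrase in BLOCKED_PHRASES)

# Route 2: learned from examples. Annotators decide which examples count as acceptable.
# This toy stand-in just looks up the labels; a real system would generalize from them.
LABELED_EXAMPLES = {
    "a friendly greeting": True,     # labeled acceptable by human annotators
    "targeted harassment": False,    # labeled unacceptable by human annotators
}

def example_based_filter(text: str) -> bool:
    """Return the label the human-curated examples imply (default: allow)."""
    return LABELED_EXAMPLES.get(text.lower(), True)

if __name__ == "__main__":
    print(rule_based_filter("here is a scam pitch"))    # False: blocked by a programmer's rule
    print(example_based_filter("targeted harassment"))  # False: blocked by annotators' labels
```

Either way, some person or group chooses the rules or chooses the examples, which is exactly the question of who decides what an AI-embedded moral code will promote and hinder.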

Even regarding a language faculty, as the Op Ed indicates, certainly there is not one like ours in current AI. But is ours the only kind of faculty that can sustain language use? Whether the human language faculty emerged from a million years of slow evolution, as most people who hear about this topic assume linguists must believe, or instead emerged over a very short span from a lucky mutation and then underwent only quite modest further evolution while it spread widely, as Chomsky compellingly argues, it certainly exists. And it certainly is fundamental to human language. But why isn't the fully trained neural network of an AI a language faculty, albeit one different from ours? It generates original text. It answers queries. It is grammatical. Before long (if not already) it will converse better than most humans. It can even do all this in diverse styles. Answer my query about quantum mechanics or market competition, please. Answer like Hemingway. Answer like Faulkner. Egad, answer like Dylan. So why isn't it a language faculty too—albeit unlike the human one and produced not by extended evolution or by rapid luck, but by training a neural network language model?

It is true that current AI can work with human languages and also, supposing there were sufficient data to train it, with languages the human faculty cannot understand. It is also true that after training, an AI can in some respects do things the human language faculty wouldn't permit. But why does being able to work with nonhuman languages mean that such a faculty must be impoverished regarding what it can do with human languages? The AI's language faculty isn't an infinitely malleable, useless blank slate. It can't work with any language it isn't trained on. Indeed, the untrained neural network can't converse in a human language or in a nonhuman language. Once trained, however, does its different flexibility about what it makes possible and what it excludes make it not a language faculty? Or does its different flexibility just make it not a human-type language faculty? And does it even matter for social as opposed to scientific concerns?

Likewise, isn't an AI faculty that can look at scenes, discern and describe what's in them, and even identify what is present but out of place, and that can do so as accurately as people or even more accurately, a visual faculty, though again certainly not the same as a human visual faculty?

And likewise for a drawing faculty that draws, a calculating faculty that calculates, and so on. For sure, despite taking inspiration from human experiences and evidence, as AI programmers have done, none of these AI faculties are much like the human versions. They do not do what they do the way we humans do what we do. But unless we want to say that the contingent, historically lucky human ways of information processing are the only ways of information processing that can handle language as intelligently as humans can, and are the only ways of information processing that can not only produce and predict but also explain, we don't see why true observations that current AI teaches us nothing about how humans operate imply that current AI can't, in two or five, or ten or twenty years, be indistinguishable from human intelligence, albeit derived differently than human intelligence.

More, what even counts as intelligence? What counts as creativity, or as providing explanations? What counts as understanding? Looking at current reports, videos, etc., even if there is a whole lot of profit-seeking hype in them, as we are sure is the case, we think AI programs in some domains (for example playing complex games, protein folding, and finding patterns in masses of data) already do better than the humans who are best at such pursuits, and already do better than most humans in many more domains.

For example, how many people can produce artwork better than current AIs? We sure can't. How many artists can do so even today, much less a year from now? A brilliant friend just yesterday told of having to write a complex letter for his work. He asked ChatGPT to do it. In a long eye blink he had it. He said it was flawless, and he admitted it was better than he would have produced. And this was so even though he has written hundreds of letters. Is this no more socially concerning than when decades ago people first used a camera, a word processor, a spreadsheet, or a spell checker? Is this just another example of technology making some tasks easier? Do AIs that already do a whole lot of tasks previously thought to be purely human count as evidence that AIs can do that much and likely much more? Or, oddly, does what they do count as evidence that they will never do that much or more?

We worry that to dismiss the importance of current AIs because they don't embody human mechanisms risks obscuring that AI is already having widespread social impact that ought to concern us for practical, psychological, and perhaps security reasons. We worry that such dismissals may imply AIs don't need very substantial regulation. We have had effective moratoriums on human cloning, among other uses of technology. The window for regulating AI, however, is closing fast. We worry that the task at hand isn't so much to dispel exaggerated hype about AI as it is to acknowledge AI's growing capacities and understand not only its potential benefits but also its imminent and longer run dangers so we can conceive how to effectively regulate it. We worry that the really pressing regulatory task could be undermined by calling what is occurring "superficial and dubious" or "high-tech plagiarism" so as to counter hype.

Is intelligent regulation urgent? To us, it seems obvious it is. And are we instead seeing breakneck advance? To us, it seems obvious we are. Human ingenuity can generate great leaps that appear like magic and even augur seeming miracles. Unopposed capitalism can turn even great leaps into pain and horror. To avoid that, we need thought and activism that wins regulations.

Technologies like ChatGPT don’t exist in a vacuum. They exist within societies and their defining political, economic, community, and kinship institutions.

The US is in the midst of a mental health crisis, with virtually every mental health red flag metric off the charts: Suicides and 'deaths of despair' are at historic levels. Alienation, stress, anxiety, and loneliness are rampant. According to the American Psychological Association's Stress in America survey, the primary drivers of our breakdown are systemic: economic anxiety, systemic oppression, and alienation from our political, economic, and societal institutions. Capitalism atomizes us. It then commodifies meaningful connections into meaninglessness.

Social media algorithms calculate the right hit that never truly satisfies. They keep us reaching for more. In the same way that social media is engineered to elicit addiction through user-generated content, language-model AI has the potential to be far more addictive and damaging. Particularly for vulnerable populations, AI can be fine-tuned to learn and exploit each person's vulnerabilities, generating content and even presentation style specifically to hook users in.

In a society with rampant alienation, AI can exploit our need for connection. Imagine millions of people, desperate for connection, tied into AI subscription services. The profit motive will incentivize AI companies not just to lure more and more users but to keep them coming back.

Once users are tied in, the potential for misinformation and propagandization greatly exceeds even that of social media. If AI replaces human labor in human-defining fields, what then is left of "being human"? Waiting for AI guidance? Waiting for AI orders?

Clarity about what to do can only emerge from further understanding what is happening. But even after a few months of AI experiences, suggestions for minimal regulations seem pretty easy to come by. For example:

  • Legislate that all AI software and algorithms that have public impact must be open-sourced, allowing their source code to be audited by the public.
  • Establish a new regulatory body, similar to the FDA, for public impact software.
  • Legislate that all AI-generated content, whether voice, chat, image, video, etc., must include a clearly visible or audible, standardized watermark or voicemark stating that the content was generated by AI, which the user has to acknowledge (see the illustrative sketch after this list).
  • Legislate that all AI-generated content provide a list of all the specific sources used or learned from to generate that particular content, including their weights.
  • Legislate that any firm, organization, or individual creating and distributing intentionally misleading and/or manipulative false AI-created content be subject to severe penalties.
  • Legislate that no corporation, public or private, can replace workers with AI unless the government okays the step as being consistent with human priorities (not just profit seeking), the workforce of the workplace votes in favor of the change as being consistent with workforce conditions and desires (and not just profit seeking), and the replaced workers continue to receive their current salary from their old firm until they are employed at a new firm.
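
To make the third proposal concrete, here is a minimal sketch, in Python, of what a standardized, user-acknowledged AI-content disclosure might look like. It is only our illustration under stated assumptions, not an existing law, standard, or product; the disclosure text, function names, and acknowledgement flow are all hypothetical.

```python
# Illustrative sketch only: a hypothetical standardized AI-content disclosure
# that the user must acknowledge before the content is shown.

AI_DISCLOSURE = "NOTICE: This content was generated by AI."  # hypothetical standard wording

def label_ai_content(content: str) -> str:
    """Prepend the standardized, clearly visible disclosure to AI-generated text."""
    return f"{AI_DISCLOSURE}\n\n{content}"

def deliver_with_acknowledgement(labeled: str) -> str:
    """Refuse content that lacks the disclosure; require the user to acknowledge it."""
    if not labeled.startswith(AI_DISCLOSURE):
        raise ValueError("Content is missing the required AI disclosure.")
    answer = input("Type 'yes' to acknowledge that this content was generated by AI: ")
    if answer.strip().lower() != "yes":
        raise RuntimeError("User did not acknowledge the AI disclosure.")
    return labeled

if __name__ == "__main__":
    letter = label_ai_content("Dear colleague, ...")  # stand-in for hypothetical AI output
    print(deliver_with_acknowledgement(letter))
```

For images, audio, or video the same idea would apply, with a visible watermark or audible voicemark in place of the text notice; what the proposal standardizes is the disclosure and the required acknowledgement, not any particular format.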
