{"id":11502,"date":"2023-03-19T19:35:55","date_gmt":"2023-03-19T17:35:55","guid":{"rendered":"https:\/\/metacpc.org\/?p=11502"},"modified":"2023-03-19T19:36:18","modified_gmt":"2023-03-19T17:36:18","slug":"artificial-intelligence-friend-or-foe","status":"publish","type":"post","link":"https:\/\/metacpc.org\/en\/artificial-intelligence-friend-or-foe\/","title":{"rendered":"Artificial Intelligence: Friend or Foe?"},"content":{"rendered":"\n<p class=\"has-text-align-center has-accent-background-color has-background\"><strong><a href=\"https:\/\/znetwork.org\/znetarticle\/artificial-intelligence-friend-or-foe\/\" target=\"_blank\" rel=\"noopener\">Michael Albert and Arash Kolahi | ZNet<\/a><\/strong><\/p>\n\n\n\n<p>In a hypothetical race to claim the mantle of biggest threat to humanity, nuclear war, ecological catastrophe, rising authoritarianism, and new pandemics are still well in front of the pack. But, look there, way back but coming on fast. Is that AI? Is it a friend rushing forward to help us, or another foe rushing forward to bury us?<\/p>\n\n\n\n<p>As a point of departure for this essay, in their recent Op Ed in&nbsp;<a href=\"https:\/\/www.nytimes.com\/2023\/03\/08\/opinion\/noam-chomsky-chatgpt-ai.html\" target=\"_blank\" rel=\"noreferrer noopener\">The New York Times<\/a>,&nbsp;Noam Chomsky and two of his academic colleagues\u2014Ian Roberts, a linguistics professor at the University of Cambridge, and Jeffrey Watumull, a philosopher who is also the director of artificial intelligence at a tech company\u2014tell us that \u201chowever useful these [AI] programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. 
These differences place significant limitations on what these programs can do, encoding them with ineradicable defects\u2026.\u201d<\/p>\n\n\n\n<p>They continue: \u201cUnlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility.\u201d<\/p>\n\n\n\n<p>Readers might take these comments to mean current AI so differs from how humans communicate that predictions that AI will displace humans in any but a few minor domains are hype. The new chatbots, painters, programmers, robots, and what all are impressive engineering projects but nothing to get overly agitated about. Current AI handles language in ways very far from what now allows humans to use language as well as we do. More, current AIs\u2019 neural networks and large language models are encoded with \u201cineradicable defects\u201d that prevent the AIs from using language and thinking remotely as well as people. The Op Ed\u2019s reasoning feels like a scientist hearing talk about a perpetual motion machine that is going to revolutionize everything. The scientist has theories that tell her a perpetual motion machine is impossible. The scientist therefore says the hubbub about some company offering one is hype. More, the scientist knows the hubbub can\u2019t be true even without a glance at what the offered machine is in fact doing. It may look like perpetual motion, but it can\u2019t be, so it isn\u2019t. But what if the scientist is right that it is not perpetual motion but nonetheless the machine is rapidly gaining users and doing harm, with much more harm to come?<\/p>\n\n\n\n<p>Chomsky, Roberts, and Watumull say humans use language as adroitly as we do because we have in our minds a human language faculty that includes certain properties. 
If we didn\u2019t have that, or if our faculty wasn\u2019t as restrictive as it is, then we would be more like birds or bees, dogs or chimps, but not like ourselves. More, one surefire way we can know that another language-using system doesn\u2019t have a language faculty with our language faculty\u2019s features is if it can do just as well with a totally made-up nonhuman language as it can do with a specifically human language like English or Japanese. The Op Ed argues that the modern chatbots are of just that sort. It deduces that they cannot be linguistically competent in the same ways that humans are linguistically competent.<\/p>\n\n\n\n<p>Applied more broadly, the argument is that humans have a language faculty, a visual faculty, and what we might call an explanatory faculty that provide the means by which we converse, see, and develop explanations. These faculties permit us a rich range of abilities. As a condition of doing so, however, they also impose limits on other conceivable abilities. In contrast, current AIs do just as well with languages that humans can\u2019t possibly use as with ones we can use. This reveals that they have nothing remotely like the innate human language faculty since, if they had that, it would rule out the nonhuman languages. But does this mean AIs cannot, in principle, achieve competency as broad, deep, and even creative as ours because they do not have faculties with the particular restrictive properties that our faculties have? 
Does it mean that whatever they do when they speak sentences, when they describe things in their visual field, or when they offer explanations for events we ask them about\u2014not to mention when they pass the bar exam in the 90th percentile or compose sad or happy, reggae or rock songs to order\u2014they not only aren\u2019t doing what humans do, but also they can\u2019t achieve outcomes of the quality humans achieve?<\/p>\n\n\n\n<p>If the Op Ed said current AIs don\u2019t have features like we have so they can\u2019t do things the way we do things, that would be fine. In that case, it could be true that AIs can\u2019t do things as well as we do them, but it could also be true that for many types of exams, SATs and Bar Exams, for example, they can outperform the vast majority of the population. What happens tomorrow with GPT 4 and in a few months with GPT 5, or in a year or two with GPT 6 and 7, much less later with GPT 10? What if, as seems to be the case, current AIs have different features than humans but those different features let them do many things we do, differently than we do them, but as well or even better than we do them?<\/p>\n\n\n\n<p>The logical problem with the Op Ed is that it seems to assume that only human methods can, in many cases, attain human-level results. The practical problem is that the Op Ed may cause many people to think that nothing very important is going on or even could be going on, without even examining what is in fact going on. But what if something very important is going on? And if so, does it matter?<\/p>\n\n\n\n<p>If the Op Ed focused only on the question \u201cis contemporary AI intelligent in the same way humans are intelligent,\u201d the authors\u2019 answer is no, and in this they are surely right. 
That the authors then emphasize that they \u201cfear that the most popular and fashionable strain of AI\u2014machine learning\u2014will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge,\u201d is also fair. Likewise, it is true that when current programs pass the Turing test, if they haven\u2019t already done so, it won\u2019t mean that they think and talk the same way we do, or that how they passed the test will tell us anything about how we converse or think. But their passing the test will tell us that we can no longer hear or read their words and from that alone distinguish their thoughts and words from our thoughts and words. But will this matter?<\/p>\n\n\n\n<p>Chomsky, Roberts, and Watumull\u2019s essay seems to imply that AI\u2019s methodological difference from human faculties means that what AI programs can do will be severely limited compared to what humans can do. The authors acknowledge that what AI can do may be minimally useful (or misused), but they add that nothing much is going on comparable to human intelligence or creativity. Cognitive science is not advancing and may be set back. AIs can soundly outplay every human over a chessboard. Yes, but so what? These dismissals are fair enough, but does the fact that current AI generates text, pictures, software, counseling, medical care, exam answers, or whatever else by a different path than humans arrive at very similar outputs mean that current AI didn\u2019t arrive there at all? Does the fact that current AI functions differently than we do necessarily mean, in particular, that it cannot attain linguistic results like those we attain? 
Does an AI being able to understand nonhuman languages necessarily indicate that the AI cannot exceed human capacities in human languages, or in other areas?<\/p>\n\n\n\n<p>Programs able to do information-based linguistic tasks are very different, we believe, from tractors able to lift more weight than humans, or hand calculators able to handle numbers better than humans. This is partly because AI may take various tasks away from humans. In cases of onerous, unappealing tasks, this could be socially beneficial, supposing we fairly apportion the remaining work. But what about when capitalist priorities impose escalating unemployment? That OpenAI and other capitalist AI firms exploit cheap overseas labor to label pictures for AI visual training ought not come as a surprise. But perhaps just as socially important, what about the psychological implications of AI growth?<\/p>\n\n\n\n<p>As machines became better able to lift for us, humans became less able to lift. As machines became better able to perform mathematical calculations for us, humans became less able to perform mathematical calculations. Having lost some personal capacity or inclination to lift or to calculate was no big deal. The benefits outweighed the deficits. Even programs that literally trounce the best human players at chess, go, video games, and poker (though the programs do not play the way humans do) had only a fleeting psychological effect. Humans still do those very human things. Humans even learn from studying the games the programs play\u2014though not enough to get anywhere near as good as the programs. 
But what happens if AI becomes able to write letters better than humans, write essays better, compose music better, plan agendas better, write software better, produce images better, answer questions better, construct films better, design buildings better, teach better, converse better, and perhaps even provide elderly care, child care, medical diagnoses, and even mental health counseling better\u2014or, in each case, forget about the programs getting better than us, what happens when programs function well enough to be profitable replacements for having people do such things?<\/p>\n\n\n\n<p>This isn\u2019t solely about increased unemployment with all its devastating consequences. That is worrisome enough, but an important part of what makes humans human is to engage in creative work. Will the realm of available creative work be narrowed by AI so that only a few geniuses will be able to do it once AI is doing most writing, therapy, composing, agenda setting, etc.? Is it wrong to think that in that case what humans would be pushed aside from could leave humans less human?<\/p>\n\n\n\n<p>The Op Ed argues that AI now does and maybe always will do human-identified things fundamentally differently than humans do them. But does that imply, as we think many&nbsp;<strong>Times&nbsp;<\/strong>readers will think it does, that AIs won\u2019t do such things as well or even better than most or perhaps even all humans? Will AIs be able to simulate human emotions and all-important human authenticity in the songs and paintings they make? Maybe not, but even if we ignore the possibility of AIs being explicitly used for ill, don\u2019t the above observations raise highly consequential and even urgent questions? Should we be pursuing AI at our current breakneck pace?<\/p>\n\n\n\n<p>Of course, when AIs are used to deceive and manipulate, to commit fraud, to spy, to hack, and to kill, among other nefarious possibilities, so much the worse. 
Worse still if AIs become autonomous with those anti-social agendas. Even without watching professors tell of AIs already passing graduate-level examinations, even without watching programmers tell of AIs already outputting code faster and more accurately than they and their programmer friends can, and even without watching AIs already audibly converse with their engineers about anything at all, including even their \u201cfeelings\u201d and \u201cmotives\u201d, it ought to be clear that AI can have very powerful social implications even as its methods shed zero light on how humans function.<\/p>\n\n\n\n<p>Another observation of the&nbsp;<strong>Times&nbsp;<\/strong>Op Ed is that AIs of the current sort have nothing like a human moral faculty. True, but does that imply they cannot have morally guided results? We would bet, instead, that AI programs can and in many cases already do incorporate moral rules and norms. That is why poor populations are being exploited financially and psychologically to label countless examples of porn as porn\u2014exploitative immorality in service of what, morality or just phony propriety? The problem is, who determines what AI-embedded moral codes will promote and hinder? In current AIs, such a code will either be programmed in or learned by training on human examples. If programmed in, who will decide its content? If learned from examples, who will choose the examples? So the issue isn\u2019t that AI inevitably has no morality. The issue is that AI can have bad morality and perpetuate biases such as racism, sexism, or classism learned from either programmers or training examples.<\/p>\n\n\n\n<p>Even regarding a language faculty, as the Op Ed indicates, there is certainly not one like ours in current AI. But is ours the only kind of faculty that can sustain language use? 
Whether the human language faculty emerged from a million years of slow evolution, as most who hear about this stuff think linguists must believe, or it emerged overwhelmingly over a very short duration from a lucky mutation and then underwent only quite modest further evolution while it spread widely, as Chomsky compellingly argues, it certainly exists. And it certainly is fundamental to human language. But why isn\u2019t the fully trained neural network of an AI a language faculty, albeit one different from ours? It generates original text. It answers queries. It is grammatical. Before long (if not already) it will converse better than most humans. It can even do all this in diverse styles. Answer my query about quantum mechanics or market competition, please. Answer like Hemingway. Answer like Faulkner. Egad, answer like Dylan. So why isn\u2019t it a language faculty too\u2014albeit unlike the human one and produced not by extended evolution or by rapid luck, but by training a neural network language model?<\/p>\n\n\n\n<p>It is true that current AI can work with human languages and also, supposing there were sufficient data to train it, with languages the human faculty cannot understand. It is also true that after training, an AI can in some respects do things the human language faculty wouldn\u2019t permit. But why does being able to work with nonhuman languages mean that such a faculty must be impoverished regarding what it can do with human languages? The AI\u2019s language faculty isn\u2019t an infinitely malleable, useless blank slate. It can\u2019t work with any language it isn\u2019t trained on. Indeed, the untrained neural network can\u2019t converse in a human language or in a nonhuman language. Once trained, however, does its different flexibility about what it makes possible and what it excludes make it not a language faculty? Or does its different flexibility just make it not a human-type language faculty? 
And does it even matter for social as opposed to scientific concerns?<\/p>\n\n\n\n<p>Likewise, isn\u2019t an AI faculty that can look at scenes and discern and describe what\u2019s in them, and can even identify what is there but out of place, and that can do so as accurately as people, or even more accurately, a visual faculty, though again, certainly not the same as a human visual faculty?<\/p>\n\n\n\n<p>And likewise for a drawing faculty that draws, a calculating faculty that calculates, and so on. For sure, despite taking inspiration from human experiences and evidence, as AI programmers have done, none of these AI faculties are much like the human versions. They do not do what they do the way we humans do what we do. But unless we want to say that the contingent, historically lucky human ways of information processing are the only ways of information processing that can handle language as intelligently as humans can, and are the only ways of information processing that can not only produce and predict but also explain, we don\u2019t see why true observations that current AI teaches us nothing about how humans operate imply that current AI can\u2019t, in two or five, or ten or twenty years, be indistinguishable from human intelligence, albeit derived differently than human intelligence.<\/p>\n\n\n\n<p>More, what even counts as intelligence? What counts as creativity and providing explanations? What counts as understanding? Looking at current reports, videos, etc., even if there is a whole lot of profit-seeking hype in them, as we are sure is the case, we think AI programs in some domains (for example, playing complex games, protein folding, and finding patterns in masses of data) already do better than humans who are best at such pursuits, and already do better than most humans, in many more domains.<\/p>\n\n\n\n<p>For example, how many people can produce artwork better than current AIs? We sure can\u2019t. 
How many artists can do so even today, much less a year from now? A brilliant friend just yesterday told of having to write a complex letter for his work. He asked ChatGPT to do it. In a long eye blink he had it. He said it was flawless and he admitted it was better than he would have produced. And this was so even though he has written hundreds of letters. Is this no more socially concerning than when decades ago people first used a camera, a word processor, a spreadsheet, or a spell checker? Is this just another example of technology making some tasks easier? Do AIs that already do a whole lot of tasks previously thought to be purely human count as evidence that AIs can do that much and likely much more? Or, oddly, does what they do count as evidence that they will never do that much or more?<\/p>\n\n\n\n<p>We worry that to dismiss the importance of current AIs because they don\u2019t embody human mechanisms risks obscuring that AI is already having widespread social impact that ought to concern us for practical, psychological, and perhaps security reasons. We worry that such dismissals may imply AIs don\u2019t need very substantial regulation. We have had effective moratoriums on human cloning, among other uses of technology. The window for regulating AI, however, is closing fast. We worry that the task at hand isn\u2019t so much to dispel exaggerated hype about AI as it is to acknowledge AI\u2019s growing capacities and understand not only its potential benefits but also its imminent and longer-run dangers so we can conceive how to effectively regulate it. We worry that the really pressing regulatory task could be undermined by calling what is occurring \u201csuperficial and dubious\u201d or \u201chigh-tech plagiarism\u201d so as to counter hype.<\/p>\n\n\n\n<p>Is intelligent regulation urgent? To us, it seems obvious it is. And are we instead seeing breakneck advance? To us, it seems obvious we are. 
Human ingenuity can generate great leaps that appear like magic and even augur seeming miracles. Unopposed capitalism can turn even great leaps into pain and horror. To avoid that, we need thought and activism that wins regulations.<\/p>\n\n\n\n<p>Technologies like ChatGPT don\u2019t exist in a vacuum. They exist within societies and their defining political, economic, community, and kinship institutions.<\/p>\n\n\n\n<p>The US is in the midst of a mental health crisis with virtually every mental health red-flag metric off the charts: suicides and \u2018deaths of despair\u2019 are at historic levels. Alienation, stress, anxiety, and loneliness are rampant. According to the American Psychological Association\u2019s&nbsp;<strong>Stress in America<\/strong> survey, the primary drivers of our breakdown are systemic: economic anxiety, systemic oppressions, alienation from our political, economic, and societal institutions. Capitalism atomizes us. It then commodifies meaningful connections into meaninglessness.&nbsp;<\/p>\n\n\n\n<p>Social media algorithms calculate the right hit that never truly satisfies. They keep us reaching for more. In the same way that social media is engineered to elicit addiction through user-generated content, language model AI has the potential to be far more addictive, and damaging. Particularly for vulnerable populations, AI can be fine-tuned to learn and exploit each person\u2019s vulnerabilities\u2014generating content and even presentation style specifically to hook users in.&nbsp;<\/p>\n\n\n\n<p>In a society with rampant alienation, AI can exploit our need for connection. Imagine millions tied into AI subscription services, desperate for connection.&nbsp;The profit motive will incentivize AI companies to not just lure more and more users, but to keep them coming back.<\/p>\n\n\n\n<p>Once tied in, the potential for misinformation &amp; propagandization greatly exceeds even social media. 
If AI replaces human labor in human-defining fields, what then is left of \u201cbeing human\u201d? Waiting for AI guidance? Waiting for AI orders?<\/p>\n\n\n\n<p>Clarity about what to do can only emerge from further understanding what is happening. But even after a few months of AI experiences, suggestions for minimal regulations seem pretty easy to come by. For example:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Legislate that all AI software &amp; algorithms that have public impact must be open sourced, allowing their source code to be audited by the public.<\/li>\n\n\n\n<li>Establish a new regulatory body, similar to the FDA, for public-impact software.<\/li>\n\n\n\n<li>Legislate that all AI-generated content, whether voice, chat, image, video, etc., must include a clearly visible\/audible, standardized watermark\/voicemark stating that the content was generated by AI, which the user must acknowledge.<\/li>\n\n\n\n<li>Legislate that all AI-generated content provide a list of all specific sources used\/learned to generate that particular content, including weights.<\/li>\n\n\n\n<li>Legislate that any firm, organization, or individual creating and distributing intentionally misleading and\/or manipulative false AI-created content be subject to severe penalties.<\/li>\n\n\n\n<li>Legislate that no corporation, public or private, can replace workers with AI unless the government okays the step as being consistent with human priorities (not just profit seeking), the workforce of the workplace votes in favor of the change as being consistent with workforce conditions and desires (and not just profit seeking), and the replaced workers continue to receive their current salary from their old firm until they are employed at a new firm.<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>In a hypothetical race to claim the mantle of biggest threat to humanity, nuclear war, ecological catastrophe, rising authoritarianism, and new pandemics are still well in 
front of the pack. But, look there, way back but coming on fast. Is that AI? Is it a friend rushing forward to help us, or another foe rushing forward to bury us?<\/p>\n","protected":false},"author":5,"featured_media":11503,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"template-parts\/content-blog.php","format":"standard","meta":{"_acf_changed":false,"_eb_attr":"","_EventAllDay":false,"_EventTimezone":"","_EventStartDate":"","_EventEndDate":"","_EventStartDateUTC":"","_EventEndDateUTC":"","_EventShowMap":false,"_EventShowMapLink":false,"_EventURL":"","_EventCost":"","_EventCostDescription":"","_EventCurrencySymbol":"","_EventCurrencyCode":"","_EventCurrencyPosition":"","_EventDateTimeSeparator":"","_EventTimeRangeSeparator":"","_EventOrganizerID":[],"_EventVenueID":[],"_OrganizerEmail":"","_OrganizerPhone":"","_OrganizerWebsite":"","_VenueAddress":"","_VenueCity":"","_VenueCountry":"","_VenueProvince":"","_VenueState":"","_VenueZip":"","_VenuePhone":"","_VenueURL":"","_VenueStateProvince":"","_VenueLat":"","_VenueLng":"","_VenueShowMap":false,"_VenueShowMapLink":false,"_tribe_events_control_status":"","_tribe_events_control_status_canceled_reason":"","_tribe_events_control_status_postponed_reason":"","_tribe_events_control_online":"","_tribe_events_control_online_url":"","footnotes":""},"categories":[61],"tags":[],"class_list":["post-11502","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-what-we-like-en"],"acf":[],"_links":{"self":[{"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/posts\/11502","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/comments?post=11502"}],"version-history
":[{"count":5,"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/posts\/11502\/revisions"}],"predecessor-version":[{"id":11509,"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/posts\/11502\/revisions\/11509"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/media\/11503"}],"wp:attachment":[{"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/media?parent=11502"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/categories?post=11502"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/metacpc.org\/en\/wp-json\/wp\/v2\/tags?post=11502"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}