The exotic versus the mundane ethics of AI
The ethics of AI is like a two-headed dragon. One head represents the long-term worry: AI becomes so advanced that it takes over the world and ultimately destroys humanity. The other head stands for the short-term worry: AI models can “hallucinate” inaccurate information, which can inadvertently spread misinformation. I am not saying that both worries belong in fairy tales the way dragons do, although the long-term one most likely does. The short-term worry is valid, and it is crucial that researchers and AI practitioners continue working to address it. However, neither the short-term nor the long-term risks of AI are what most people who interact with systems like ChatGPT actually think about. Everyday users are not preoccupied with the “paper clip scenario” or with the spread of misinformation. Instead, their concerns revolve around the ethical dilemmas AI poses in their daily lives. While the mainstream media is currently fascinated by the more exotic stuff, namely mathematics revolting against the human race, the mundane worries center on how the technology affects work, creative expression, and relationships.
For instance, on Reddit, one person asks: “I’m using ChatGPT to breeze through freelance work, do you think that is ethical?” Similarly, in a different subreddit, someone else raises the question, “Is it unethical to use ChatGPT for assignments?” In another, someone wonders, “Is it cheating if your spouse has an AI girlfriend?” Yet another post reads, “I have been selling some of my (or the AI’s) work on T-shirts and NFTs. Is it ethical to sell art trained on such a wide array of real artists’ work? Am I in the wrong?”
What intrigues me is that these ethical dilemmas are not exactly new. While generative AI may be a relatively novel tool, the underlying questions have existed for ages. People have long grappled with questions about productivity and automation, as well as creativity and authorship.
Consider, for instance, the further automation and robotization of US auto plants in the 1970s and 1980s. As quotas for car production increased, workers were forced to toil harder and faster, yet their wages did not rise in step. This historical episode connects to the earlier subreddit discussion on freelancing, where the poster was advised to continue with their “secret” automation until it was discovered, at which point the quotas would simply be raised. (In that particular case, the freelancer’s work involved summarizing texts.)
The sentiment surrounding the morality of selling AI-generated art was mixed. Some people saw no issue with it, remarking that copyright law first needed to catch up with the new tools, while others thought it was in fact unethical because the living artists whose works were used to train the models are not being compensated. From a psychological perspective, though, the larger question is not compensation or legality; it is human effort: a piece produced with sweat and time versus something created relatively effortlessly. This question will only become more prominent as we are inundated with AI-generated feature films and other artistic content. Would you value two movies of comparable quality equally if one were produced by an AI and the other by humans? My gut feeling is that some people would not.