
AI Ethics: What Do Religious Leaders Think?

Ethics | Binazir Sankibayeva | Issue 153 (May - Jun 2023)



In This Article

  • Scientists believe that AI can contribute to combating the climate crisis as long as innovators make climate decision-making processes local, democratic, and open.
  • Technology has to be used for the improvement of human life and should not leave anyone behind. It should not harm but serve humanity; therefore, human beings must be in control of it, not vice versa.

Are you struggling to come up with fresh content ideas for your blog, website, or social media channels? If so, I have great news for you. I'm excited to share with you "1000+ ChatGPT Prompts for Business". This collection contains over 1000 pre-written prompts that you can easily copy and paste into ChatGPT to help you generate content, saving you time and effort. Get it here for FREE (Limited Time Only).

With the launch of chatbots in late 2022, ads like the one above are becoming more common. As exciting as this appears, many are not sure where this new era is leading. One major question about AI centers on ethics. On January 10, 2023, “AI Ethics: An Abrahamic Commitment to the Rome Call” [1] gathered leaders of the three Abrahamic religions and various corporate leaders at the Vatican to discuss the ethics of navigating this technology. The common agreement was that algorithms should improve the world but not be the ultimate decision-maker.

In his speech, Microsoft representative Brad Smith underlined the importance of considering a religious take on ethics as a moral compass in determining the rules and regulations around AI. Following the conference, the leaders of the Abrahamic religions signed a joint declaration urging the developers of AI to follow six principles: AI must be transparent, inclusive, accountable, impartial, and reliable, and it must respect users’ security and privacy. To better understand the rising concern that surrounds AI, it is worth examining studies that bring up both its positive and negative aspects.

Findings from previous AI studies indicate that AI has a vague identity; hence, it is not entirely clear what might go wrong with the technology. Proponents call for its urgent implementation because machines seem to be working for the benefit of society. For instance, AI is currently being used to help advance some of the Sustainable Development Goals, and the UN believes the technology can assist in overcoming global catastrophes in the future. Some believe AI has potential benefits [2] in industries such as manufacturing, transportation, agriculture, translation, and publishing. Scientific American [3] reported on how AI is helping doctors identify possible causes of life-threatening illnesses, with one sepsis-detection algorithm cutting deaths by nearly 20 percent. Researchers note that medical experts analyze the information provided about a patient’s condition and decide whether or not to agree with the machine’s output. In this case, health providers are able to take control of the machine, not vice versa. This is one example demonstrating that people are not blindly relying on algorithms.

Scientists believe that AI can contribute to combating the climate crisis [4] as long as innovators make climate decision-making processes local, democratic, and open. Here we can see further evidence of technology acting as an ally in coping with environmental disasters and illnesses without compromising human authority.

On the other hand, the future consequences of man-made technology are still blurry; there are legitimate concerns that possible malfunctions might lead to harmful situations. The launch of ChatGPT has caused major debates about its ethical usage. Because it generates content based on already existing data, some people think ChatGPT violates the rights of the artists and writers who actually made the work the program draws on. A couple of studies have documented cases where machines let their users down. In 2016, Microsoft’s chatbot Tay initially looked human-friendly but had to be shut down after unexpectedly tweeting pro-Nazi, antisemitic, and anti-feminist remarks. Another system failed to give sensible responses when Joseph Austerweil, a psychologist at the University of Wisconsin-Madison, tested it with questions of morality [5].

A study [6] by the National Institute of Standards and Technology showed that facial recognition systems are biased against people of color and women. San Francisco and Berkeley in California, and Somerville and Brookline in Massachusetts, have prohibited their governments from using facial recognition tools, since bias in the technology is especially problematic in law enforcement and governance. Maria De-Arteaga, an algorithmic systems researcher at Carnegie Mellon University, questioned the safety of these technologies and suggested that companies and governments should be very careful before relying on a machine’s intellect.

While these machines appear uncontrollable and unpredictable, an ethics researcher at Simon Fraser University in British Columbia believes that “they are not unguided” and work according to the instructions and choices made by people. At the Rome Call for AI Ethics convention, Mario Rasetti, Professor Emeritus of Theoretical Physics at the Politecnico di Torino, noted that the human brain cannot be compared to an artificial intellect because of the brain’s magnificent structure and function. If the human brain is far more powerful than AI, then there is hope that the technology can be controlled by people. All of these studies reinforce the argument that AI must not be worshiped.

Since machines are constantly being improved upon, religious and tech leaders have found it important to discuss the ethical issues that surround AI and to help innovators minimize its risks. Religious leaders at the Rome Call for AI Ethics approached the ethical issues of the technology from a spiritual perspective, referring to holy scriptures.

Shaykh Hamza Yusuf, President of Zaytuna College, explained how inventions have historically been approached with caution. Yusuf gave the example of a dialogue from Plato’s Phaedrus, in which Thoth (or Theuth) shows his invention of writing to the King as “a recipe for memory and wisdom.” The King responds that this invention will “implant forgetfulness in their souls.” With this example, Shaykh Yusuf was pointing to the risk that such inventions may not really serve knowledge, for with them, knowledge no longer comes from inside but from outside. He also highlighted that, in many religious traditions, the concept of invention has carried negative connotations out of fear of societal destabilization. When the focus is on technological benefits, people might disregard the potential harm, including technology’s alienating, distracting nature, which we experience every day by “constantly checking our phones.” In the past, distraction was considered synonymous with “mental drain.” Kafka said, “Evil is whatever distracts.” While progress might be inevitable, that does not mean all progress is useful. We need to “look down the road at the consequences,” the Shaykh said, based on an Islamic juristic principle (al-nazar fi al-maalat), and seriously consider how we can prevent harm. In his speech, Shaykh Abdallah bin Bayyah reminded the audience that Prophet Muhammad, peace be upon him, said, “There should be no harm and no reciprocation of harm.” Reflecting on Aristotle’s five intellectual virtues, Hamza Yusuf noted the importance of approaching technology (artistry, craftsmanship) with prudence (phronesis) and wisdom (sophia).

The Jewish attitude is that humans are created in God’s image and carry divine attributes within themselves, and therefore stand above artificial intelligence. Israeli attorney and professor of law Haim Aviad Hacohen talked about the ancient Babylonian civilization’s failure to appreciate this quality of mankind. The Babylonians were eager to reach heaven by building the highest tower. The Bible tells us that this tower was special and would have demonstrated the technical and economic accomplishments of that nation. People were so obsessed with the idea of conquering the sky that they excluded God’s opinion and showed no care for the construction workers who built the tower because, as Rabbi Hacohen narrated, “from the high top one cannot really see millions of needy people on the ground who need their attention.” Rabbi Shlomo David Rosen gave an example, saying, “When a brick fell down and broke, people stopped their work and cried. But when a person fell and died, they did not bat an eyelid.” Consequently, the people were punished for their arrogant and negligent behavior when God made them speak different languages.

Earlier, Rabbi Eliezer Simha Weisz, a member of the Council of the Chief Rabbinate, said that the Jewish community used to make golems (creatures brought to life using clay and Hebrew incantations) through kabbalistic efforts to protect themselves from their enemies. However, the golems would ultimately be defeated; even though they were powerful man-made creatures, they turned out to be weaker than human beings.

The essential argument made by almost every religious leader at the convention can be summed up in one common statement: “Technology has to be used for the improvement of human life and shouldn’t leave anyone behind.” It should not harm but serve humanity; therefore, human beings must be in control of it, not vice versa. All of the speakers at the event supported Pope Francis’ statement concerning asylum seekers: technology should not harm the most vulnerable but should assist them in overcoming their hardships.

When all is said and done, it seems that technology is like God’s creation of evil: it is there to help people distinguish between good and bad and to spur us toward higher achievements in this life. But this is possible only as long as we control that which is evil, which is not an easy task. Similarly, AI is a man-made invention, and it is inevitably becoming part of our daily lives. Instead of avoiding its usage, it is better to look for healthy ways of integrating it into our lives. Ultimately, whether AI acts in favor of or against mankind will depend on how it is applied.

References

  1. https://www.romecall.org/the-abrahamic-commitment-to-the-rome-call-for-ai-ethics-10th-january-2023/
  2. https://www.itu.int/en/mediacentre/backgrounders/Pages/artificial-intelligence-for-good.aspx
  3. https://www.scientificamerican.com/article/algorithm-that-detects-sepsis-cut-deaths-by-nearly-20-percent/
  4. https://www.scientificamerican.com/article/what-ai-can-do-for-climate-change-and-what-climate-change-can-do-for-ai/
  5. https://www.nytimes.com/2021/11/19/technology/can-a-machine-learn-morality.html
  6. https://www.nytimes.com/2019/12/19/technology/facial-recognition-bias.html
