Can a robot be Jewish? AI’s quandaries for humanity

Artificial Intelligence (AI) is already a part of our world, and as it grows, it will have a profound impact on ethics, religion, and humanity. So says Paul Root Wolpe, the Raymond F. Schinazi Distinguished Research Chair in Jewish Bioethics, professor of medicine, paediatrics, psychiatry, neuroscience, and sociology, and the director of the Center for Ethics at Emory University.

Wolpe presented “A Jewish perspective on artificial intelligence and its ethical challenges” at the eLimmud2 series of webinars on Sunday, 25 October.

“At the moment, there is no aspect of your life [that’s] not impacted in one way or another by AI,” he said. “AI already helps to fly planes and drive cars, selects the services and products we see on social media, determines who gets a mortgage, and shapes how healthcare is delivered. In the future, we may see robotic personal assistants, autonomous surgical robots, thought-controlled gaming, real-time universal translation, virtual companions, and real-time emotion analytics.”

Yet all this comes with important ethical concerns, and according to Wolpe, rabbis are already exploring these quandaries. He referred the audience to the writings of Rabbi Daniel Nevins, who asks fascinating questions about AI and Judaism.

For example, “Are Jews liable for the halachic consequences of actions taken by machines on their behalf, for example, Sabbath labour? Should ethical principles derived from halacha be integrated into the development of autonomous systems for transportation, medical care, warfare, and other morally charged activities, allowing autonomous systems to make life-or-death decisions? Might a robot perform a mitzvah or other halachically significant action? Is it conceivable to treat an artificial agent as a person? As a Jew?”

In Judaism, there are hardly any rituals or rites that only a rabbi can perform, “but in other religions like Christianity, would robots be allowed to perform rites?” asked Wolpe rhetorically. “What about robots that can search the Talmud and analyse it with a speed and thoroughness that human beings simply don’t have? During an experiment in Germany, a robot wrote out the entire text of the Torah perfectly. Can this Torah be kosher? Probably not, as this is one of the few things in Judaism that a human being is commanded to do.”

Wolpe described the famous example of Tay, a Microsoft chatbot released in 2016 that learned and modified its behaviour according to its interactions with others. “They figured that if people interacted with it, it would start to sound more like a human being. But that isn’t what happened. People started playing with it – they didn’t hide that it was a bot – and it began to modify its responses. It made racist, antisemitic statements, and even denied the Holocaust. In 24 hours, it became a hate-filled bot because of the way people responded to it.” This example shows why people fear AI and its ability to create hate and division in the world.
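To see how easily that can happen, consider a deliberately simplified Python sketch (a toy, not Microsoft’s actual system): a bot that treats every incoming message as training data can be steered by whichever group of users talks to it most.

```python
from collections import Counter

class NaiveOnlineChatbot:
    """A toy bot that 'learns' by echoing the phrase it has seen most often."""

    def __init__(self):
        self.seen = Counter()

    def interact(self, message: str) -> str:
        self.seen[message] += 1                # every input becomes training data
        return self.seen.most_common(1)[0][0]  # reply with the most frequent phrase

bot = NaiveOnlineChatbot()
bot.interact("hello there")
for _ in range(50):                            # a coordinated group floods the bot
    bot.interact("<hateful slogan>")
print(bot.interact("how are you?"))            # prints "<hateful slogan>"
```

Real chatbots are far more sophisticated, but the failure mode is the same: a system that learns from unfiltered public input inherits the worst of that input.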

Wolpe explained that all AI is based on algorithms, “which are like a formula or recipe. The problem is, we don’t know how AI makes the decisions it makes, as algorithms are often opaque. When making AI, you ‘plug in’ existing algorithms to make something new, and sometimes add your own. The result can be thousands or even millions of lines of code, and we sometimes have no way to explain how its decisions are made. For example, if a doctor tells you that you have cancer, you want to know how he came to that conclusion. But AI might not give that reasoning.”
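Wolpe’s point about opacity can be shown in a few lines. The sketch below (using scikit-learn on synthetic data, an assumption for illustration rather than any system he described) trains a small neural network: it produces a flat yes-or-no answer, while its “reasoning” is nothing more than a few thousand raw numeric weights that nobody can read back as an explanation.

```python
# Toy illustration of model opacity: the prediction is easy to get,
# the reasoning behind it is buried in thousands of numbers.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0)
model.fit(X, y)

print(model.predict(X[:1]))               # a flat yes/no answer...
print(sum(w.size for w in model.coefs_))  # ...backed by 5,440 opaque weights
```

A doctor can be asked to justify a diagnosis; a matrix of 5,440 floating-point numbers cannot, which is exactly the gap Wolpe is describing.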

This is crucial when it comes to something like the ethics of automated cars. If a car is driving itself and something goes wrong, it may need to make a decision: drive into a group of pedestrians or smash into a wall, possibly killing the occupant. For human beings, that decision may need to be made in a tenth of a second, and they do their best. “But a tenth of a second is an eternity for AI. It can make millions of calculations. So we need to programme in what we want it to do in these situations,” Wolpe said. “What do we tell it to do? With AI, we will be programming moral decision-making into machines for the first time in history.”
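A deliberately crude sketch shows what “programming moral decision-making” means in practice. The scenario, names, and weighting below are invented for illustration; no real vehicle uses logic this simple. The point is that someone has to write the moral rule down, as a line of code, before the emergency ever happens.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    action: str
    pedestrians_at_risk: int
    occupants_at_risk: int

def choose(outcomes: list[Outcome]) -> Outcome:
    # The whole 'moral code' is this one line: minimise the total number of
    # people at risk, valuing occupants and pedestrians equally. An engineer
    # had to decide that; a different line would encode different ethics.
    return min(outcomes, key=lambda o: o.pedestrians_at_risk + o.occupants_at_risk)

decision = choose([
    Outcome("swerve into wall", pedestrians_at_risk=0, occupants_at_risk=1),
    Outcome("continue straight", pedestrians_at_risk=4, occupants_at_risk=0),
])
print(decision.action)  # swerve into wall
```

Change the rule to protect the occupant at all costs and the same code makes the opposite choice, which is precisely the decision Wolpe says we must now make explicitly.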

Taking this question one step further, he asked, “Would we tell the car what kind of pedestrians it can and can’t hit? Will we give it the ability to tell the difference between a baby and a businessman?” Consumers have actually been asked these questions, and most say the car should save the pedestrians. But when asked if they would buy such a car, they say no. Yet, if we are going to develop automated cars, we will need to programme such decisions into them.

It’s the same with autonomous weapons. “While some say robots in warfare will reduce casualties, where are the ethical limits? Can we programme a machine to ‘take someone out’? What if terrorists have drones? Should robots be used in riots?” he asked.

Robots that look like baby animals are already used to comfort elderly people with dementia. “How ethical is it that these patients aren’t aware that they’re interacting with a robot? And what about robots that remind people to take their medication – what if the person refuses?”

There is also the question of “companion robots”, where people buy a human-size robot and form a relationship with it. “How would Judaism respond to this if it’s a religion that places so much emphasis on the interactions between people?” he asked.

Exploring this further, Wolpe asked whether robots that developed some form of “self-awareness” or higher consciousness could become Jewish. While this may seem like a ridiculous question now, it might be something that needs to be answered one day.

He said “all questions about AI” in Judaism go back to the Golem, the legend of a clay creature that has been magically brought to life. The classic narrative of the Golem tells of how Rabbi Judah Loew of Prague creates a Golem to defend the Jewish community from antisemitic attacks. But eventually, the Golem grows violent, and Rabbi Loew is forced to destroy it.

If robots are ever going to become a danger to humans, we need to heed the story of the Golem, “which had a built-in ‘kill switch’ to stop it if it got out of hand”, said Wolpe.

These are all questions that Wolpe believes need to be answered if the human race is going to develop AI. For Judaism, a religion based on a strict set of ethics, they are going to be difficult to ask, and to answer.
