19/10/2016 By micoots

Artificial intelligence spawns ethical concerns

Copyright (c) 2016 Baptist Press. Reprinted from Baptist Press (www.baptistpress.com), news service of the Southern Baptist Convention. The original story can be found at http://www.bpnews.net/47739/artificial-intelligence-spawns-ethical-concerns

NASHVILLE (BP) — The ethics of artificial intelligence (AI) has drawn comment in recent weeks from the White House and the British House of Commons, as well as from a nonprofit organization established by Amazon, Google, Facebook, IBM and Microsoft. Now, Baptist computer scientists have called Christians to join the discussion.


Image: iStock

Louise Perkins, professor of computer science at California Baptist University, told Baptist Press she is “quite worried” at the lack of an ethical code related to AI. The Christian worldview, she added, has much to say about how automated devices should be programmed to safeguard human flourishing.

Individuals with a Christian worldview need to be involved in designing and programming AI systems, Perkins said, to help prevent those systems from behaving in ways that violate the Bible’s ethical standards. Believers can thus employ “the mathematics or the logic we will be using to program these devices” to “infuse” a biblical worldview “into an [AI] system.”

While AI has no universally accepted definition, an Oct. 12 report by the White House’s Office of Science and Technology Policy defines it as “a computerized system that exhibits behavior that is commonly thought of as requiring intelligence.”

Self-driving cars and AI-equipped unmanned aircraft are two examples highlighted in the report, which also notes AI applications in health care, transportation, the environment, the criminal justice system and “economic inclusion.”

Perkins noted automated manufacturing, surgery and warfare as potential applications of AI. Among the most common forms of AI on the market is the automated personal assistant on many smartphones.

The 58-page White House report states that “ethical training for AI practitioners is a necessary part” of ensuring “fairness and safety” in the use of developing technologies.

Similarly, the House of Commons’ Science and Technology Select Committee stated in an Oct. 5 report that “ongoing consideration” is needed in such areas as minimizing “bias being accidentally built into AI systems” and limiting “unwanted, or unpredictable, behaviours” by AI systems.

The Amazon/Google/Facebook/IBM/Microsoft partnership — dubbed the Partnership on Artificial Intelligence to Benefit People and Society — was announced Sept. 28, with one of its goals being the publication of research “in areas such as ethics, fairness and inclusivity,” according to a press release.

Perkins said one of her ethical concerns related to AI is security. By hacking into the AI systems of a future manufacturing plant, for example, terrorists might be able to program an entire series of self-driving cars to malfunction at the same time, causing chaos on the roads. AI weapons systems also could fall prey to hackers, and AI systems that prepare food could be hacked to poison people.

Positive AI capabilities that appear to be on the horizon are “wonderful,” Perkins said. “But these things come with the ability for someone else to take control of them and use them.”

The only “non-hackable” devices available currently “are behind fences, where they are not interconnected to the real world,” Perkins said. “The devices we’re developing now are going to be interconnected to the real world.”

Perkins also noted that ethical standards will have to be programmed into AI systems involved in surgery and warfare, among other applications. A robot performing surgery on a pregnant woman, for instance, might have to weigh the life of the baby against the life of the mother, and an AI weapons system might have to apply standards of just warfare.

Mike Brake, a former computer scientist at Los Alamos National Laboratory in New Mexico who now serves as a student pastor, echoed some of Perkins’ concerns.

AI “is to be used to help aid and assist humans,” said Brake, student pastor at First Baptist Church in Los Alamos. “There’s definitely a great intention behind it in almost any use.”

Yet “when you have software making decisions on behalf of humanity,” Brake told BP, “you have humans who are writing that software. And humans, as we all know, are prone to error.”

Christians should not worry about robots taking over the world, Brake said. “But you do have to be aware that there is artificial intelligence, and it’s a software program that is gaining intelligence and making decisions.”

