Artificial Intelligence and Ethics
Background

Though the concepts behind "mechanical computation" had been developing over hundreds of years, it wasn't until the mid-20th century that digital computers arrived on the scene and the field of computer science emerged. Not long after that, computer scientists began considering the possibility of artificial intelligence (AI), the idea that a computer could achieve human-like intelligence.

Alan Turing


In 1950, Alan Turing, a young English mathematician and computing pioneer, proposed a simple test, now known as the Turing Test, to determine whether a computer had achieved intelligent behavior. In the test, a human judge holds text conversations with both a computer and a person; if the judge cannot reliably tell which is which, the computer has demonstrated intelligence. Turing predicted that this milestone would be reached by the year 2000. He was off by roughly 14 years: the first widely reported claim of a program passing the test came in 2014, when a chatbot convinced a panel of judges that it was a human teenager, though many experts dispute whether that performance truly met Turing's standard.

In the years that followed Turing's challenge, computer scientists began actively pursuing the goal of creating computer systems capable of intelligent behavior. The phrase "artificial intelligence" was first used to describe this new field in 1956, when a group of scientists met for a summer-long conference called the Dartmouth Summer Research Project on Artificial Intelligence. Many consider that conference to be the birth of AI as a field of research, and by the end of that summer these researchers had developed programs that could converse in English, win simple games against humans, and solve word problems in math. At the time, these were huge steps forward, and for a while researchers were overly optimistic that artificial intelligence could be achieved within a decade or two. AI research started receiving funding from governments and academic institutions, and research programs were set in motion.

Challenges and Slow Progress

The field did not develop quite as fast as those early predictions. Over the next several decades, AI researchers encountered important roadblocks. The biggest issue was computational power and speed. Computers were slow and had very little memory by today's standards. Machines with a few thousand bytes of memory were unlikely to achieve anything that would compete with the 100 trillion neural connections of a human brain. Major advances in AI would have to wait until the development of faster processors, better memory storage, and cloud technology, as well as more sophisticated computer coding techniques.

Early AI scientists also debated different approaches to achieving intelligent behavior. Some felt it was critical that AI systems should mimic human brain functioning, and they spent decades working with psychologists to develop our understanding of human information processing. Others felt that human biology was irrelevant; the way humans achieve intelligent behavior is constrained by our evolution and the functioning of our brain cells, but if a machine can achieve it by other means, why not let it?

Another debate has centered around how to define intelligence. The Turing Test is a benchmark that computer scientists have worked toward, but along the way they have wrestled with how to approach it. Is it enough to achieve intelligent output in just one specific task like solving a specific problem, or does the computer system have to achieve general intelligence across a wide range of problems? Is it necessary to program the computer with lots of facts and knowledge about the world, or is it possible to achieve intelligence with a set of abstract processes, such as formal logic rules? Is it enough to have a computer that can provide human-like output, or does it have to be capable of learning from its own experiences or perhaps show self-awareness?

Rapid Advancement and Ethical Concerns

By the end of the 20th century, advances in technology and a growing computer culture led to renewed interest in AI. Past research had been funded mostly by governments and universities, but by the early 21st century, businesses started investing heavily in AI and progress came more rapidly.

Predator unmanned aerial drone


The boom in AI development also sparked more interest in the ethics of AI. AI experts began debating issues like the loss of human jobs, hacking dangers and threats to privacy, as well as how to maintain human moral standards and eliminate bias when computers make our decisions. Scholars have suggested that an intelligent learning computer system could have the ability to reprogram and improve itself until it achieves superintelligence—a level of thinking far beyond human intelligence—at which point humans will lose the ability to control it. Many have also raised alarms about relying on AI to automatically control certain technologies. For example, autonomous weapons systems (military AI systems that could automatically select and attack targets without a human controller) could be an incredibly effective military tool but could also cause catastrophic harm to humans.

These ethical concerns have led many to suggest that industries and governments should adopt a code of ethical standards as AI continues to develop. One of the most important strategies that has been suggested is maintaining transparency. If AI programs are learning and making decisions, they should create output to show the process behind the decision, not just the decisions themselves. If AI programs are not transparent, humans may have no way to intervene and control them.

Heather Lacey
Heather Lacey is an associate professor of applied psychology at Bryant University. She obtained her BA in psychology from California State University East Bay, her MA in psychology from the University of Michigan, and her PhD from the cognition and perception program in psychology, also at the University of Michigan. Dr. Lacey teaches classes in cognitive psychology, judgment and decision making, and forensic psychology, in addition to introductory classes. Her research focuses on health-related decision making, happiness and quality of life, and judgments of aging and health conditions. Her writing has appeared in the Journal of Business Research, the Journal of Happiness Studies, Health Psychology, and the Journal of Medical Ethics, among others.
MLA Citation

Lacey, Heather. "Artificial Intelligence and Ethics." ABC-CLIO Solutions, ABC-CLIO, 2023, educatorsupport.abc-clio.com/TopicCenter/Display/2080559?productId=3. Accessed 28 Jan. 2023.

