Artificial Intelligence and Ethics
Though the concepts behind "mechanical computation" had been developing over hundreds of years, it wasn't until the mid-20th century that digital computers arrived on the scene and the field of computer science emerged. Not long after that, computer scientists began considering the possibility of artificial intelligence (AI), the idea that a computer could achieve human-like intelligence.
In the years that followed Turing's challenge, computer scientists began actively pursuing the goal of creating computer systems capable of intelligent behavior. The phrase "artificial intelligence" was first used to describe this new field in 1956, when a group of scientists met for a summer-long conference called the Dartmouth Summer Research Project on Artificial Intelligence. Many consider that conference the birth of AI as a field of research, and by the end of that summer the researchers had developed programs that could converse in simple English, win simple games against humans, and solve word problems in math. At the time, these were huge steps forward, and for a while researchers were overly optimistic that artificial intelligence could be achieved within a decade or two. AI research began receiving funding from governments and academic institutions, and research programs were set in motion.
Challenges and Slow Progress
The field did not develop as fast as those early predictions suggested. Over the next several decades, AI researchers encountered significant roadblocks. The biggest was limited computational power and speed: computers were slow and, by today's standards, had very little memory. Machines with a few thousand bytes of memory could not hope to compete with the roughly 100 trillion neural connections of a human brain. Major advances in AI would have to wait for faster processors, larger memory and storage, and cloud technology, as well as more sophisticated programming techniques.
Early AI scientists also debated different approaches to achieving intelligent behavior. Some felt it was critical that AI systems should mimic human brain functioning, and they spent decades working with psychologists to develop our understanding of human information processing. Others felt that human biology was irrelevant; the way humans achieve intelligent behavior is constrained by our evolution and the functioning of our brain cells, but if a machine can achieve it by other means, why not let it?
Another debate has centered on how to define intelligence. The Turing Test is a benchmark that computer scientists have worked toward, but along the way they have wrestled with how to approach it. Is it enough to achieve intelligent output in one specific task, such as solving a particular kind of problem, or must the system achieve general intelligence across a wide range of problems? Is it necessary to program the computer with extensive facts and knowledge about the world, or can intelligence be achieved with a set of abstract processes, such as formal logic rules? Is it enough to have a computer that can produce human-like output, or must it also be capable of learning from its own experiences, or perhaps show self-awareness?
Rapid Advancement and Ethical Concerns
By the end of the 20th century, advances in technology and a growing computer culture led to renewed interest in AI. Past research had been funded mostly by governments and universities, but by the early 21st century, businesses started investing heavily in AI and progress came more rapidly.
This rapid progress has also raised ethical concerns, leading many to suggest that industries and governments adopt a code of ethical standards as AI continues to develop. One of the most important proposed safeguards is transparency: if AI programs are learning and making decisions, they should produce output that shows the process behind each decision, not just the decision itself. Without such transparency, humans may have no way to intervene and control them.
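As a concrete illustration of what transparent output might look like, here is a minimal sketch of a scoring system that reports each input's contribution alongside its decision. The feature names, weights, and threshold are hypothetical examples, not a real deployed model or any specific library's API.

```python
def decide(applicant, weights, threshold=0.5):
    """Score an applicant and return the decision plus an explanation.

    Instead of emitting only "approve" or "deny", the function also
    returns each feature's contribution to the score, so a human
    reviewer can inspect the process behind the decision.
    """
    # Per-feature contribution: weight multiplied by the input value.
    contributions = {name: weights[name] * value
                     for name, value in applicant.items()}
    score = sum(contributions.values())
    decision = "approve" if score >= threshold else "deny"
    # Sort the explanation so the most influential factors come first.
    explanation = sorted(contributions.items(),
                         key=lambda kv: abs(kv[1]), reverse=True)
    return decision, score, explanation

# Hypothetical weights and applicant data for illustration only.
weights = {"income": 0.4, "debt": -0.3, "years_employed": 0.2}
applicant = {"income": 1.0, "debt": 0.8, "years_employed": 0.5}

decision, score, explanation = decide(applicant, weights)
print(decision, round(score, 2))          # the decision itself
for name, contribution in explanation:    # the process behind it
    print(f"  {name}: {contribution:+.2f}")
```

The design point is simply that the explanation is produced by the same computation as the decision, so the two cannot drift apart; an opaque system, by contrast, gives a reviewer nothing to audit.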