Society has been here many times before. A little more than 200 years ago, English textile workers known as the Luddites took to destroying mill machinery, not because they were inherently anti-technology but mainly to protest the labor conditions of the time. The movement faded within a few years, and the textile industry continued down an inevitable road to automation.
The history of technology offers many such examples of innovation being met with suspicion and worries that mankind would lose its moral compass. From Gutenberg’s printing press to the dawn of the computer age, people have fretted that new machines and methods would erode our ability to discern good from bad while dangerously accelerating change.
Today, it’s the explosion of artificial intelligence technologies that frightens some people. This time, however, the alarm is being raised not by modern-day Luddites but by people who understand tech and who actively push the innovation envelope themselves.
Familiar names such as Elon Musk (PayPal, Tesla, SpaceX and Twitter) and Steve Wozniak (Apple) are among at least 1,000 tech leaders, researchers and others who earlier this year signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems.
Developers of AI systems are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict or reliably control,” read the letter, released by the nonprofit Future of Life Institute.
There is no denying the race is on. Companies such as Microsoft, Google, IBM, Amazon and Tencent are all investing heavily in AI, which may be defined as the simulation of human intelligence processes by machines, especially computer systems leveraging large sets of data. Millions of people have toyed with the free version of ChatGPT, introduced by the Microsoft-backed OpenAI, which can do everything from answering questions in poetic verse to engaging in human-like banter.
ChatGPT (GPT stands for Generative Pre-trained Transformer) is now in its fourth version, GPT-4, whose release prompted the Future of Life Institute letter urging a moratorium. The letter cited “profound risks to society and humanity” unless there is a time out to introduce “shared safety protocols” for AI systems.
“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter urged.
From deepfakes to disinformation that could start wars, and from academic plagiarism to simple errors based on incorrect data, the risks of AI are real. But so are the opportunities to improve health care, transportation, financial services, environmental mitigation and what computer scientists call “natural language processing,” which means giving computers the ability to understand text and spoken words in much the same way human beings do.
Balancing risk with opportunity is the obvious challenge. The question is whether any government is equipped to meet it. Technology has always outpaced the ability of government to understand and absorb the changes it brings to the economy and society. Consider, for example, the inability of Congress to pass a national set of data privacy standards, leaving the states to stitch together a patchwork of rules.
It’s more likely the task will fall initially to industry, working in concert with tech experts, researchers and even ethicists. The history of technology includes many cases of “creative destruction” that moved economies and societies ahead, generally benefiting mankind by displacing old, inefficient systems and ways of doing things. It also includes many cases of the opposite problem: change arriving faster than people, laws and regulations could keep up.
The Future of Life Institute’s “AI time out” letter is a reminder that innovations in medical science, information technology, communications and more sometimes strain society’s ability to absorb change. Artificial intelligence as a technology is here to stay. How it’s used, for better or worse, is the question.