Artificial Intelligence Risks & Solutions

C O N T E N T S:

KEY TOPICS

  • Unknowable risk: The real danger of well-designed artificial intelligence lies in its ability to reprogram and upgrade itself. (More…)
  • Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. (More…)

POSSIBLY USEFUL

  • Preparing for the Future of Artificial Intelligence: a White House report that discusses the current state of AI and future applications, as well as recommendations for the government’s role in supporting AI development. (More…)

RANKED SELECTED SOURCES

Artificial Intelligence
Image courtesy of futureoflife.org: "Benefits & Risks of Artificial Intelligence – Future of Life Institute" (http://futureoflife.org/background/benefits-risks-of-artificial-intelligence/)

GENERAL INFO
CYBERSEC EUROPEAN CYBERSECURITY FORUM – Kraków, Poland – The future of artificial intelligence was a hot topic at the third annual CYBERSEC Cybersecurity Forum, where security professionals representing Poland, the Netherlands, Germany, and the United Kingdom discussed the pitfalls and potential of AI, and its role in the enterprise. [1]

KEY TOPICS

Unknowable risk: The real danger of well-designed artificial intelligence lies in its ability to reprogram and upgrade itself. [2] See also Potential Risks from Advanced Artificial Intelligence (report). [3] A new report highlights risks of artificial intelligence, such as malicious self-driving cars and robots programmed to be assassins. [4]

The Information Technology and Innovation Foundation (ITIF), a Washington, D.C. think tank, awarded its Annual Luddite Award to "alarmists touting an artificial intelligence apocalypse"; its president, Robert D. Atkinson, complained of claims by Musk, Hawking, and other AI experts that AI is the largest existential threat to humanity. [3] Elon Musk recently commented on Twitter (TWTR) that artificial intelligence (AI) is more dangerous than North Korea. [5] As progress in artificial intelligence accelerates, confusion about what it makes possible could reignite these fears, leading to hair-trigger nuclear weapons, concern about an "AI gap," and an arms race. [6] Another way of conceptualizing an advanced artificial intelligence is to imagine a time machine that sends backward in time information about which choice always leads to the maximization of its goal function; this choice is then output, regardless of any extraneous ethical concerns. [3] While Silicon Valley enthusiasts hail the potential gains from artificial intelligence for human efficiency and the social good, Hollywood has hyped its threats. [7] In 2009, experts attended a private conference hosted by the Association for the Advancement of Artificial Intelligence (AAAI) to discuss whether computers and robots might be able to acquire any sort of autonomy, and how much these abilities might pose a threat or hazard. [3] Technologists have warned that artificial intelligence could one day pose an existential security threat. [3]

That’s the premise behind a new paper from RAND Corporation, How Might Artificial Intelligence Affect the Risk of Nuclear War? It’s part of a special project within RAND, known as Security 2040, to look over the horizon and anticipate coming threats. [8] "Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously," the report says, and that is cause for concern, say the experts. [9] The report, The Rise of Artificial Intelligence: Future Outlook and Emerging Risks, says AI comes with both potential and risks in such diverse areas as economic, political, mobility, health care, defense and environmental. [10]

Humanity needs to better prepare for the rise of dangerous artificial intelligence. [9]

Existential risk from artificial general intelligence is the hypothesis that substantial progress in artificial general intelligence (AGI) could someday result in human extinction or some other unrecoverable global catastrophe. [3] The greatest threat to humanity isn’t AI; it’s how we handle AI. "Artificial intelligence and machine learning are not what we need to worry about: rather, it’s failings in human intelligence, and our own ability to learn," Ranger concludes. [11] Entrepreneur Elon Musk has long held the position that innovators need to be aware of the social risk artificial intelligence (AI) presents to the future, and at South by Southwest (SXSW) on Sunday, the SpaceX founder warned of a possible second coming of the Dark Ages, noting that AI "scares the hell" out of him. [12]

The man building a spaceship to send people to Mars has used his South by Southwest appearance to reaffirm his belief that the danger of artificial intelligence is much greater than the danger of nuclear warheads. [12] Artificial intelligence poses a wide range of hidden and unknown dangers for enterprises deploying the technology. [13]

Addressing students at the beginning of Labor Day weekend, Putin remarked, "Artificial intelligence is the future, not only for Russia, but for all humankind," adding, "It comes with colossal opportunities, but also threats that are difficult to predict." [14]

POSSIBLY USEFUL

Preparing for the Future of Artificial Intelligence: a White House report that discusses the current state of AI and future applications, as well as recommendations for the government’s role in supporting AI development. [15] The main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. [15] A SurveyMonkey poll of the public by USA Today found that 68% thought the real current threat remains "human intelligence"; however, 43% said superintelligent AI, if it were to happen, would result in "more harm than good," and 38% said it would do "equal amounts of harm and good." [3]

One source of concern is that a sudden and unexpected "intelligence explosion" might take an unprepared human race by surprise. [3] The Open Philanthropy Project summarizes arguments to the effect that misspecified goals will become a much larger concern if AI systems achieve general intelligence or superintelligence. [3]

To better protect against the rise of ill-intended AI, policymakers ought to work closely with technical specialists to stay aware of the potential applications of machine intelligence. [9] Philippe Pasquier, an associate professor at Simon Fraser University, said, "As we deploy more and give more responsibilities to artificial agents, risks of malfunction that have negative consequences are increasing," though he also states that he does not believe AI poses a high risk to society on its own. [16]

RANKED SELECTED SOURCES (16 source documents arranged by frequency of occurrence in the above report)

1. (9) Existential risk from artificial general intelligence – Wikipedia

2. (3) OpenAI, Oxford and Cambridge AI experts warn of autonomous weapons

3. (2) Artificial Intelligence: Experts Talk Ethical, …

4. (2) AI ‘more dangerous than nukes’: Elon Musk still firm on regulatory oversight | ZDNet

5. (2) Benefits & Risks of Artificial Intelligence – Future of Life Institute

6. (1) Emerging risks of artificial intelligence span all sectors | Business Insurance

7. (1) Artificial Intelligence Is Only Dangerous If Humans Use It Foolishly

8. (1) Risky AI business: Navigating regulatory and legal dangers to come | CIO

9. (1) Artificial intelligence — the arms race we may not be able to control | TheHill

10. (1) Exploring the risks of artificial intelligence | TechCrunch

11. (1) How Dangerous is Artificial Intelligence? | Ricks Cloud

12. (1) Why Artificial Intelligence Researchers Should Be More Paranoid | WIRED

13. (1) Elon Musk Artificial Intelligence: We Should Be Worried | Fortune

14. (1) How Will Artificial Intelligence Affect the Risk of Nuclear War?

15. (1) How We Can Overcome Risks Posed by Artificial Intelligence | Time

16. (1) How Artificial Intelligence Could Increase the Risk of Nuclear War | RAND