What Is Elon Musk’s Take on Artificial Intelligence?

CONTENTS:


  • Former Google CEO Schmidt reiterates his opposition to Elon Musk’s views on artificial intelligence.(More…)
  • At a Vox Media Code Conference, Musk claimed that AI will be so much more intelligent than us that robots will use humans as pets once they achieve a subset of artificial intelligence known as “superintelligence.”(More…)
  • Elon Musk, the late Stephen Hawking and 8,000 others have signed an open letter expressing concerns about the future of AI. There has thus been increasing recognition of the need to focus on the ethical aspects of AI. Here are some of the most significant ethical issues facing the continued implementation of artificial intelligence.(More…)
  • Considering Elon Musk’s line of work and relationship with AI and autonomous machines, we’d be smart to take heed.(More…)
  • Musk is a man with a lot of plans, including some very dynamic ones like the hyperloop. All of Musk’s companies involve a lot of very futuristic and bold projects and one would think he would be the most unequivocal champion of artificial intelligence (AI).(More…)


  • That’s why I titled my book Intelligence is not Artificial.(More…)
  • Founded in October 2015 by Musk, Sam Altman, the president of the Silicon Valley technology incubator Y Combinator, and a group of other PayPal Holdings Inc. alumni, OpenAI is a nonprofit company dedicated to creating what it calls “safe” artificial general intelligence and distributing it “as widely and evenly as possible.”(More…)


What Is Elon Musk's Take on Artificial Intelligence?
Image Courtesy:
link: http://www.cnbc.com/2017/12/18/9-mind-blowing-things-elon-musk-said-about-robots-and-ai-in-2017.html
author: cnbc.com
description: 9 mind-blowing things Elon Musk said about robots and AI in 2017


Former Google CEO Schmidt reiterates his opposition to Elon Musk’s views on artificial intelligence. [1] Former Google CEO Eric Schmidt has once again urged people not to be worried by alarming predictions about artificial intelligence by Tesla and SpaceX CEO Elon Musk. [1] Tesla and SpaceX boss Elon Musk has been especially vocal about the potential negative implications of the development of artificial intelligence. [2]

Cognitive scientist Piero Scaruffi argues that artificial intelligence is still in its Stone Age, and suggests that Elon Musk should worry more about biotech than AI. [3] Elon Musk tells U.S. governors artificial intelligence is a threat like no other that needs to be regulated now. [1] This is from a movie (more like a documentary) about Artificial Intelligence promoted by Elon Musk. [4]

At least one tech expert isn’t buying into Tesla and SpaceX founder Elon Musk’s predictions of an artificial intelligence (AI) apocalypse. [5] Eric Schmidt, executive chairman of Google and Alphabet, Inc., told TechCrunch that Elon Musk’s foreboding view of artificial intelligence is “exactly wrong,” reports Anthony Ha. [6]

At a Vox Media Code Conference, Musk claimed that AI will be so much more intelligent than us that robots will use humans as pets once they achieve a subset of artificial intelligence known as “superintelligence.” [7] Musk has been vocal about his fear of artificial intelligence (AI), stating in no uncertain terms that “AI is humanity’s biggest existential threat.” [7] Mr. Musk, the entrepreneur behind SpaceX and the electric-car maker Tesla, had taken it upon himself to warn the world that artificial intelligence was “potentially more dangerous than nukes” in television interviews and on social media. [8] According to Schmidt, the reason Musk is “exactly wrong” is that he doesn’t understand the full ramifications of the potential of artificial intelligence, he said in Paris. [2]

In an earlier interview, Intel’s Artificial Intelligence Products Group (AIPG) head Amir Khosrowshahi told us, “The problems that are of immediate concern and are difficult to address are building AI models which have an inherent bias in them.” [7] Another problem with current artificial intelligence technology is that computers are not able to explain how they came to an answer. [2] S. C. Stuart is an award-winning digital strategist and technology commentator for ELLE China, Esquire Latino, Singularity Hub, and PCMag, covering artificial intelligence; augmented, virtual, and mixed reality; DARPA; NASA; U.S. Army Cyber Command; sci-fi in Hollywood (including interviews with Spike Jonze and Ridley Scott); and robotics. [3] That same year, he also helped create an independent artificial intelligence lab, OpenAI, with an explicit goal: create superintelligence with safeguards meant to ensure it won’t get out of control. [8] Warnings about the risks of artificial intelligence have been around for years, of course. [8]

Currently, artificial intelligence should operate in conjunction with humans, said Schmidt. [2] One current application of artificial intelligence is Google Translate, said Schmidt. [2] Google, Facebook, Instagram, YouTube, and various others of our most favorite sites use artificial intelligence to a large extent. [4] Regardless of all the fear mongering, we know one thing for sure – artificial intelligence will change the world forever. [4] That’s why artificial intelligence is being viewed as potentially dangerous. [4] In his testimony, Mr. Zuckerberg acknowledged that scientists haven’t exactly figured out how some types of artificial intelligence are learning. [8] In the future, artificial intelligence will be used to power self-driving cars and to improve medical care, he said. [2] The man building a spaceship to send people to Mars has used his South by Southwest appearance to reaffirm his belief that the danger of artificial intelligence is much greater than the danger of nuclear warheads. [1] The heavy hitters of A.I. were in the room — among them Mr. LeCun, the Facebook A.I. lab boss who was at the dinner in Palo Alto, and who had helped develop a neural network, one of the most important tools in artificial intelligence today. [8] All sorts of deep thinkers have joined the debate over artificial intelligence, including those at an annual conference hosted in Palm Springs, Calif., by Amazon’s chief executive, Jeff Bezos. [8]

Musk and Altman’s counter-intuitive strategy of trying to reduce the risk that AI will cause overall harm, by giving AI to everyone, is controversial among those who are concerned with existential risk from artificial intelligence. [9] To highlight that point of being “more dangerous than nuclear weapons,” meet MIT’s serial-killer artificial intelligence Norman, a disturbed image-captioning A.I. obsessed with murder. [10] A central goal of the field of artificial intelligence is for machines to be able to learn how to perform tasks and make decisions independently, rather than being explicitly programmed with inflexible rules. [11] Fei-Fei Li is the chief scientist for artificial intelligence at Google Cloud. [12] “The fact of the matter is that artificial intelligence and machine learning are so fundamentally good for humanity,” Schmidt said at VivaTech in France last week. [12] The Tesla founder has previously said that artificial intelligence is potentially more dangerous than nuclear weapons. [10] It sounds quite dangerous allowing a freight truck to drive itself; if the sensors break down on a big rig going 60-70 MPH, that’s potentially 40 tons barreling down the highway unattended except by artificial intelligence. [10] “Physical liberty” is what the writer of the Futurism essay “What Will Life Look Like in 2030 Thanks to Artificial Intelligence?” calls it. [6] For this reason, I believe philosophy and religion — the studies of what is objectively right and wrong — should never be divorced from technology decision-making, especially in this explosive age of artificial intelligence. [6] The tech giant helped test an artificial intelligence computer system. [13]
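The contrast drawn above, between learning behavior from examples and hard-coding inflexible rules, can be sketched in a few lines of Python. This is a toy illustration with invented data (the messages, labels, and function names are hypothetical), not a description of how any production system works:

```python
# Toy contrast: a hand-written rule vs. a "model" derived from examples.
# All data here is invented for illustration.

def rule_based_is_spam(msg):
    """Inflexible, explicitly programmed rule."""
    return "free money" in msg.lower()

def learn_spam_words(examples):
    """'Learn' the words that appear only in spam examples."""
    spam_words, ham_words = set(), set()
    for text, label in examples:
        words = set(text.lower().split())
        (spam_words if label == "spam" else ham_words).update(words)
    return spam_words - ham_words

training = [
    ("free money now", "spam"),
    ("claim your prize", "spam"),
    ("lunch at noon?", "ham"),
    ("meeting notes attached", "ham"),
]
learned = learn_spam_words(training)

def learned_is_spam(msg, spam_vocab=learned):
    """Decision derived from data rather than written by hand."""
    return any(w in spam_vocab for w in msg.lower().split())

print(learned_is_spam("prize money inside"))  # True
print(rule_based_is_spam("prize money inside"))  # False: the rule never generalizes
```

The hand-written rule only catches the exact phrase it was programmed with, while the learned vocabulary generalizes (crudely) to messages it has never seen, which is the behavior the paragraph above describes.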

The founders (notably Elon Musk and Sam Altman ) are motivated in part by concerns about existential risk from artificial general intelligence. [9]

Google scientist Fei-Fei Li used Elon Musk’s doomsday prophecy about AI to warn her company against promoting its controversial work on a military contract. [12] At the VivaTech conference in Paris, former Google CEO Eric Schmidt said he believed that Elon Musk’s approach to AI is “exactly wrong” and does not recognize the benefits the technology will bring. [5]

Elon Musk, the late Stephen Hawking and 8,000 others have signed an open letter expressing concerns about the future of AI. There has thus been increasing recognition of the need to focus on the ethical aspects of AI. Here are some of the most significant ethical issues facing the continued implementation of artificial intelligence. [14] Bill Gates isn’t the only one afraid of the possible outcome of AI. Elon Musk, superstar of the technological world, has had a lot to say about artificial intelligence. [15]

Artificial intelligence, or AI, generates the most pressing ethical questions of any technology today, in part because of the nearly ubiquitous influence it will have in so many areas of our lives. [14] Musk is famously worried about artificial intelligence overtaking humans and invested in DeepMind not to make money but, as he told Vanity Fair, “to give me visibility into the rate at which it was developing.” [16] Musk struck buyout gold again with his artificial intelligence company DeepMind Technologies. [16] Searle argues that artificial intelligence works by means of syntax, and that it has no human (or semantic) understanding. [14] All of this isn’t to say that we can disregard concerns about the future of artificial intelligence: there’s a wisdom in prudence, after all. [17] Even though artificial intelligence isn’t a new field, we’re a long, long way from producing something that, as Gideon Lewis-Kraus wrote in The New York Times Magazine, can “demonstrate a facility with the implicit, the interpretive.” [18] Let’s not give artificial intelligence a reason to fear us, by establishing the ways in which we interact with it now, and making sure everyone in the industry understands them thoroughly. [15]

Artificial intelligence is, at a cultural level, already met with fear and disdain, but at a practical level we’ve already embraced it as a common facet of our everyday lives. [17] Artificial intelligence and robotics were initially thought to be a danger to blue-collar jobs, but that is changing: white-collar workers – such as lawyers and doctors – who carry out purely quantitative analytical processes are also becoming an endangered species. [19] As is so often the case in our modern buzzword culture, the phrase “artificial intelligence” has grown well beyond the confines of what it is, and is instead often thought of in terms of what it could be. [17]

Developments in artificial intelligence and machine learning provide some of the most exciting breakthroughs in technology, medicine, education and many other fields. [14] Artificial intelligence is just that: something we create that can learn. [15] “I think we should be very careful about artificial intelligence.” [15] The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. [18] By combining visual processing and deep learning — a type of artificial intelligence that is modeled on neural networks — they were able to match or outperform the diagnoses of some of the world’s leading dermatologists. [14]
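The point about biased training data can be made concrete with a deliberately crude sketch. The labels and counts below are invented for illustration: a “model” that simply adopts the majority label of its training set reproduces whatever skew that data carries, much as the Norman experiment did with violent captions.

```python
from collections import Counter

def train_majority_classifier(labels):
    """A deliberately crude 'model': learn the most common training
    label and predict it for everything afterwards."""
    return Counter(labels).most_common(1)[0][0]

# Roughly balanced training data yields a benign default...
balanced = ["neutral"] * 55 + ["violent"] * 45
print(train_majority_classifier(balanced))   # neutral

# ...while heavily skewed data (like captions scraped from a violent
# subreddit) bakes the skew straight into the learned behavior.
biased = ["neutral"] * 5 + ["violent"] * 95
print(train_majority_classifier(biased))     # violent
```

The learner itself is identical in both runs; only the data changes, which is exactly the point the experiment was making.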

Then we have people like Tesla’s CEO Elon Musk, Professor Stephen Hawking and even Microsoft co-founder Bill Gates worried about artificial intelligence and the future of the human race, but Kurzweil isn’t worried. [20] Tesla’s CEO Elon Musk has previously warned that artificial intelligence could make human beings irrelevant unless humans choose to merge with machines. [20]

The singularity is that point in time when all the advances in technology, particularly in artificial intelligence (AI), will lead to machines that are smarter than human beings. [20] Director of Google’s engineering department Ray Kurzweil has said the artificial intelligence (AI) singularity will take place in the year 2029, and just a few years later human beings will merge with machines. [20] A question often raised about artificial intelligence (AI) is whether machines will one day become human, but this way of framing the question assumes we have settled on what it means to be human. [21] Mr Kurzweil continued by stating that predictions that artificial intelligence (AI) will enslave humans are “not realistic”, adding that it is already ubiquitous. [20] SoftBank founder and Chief Executive Officer (CEO) Masayoshi Son said computers running artificial intelligence (AI) programs will exceed human intelligence within three decades. [20]

It’s where technology and philosophy crash into each other, because a super-intelligent AI will be able to build an even more intelligent AI, and so on and so on – ending up with an artificial intelligence that has god-like powers. [22] Max Tegmark, a cosmologist, professor at MIT, and foremost thinker in artificial intelligence, spoke with VentureBeat about the implications of superintelligent AI. [23] Last time we dove into the artificial intelligence (AI) pool we focused on what it is, its emergence and its history. [24] When top teams eventually compete at one of the biggest e-sports events of the year – Dota 2’s world championship “The International” – some of them will be facing a very different opposition than what they’re used to: a team of artificial intelligence (AI) bots. [25]

Tegmark’s latest book, Life 3.0: Being Human in the Age of Artificial Intelligence, postulates that neural networks of the future may be able to redesign their own hardware and internal structure. [23] Speaking at an address to the Oxford University Union, Professor Stephen Hawking warned that man-made viruses and malicious artificial intelligence need to be controlled in order to prevent them from destroying the human race. [20] Ray Kurzweil believes that the moment is a little over a decade away and that artificial intelligence will lead to the end of the human race. [20] Artificial intelligence drives humanity’s future, and human beings may take a back seat. [20] We have robots like Sophia that are making headlines left and right, but as advanced as it (she?) is, it still pales in comparison to what true artificial intelligence will bring. [24]

Considering Elon Musk’s line of work and relationship with AI and autonomous machines, we’d be smart to take heed. [24] Kurzweil’s words echoed Tesla CEO Elon Musk’s sentiments that humans need to converge with machines, pointing out the work already being done in Parkinson’s disease patients. [20]

Musk is a man with a lot of plans, including some very dynamic ones like the hyperloop. All of Musk’s companies involve a lot of very futuristic and bold projects and one would think he would be the most unequivocal champion of artificial intelligence (AI). [26] Musk insists our demise is the exclusive responsibility of Artificial Intelligence (AI). [27]

Artificial intelligence can identify objects in the path of the car with 97.5% accuracy, so the possibility of error is 2.5%. [28] Right now, artificial intelligence is held back by built-in restraints; for robots to go out on the streets and kill people, they would need to overcome those restraints, and that is not something you can do in 10 minutes. [28] Within the last 5 years, the development of artificial intelligence has gained incredible speed. [28]


That’s why I titled my book Intelligence is not Artificial. [3] “Artificial intelligence is the future, not only for Russia but for all humankind.” [4]

I don’t know what he knows, but I think the most dangerous ability of AI is that it can (as a requirement for intelligence) improve on itself – without limits. [4] Such a view is based around the idea that such AI could undermine our own intelligence as human beings. [4]

You’ve said that we have organized our world so that machines can navigate it, which, as you point out, isn’t “intelligence” at all. [3] We would have to code machines that have the urge to take control to make them gain intelligence in the first place. [4]

Intelligence is the ability (or the drive) to create as many options as possible. [4] The real intelligence went into structuring the subway in such a way that trains can run largely autonomously. [3]

The creation of “superintelligence” — the name for the supersmart technological breakthrough that takes A.I. to the next level and creates machines that not only perform narrow tasks that typically require human intelligence (like self-driving cars) but can actually outthink humans — still feels like science fiction. [8] At this point, this super-intelligence will outstrip human intelligence and usher in an era of technological change unlike any other. [7]

Despite Elon Musk’s concerns regarding the safety of AI, you will hear AI experts claiming that there is no need to be concerned. [4] I think Elon Musk and others have Hollywood movies in mind, and that’s a completely different AI – an AI that I don’t think exists or will any time soon. [3] I think that Elon Musk should worry a lot more about biotech than about AI. [3] Elon Musk has warned on several occasions about the dangers of AI. [4] Elon Musk is a brilliant engineer, and he’s pioneered a lot of initiatives, including OpenAI. So it stands to reason that he understands AI pretty well. [4]

SAN FRANCISCO — Mark Zuckerberg thought his fellow Silicon Valley billionaire Elon Musk was behaving like an alarmist. [8] Unfortunately, when one of the said heralds turns out to be of the likes of Tony Stark-esque Elon Musk, it’s in our best interest to at least pay attention. [7]

To counteract this possibility, as well as to find new solutions to functional and neurological problems for humans, Elon has co-founded OpenAI and Neuralink. [4] When The Wall Street Journal asked Gates about Musk’s views, Gates said, “The so-called control problem that Elon is worried about isn’t something that people should feel is imminent.” [7]

“I think Elon is exactly wrong,” Schmidt said at the VivaTech conference in Paris. [1] And remember that Elon Musk’s strongest motivation for creating SpaceX was that he tried and failed to purchase Russian ICBM rockets for his first Mars project. [4]

We caught up with him after the event via email, where he countered the current fears around the Singularity, as espoused by Elon Musk and others. [3]

At the time, he suggested Musk’s worries about Google AI specifically should be treated with a grain of salt because he’s an engineer, not a computer scientist. [1] Again, the problem, in Musk’s view, is that AI is being developed in isolation, with no regulation and no oversight. [7] Eric Schmidt, the former executive chairman of the Board of Directors of Alphabet, Google’s parent company, has recently come down in direct opposition to Musk’s dire point of view. [2]

Schmidt said that Musk’s view of AI is ‘exactly wrong,’ and that the technology will go a long way towards making citizens smarter. [5] Now, The Intercept has published a further extract from the same message, in which she refers to Tesla CEO Musk’s critical views of AI. [12]

Some scientists, such as Stephen Hawking and Stuart Russell, believe that if advanced AI someday gains the ability to re-design itself at an ever-increasing rate, an unstoppable “intelligence explosion” could lead to human extinction. [9] That’s a shared thought with scientist Stephen Hawking, who also previously warned that “artificial intelligence could spell the end for the human race if we are not careful enough, because they are too clever.” [10]

According to Elon, a deep intelligence in the network could create fake news and spoof email accounts. [29] “Elon Musk invests in $1B effort to thwart the dangers of AI”. [9] This non-benign scenario put forth by Elon was hypothetical, but he went into detail about how it could be possible for an AI with the goal of maximising a portfolio of stocks to go long on defence and short on consumer stocks, and start a war. [29]

I watched a YouTube clip of Elon Musk talking about his view on the future of AI. He gave two examples. [29] Which company is providing positive AI? Right, it is Neuralink, owned by Elon Musk. [29] “My favorite game has been invaded by killer AI bots and Elon Musk hype”. [9]

Previously, Elon Musk, CEO of Tesla and an early investor in Google DeepMind, warned that intelligent machines pose an existential threat to humanity. [11] Tesla CEO Elon Musk and Eric Schmidt, the former executive chairman of Google’s parent firm Alphabet. [12]

These movies make a guess at what the future is going to be like, similar to how Elon Musk tweets. [6] The head of A.I. at Google “slams the kind of ‘A.I. apocalypse’ fear-mongering Elon Musk has been doing”, according to a CNBC headline. [6] OpenAI, which Elon Musk co-founded, has been taking on top Dota 2 players with the bots since last year, and now it’s gunning for a team of top professionals in an exhibition match at one of the biggest events in eSports. [30]

“Elon Musk-backed OpenAI reveals Universe – a universal training ground for computers”. [9]

One of the examples was a benign scenario and the other was a non-benign scenario in which he speculated about possible future AI threats and the harm a deep intelligence could do. [29] Getting machines to learn less well-defined tasks, or ones for which no digital datasets exist, is a future goal that would require a more general form of intelligence, akin to common sense. [11]

OpenAI’s Igor Mordatch argues that competition between agents can create an intelligence “arms race” that can increase an agent’s ability to function, even outside the context of the competition. [9]

Schmidhuber said that the advantages of making medical data available are huge and will be necessary to create “super-human artificial doctors”. [11] In an amiable spar, I was going back and forth with artificial intelligence/VR thought influencer Mark Metry on Instagram through direct messages. [6] Topics include technological unemployment, autonomous vehicles, robotics, machine learning, neuromorphic chips, cognitive computing, and artificial super-intelligence. [31]

Artificial general intelligence (AGI) is where machines can successfully perform any intellectual task that a human can do – sometimes referred to as “strong AI”, or “full AI”. [19] Even people who study AI have a healthy respect for the field’s ultimate goal, artificial general intelligence, or an artificial system that mimics human thought patterns. [18]

For some, the phrase “artificial intelligence” conjures nightmare visions — something out of the ’04 Will Smith flick I, Robot, perhaps, or the ending of Ex Machina — like a boot smashing through the glass of a computer screen to stamp on a human face, forever. [18]

Max Tegmark in his recent book Life 3.0, describes AI as a machine or computer that displays intelligence. [19] This possibility — which has been called the “technological singularity” or “intelligence explosion” — is that AI systems will improve themselves and increase in intelligence at such a rate that we will lose control. [14]

AI can thus simulate human intelligence but never duplicate it. [14]

As various writers (me, Ted Chiang, Charlie Stross) have written, the fear of an AI gone mad evinced by so many boosters of shareholder capitalism is a strong tell that these captains of industry are terrified that the corporate artificial life forms they’ve created and purport to steer are in fact driving them. [32] In the place of traditional education is a class that discusses the ethical and political woes of artificial intelligence – a topic that Musk is deeply concerned about. [33]

Elon Reeve Musk (born June 28, 1971) is a South African-born American entrepreneur and businessman who founded X.com in 1999 (which later became PayPal) and SpaceX in 2002, and who joined Tesla Motors in 2004, a year after its founding. [34] South African entrepreneur Elon Musk is known for leading Tesla Motors and founding SpaceX, which launched a landmark commercial spacecraft in 2012. [34]

There’s nothing wrong with maintaining a healthy sense of a technology’s potentially dark extremes, but many experts contend that the gloomy predictions of a Terminator-like apocalypse levied by people like Elon Musk are not only a few decades premature, but ultimately make too many assumptions about the ways AI could be employed or, for that matter, is already being employed. [17] When Elon Musk says things like the following statement about AI, it’s something to think about. [15] AI may be more a help than a hindrance, and the benefits of its use will undoubtedly grow as systems develop, but just in case guys like Elon Musk are right it never hurts to include an off switch. [17] Elon Musk recently suggested that under some scenarios AI could jeopardise human survival. [19]

On March 23, 2018, Elon Musk deleted the Facebook pages for Tesla and SpaceX in response to the #DeleteFacebook trend in the aftermath of Facebook’s Cambridge Analytica scandal. [16] Elon Musk is the co-founder, CEO and product architect at Tesla Motors, a company dedicated to producing affordable, mass-market electric cars as well as battery products and solar roofs. [34] Elon Musk’s Boring Company has won a bid to connect O’Hare Airport with downtown Chicago via high-speed train, sources told Bloomberg. [35] At 10, around the time his parents divorced, the introverted Elon developed an interest in computers. [34] Google wants to help sift through drone footage and flag elements of import, while the industry surrounding them argues that they’re making targeting systems for the sort of autonomous weapons systems that exist only in Elon Musk’s nightmares. [17]

In August 2013, Elon Musk released a concept for a new form of transportation called the “Hyperloop,” an invention that would foster commuting between major cities while severely cutting travel time. [34] In yet another innovation, in January 2017 Elon Musk suddenly decided he was going to find a way to reduce traffic by devoting resources to boring and building tunnels. [34]

Musk hopes that The Boring Company, which has projects in various stages in Los Angeles, Hawthorne, the East Coast, and Chicago, will be the key to unlocking hyperloop travel, one of Musk’s passions. [16] An abbreviated list of Musk’s best investments includes PayPal Holdings, Inc. (NASDAQ: PYPL), SpaceX, DeepMind (NASDAQ: GOOGL), Tesla Inc. (TSLA), and The Boring Company. [16]

Since two of Musk’s sons are leaving for traditional high schools next year, there’s some concern over what the future of the school looks like, according to Ars. [33] According to Ars, approximately 400 families last year vied for just 12 open spots at Musk’s school. [33]

Did You Know? In April 2017, Musk’s Tesla Motors surpassed General Motors to become the most valuable U.S. car maker. [34]

Founded in October 2015 by Musk, Sam Altman, the president of the Silicon Valley technology incubator Y Combinator, and a group of other PayPal Holdings Inc. alumni, OpenAI is a nonprofit company dedicated to creating what it calls “safe” artificial general intelligence and distributing it “as widely and evenly as possible.” [36] Artificial General Intelligence (AGI) is the holy grail of AI. [37] Artificial general intelligence is a term that refers to software that would have the flexibility to equal or surpass human intellectual abilities across a wide variety of different tasks — much like the androids depicted in science fiction movies. [36] What scientists, philosophers, politicians and anyone with a healthy interest in doomsday really worry about is artificial general intelligence: a computer that thinks for itself. [22] Schaeffer said reinforcement learning is likely to play a part in getting the field closer to artificial general intelligence. [36] An intelligence explosion could occur when we succeed in building Artificial General Intelligence (AGI), whereby a system would be capable of recursive self-improvement, ultimately leading to Artificial Super Intelligence (ASI). [37]

Artificial neural nets prove superior to human neural networks in a number of ways. [21]

An AGI would be able to learn from watching, and ultimately would be able to accomplish a wide variety of tasks, think and rationalize in the same way a human can – possibly even at a super-human intelligence level. [37] Kurzweil (2005) holds that intelligence “is inherently impossible to control,” and that despite any human attempts at taking precautions, intelligent entities by definition “have the cleverness to easily overcome such barriers.” [37] This form of intelligence is called machine learning because, similar to human learning, the dynamic interplay between the machine neuron and its connecting machine synapses allows the machine to store and process information across a network of neurons. [21]

“Let us suppose that the AI is not only clever, but that, as part of the process of improving its own intelligence, it has unhindered access to its own source code: it can rewrite itself to anything it wants itself to be.” [37] During an interview on March 13th at the South by Southwest (SXSW) conference in Austin, Texas, Kurzweil predicted that “the singularity” – basically the moment when carbon- and silicon-based intelligence will exceed humans’ natural intellectual capacity and create a runaway effect – is not very far away (it will occur in the next 12 years). [20] British mathematician and cryptologist I.J. Good first warned of the singularity when he coined the term “intelligence explosion” in his 1965 essay, Speculations Concerning the First Ultraintelligent Machine. [37] “Misidentification with machine intelligence leads to false ethical evaluations of AI’s potentials and threats.” [37] The emergent intelligence explosion could, theoretically, displace us as Earth’s top dog and place machines at the top of the metaphorical food chain. [24]

A few decades after that, though, the intelligence will be strong enough to be a concern. [24]

“Computers can, in theory, emulate human intelligence, and exceed it… Success in creating effective AI could be the biggest event in the history of our civilization.” [24] I find myself perplexed by the moral question of what neural net-based AI, modeled as it is on human intelligence, tells us about what it means to be human. [21]

The Google employee said: “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are.” [20]

Do You Trust This Computer?, a new documentary promoted by Elon Musk, warns that AI is a new life form poised to “wrap its tentacles” around us. [37] Elon Musk, business magnate, inventor, founder of SpaceX and co-founder of Tesla Inc., shares Hawking’s concerns over AI, and even considers it more of a threat than the apparent nuclear capabilities of North Korea. [24] Tesla and SpaceX CEO Elon Musk says humans will have to merge with machines to avoid becoming irrelevant. [20]

One of the most famous, billionaire inventor Elon Musk, put the dangers this way: “Let’s say you create a self-improving AI to pick strawberries and it gets better and better at picking strawberries and picks more and more and it is self-improving, so all it really wants to do is pick strawberries.” [21] This would be a particularly needy super-intelligence, but it’s striking how many of the world’s leading scientific thinkers – Bill Gates, Elon Musk, Tim Berners-Lee and the late Stephen Hawking – all worry that an AI will regard us as annoying bugs. [22] I personally feel that people like Elon Musk and Stephen Hawking have been falsely blamed for pessimism when they’re both quite optimistic people. [23] I agree with Elon Musk and some others on this and don’t understand why some people are not concerned. [24]

The reason Elon Musk talks so much about this is that he thinks much more about the long-term future than your average politician, who is just thinking about the next election cycle. [23] The solution has been lauded by thought leaders including Richard Branson, Elon Musk, Bill Gates, and Mark Zuckerberg; and a 2018 Gallup poll found that 48% of Americans agreed. [37]

RANKED SELECTED SOURCES(37 source documents arranged by frequency of occurrence in the above report)

1. (14) Artificial Intelligence (AI) will be as smart and intelligent as humans by 2029, claims Google’s engineering head – ScrollToday – LifeStyle & Trending Stories

2. (14) Why does Elon Musk care so much about AI and its threat to the world? – Quora

3. (9) The ethics of Artificial Intelligence

4. (8) The Dangers of Artificial Intelligence

5. (8) Why Everything Elon Musk Fears About AI Is Wrong | News & Opinion | PCMag.com

6. (8) Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots – The New York Times

7. (7) Op-ed: What are the ethical possibilities of artificial intelligence? | Deseret News

8. (7) Google billionaire Eric Schmidt: Elon Musk is wrong about AI

9. (7) Elon Musk Biography – Biography

10. (7) Elon Musk isn’t worried about killer robots, he’s worried about the development of unregulated, self-learning Super AI – Technology News, Firstpost

11. (7) OpenAI – Wikipedia

12. (6) Ex-Google chief Eric Schmidt: Elon Musk’s views on AI are ‘exactly wrong’ | ZDNet

13. (6) The truth about artificial intelligence (isn’t all that scary) | SOFREP

14. (6) Experts Disagree: Artificial Intelligence May or May Not Be Creating the World We Want to Live In | Inc.com

15. (5) Fei-Fei Li used Elon Musk to warn Google off promoting Project Maven – Business Insider

16. (5) The “Dangers” Of Artificial Intelligence

17. (5) Elon Musk’s Best Investments (TSLA, PYPL) | Investopedia

18. (5) The Problem Artificial Intelligence Poses for Humans – The Other Journal

19. (5) deep learning – Elon musk’s comment on “non-benign AI scenarios” – Artificial Intelligence Stack Exchange

20. (4) Elon Musk: Free Cash Handouts “Will Be Necessary” If Robots Take Jobs | Light On Conspiracies – Revealing the Agenda

21. (4) Killer robots will only exist if we are stupid enough to let them | Technology | The Guardian

22. (4) MIT fed an AI data from Reddit, and now it only thinks about murder – The Verge

23. (4) Will artificial intelligence bring a new renaissance? — AI Congress London

24. (4) Physicist Max Tegmark on the promise and pitfalls of artificial intelligence | VentureBeat

25. (3) Sean Moncrieff: The dark side of artificial intelligence is doomsday scary

26. (3) Artificial Intelligence Restriction – DZone AI

27. (3) Elon Musk’s Ad Astra School: A Look Inside | Fortune

28. (3) Musk-Backed Bot Conquers E-Gamer Teams in AI Breakthrough | IT Pro

29. (3) Schmidt: Musk ‘doesn’t understand’ AI benefits for job creation, economy – TechRepublic

30. (1) Artificial Intelligence | New York Post

31. (1) Elon Musk’s team of robots learns at a super fast rate and trains by playing 180 years worth of games in a day | South China Morning Post

32. (1) What does Elon Musk have against A.I.? | AlphaStreet

33. (1) Artificial Intelligence: Facebook AI and the extermination of privacy, dissent, humanity

34. (1) OpenAI’s ‘Dota 2’ bots are taking on pro teams

35. (1) Elon Musk The Automated Economy

36. (1) Elon’s Basilisk: why exploitative, egomaniacal rich dudes think AI will destroy humanity / Boing Boing

37. (1) Chicago taps Elon Musk’s technology to connect airport, downtown – Axios