Is AI Really Good Enough to Drive Cars?

CONTENTS:


  • You really can't expect someone to be watching out continuously when they are under the belief that the AI is generally able to drive the car.(More...)
  • The driver of the car was then signaled to drive up the ramp and go park the car someplace.(More...)
  • "The problem with poorly maintained roads is not only that they're harder to navigate," he asserted in a recent Wired article, "Self-Driving Cars Won't Work Until We Change Our Roads," "but that computers and humans are no longer able to accurately anticipate where others will drive, thus reducing predictability."(More...)
  • The most important thing is to get the AI portion working better and have the AI start to think more like a person behind the wheel of a car.(More...)


  • Suppose the human that attaches the sensors to the back of the hitched item does a lousy job and those sensors are now misreporting data to the AI system.(More...)
  • People have been driving cars for over one hundred years, and you don't need to be a rocket scientist to know that the overwhelming majority of drivers have survived.(More...)
  • Its latest pitch around transparency is coming at a time when some of its more critical use cases for AI are being seriously questioned--just recently, the company released a set of AI principles prohibiting Googlers from using AI in technologies that could violate human rights or cause "overall harm."(More...)




You really can't expect someone to be watching out continuously when they are under the belief that the AI is generally able to drive the car. [1] There's also the aspect that having AI that can fully drive a car without human intervention is a much more challenging technical problem, since you presumably need to make the AI so good that it can drive like humans can. [1] Many AI developers tell me that they consider towing as already solved, since in their book if a car can drive normally, it can tow whatever it needs to tow. [2] I assumed the same falsehoods as these AI developers, namely that you put the car in drive and away you go. [2] Do you believe that the self-driving car can drive the car without human intervention? For those of you that are AI developers and know about AI self-driving cars, you'd for sure be saying that of course you know that the Autopilot doesn't truly drive the car without any human intervention. [1] If this isn't done, you can bet that humans will revolt against AI self-driving cars and they'll (rightfully) insist on taking them off the public roadways until the AI truly knows how to drive. [3] The core goal right now for AI self-driving cars is to have a self-driving car that can drive in a normal fashion, being able to stay within its lanes, make lane changes, stop at stop signs, and so on. [2]

It is contended that most people would think that the Tesla Autopilot is a true Level 5 self-driving car, which is the level at which an AI self-driving car is driven by the AI and there is no human assistance needed. [1] If you are following a slow-driving, human-driven car, the AI can anticipate that it is likely that the human driver is going to take very cautious actions. [3] The human driver begins to become over-reliant on the AI driving the car. [1]

By labeling the surrounding cars that the sensors detect, and then tracking those cars throughout the virtual world model, the AI can characterize how those cars are driving and what their dominant driving style consists of, whether M, S, or F. [3] We might also consider that it could be the AI system that's responsible, or maybe the AI maker, or perhaps an insurance firm that is insuring the self-driving car, or maybe the human occupants (even if not driving the vehicle). [1] Some AI developers have said that there's no need to do this kind of tracking because once there are all AI self-driving cars on the roadways, and no human-driven cars, there won't be the conventional M, S, or F anymore. [3] We believe it's important for the AI of the self-driving car to be able to detect the driving styles of other drivers. [3] For those AI developers that say it's up to the human drivers to cope with the AI self-driving cars, I'd say that's both narrow thinking and, even worse, thinking that's going to get a lot of people killed. [3] A fast human driver is bound to ram into the back of a slow AI self-driving car, due to the fast human driver speeding along and getting caught off-guard by the AI self-driving car that's going slowly. [3] Right now, you could say that the AI self-driving cars are nearly all of a slow driving style, with some having occasions of fast and medium driving styles mixed in. [3]
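The style-tracking idea described above can be sketched in code. This is a minimal illustration, assuming hypothetical speed-ratio cutoffs and a rolling observation window; the class names, thresholds, and the speed-based heuristic are assumptions for illustration, not from any real self-driving stack.

```python
from collections import deque
from statistics import mean

# Hypothetical cutoffs: fraction of the posted speed limit separating
# slow (S), medium (M), and fast (F) driving styles.
SLOW_CUTOFF = 0.85
FAST_CUTOFF = 1.05

class TrackedCar:
    """Rolling history of one detected car in the virtual world model."""
    def __init__(self, car_id, window=50):
        self.car_id = car_id
        # each entry is observed speed divided by the posted limit
        self.speed_ratios = deque(maxlen=window)

    def observe(self, speed_mph, limit_mph):
        """Record one sensor observation of this car's speed."""
        self.speed_ratios.append(speed_mph / limit_mph)

    def dominant_style(self):
        """Classify the car's dominant style as 'S', 'M', or 'F'."""
        if not self.speed_ratios:
            return None
        avg = mean(self.speed_ratios)
        if avg < SLOW_CUTOFF:
            return "S"
        if avg > FAST_CUTOFF:
            return "F"
        return "M"

# A car repeatedly observed above a 65 mph limit gets labeled fast.
car = TrackedCar("hypothetical-car-1")
for speed in (72, 75, 78):
    car.observe(speed, limit_mph=65)
print(car.dominant_style())  # -> F
```

Because the history is a bounded window, the label continually updates to reflect what the other car actually does, rather than freezing on a first impression.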

The model or framework for an AI self-driving car consists of the self-driving car using its sensors to detect other cars; it then does sensor fusion to bring together the sensory data into a cohesive whole, updates its virtual world model as to what the surrounding traffic situation consists of, creates action plans as to what to do next, and then provides commands to the car controls accordingly. [3] The AI just acts in a myopic fashion toward whatever car happens to be in front of it. [3] Dr. Lance Eliot, CEO, Techbrium Inc., is a regular contributor as our AI Trends Insider, where he has recently authored over 150 articles and has published 9 books on the future of driverless cars. [3] When you have a self-driving car that has a co-shared responsibility, you can always lean on the human to make up the slack for whatever you could not get the AI to be able to do. [1] We cannot assume that all human drivers are going to be willing to adjust their driving habits to whatever AI self-driving cars are put onto the roadways. [3] For quite a long time we're going to have a mix of human drivers with our AI self-driving cars, so don't be holding your breath that we're going to have all and only AI self-driving cars on the roadways anytime soon. [2] For a true self-driving car, which I consider a Level 5, which is a self-driving car that is supposed to be driven entirely by the AI and not require any human driver, we're likely going to be interacting verbally with the AI, using Natural Language Processing (NLP). [2] As AI self-driving cars become more pervasive, I am sure that people will start to grumble that they want to use their self-driving car when they head to the hills, or the lake, or when they are moving. [2] During the formulation of action plans, the AI then can use that aspect to predict what the other cars around the self-driving car might do next. [3]
At the Cybernetic AI Self-Driving Car Institute, we are asking the same kinds of questions about AI self-driving cars, and trying to find ways to deal with the similar problems being encountered. [1] At the Cybernetic Self-Driving Car Institute, we are developing AI software that enables a self-driving car to be M or S or F, able to shift its driving style as needed, and also able to detect the style of other cars around it. [3] Towing with an AI self-driving car is something that people are going to assume they can do. [2] Right now, most of the existing AI self-driving cars are relatively slow drivers and you can properly suggest they should be classified as embodying an S style. [3] On the right, we'll stack what the AI self-driving car can actually do. [1] The AI needs to realize that the methods of making lane changes that it has when the self-driving car is without a hitch are not quite the same with a hitched item. [2] The AI self-driving car will likely want to communicate electronically with other nearby self-driving cars to let those self-driving cars know that it is towing something and might need extra room. [2] It will be a little tricky because the question of how the extra sensors tie into the rest of the AI self-driving car needs to be determined, and also what the trustworthiness of those extra sensors will be. [2] The moment that your self-driving car starts to get up into higher speeds, there's less time for the AI to react and a greater chance of the self-driving car getting into untoward situations. [3] If the car ahead goes fast, it doesn't have any significance to the AI, other than the now-allowed space ahead of the AI self-driving car in case the self-driving car wants to speed up. [3] You might contend it is solely and completely the human driver that is to be held responsible for the actions of the AI self-driving car. [1]
Suppose you put a human driver into a self-driving car, you tell them they are ultimately responsible for the actions of the car, but then when an accident is about to happen the AI hands over the car controls to the human with only a split second left to go. [1] AI self-driving cars are a serious innovation and I hope it's not going to get undermined, perhaps inadvertently in the rush to get it onto our streets, and yet in the end produce untoward results that will kill the golden goose for all of us, society included. [1] The AI might be able to detect that something is wrong via the sensors at the rear of the AI self-driving car, and then go into a mode of perhaps bringing the towing to a safe stop. Or, the occupants might alert the AI, via spoken commands, and ask the AI to safely pull over for an inspection. [2] Today's AI self-driving cars act as though the other cars around them are transitory transactions. [3] We instead take the approach that the other cars around the AI self-driving car are creating a relationship with the self-driving car. [3] Not on the surface, yet the NIPS conference is still oversold, corporate PR still has AI all over its press releases, Elon Musk still keeps promising self-driving cars, and Google keeps pushing Andrew Ng's line that AI is bigger than electricity. [4] The answer that an AI self-driving car can't do towing won't be satisfying. [2] How would the AI self-driving car even know that it is towing something? The AI isn't a person. [2] At the Cybernetic Self-Driving Car Institute, we are developing AI systems for self-driving cars, including having the AI be savvy enough to handle doing towing. [2] There are some that feel any regulation that looks over the shoulder of self-driving car makers is going to stunt the growth and pace of advance for AI self-driving cars, and so there is resistance toward taking action against these makers. [1]
On the left, we'll stack the suggested and overt claims made by an auto maker or tech firm about what the AI self-driving car can do. [1] This is what is going to keep happening with today's AI self-driving cars. [1] You might already be aware that Google's Waymo has been keen to develop an AI self-driving car and has tended towards achieving a Level 5. [1] For now, the AI of the self-driving car mainly comes into play only once the journey itself starts. [2] I suppose we might have robots that can do this for us, but I'll bet that we'll have AI self-driving cars sooner than we have robots that will do so. [2] Imagine that you bought this expensive AI self-driving car, and you later discover that you can't tow anything with it. [2] The early adopters will probably accept this idea, and say that you can't expect the world of an AI self-driving car. [2] The AI self-driving cars will communicate with each other via V2V (vehicle-to-vehicle communication). [3] Forcing or tricking an AI self-driving car into doing towing, if it doesn't realize what's taking place, I'd say is a formula for disaster. [2] By far the biggest pin puncturing the AI bubble was the accident in which an Uber self-driving car killed a pedestrian in Arizona. [4] We have 200+ million conventional cars today in the United States alone, and those aren't going to suddenly all become AI self-driving cars overnight. [3] The AI notices the driving nature of the other cars and builds up a track record about how those other cars are driving. [3] In addition to detecting the driving styles of other cars, the AI is also programmed to be able to undertake a driving style as warranted for a given circumstance. [3] This continues over and over, and the AI is able to continually update the tracking to reflect what the other cars actually do. [3]
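The sense-fuse-model-plan-act framework described above can be sketched as a toy loop. Every name here, and the simple distance-based planning rule, is an illustrative assumption rather than any production system's design.

```python
# Toy sketch of the cycle: sensors detect -> sensor fusion -> virtual
# world model update -> action plan -> commands to the car controls.

def fuse(readings):
    """Merge per-sensor readings (object id -> distance in meters)
    into one cohesive scene, keeping the closest estimate per object."""
    merged = {}
    for reading in readings:
        for obj_id, distance in reading.items():
            merged[obj_id] = min(distance, merged.get(obj_id, float("inf")))
    return merged

class WorldModel:
    """Virtual world model: last known distance to each tracked object."""
    def __init__(self):
        self.objects = {}

    def update(self, scene):
        self.objects.update(scene)

def make_action_plan(model, safe_gap=30.0):
    """Plan the next action: brake if anything tracked in the model
    is closer than safe_gap meters, otherwise keep cruising."""
    if any(dist < safe_gap for dist in model.objects.values()):
        return "brake"
    return "cruise"

def drive_cycle(sensor_readings, model):
    """One pass of the loop: fuse readings, update the model, plan."""
    scene = fuse(sensor_readings)
    model.update(scene)
    return make_action_plan(model)

# Two sensors both report a car ahead; the fused estimate (25 m) is
# inside the safe gap, so the planned command is to brake.
model = WorldModel()
print(drive_cycle([{"car_ahead": 25.0}, {"car_ahead": 28.0}], model))  # -> brake
```

Because the model persists across cycles, repeated passes accumulate a track record of surrounding objects instead of treating each detection as a transitory transaction.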

As mentioned earlier, a slow driver will not always necessarily drive slowly, and so the AI needs to be careful not to create a myopic prediction. [3] "Most of last year was spent understanding this realm of ethics and AI and really educating ourselves, and I feel that 2018 has really become the year of doing, the year of moving beyond virtue signaling." [5]

The driver of the car was then signaled to drive up the ramp and go park the car someplace. [2] Assuming that you are doing a trip like the one of taking items on a journey, you'll need to drive the car, with the now-hitched trailer vehicle, to wherever you need to load it. [2] About halfway through loading up the storage compartment, a police car drives up to us and tells us we have to move immediately. [2] Looking at last year's California DMV disengagement reports, Nvidia-equipped cars could not drive ten miles without a disengagement. [4] There are so few self-driving cars on the roadways that if you do encounter one, it's a novelty and you are glad to let it drive however it wants. [3] There are some that drive fast because others around them are doing so, and likewise some that drive slowly because other cars around them are doing so. [3]

No longer will those cameras provide an accurate depiction of what's really behind the self-driving car. [2] In essence, the above statement should be interpreted as: "We currently don't have the technology that could safely drive us coast to coast, though we could have faked it if we really wanted to (maybe)." [4]

"The problem with poorly maintained roads is not only that they're harder to navigate," he asserted in a recent Wired article, "Self-Driving Cars Won't Work Until We Change Our Roads," "but that computers and humans are no longer able to accurately anticipate where others will drive, thus reducing predictability." [6] I predict it will be like that with AI: each fatality caused by a self-driving car will cut the number of VCs likely to invest in AI by half. [7] In the long run, this is quite possible, since the percentage of AI-featured cars is rising fast. [8] In the future, AI will shorten your commute even further via self-driving cars that result in up to 90% fewer accidents, more efficient ride sharing to reduce the number of cars on the road by up to 75%, and smart traffic lights that reduced wait times by 40% and overall travel time by 26% in a pilot study. [9] In the future, no self-driving car will ever be safer than the minimum viable AI necessary to get to market. [10] What can super-weak AI offer? Instead of performing all calculations itself, a car using super-weak AI can rely on data from other cars and nearby infrastructure. [8]

AI has the capability to alter how we drive our cars, it will help automate our factories, it will both create and kill jobs, it can be used in warfare, and it's already being used to help improve our healthcare. [11]

You know, if you're making an action film, maybe you're having to drive a car off a cliff. [12] I can't wait for a time when they're really common, because I just want to sit back and read books while the car does all the work. [12]

What's your take on self-driving cars? It'll take a while before people are comfortable with having their hands off the steering wheel and letting the computer drive. [12]

"The business sector that is going to be most disrupted by computer vision and AI in the short term is transportation, so companies like Uber, taxi companies and the entire car and automotive industry will completely change in the next few years." [13] We were told these cars were using Artificial Intelligence (AI). [14] If the AI thinks a car is a cat, that doesn't matter very much so long as it knows not to hit it. [15] To get a sense of the incredible possibilities associated with AI, one need only listen to Sebastian Thrun -- the founder of Google X and Google's self-driving cars project, and the current CEO of Udacity -- who provides a highly optimistic appraisal of the endless possibilities for good that can come from advances in AI. [16] One exciting yet controversial application of AI concerns its use in self-driving cars. [16] In a review of the most recent news, Andy and Dave discuss the latest information on the fatal self-driving Uber accident, the AI community reacts (poorly) to Nature's announcement of a new closed-access section on machine learning, on-demand self-driving cars will be coming soon to north Dallas, and the Chinese government is adding AI to high school curriculum with a mandated textbook. [17]

That it can do all kinds of things that human brains aren't fast enough to do! But in the end, AI is actually really dumb. [14] I really believe to be able to bring AI to billions of people you need to really understand content and people. [13] A 3-year-old can do things like pick up a dish from the sink and put it in the dishwasher and communicate with people and manipulate people and navigate a complex room without falling down or running into the furniture -- all kinds of things that AI really stinks at currently. [18]

Full autonomy is "really a software limitation: The hardware exists to create full autonomy, so it's really about developing advanced, narrow AI for the car to operate on." [19]

That's one potential bad outcome - a passenger who may not be allowed to order a car because they are moving in drunken ways, but they aren't really drunk at all. [14] The problem with the way the mapping is done is that you have, say, one of these cars with these expensive sensors, and basically you drive around the world, you have your data, and then there is some labeling process where you basically say where are the roads, where are the lanes, where are the possible places where you can park, etc. Okay, that means you have very small coverage, because this is at the vehicle level and is very expensive. [13] If you cannot recognize the curve of the road, the car is going to drive onto the sidewalk, which endangers the human pedestrian. [13] We're a company building full-stack software for autonomous driving, which includes understanding perception, the study of auto-dynamic objects, as well as the ability to make decisions and train the car how to drive. [13] There haven't been any tests under standard driving circumstances in the U.S. Cars are not allowed to drive in full auto mode. [20] If you think about what is the major difference 30 years ahead from now, one of the biggest things is probably all the cars will be ready to drive by themselves. [13] A pedestrian viewed at low resolution is still probably recognizable, but if you want to drive your car safely, you need to recognize some more subtle detail. [13] We now have commercialized cars that can drive for substantial amounts of time entirely autonomously. [15] It can converse, drive cars, beat video games, even paint pictures and detect some types of cancer. [21]

I think that it really could be that other developers of self-driving cars have better designed systems that would avoid such a collision. [20] The latest greatest news, I guess, as of May 1st 2017, I'm also heading a new lab of Uber ATG in Toronto, so self-driving cars are in Canada now and that's really, really exciting. [13] Sophisticated AI tools can also spot complex patterns of fraud-like groups of connected people filing similar claims, perhaps with overlapping networks of doctors or lawyers, about injuries from deliberately staged car accidents. [22] Can we teach the AI with millions of labelled natural photos (e.g., cars, faces, animals, buildings) and then use the acquired knowledge on histopathology images? Other potential remedies are to inject domain knowledge into deep networks, training "generative" models that do not directly deal with classification, and combining deep solutions with conventional algorithms and handcrafted features. [22] Self-driving cars will steal my job never; why would a software-writing AI be given a street-legal automobile as a freakin' case?! I'm thinking the AI that steals my job will fit in a standard 19" form factor server rack. [23] Abundant applications: The truly game-changing applications of AI are not necessarily in smart speakers or self-driving cars. [24]

You know, to me, I think the issue of AI has always been, not necessarily to make entities that will have exactly the human experience--although of course, that's the stuff that Hollywood movies and TV series have been made of--but really to get behavior out of them that is similar to what an intelligent human being will show in those scenarios. [25] Yes! That again is a really interesting, loaded question that people talk about, especially people outside of technical AI, and this sort of presupposes that human experience is somehow sort of, "one size fits all," and universal. [25] AI today is really good at improving the efficiency of repetitive tasks and working within a set of predefined rules. [26] It is only when the culture towards digital changes, everyone's views are aligned and AI becomes a board-level agenda that AI in business can really hope to succeed. [22]

"In fact, I believe that as AI advances, there will be a new class of marketers whose sole responsibility will be to drive this AI machinery, understand and take advantage of AI algorithms, and strategically point to the right data and goals which in turn will spark the integration between data and marketing, and ultimately, bring them closer together." [22] Naimat believes there is no risk in marketers becoming too dependent on AI. "Marketers will still need to drive AI tools that will help them do their jobs better and at scale," he said. [22]

The car attempted to drive full speed under the trailer, "with the bottom of the trailer impacting the windshield of the Model S." [19] What if cars weren't mass produced? Local Motors, a small-batch auto manufacturer, relies on an online design community and a "co-creation" business model to bring new vehicles to market, really fast. [27] This is a really great use I think of self-driving car tech, because it gives residents who may have trouble driving as they age a convenient and safe taxi they can use 24x7. [23]

Devin Greene sits in the front seat of an Uber driverless car during a test drive in San Francisco on Dec. 13. [27] Autopark drives the car into a parking spot, while Summon drives it out. [19] Google's Waymo self-driving car project employs the same technique, but these vehicles drive on virtual roads. [26] Road conditions in Indian cities are enough to drive any self-driving car algorithm insane. [24]

The most important thing is to get the AI portion working better and have the AI start to think more like a person behind the wheel of a car. [28] There is so much potential for what we can do with this vehicle, and so much we can train an AI to do with self-guiding a car, so this project will be going on for a while. [28] From SIRI to self-driving cars, artificial intelligence (AI) is progressing rapidly. [29]

"I really want to make clear that the reason why it works is that we've chosen very specific tasks. It's not a general-purpose AI, but it's very good at doing these narrow and specific things." [30] We talked about games earlier and you pointed out that they were closed environments and that's really a place with explicit rules, a place that an AI can excel, and I'll add to that, there's a clear-cut idea of what winning looks like, and what a point is. [31] Now our AI system finds: "Oh, there is an anomalous weather condition and there is an uptick in selling that coat, you better do something to seize that opportunity to sell more coats," so either you have to send more inventory to that region to make sure that if somebody really wants a coat, you're not out of stock. [31] Sure, AI is advancing but it is still really, really far from getting to the point where we should be scared. [32]

First of all, I don't think it will totally allow it, because for it to really take hold you have to have a majority of cars on the road be autonomous. [31] Autonomous cars, I would love to have one, that requires artificial intelligence, and I hate driving, I hate the fact that I have to drive for 30 minutes to an hour every day, and waste a lot of time, my cognitive time, thinking about the road. [31] Then there's a supervised mode, and then, "maybe the system is good enough where you can sit back and let the car drive itself," Huffman said. [30] Our first look at the car showed that it had broken a drive train, and there were little marks in the drywall where it crashed. [28]


Suppose the human that attaches the sensors to the back of the hitched item does a lousy job and those sensors are now misreporting data to the AI system. [2] She envisages the tool having wide application and utility across different industries and markets, suggesting early adopters are likely those in the most heavily regulated industries such as financial services and healthcare, where "AI can have a lot of potential but has a very large human impact". [5] This is one aspect that not many are considering, namely, the human driver needs to have a theory of mind about the AI driving capability. [1] The human driver tends to believe that if an accident hasn't yet happened while the AI is driving, it probably implies there won't be an accident. [1] I've had some AI developers from some of the auto makers tell me that those human drivers going faster than the speed limit are wrong and driving illegally. [3] As I predicted, the place where the cracks in AI are most visible is autonomous driving, an actual application of the technology in the real world. [4] Chowdhury says it's principally been tested on models that use classification to group people for the purposes of building AI models, so it may not be suitable for other types. (Though she says their next step will be to test it for "other kinds of commonly used models".) [5] The tool, which uses statistical methods to assess AI models, is focused on one type of AI bias problem that's "quantifiable and measurable". [5] The "AI fairness tool", as it's being described, is one piece of a wider package the consultancy firm has recently started offering its customers around transparency and ethics for machine learning deployments, while still pushing businesses to adopt and deploy AI. (So the intent, at least, can be summed up as: 'Move fast and don't break things'.) [5] If the AI detects a fast driving style, it can anticipate that the human driver will take more risky actions. [3]
We need to develop AI systems to be able to cope with human drivers. [3] We need to face reality and have the AI adjust to the human drivers. [3] When you put the AI system and the human driver into a co-shared responsibility situation, they each need to "know" what the other is supposed to do. [1]

Deep learning has been at the forefront of the so-called AI revolution for years now, and many people believed that it would take us to the world of the technological singularity. [4]

During towing, the AI should be monitoring the status of the brakes and the state of the tires, using that to adjust how the driving is coming along. [2] When you have the AI doing the driving, you tend to allow mental distractions to take over more of your mental processing. [1] When you are sharing the driving with the AI, you are likely to take your feet off the pedals, and your hands off the wheel. [1] The AI would also then switch into the mode of undertaking the driving elements I've mentioned earlier. [2] Right now, few of the human drivers have a sufficient understanding of the theory of mind about what the AI can and cannot do. [1] Or, maybe a combination of the auto maker, the AI maker, and the human driver. [1] There's definite wiggle room since the AI could have done something untoward, and thus it raises the question of whether even a human driver that was attentive and ready to take action can still be held responsible. [1] While tech giants may have developed their own internal tools for assessing the neutrality of their AI algorithms (Facebook has one called Fairness Flow, for example), Chowdhury argues that most non-tech companies will not be able to develop their own similarly sophisticated tools for assessing algorithmic bias. [5] One specific drawback is that currently the tool has not been verified as working across different types of AI models. [5] A quirk of AI algorithms is that when models are corrected for unfair bias there can be a reduction in their accuracy. [5] While Chowdhury concedes there is an accuracy cost to correcting bias in an AI model, she says trade-offs can "vary wildly". [5] The AI needs to also be able to deal with situations such as the case of the hitch going awry, which I mentioned happened while we were on the Grapevine. [2]
Yann LeCun warned about overexcitement and an AI winter for a while, and even Geoffrey Hinton, the father of the current outburst of backpropagation, said in an Axios interview that this likely is all a dead end and we need to start over. [4] Predicting the AI winter is like predicting a stock market crash: it's impossible to tell precisely when it will happen, but it's almost certain that it will happen at some point. [4] I say this because many of the AI developers and the auto makers and tech firms are not treating this in a truly practical way. [1] What's going on: Andrew Moore, dean of computer science at elite Carnegie-Mellon University, tells Axios that while current AI displays impressive capability in visualization, speech, and difficult games, it "still contains no magic." [33] In my opinion, signs already show a huge decline in deep learning (and probably in AI in general, as this term has been abused ad nauseam), yet hidden from the majority by an increasingly intense narrative. [4] Another common complaint about Tesla's approach is that they have "cleverly" named their AI system Autopilot. [1] Next week professional services firm Accenture will be launching a new tool to help its customers identify and fix unfair bias in AI algorithms. [5] Why it matters: Moore's remarks align with a growing chorus of doubt in the AI community that current methods can attain what the field calls "artificial general intelligence." [33] Go deeper: In January, Gary Marcus, a New York University professor, ignited a firestorm in the field with a paper that catalogued doubts about machine learning, the most broadly practiced method of AI. Among the leading lights to deride him was Facebook's Yann LeCun. [33]
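The towing-status monitoring mentioned above, watching the brakes and tires and reacting when something goes awry such as the hitch coming loose, could look roughly like this. The sensor fields and thresholds are invented for illustration and do not reflect real vehicle specifications.

```python
# Hypothetical trailer-status check: map one status reading to a
# driving adjustment. Field names and limits are assumptions.

def towing_action(status):
    """Decide whether to continue, slow down, or pull over safely."""
    if status["hitch_secure"] is False:
        return "pull_over"   # hitch going awry: bring towing to a safe stop
    if status["brake_temp_c"] > 250 or status["tire_psi"] < 35:
        return "slow_down"   # overheated brakes or an underinflated tire
    return "continue"        # everything within nominal limits

# Healthy reading: secure hitch, cool brakes, firm tires.
print(towing_action({"hitch_secure": True, "brake_temp_c": 180, "tire_psi": 50}))
# -> continue
```

In practice this check would run continuously alongside the main driving loop, and the occupants could also trigger the pull-over path via spoken commands, as the article describes.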

There are many truck drivers who can attest to the lack of civility of car drivers (humans). [2] It has been pointed out that the web site touting the Tesla Autopilot rather prominently says "Full Self-Driving Hardware on All Cars," and that there is video that begins with this emboldened message shown on the screen (shown in all caps on the video): "THE PERSON IN THE DRIVER'S SEAT IS ONLY THERE FOR LEGAL REASONS. HE IS NOT DOING ANYTHING. THE CAR IS DRIVING ITSELF." [1] The deadly Tesla driving incidents of May 2016 and March 2018 have been assessed so far as occurring as a result of the human driver failing to do their part in terms of driving the car, and for which the NTSB stated in the May 2016 case that the human driver's "pattern of use of the Autopilot system indicated an over-reliance on the automation and a lack of understanding of the system limitations." [1]

The trouble is that, apart from a few daredevils abusing (and sometimes crashing) Teslas, there has been little indication to date that a significant number of the world's drivers want such cars. [33] The perspective of the car driver is apparently that if there is open roadway space, take it. [2] This is akin to having a novice teenage driver at the wheel of my car. [1] All it would take is for that car to get into a predicament and that head-out-the-window driver would likely get injured or killed (or lose their head). [1] Many auto makers and tech firms that are aiming toward Level 5 self-driving cars are making them so that there aren't any driver controls in them at all, at least none for humans to use. [1] I've analyzed this in my exploration of the human back-up drivers that are currently being used in self-driving cars being tested on our public roadways. [1]

It's still up to the human to make sure that their car is rightfully established for doing towing. [2] When you are solely driving a car, you might have some mental distractions like you are thinking about what you are going to wear to that party tonight. [1] When you are solely driving the car, you know that you need to be alert. [1] Most of the time, if you are solely driving a car, you have your foot on the gas or the brake, you have your hands on the wheel, all of this is continuous. [1] There's a fast driver ("F") that is also driving in an ill-advised manner, likely allowing insufficient braking space between them and the car ahead of them. [3] "The way to think about it is that you have a car that can travel 200 miles, and after five years it can go 800 miles," said Venkat Viswanathan, an assistant professor at Carnegie-Mellon University. [33] For a conventional car, I think we can pretty much agree that the human driver is the responsible party. [1] They could also increase the manner in which they warn human drivers about the capabilities of the car. [1] It absolutely requires that a human driver be present, and that the human driver be ready to take over control of the car. [1] Stomping on the accelerator pedal is the preferred mode of driving a car. [3] When "someone else" (something else) is driving the car, you let your guard down. [1] His father would take charge of making sure that the boat hitch was properly connected to the car, and his father also preferred to do all the driving, since he told us that towing something requires special attention and special skills. [2] For those auto makers and tech firms in the middle ground, above the low-tech conventional car and not yet at the true Level 5, it's going to be ugly times for all. 
[1] Some will say that they speed because they can, while others might be more deferential and merely blame the speeding on their car (sir, it's got a V8 engine made to go fast, what was I to do, they lament to a motorcycle cop). [3] What's happening: According to a new Axios/SurveyMonkey poll, a lot of Americans are fearful of autonomous cars, but 33% of them are at least somewhat likely to buy one once they are available. [33] While a solid minority said they might buy one, the majority said they are scared of autonomous cars. [33] That's ostensibly why we are turning to autonomous cars. [33] We turned off the air conditioning in hopes that it would save the car some energy to use toward pulling the monstrosity that was connected to us. [2] Once again, no need therefore to track and ascertain the styles of the surrounding cars. [3] A mishandled car is a much more dangerous machine and potential killer than a scooter. [1] It can be cycled more than 23,000 times, while the typical electric car battery cycles 1,000 times. [33] Two cars could be going the proper speed limit, and yet one might be considered a slow driver and the other a fast driver. [3] Think of talking to the car akin to talking to Alexa or Siri. [2] Ensure that the car's curb weight (the weight of the car when empty, with a full tank of gas), plus the weight of any passengers and cargo, plus the tongue weight of the trailer, does not exceed the gross vehicle weight rating (the total weight bearing down on the car's tires). [2] Those fully compliant stops and long pauses at a stop sign are enough to make you want to honk your horn and maybe even shove the car ahead of you out of the way. [3] Why it matters: To the degree the survey is accurate and reflects a broad global trend, everything from the world's sprawling car industry to roads and cities themselves could be on the cusp of a fundamental transformation. [33]
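The towing weight rule described above is simple arithmetic, and can be sketched in a few lines. This is only an illustration of the stated rule; the function name and all the weight figures below are invented, not drawn from any manufacturer's specification.

```python
# Hypothetical towing weight check, illustrating the rule described above:
# curb weight + passengers + cargo + trailer tongue weight must not exceed
# the gross vehicle weight rating (GVWR). All figures below are made up.

def within_gvwr(curb_weight_lb: float,
                passengers_lb: float,
                cargo_lb: float,
                tongue_weight_lb: float,
                gvwr_lb: float) -> bool:
    """Return True if the fully loaded weight stays within the GVWR."""
    loaded = curb_weight_lb + passengers_lb + cargo_lb + tongue_weight_lb
    return loaded <= gvwr_lb

# Invented example: a 4,000 lb car, 350 lb of passengers, 150 lb of cargo,
# a 400 lb trailer tongue weight, and a 5,000 lb GVWR.
print(within_gvwr(4000, 350, 150, 400, 5000))  # True: 4,900 <= 5,000
```

As the passage notes, this check remains the human's responsibility today; an AI self-driving car would need its own reliable way to know these weights before towing.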

Human drivers that encounter a self-driving car are right now pretty much willing to give the self-driving car some wiggle room and not be bothered by the slow driving aspects. [3] I've predicted many times that once we get a lot of self-driving cars on the roadways, the "polite" human drivers that are right now allowing self-driving cars to do what they want will become a lot less polite. [3] Suppose we have a circumstance of a bunch of human drivers of the fast driving style, and they come into contact with a bunch of self-driving cars of a slow driving style. [3] Of course, it's a bit more serious in the sense that if you have human drivers that do not abide by certain kinds of necessary practices with a self-driving car, the end result can be real life-or-death consequences. [1] When we're in the levels less than 5, it's pretty much the standard definition that the human driver is responsible for the actions of the self-driving car. [1] At Level 5, the true self-driving car, we presumably can say that it is not the responsibility of the human driver, since presumably there will not be a human driver and not even a provision to have one in the self-driving car (all humans will be mere occupants, not drivers). [1] For instance, a self-driving car equipped with causal reasoning could encounter a situation for which it has no data, and instantly adapt, said Pearl. [33] I won't go through each of the above aspects, but can provide you a glimpse at one of the elements, namely the notion of being watchful for other cars that might cut into a roadway opening when the self-driving car is trying to change lanes. [2] In the second example, the slow driver is driving fully in a legal manner, while the fast driver is driving in an illegal manner, which might consist of exceeding the speed limit and making maneuvers that endanger other cars and pedestrians.
[3] The track record can be much longer such as if you are driving on a highway over a greater distance and you are jockeying with other cars that are nearby along the way. [3]

Yet, some would say that the manner in which Tesla is marketing and promoting the car would tend to mislead consumers into believing that it is a true self-driving car. [1] Tesla announced that its fully self-driving cars were very close, even selling that option to customers to be enabled later via a software update. [4]

You might find of interest my analysis of the autopilot notion and self-driving cars, published in July 2017. [1]

By observing a track record, it is possible to then classify the other cars as to whether they are medium in driving style, slow in driving style, or fast in driving style. [3]
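The track-record classification just described could be sketched as a rolling comparison of a nearby car's observed speeds against the prevailing flow of traffic. This is purely an illustrative sketch, not any vendor's actual method; the 5 mph thresholds and the function name are invented for the example.

```python
# Illustrative sketch: classify a nearby car's driving style from a short
# track record of observed speeds, relative to the prevailing traffic flow.
# The +/- 5 mph thresholds are arbitrary, chosen only for demonstration.

def classify_style(observed_speeds_mph: list[float],
                   traffic_flow_mph: float) -> str:
    """Label a car slow, medium, or fast relative to surrounding traffic."""
    if not observed_speeds_mph:
        return "unknown"  # no track record observed yet
    avg = sum(observed_speeds_mph) / len(observed_speeds_mph)
    delta = avg - traffic_flow_mph
    if delta < -5:   # noticeably below the flow of traffic
        return "slow"
    if delta > 5:    # noticeably above the flow of traffic
        return "fast"
    return "medium"

print(classify_style([52, 54, 53], 65))  # slow
print(classify_style([71, 73, 72], 65))  # fast
print(classify_style([64, 66, 65], 65))  # medium
```

Note that the comparison is relative to traffic flow rather than the posted limit, which matches the earlier point that two cars at the speed limit can still read as one slow driver and one fast driver.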

When an accident occurs, such as the March 2018 incident, Tesla pointed out that: "The driver had received several visual and one audible hands-on warning earlier in the drive and the driver's hands were not detected on the wheel for six seconds prior to the collision." [1] For them, they believe that driving slowly is the right way to drive. [3] That last video is from March 2018, several months after the promised coast-to-coast Tesla autonomous drive that did not happen (the rumor is that the company could not get it to work without about 30 disengagements). [4] "We could have done the coast-to-coast drive, but it would have required too much specialized code to effectively game it or make it somewhat brittle, and it would work for one particular route but not the general solution." [4]
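The escalating visual-then-audible warning behavior described in that account can be pictured as a simple timer-driven escalation. The sketch below is hypothetical: the stage names and the 10/20/30-second thresholds are invented for illustration and do not reflect Tesla's actual parameters or logic.

```python
# Hypothetical sketch of hands-on-wheel warning escalation, loosely modeled
# on the behavior described above. The second thresholds are invented and
# do not correspond to any real vehicle's driver-monitoring settings.

def warning_level(seconds_hands_off: float) -> str:
    """Map time with no detected hands on the wheel to a warning stage."""
    if seconds_hands_off < 10:
        return "none"
    if seconds_hands_off < 20:
        return "visual"      # e.g., flash a hands-on message on the display
    if seconds_hands_off < 30:
        return "audible"     # e.g., chime in addition to the message
    return "disengage"       # e.g., slow the car and demand a takeover

print(warning_level(5))    # none
print(warning_level(15))   # visual
print(warning_level(25))   # audible
print(warning_level(40))   # disengage
```

The broader point of the passage stands regardless of the exact thresholds: a driver who believes the AI is generally able to drive may ignore every stage of such an escalation.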

They drive as though they are headed to the hospital and if they don't make it there on time then their spleen will burst. [3] Even more beguiling is that they have no hesitation to drive slowly in the fast lane, though they would prefer to stay out of the fast lane because it is normally filled with brutes. [3] Running a yellow light is of course the only way to properly drive. [3]

There are some that suggest it is like in the movie Casablanca, wherein the French Captain Louis Renault pretends to look the other way and tells the police to round up the usual suspects, when he really knows who done it. [1] Every company is really adopting this idea of agile innovation and development, etc. People are talking a lot about three- to six-month iterative processes. [5] "For many of us, especially those of us who are in this space all the time, we're tired of just talking about it; we want to start building and solving problems, and that's really what inspired this fairness tool." [5] Why should you care that it now possibly is in the way of others? Shocking? Not really. [1]

People have been driving cars for over one hundred years, and you don't need to be a rocket scientist to know that the overwhelming majority of drivers have survived. [10] Waymo tests AVs without safety drivers! Yes, on the most quiet and slow block in Phoenix, with perfect cellular reception such that they could constantly monitor these cars remotely. [7] I think at the moment there's a human fear of relinquishing that much control, especially in a car, which is like a little metal missile that's flying. [12] For now, no car is safer than the human being behind the wheel. [10] "The driver lost control," police reports often say, but few look to the root cause of car crashes. [10]

With this level of AI, the driver can only dream of reading or watching a movie behind the wheel. [8] A few hours after Google announced an eerily human-sounding smart voice assistant -- and set off panicked discussions about where artificial intelligence may be taking us -- I sat down with famed horror writer and director Leigh Whannell to talk about his new sci-fi thriller, which imagines a future where an AI takes over a man's life. [12] AI technology is all around us, including along the road, and as advances and familiarity with autonomous vehicles grow, any number of opportunities to improve our roadways will emerge. [6] Once the data is collected and sent to the cloud, it is analyzed using advanced AI technology. [6] AI autopilots in commercial airliners are a surprisingly early use of AI technology, dating as far back as 1914, depending on how loosely you define autopilot. [9] According to a 2014 SEC filing, the vast majority of major banks rely on technology developed by Mitek, which uses AI and ML to decipher and convert handwriting on checks into text via OCR. [9]

To simplify the discussion, think of AI as the broader goal of autonomous machine intelligence, and machine learning as the specific scientific methods currently in vogue for building AI. All machine learning is AI, but not all AI is machine learning. [9] As for super-weak AI, it can be a breakthrough for many sectors, including autonomous driving. [8] Uber Advanced Technologies occupies a handful of industrial buildings; self-driving startups Argo AI and Aurora Innovation are nearby. [34] Part of what he's describing is the so-called productivity paradox: while big data, automation, and AI should in theory be making businesses more productive, boosting the economy and creating more jobs to offset the ones being lost, this hasn't happened. [34]

RANKED SELECTED SOURCES (40 source documents arranged by frequency of occurrence in the above report)

1. (106) News stories with latest developments how artificial intelligence will make a difference to business -- AI Congress London

2. (65) LDV Blog -- LDV Capital

3. (49) Responsibility and AI Self-Driving Cars - AI Trends

4. (47) Driving Styles and AI Self-Driving Cars - AI Trends

5. (43) Tesla Autopilot - Wikipedia

6. (35) Towing and AI Self-Driving Cars - AI Trends

7. (33) The Ethical Implications of Artificial Intelligence | Law2020 Homepage | Above the Law

8. (32) Episode 52: A Conversation with Rao Kambhampati | Voices in AI

9. (30) Here's How Microsoft Is Investing in AI -- The Motley Fool

10. (28) AI with AI | CNA

11. (25) Horror mastermind Leigh Whannell plays out our AI fears in Upgrade - CNET

12. (25) Op-ed: What are the ethical possibilities of artificial intelligence? | Deseret News

13. (21) Self-Driving Cars Likely Won't Steal Your Job (Until 2040) - Slashdot

14. (21) Elon Musk isn't worried about killer robots, he's worried about the development of unregulated, self-learning Super AI - Technology News, Firstpost

15. (21) self-driving cars : NPR

16. (20) Super-Weak AI: Evolution Dead-End or Future of Autonomous Driving? | Intellias Blog

17. (18) AI winter is well on its way - we are not going to get self-driving cars within 20 years without some major breakthrough : TrueReddit

18. (18) Everyday Examples of Artificial Intelligence and Machine Learning

19. (13) The AI winter is well on its way | VentureBeat

20. (12) Accenture wants to beat unfair AI with a professional toolkit | TechCrunch

21. (11) The Real Benefits of Artificial Intelligence in India | The Forum Network, hosted by the OECD

22. (11) Axios Future - June 3, 2018 - Axios

23. (10) Meet Uber's New Drunk Detection Technology

24. (10) Comments on The Economist explains: Why Uber's self-driving car killed a pedestrian | The Economist

25. (10) From rust belt to robot belt: Turning AI into jobs in the US heartland - MIT Technology Review

26. (8) What Is True Automotive Safety--and Does Anyone Really Care? - The Drive

27. (6) 'Westworld' science adviser shares his vision of robots and the future of AI

28. (6) Easy street: How we can use AI for infrastructure maintenance | GreenBiz

29. (6) AI winter Addendum | Piekniewski's blog

30. (4) AI Thinks Using Deep Neural Networks. How Does It Work? (VIDEO)

31. (4) Gigaom | Voices in AI Episode 47: A Conversation with Ira Cohen

32. (4) Humans and AI could work together to prevent cars from? coming together

33. (3) GalecinoCar: A Java-based self-driving vehicle

34. (3) Google Duplex Gets a Second, More Subdued, Demo | WIRED

35. (3) The researcher behind AI's biggest breakthrough has moved on from Google -- Quartz

36. (3) WordLift is a WordPress Plugin with AI That'll Help You Improve Your SEO

37. (3) Mobacar | Intelligent Mobility Solutions

38. (2) How Netflix Uses AI to Find Your Next Binge-Worthy Show

39. (1) A Summary of Concrete Problems in AI Safety - Future of Life Institute

40. (1) Why does Elon Musk care so much about AI and its threat to the world? - Quora
