One difficulty for human drivers is overdriving their headlights: going faster than they can see to stop. Most drivers seem to grasp this at some point, and in fog it becomes quite obvious when everyone starts hitting the brakes and slowing down.
There are situations in which conditions change fast and violently. Perhaps you are driving when a sudden downpour floods the road and you cannot see at all. If you fail to slow down, it may be you, and the passengers in the vehicle with you, who pay the final price for that fatal mistake.
Now we can design autonomous cars to recognize when they are in trouble: when their visual sensors are overloaded and the computer system cannot process information quickly enough to remain safe. New software algorithms are being written to make this possible, so that when the car knows it is confused, it tells the driver to take control.
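The article does not describe how such an algorithm works, but the idea of a car "knowing it is confused" can be sketched as a simple self-monitoring loop: track perception confidence and processing latency, and request a human takeover when either signal stays bad for several consecutive frames. Everything here, including the class names, fields, and thresholds, is an illustrative assumption, not any manufacturer's actual system.

```python
from dataclasses import dataclass


@dataclass
class SensorFrame:
    """One processed perception cycle (hypothetical, for illustration)."""
    confidence: float  # 0.0-1.0: how sure perception is about the scene
    latency_ms: float  # how long this frame took to process


class ConfusionMonitor:
    """Flags when the car should ask the driver to take control.

    A single bad frame is ignored as noise; only a streak of
    consecutive bad frames triggers the handover request.
    """

    def __init__(self, min_confidence=0.6, max_latency_ms=100.0, window=3):
        self.min_confidence = min_confidence  # below this, perception is "confused"
        self.max_latency_ms = max_latency_ms  # above this, compute is falling behind
        self.window = window                  # consecutive bad frames before alerting
        self._bad_streak = 0

    def update(self, frame: SensorFrame) -> bool:
        """Return True when the driver should be told to take over."""
        bad = (frame.confidence < self.min_confidence
               or frame.latency_ms > self.max_latency_ms)
        self._bad_streak = self._bad_streak + 1 if bad else 0
        return self._bad_streak >= self.window
```

In this sketch, a clear-weather frame resets the streak, while several foggy or slow frames in a row trip the alert, roughly the computerized version of a driver noticing they can no longer see far enough to stop.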
This issue was raised in an interesting Stanford video posted on YouTube by a professor who worked on the Google autonomous car project. You may want to review the video "5. The future of cars", one in a series of presentations on the future of our cars; watch the first six minutes for his comments on the subject.
Well, what if your car is in autonomous mode while you play Solitaire on your iPad or send text messages to your friends? Suddenly the car's computer system gets into trouble, lagging behind and unable to process its sensor data quickly enough, and it warns you to take control. That is good, because you don't want your car to drive into something, and you need to take over before it is too late. The question is: how will the car's autonomous computer system know when a problem has arisen?
This is a good question. When it comes to people, neurologists have found they can detect that a person is about to make a mistake before they make it. Perhaps you have heard the term "brain fart"; what we are talking about here is the computer version of that, a mistake made by an autonomous car. We know this can happen, and it has happened in tests. Afterwards the algorithms are adjusted and, thanks to AI learning strategies, are hopefully better the next time, along with every other autonomous vehicle running the same software.
Is it enough that autonomous vehicles make fewer mistakes than people do when driving? If so, how many fewer, one might ask, when dealing with a computer-driven car? We seem to forgive some driver error when people drive cars, but will we do the same for computers? Will we tolerate anything short of absolute perfection, and is perfection even possible, at all times, without exception?
And what happens when your car gets into trouble and tells you to take over, but you are busy doing something else, and by the time you realize you should take over, the car has already crashed? That was one question the Stanford researchers asked, and it is a question we must answer. I would say that all the important questions I mentioned need to be answered before we have self-driving cars carrying us wherever we go. Please consider all of this and think it over.
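The distracted-driver problem above has a commonly discussed mitigation, assumed here rather than taken from the article: if the driver does not respond to the takeover alert within a deadline, the car falls back to a minimum-risk maneuver such as slowing and pulling over. A minimal sketch, with hypothetical names and timings:

```python
import time


class TakeoverRequest:
    """Escalation policy for a takeover alert (illustrative sketch only)."""

    def __init__(self, response_deadline_s=8.0):
        self.response_deadline_s = response_deadline_s  # assumed reaction budget
        self.requested_at = None  # monotonic timestamp of the alert, if active

    def request(self, now=None):
        """Alert the driver (chime, lights, etc.) and start the timer."""
        self.requested_at = time.monotonic() if now is None else now

    def check(self, driver_hands_on_wheel, now=None):
        """Return the action the vehicle should take right now."""
        if self.requested_at is None:
            return "autonomous"  # no problem detected, keep driving
        if driver_hands_on_wheel:
            self.requested_at = None
            return "manual"  # driver has taken control
        now = time.monotonic() if now is None else now
        if now - self.requested_at > self.response_deadline_s:
            return "minimum_risk_maneuver"  # driver never responded: pull over
        return "alerting"  # keep warning the driver
```

The design choice worth noting is that the car never simply crashes while waiting: if the deadline passes with no response, it degrades to the safest action it can perform on its own.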