Google’s autonomous car gets a ‘B’ in driving test: Not great, but better than most of us

Self-driving cars have come under increased scrutiny in the past week, as newly uncovered documents show that a 2012 road test for one of Google’s self-driving cars resulted in a pass, though hardly with flying colors. The data in question come from government documents obtained under Freedom of Information laws, and show that on its Nevada driving test Google’s car had its share of small problems, and that it was never exposed to some difficult situations such as railroad crossings, roundabouts, and school zones. There is also some question as to whether Google was unfairly involved in designing the test; the Google team reportedly set the car’s route beforehand and specifically avoided troubling weather conditions. Additionally, at several trouble points the car decided it was incapable of proceeding safely and turned control over to its human occupants, and the irony of that limitation was simply too good for most media outlets to pass up.

Still, this latest “exposé” is not nearly as damning as some are framing it to be. To me, the interesting thing about self-driving cars is not the amount of trust that people are willing to put in self-driven vehicles, but rather the amount of trust that they are willing to put in human-driven vehicles. Most people’s reaction to driving algorithms involves questions such as, “What if there was a bug?” or, “What if you got hacked?” Such questions are best answered with a counter-question: What if your taxi driver had a seizure? What if your bus driver panicked in an unexpected situation? What if the trucker coming from the other direction simply fell asleep at the wheel?

Bear in mind that some maddeningly large portion of human drivers also don’t know how to deal with roundabouts, rail crossings, and similar situations, but they have too much ego and self-interest to admit this fact and avoid a particular intersection. We license these people to drive because, a) the economy must continue to function even if most people are uncoordinated and easily distracted, and b) we understand that licensing someone to drive is about judging whether they are good enough to drive, not whether they are flawless. Every tiny mistake, from a missed shoulder check to an improper turning angle, could easily result in a death, so the point is not whether a driver could hypothetically make a fatal mistake, but how likely such a mistake is to occur.

Autopilot in planes, while less complex, has improved the safety of air travel.

Additionally, most drivers can’t be usefully trained in how to react to things like aquaplaning or brake failure, and even when they have been trained they’ll often panic and take the wrong action. Self-driving vehicles, by contrast, can be given vicarious training at the level of a population: new research on how to handle ice can be distributed to, and perfectly internalized by, every auto-car on the road, regardless of age. I can’t even get my grandpa to yield to buses! There seems to be something deeply welded into the human psyche, an impulse to be less fearful of dangers we understand. I can understand and empathize with my grandpa’s crusty stubbornness with regard to transit vehicles, and thus his dangerous driving is less distressing to me than the exact same behavior unconsciously executed by some faceless software construct.
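To make that fleet-learning idea concrete, here is a minimal, hypothetical sketch (invented for illustration, not a description of Google’s actual system; all names and parameters are assumptions) of how a single validated behavior update could be versioned and applied uniformly across every vehicle in a fleet:

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorPolicy:
    """A versioned driving behavior, e.g. how a car should respond to ice."""
    name: str
    version: int
    parameters: dict

@dataclass
class Vehicle:
    vehicle_id: str
    policies: dict = field(default_factory=dict)

    def apply_update(self, policy: BehaviorPolicy) -> None:
        current = self.policies.get(policy.name)
        # Only move forward: a vehicle never regresses to an older version.
        if current is None or policy.version > current.version:
            self.policies[policy.name] = policy

# One validated update propagates identically to the whole fleet;
# unlike human drivers, no car "forgets" the lesson or opts out of it.
fleet = [Vehicle(f"car-{i}") for i in range(3)]
ice_update = BehaviorPolicy(name="ice_handling", version=2,
                            parameters={"max_speed_kph": 40, "brake_gain": 0.6})
for car in fleet:
    car.apply_update(ice_update)

assert all(car.policies["ice_handling"].version == 2 for car in fleet)
```

The point of the sketch is the asymmetry with human training: one improvement, once validated, reaches every member of the population at once and cannot be unlearned.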

Read: Google has built a Matrix-like simulation of California to test its self-driving cars

When people point to the early-stage limitations of self-driving software as an attack on its chance of success, they are also making a second, more strident statement: that self-driving vehicles don’t just have problems, but that those problems are in fact more dangerous than the problems with human drivers. I don’t have to cite a glut of horrifying driving statistics to point out how absurd such an idea is, do I? The extreme fallibility (and physical limitations) of human drivers is in fact pushing self-driving technology forward, as industry sees a chance to reduce liability; if you don’t trust the public-safety motivations of government overseers, then trust the profit incentives pushing corporations like Walmart away from accident-prone mammalian car-pilots.

A self-driving Prius much like the one that took the Nevada road test.

Imperfection in a self-driving system is fixable — a self-driving mistake that leads to a fatality can be used to prevent all such mistakes from happening again in the future. As such, bugs in software ought to distress us far less than similar or identical bugs in human ability. The safety of a road with even one human driver is dictated by the worst moment of the worst human driver in the area, while the safety of a totally self-driven roadway is dictated by the pinnacle of human mastery of software and multi-variable kinetics.
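As a hypothetical illustration of that “fix it once, fix it everywhere” property (again, an invented sketch, not Google’s actual process), the idea can be modeled as a regression suite of driving scenarios that only ever grows, so no recorded failure can recur in a shipped build:

```python
# Hypothetical sketch: every recorded failure becomes a permanent test
# scenario that each future software build must pass before deployment.

scenarios = []  # grows monotonically; failure cases are never removed

def record_failure(description: str, conditions: dict) -> None:
    """Convert a real-world incident into a permanent regression scenario."""
    scenarios.append({"description": description, "conditions": conditions})

def passes_all_scenarios(drive) -> bool:
    """A build ships only if it safely handles every past failure."""
    return all(drive(s["conditions"]) for s in scenarios)

# An incident on black ice joins the suite forever.
record_failure("loss of traction on black ice",
               {"surface": "ice", "visibility_m": 50})

def candidate_build(conditions: dict) -> bool:
    # Stand-in for a simulated drive of the new software; True means safe.
    handled_surfaces = {"dry", "wet", "ice"}  # the fix added "ice"
    return conditions.get("surface") in handled_surfaces

print(passes_all_scenarios(candidate_build))  # True once the fix lands
```

No equivalent mechanism exists for human drivers: one person’s fatal mistake teaches, at best, the people who hear about it.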

This test shows not that self-driving cars are as bad as a middling driver, but that they are as good as one. That’s better than any highway-driving population on Earth could ever hope to collectively deliver. Remember: it’s not about the car being better than you. It’s about you being worse than the car.

Author: Graham Templeton
