Can it go off roading in the mud through 3 feet of water and dodging rocks to get to my buddy's fishing cabin in the middle of nowhere?
it will probably just hover above the mud/water
Can it go off roading in the mud through 3 feet of water and dodging rocks to get to my buddy's fishing cabin in the middle of nowhere?
you'll always be allowed to do that.
Oh, it can't do that.
Good to know.
The gains in fuel efficiency and traffic flow, too.
I assume over time we will need fewer and smaller roads.
Regarding autonomous vehicles, I have a really hard time believing that ******* drivers in sports cars will willingly give up their keys to get in a vehicle that will travel slower, more safely, and in an orderly fashion.
If anything, these drivers will love autonomous vehicles - they are designed to avoid collisions and will be ripe for cutting off.
I had a chat with a guy who works for a cyber-security firm overseas. He told me that the hackers who ran the Jeep off the road (of course this wasn't an autonomous vehicle, it just had all of the new self-parking and "assisted" driving features) had a hard time targeting only their test vehicle. Meaning it would have been easier to run multiple vehicles off the road than just their own. Security will always be an issue.
The other interesting hurdle to jump over when it comes to commercially-available self-driving vehicles is the ethical questions that come up when programming them.
For example, how does your car prioritize human life?
When it's a person behind the wheel, generally you're going to place the greatest amount of importance on saving your own life, particularly when it comes to split-second decisions/reactions. But what is your self-driving car going to be programmed to do if it's faced with a situation where a collision is unavoidable, and it can either:
A) Make the maneuver most likely to save the life of the passenger inside the car, but kill multiple people outside the car (whether they be pedestrians or passengers of other vehicles)
B) Make the maneuver most likely to save the largest number of lives, but kill the person inside the car
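To make the dilemma concrete, here's a minimal sketch of what "programming" that choice actually means. Everything here is hypothetical and illustrative (the maneuver names, the casualty numbers, the `passenger_weight` knob); no manufacturer publishes logic like this, but some tunable weighting of inside-the-car vs. outside-the-car lives is the shape of the decision:

```python
from dataclasses import dataclass

# Hypothetical sketch: names and numbers are made up for illustration,
# not any manufacturer's actual decision logic.

@dataclass
class Maneuver:
    name: str
    expected_deaths_inside: float   # expected fatalities among the car's occupants
    expected_deaths_outside: float  # expected fatalities among pedestrians/other cars

def choose_maneuver(options, passenger_weight=1.0):
    """Pick the maneuver with the lowest weighted expected fatalities.

    passenger_weight > 1 prioritizes the occupant (option A above);
    passenger_weight == 1 treats all lives equally (option B above).
    """
    def cost(m):
        return passenger_weight * m.expected_deaths_inside + m.expected_deaths_outside
    return min(options, key=cost)

options = [
    Maneuver("swerve into crowd", expected_deaths_inside=0.0, expected_deaths_outside=3.0),
    Maneuver("brake into barrier", expected_deaths_inside=1.0, expected_deaths_outside=0.0),
]

print(choose_maneuver(options, passenger_weight=1.0).name)  # brake into barrier
print(choose_maneuver(options, passenger_weight=5.0).name)  # swerve into crowd
```

The point of the sketch is that the ethical question can't be dodged: someone has to pick a value for that weight, and either choice is a policy decision baked into the product.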
Those drivers will be uninsurable. Insurance companies are already floating policy rate reductions for drivers willing to have monitoring devices installed temporarily to measure how quickly you accelerate, how quickly you brake, your average speeds on different types of roads, etc. Feed the data through an algorithm and, if you're a safe driver, get a discount. How long until insurance companies won't touch you without that "temporary" install? How long after that until they require you to provide the data on a full-time basis?
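A rough sketch of the kind of telematics scoring being described. The event definitions, threshold, and multipliers are all invented for illustration; real insurers use proprietary models, but "events per mile through an algorithm, then price accordingly" is the basic mechanism:

```python
# Hypothetical telematics scoring sketch: thresholds and multipliers
# are made up, not any insurer's actual pricing model.

def risk_score(trips):
    """Score a driver from logged trips.

    Each trip is a dict with counts of hard-acceleration and hard-braking
    events plus miles driven; fewer events per mile means a lower score.
    """
    events = sum(t["hard_accels"] + t["hard_brakes"] for t in trips)
    miles = sum(t["miles"] for t in trips)
    return events / miles if miles else float("inf")  # events per mile

def premium_multiplier(score, discount_threshold=0.05):
    """Drivers below the threshold get a discount; others pay a surcharge."""
    return 0.85 if score <= discount_threshold else 1.25

trips = [
    {"hard_accels": 1, "hard_brakes": 0, "miles": 120.0},
    {"hard_accels": 0, "hard_brakes": 2, "miles": 80.0},
]
score = risk_score(trips)            # 3 events / 200 miles = 0.015
print(premium_multiplier(score))     # 0.85, i.e. a 15% discount
```

Once a model like this is feeding pricing, the "large pool of safe drivers subsidizing bad ones" disappears, which is the point being made below.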
I really believe that the people arguing on the other side of this simply don't understand how insurance works, and how autonomous vehicles will change that landscape. There simply won't be a large pool of "safe" drivers (or rather, drivers who aren't running into shit) available to offset the costs of bad drivers for these insurance companies. Accident rates and dollars spent on claims are lagging indicators for the insurance industry; they're already trying to transition to using leading-indicator data to determine which drivers are risks not worth insuring.
Security will become a selling feature on autonomous vehicles the same way safety is today. Look at the damage a few safety incidents did to the Toyota brand a few years ago. Cyber security will become a similar issue in the automotive world within the next 10-20 years. It's too damaging to multi-billion-dollar brand images to have some kid with a laptop able to run cars off the road for a joke whenever he wants to.
Not going to happen. You know why? Poor people, that's why. A poor family has their older but well-maintained car and can't afford a new twenty-thousand-dollar AI car. These are the families that jump from one $2,000 car to the next. Insurance rates start rising on them through no fault of their own or their driving, just because they can't afford a new AI car. These people will vote for whichever party cashes in on the issue and promises they can get affordable insurance.
Unless you figure out a way for everyone to be able to afford that AI car, it's not happening.
You are overestimating how fast this will come to pass: 2050, with 2030 as the point where most cars being manufactured are autonomous.
I've addressed this mess already, man. You're just wrong here. You're working under the assumption that things will be one way on Tuesday and then, boom, change on Wednesday. Adoption will occur over a generation of vehicles, and costs for non-adopters will rise the further into that adoption cycle we get. You're literally making the same argument someone would have made in favour of horses on the streets of Manhattan 115 years ago.
In 1905 there were 150,000 working horses on the streets of NYC. By 1922, there were a couple thousand. This will be no different. New autonomous vehicles will become used ones; they will be bought secondhand by people further down the economic ladder; those will get sold again a few years later; and before you know it 99% of the vehicles on the road will be autonomous, with a few rich ****s, able to afford the significant cost of human-driving insurance (likely in tracked vehicles, with limited insurance-company liability for driving outside their safe-driving parameters), rolling around in their 2035 AMG/M/exotic whatevers.
You are overestimating how fast this will come to pass: 2050, with 2030 as the point where most cars being manufactured are autonomous.
And again, very much grandfathered in. No government is going to make it so the urban poor have to sell their human-controlled cars because they can't afford insurance or a new car.
Isn't mass transit already a better investment?