Tesla bankruptcy?

Picked up one of those just two weeks ago :slight_smile:

If the reason you ended up with a QX60 is that your Model X crashed on Autopilot, then it is relevant. If not …

What’s the problem?

GM is selling Bolts as fast as it makes them, though? There is currently a huge backlog for Bolts in Europe, but GM doesn’t care since it no longer owns Opel.

Exactly. GM has lost the plot on the Bolt. Also, it seems Musk is back in ludicrous hubris mode. He is now fighting the NTSB? Regulatory hell!!!

PR Hell
Legal Hell
Financial Hell
High Blood Pressure Hell
Narcolepsy Hell
Regulatory Hell

This guy has a lot of “hells”

Some would say Musk is Satan himself!!???

I do agree that Tesla is not being responsible about its Autopilot mode either. When I test drove the X, the salesperson spent so much time on the AP mode, telling me how good it is and how we could pay more attention to the baby and let the car drive itself. The way they pitch it sounds really convincing. The fact is, a lot of cars out there have similar capabilities, but none are pitched the way Tesla pitches Autopilot. When they do this enough, people may just believe it is safe enough.

This is all CYA from Tesla.

Either you can operate safely in autonomous mode or you can’t. You don’t get to have it both ways.

The spin from Tesla won’t change anything.

My school of thought is: the salesman is your enemy; never believe anything he says. Perhaps that’s why I can’t get a good deal on a car. :cry:

Car crashes will always happen, regardless of whether cars are completely autonomous or driven by people, so the fact that a Tesla (or any autonomous vehicle) accident occurred is not surprising. What is more interesting is whether, on average, autonomous driving leads to a lower incident rate than human drivers (I believe stats are being cited by both sides right now). My sense is that eventually, the algorithms governing these cars will become so much more sophisticated that they will far outperform human drivers and save countless lives.

What I find really fascinating and complex is how you program ethics into an algorithm. Here is a thought experiment:

An autonomous vehicle is driving along and predicts an accident is about to occur. There are 10 pedestrians walking along the sidewalk. The vehicle has two options in this scenario: 1) swerve into the group of people, likely killing or injuring many of them, yet saving and protecting the driver, or 2) swerve into a light pole, with a high likelihood of killing or seriously injuring the driver, although saving the lives of the pedestrians.

On the surface, it would feel like #2 is the right altruistic answer; however, as a consumer deciding between two autonomous cars, if you learn one car is programmed to protect you and one is programmed to protect the greater good… which would you actually buy, knowing your family may be in the car you choose (assuming all other features are equal)?

There are more crazy ethical questions they are trying to figure out in all this… pretty wild stuff.
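
To make the dilemma concrete, here is a toy sketch in Python. Everything in it is hypothetical: the `Outcome` type, the `choose_maneuver` function, the `occupant_weight` parameter, and the harm numbers are all made up for illustration, and nothing here resembles a real AV planner. It just shows how the “protect the occupant vs. protect the greater good” choice can collapse into a single tunable number:

```python
from dataclasses import dataclass

# Purely hypothetical toy model of the thought experiment above.
@dataclass
class Outcome:
    label: str
    pedestrian_harm: float  # expected harm to pedestrians (arbitrary units)
    occupant_harm: float    # expected harm to the car's occupants

def choose_maneuver(outcomes, occupant_weight):
    """Pick the maneuver with the lowest weighted expected harm.

    occupant_weight == 1.0 treats everyone equally ("greater good");
    occupant_weight >> 1.0 biases toward protecting the buyer's family.
    A toy utilitarian scoring rule, not how any real vehicle works.
    """
    def score(o):
        return o.pedestrian_harm + occupant_weight * o.occupant_harm

    return min(outcomes, key=score)

options = [
    Outcome("swerve into pedestrians", pedestrian_harm=8.0, occupant_harm=0.1),
    Outcome("swerve into light pole", pedestrian_harm=0.0, occupant_harm=0.9),
]

print(choose_maneuver(options, occupant_weight=1.0).label)    # swerve into light pole
print(choose_maneuver(options, occupant_weight=100.0).label)  # swerve into pedestrians
```

The unsettling part is that someone at the manufacturer has to pick the value of `occupant_weight`, and that one constant encodes the car’s entire ethical position.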

It’s been discussed somewhere a long time ago; I don’t recall where exactly.

Wow, this is a totally different ball game.

For me, I will only buy the concept of the autonomous car if and only if these cars get their own motorways to operate on, so there is NO human behavior mixed into what the AI has to handle. Humans are born to behave unexpectedly. No AI that I can foresee in the near future can handle human-induced corner cases. But that is my very own conservative opinion.

Someone needs to ask Wilton Knight how he programmed his algorithm. KITT was designed to protect human life, and that was back in 1982.

You’d think Elon could master it almost 40 years later
