Tesla bankruptcy?

Yup, the Falcon 9, Falcon Heavy, the self-landing boosters, the Model S, Model X, PayPal, the Boring Company, all cons. The man has no credibility whatsoever. Run along now and get back to your Uber shift, you dolt.

2 Likes

Speaking of Uber, isn’t it a con as well?

We’re learning a lot in this thread.

Any attempt at new technology is a scam.
Any girl who dates a rich guy is a whore.
Any guy trying to innovate is a charlatan.

3 Likes

And any minor thing that happens at a company is worthy of national news and simultaneously causes bro1999 to cream himself

2 Likes

I hear the men’s room at the Tesla plant is out of toilet paper.

Better drop your Model 3 reservations and short the stock. :wink:

3 Likes

Literally every Tesla crash is going to be investigated now, it seems.

1 Like

That’s exactly how I feel about autopilot

1 Like

I can only read the first paragraph unless I pay. Pfft.

Even if you don’t own a Tesla, you are, or might soon become, part of the company’s massive experiment in automotive safety.
There are already more than 200,000 Teslas on the road, and all of them built after early 2015 are capable of Autopilot, that is, semiautonomous driving. This makes drivers, and anyone encountering these cars on the road, guinea pigs who are helping to train the artificial intelligence Tesla ultimately hopes to use for a fully autonomous driving system.
During this experiment, at least two people have died in the driver’s seats of Teslas that crashed while Autopilot was engaged, but Chief Executive Elon Musk argues the system continues to improve and, overall, Teslas are safer than they would be without the technology.
Alphabet Inc.’s [US:GOOGL] Waymo and Uber Technologies Inc., among others, are also road testing on public streets. They’re experimenting at much smaller scales, though an Uber autonomous vehicle struck and killed a pedestrian in March. Subsequently, Uber suspended its self-driving-car program. CEO Dara Khosrowshahi has said it will resume within a few months.
These experiments are based on a number of assumptions about the abilities of AI, and the compatibility of humans and partially autonomous driving systems. If automobile companies are wrong about any of them—and there’s reason to believe they are—we’ll almost certainly see more self-driving car accidents, as semiautonomous technology becomes commonplace.
That isn’t to say we shouldn’t be on this path. Every year, some 40,000 people die in the U.S. in traffic-related accidents—a situation made worse by distracted driving.
We have established methods for responsibly rolling out life-saving new technologies before—think of clinical trials for new drugs, or seatbelts and airbags—and we can do it again. But it might mean pumping the brakes on the rollout of self-driving cars.

Tesla’s dangerous game

When engaged, Autopilot keeps the car within its lane, can automatically change lanes, and maintains a safe distance from cars ahead and behind. When it senses a dangerous situation, it alerts the driver, whether or not Autopilot is engaged. Sometimes, however, it’s up to the driver to realize the Autopilot system isn’t doing what it should.
Tesla says that its cars with autonomous driving technology are 3.7 times safer than the average American vehicle. It’s true that Teslas are among the safest cars on the road, but it isn’t clear how much of this safety is due to the driving habits of its enthusiast owners (for now, those who can afford Teslas) or other factors, such as build quality or the cars’ crash avoidance technology, rather than Autopilot.
In the wake of a fatal 2016 crash, which happened when Autopilot was engaged, Tesla cited a report by the National Highway Traffic Safety Administration as evidence that Autopilot mode makes Teslas 40% safer. NHTSA recently clarified that the report was based on Tesla’s own unaudited data and that the agency didn’t take into account whether Autopilot was engaged. Complicating things further, Tesla rolled out an auto-braking safety feature—which almost certainly reduced crashes—shortly before it launched Autopilot.
There isn’t enough data to independently verify that self-driving vehicles cause fewer accidents than human-driven ones. A Rand Corp. study concluded that traffic fatalities already occur at such relatively low rates—on the order of 1 per 100 million miles traveled—that determining whether self-driving cars are safer than humans could take decades.
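To make the scale of that problem concrete, here is a minimal back-of-envelope sketch in Python. It is not the Rand study’s actual model; it simply assumes fatalities arrive as a Poisson process at the roughly 1-per-100-million-miles baseline cited above, and the test-fleet exposure figure is a hypothetical assumption chosen only to illustrate the orders of magnitude involved.

```python
import math
from statistics import NormalDist

# Illustrative back-of-envelope sketch, assuming fatalities follow a Poisson
# process at a constant rate per mile. The baseline comes from the roughly
# 1-per-100-million-miles figure cited above; the test-fleet exposure is a
# hypothetical assumption, not a number from the Rand study or the article.

HUMAN_RATE = 1.0 / 100_000_000     # fatalities per mile (human baseline)
FLEET_MILES_PER_YEAR = 20_000_000  # hypothetical: ~100 cars driven around the clock

def miles_to_show_not_worse(alpha: float = 0.05) -> float:
    """Fatality-free miles needed before the exact one-sided Poisson
    (1 - alpha) upper bound on the fleet's rate drops below the human
    baseline. With zero events over N miles, that bound is -ln(alpha) / N."""
    return -math.log(alpha) / HUMAN_RATE

def miles_to_detect_improvement(rel_reduction: float = 0.20,
                                alpha: float = 0.05,
                                power: float = 0.80) -> float:
    """Approximate miles needed to detect a given relative reduction in the
    fatality rate versus the known human baseline (one-sided test, normal
    approximation to the Poisson count)."""
    z_a = NormalDist().inv_cdf(1.0 - alpha)
    z_b = NormalDist().inv_cdf(power)
    lam0 = HUMAN_RATE
    lam1 = (1.0 - rel_reduction) * lam0
    return (z_a * math.sqrt(lam0) + z_b * math.sqrt(lam1)) ** 2 / (lam0 - lam1) ** 2

if __name__ == "__main__":
    n1 = miles_to_show_not_worse()
    n2 = miles_to_detect_improvement()
    print(f"Miles to show 'no worse than a human' (zero fatalities): {n1:,.0f}")
    print(f"  ~{n1 / FLEET_MILES_PER_YEAR:.0f} years for the hypothetical fleet")
    print(f"Miles to detect a 20% improvement: {n2:,.0f}")
    print(f"  ~{n2 / FLEET_MILES_PER_YEAR:,.0f} years for the hypothetical fleet")
```

Even under these assumptions, merely showing a fleet is no worse than human drivers takes on the order of 300 million fatality-free miles, roughly 15 years for the hypothetical fleet, and demonstrating a modest 20% improvement requires tens of billions of miles, which is why the timescale stretches to decades or longer.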
What we do have is evidence—acknowledged in Tesla’s own user manuals—that Tesla’s semiautonomous driving system is easily fooled by bright sunlight, faded lane markings, seams in the road, etc. Researchers continue to document other ways to trick these systems, as well.
Tesla emphasizes its system is driver-assist technology, not full autonomy, and blamed the driver in the most recent crash that occurred when the system was engaged. Yet Tesla drivers and news reports suggest that in some cases, the only thing keeping drivers from getting into Autopilot-related accidents is their own reflexes.
The company promised a cross-country drive accomplished entirely by its self-driving tech sometime in 2017 but decided the system wasn’t yet ready.

AI’s limitations

None of this surprises experts who understand the AI at the heart of autonomous driving systems. Deep learning—the “intelligent” component of these systems—is “brittle, opaque and shallow,” says Gary Marcus, a professor of psychology and neural science at New York University and the former head of Uber’s AI lab.
AI is brittle because it can’t carry over insights from one context to another, opaque because humans can’t evaluate its neuron-like tangle of connections, and shallow because it’s easy to fool. You can’t just throw more deep learning at a problem and expect it to be as good as a human, says Dr. Marcus.
Decades of research on autopilot systems—whether in airplanes or automobiles—have shown that the most dangerous kind is that which requires the driver to take action when it fails. Less sophisticated semiautonomous driving systems, like adaptive cruise control and enhanced warnings, have been shown to increase safety. Full automation, where ultimately there’s no steering wheel or gas pedal, has only begun to be road tested.
Alphabet’s Waymo decided it was too dangerous to let drivers take control when needed, and skipped right to a fully self-driving ride-share service, Waymo CEO John Krafcik has said. According to the company, and many who research self-driving technology, a system that never asks a driver to take over is safer than making potentially tricky machine-human handoffs.
Tesla promised to release safety data on its self-driving tech regularly starting next quarter. It isn’t clear what kind of data it will release, but experts say public sharing of data, from all makers of autonomous vehicles, is the only way to ensure proper evaluation of the safety of these new technologies. Given that we already evaluate the safety of every other part of a motor vehicle in this way, it just makes sense.

2 Likes

Why do you keep making these overblown claims?

Gasoline-fueled vehicles are much more dangerous when the fuel system is compromised.

You could well have less than a minute to exit the vehicle if a (now even-higher-pressure-with-direct-injection) fuel line under the hood leaks and sprays gasoline on a hot engine/exhaust.

Contrast the above with when road debris punctured a Tesla battery - the system warned the battery pack was compromised and it finally started to burn well after the driver had pulled over and exited the vehicle.

I don’t think the site owners would like paywalled content being posted here. It could easily become a problem for them.

Well, they will let me know if that’s the problem.

Another Tesla executive bails out of the sinking mothership.

I actually laughed out loud at the “going to these degenerate celebrities’ parties” bit.
I want to go to those.
Especially if they are degenerate.
I don’t know what qualifies a party as degenerate, but I want to find out.

Apparently, the NTSB, NHTSA and the IIHS see things differently, as there are more and more “bombs” driving on the road every day.

He may talk out of his ass a lot to build hype, and he never meets a deadline he sets, but I’d hardly call him a con artist. The 200k Teslas on the road today, along with his various other endeavors, also make this a silly argument.

Another day, another guinea pig… er… crash. It hasn’t been confirmed whether Autopilot was in use or not.

The NTSB is gonna need to do some hiring.

Who needs more sensors for safety?! Not Elon. $$$ >>>> safety!

Misleading people to obtain a goal is the definition of conning.

He’s not misleading anyone… he misses targets, yes, but he’s producing.

As an engineer, I can tell you this NEVER happens ANYWHERE with ANYTHING. Execs and the people counting the money pushing engineering to reduce costs or eliminate parts? Wow, never ever happens.