Posts with «transport accident» label

Tesla settles lawsuit over fatal Model X crash that killed an Apple engineer

Back in 2019, the family of Apple engineer Wei Lun Huang (aka Walter Huang) sued Tesla a year after he was killed when his Model X crashed into a median in Mountain View while Autopilot was engaged. That case is officially closed, now that the automaker has settled the lawsuit on the very day jury selection was supposed to take place. According to CNBC and The New York Times, Tesla's lawyers asked the court to seal the settlement agreement so that the exact amount the company paid wouldn't be made public. The company didn't want "other potential claimants (or the plaintiffs' bar) [to] perceive the settlement amount as evidence of Tesla's potential liability for losses, which may have a chilling effect on settlement opportunity in subsequent cases."

Tesla confirmed shortly after the accident that Autopilot was switched on at the time of the crash, but it also insisted that Huang had time to react and had an unobstructed view of the divider. In a statement to the press, the company insisted that the driver was at fault and that the only way for the accident to have occurred was if Huang "was not paying attention to the road, despite the car providing multiple warnings to do so." In the lawsuit, Huang's lawyers pointed to Autopilot marketing materials from Tesla suggesting that its cars are safe enough to use on the road without drivers having to keep their hands on the wheel at all times. We took the image above from a video on Tesla's Autopilot page, showing a driver with their hands on their lap. 

The incident was big enough to attract the attention of the National Transportation Safety Board (NTSB), which conducted an investigation and found that Huang had reported the car steering away from the highway on prior trips. In fact, his family said that he used to complain about his car swerving toward the exact barrier he crashed into and had even reported it to the Tesla dealership, which couldn't replicate the issue. The agency also concluded that Tesla's collision warning system didn't alert the driver and that its emergency braking system didn't activate as it should have when the car started making its way toward the barrier.

That said, the NTSB also discovered that Huang was running a mobile game on his phone at the time of the accident, though it couldn't determine whether the phone was in his hands when the crash occurred. The Times said Tesla was preparing to show the court proof that Huang was playing a game when he crashed, which his lawyers denied. Regardless of who was truly at fault, a trial would have drawn renewed attention to the safety of Tesla's driver assistance system. Settling puts an end to the case a few months before the company unveils its robotaxi on August 8.

This article originally appeared on Engadget.

Waymo issued a recall after two robotaxis crashed into the same pickup truck

Last year, two Waymo robotaxis in Phoenix "made contact" with the same pickup truck that was in the midst of being towed, which prompted the Alphabet subsidiary to issue a recall on its vehicles' software. A "recall" in this case meant rolling out a software update after investigating the issue and determining its root cause. 

In a blog post, Waymo revealed that on December 11, 2023, one of its robotaxis collided with a backwards-facing pickup truck being towed ahead of it. The company says the truck was being towed improperly and was angled across a center turn lane and a traffic lane. Apparently, the tow truck didn't pull over after the incident, and another Waymo vehicle came into contact with the same pickup truck a few minutes later. Waymo didn't elaborate on what it meant by saying that its robotaxis "made contact" with the pickup truck, but it did say that the incidents resulted in no injuries and only minor vehicle damage. The self-driving vehicles involved in the collisions weren't carrying any passengers.

After an investigation, Waymo found that its software had incorrectly predicted the future movements of the pickup truck due to "persistent orientation mismatch" between the towed vehicle and the one towing it. The company developed and validated a fix for its software to prevent similar incidents in the future and started deploying the update to its fleet on December 20. 

Waymo's rival company Cruise was involved in a more serious incident last year, wherein one of its robotaxis accidentally dragged someone hit by another vehicle a few dozen feet down a San Francisco street. California then suspended its license to operate in the state, and Cruise eventually paused all robotaxi operations, even the ones with a human driver behind the wheel, as part of a safety review. Meanwhile, it's business as usual for Waymo, which recently announced that it will start testing driverless vehicles on highways and freeways in and around Phoenix. 

GM's Cruise is being investigated by the DoJ and SEC following a pedestrian accident

GM's driverless Cruise division is under investigation by both the Department of Justice (DoJ) and Securities and Exchange Commission (SEC), The Washington Post has reported. The probes follow an incident last year in which a jaywalking pedestrian was struck by a Cruise autonomous vehicle and then dragged 20 feet, worsening her injuries.

At the same time, yesterday Cruise released its own third-party findings regarding the accident, which took place on October 2 and involved another vehicle (a Nissan). The company said it "failed to live up to the justifiable expectations of regulators and the communities we serve... [and] also fell woefully short of our own expectations," adding that it's "fully cooperating" with investigators. Judging by the report's findings, that's an understatement.

According to the report, Cruise withheld crucial information from officials during a briefing the day after the accident. Specifically, the company failed to mention that its autonomous vehicle (AV) had dragged the victim 20 feet at around 7 MPH, causing serious injuries. That happened because the vehicle mistakenly detected a side (rather than a frontal) collision and attempted to pull over rather than stopping.

At least 100 Cruise employees, including members of senior leadership, legal and others, were aware of the dragging incident — but failed to disclose it during October 3 meetings with the San Francisco Mayor's Office, NHTSA, DMV and other officials, the report states.

The company said it intended to let a video of the dragging incident speak for itself, then answer questions about it. However, the video didn't play clearly and fully due to internet connection issues, and Cruise employees failed to verbally affirm the pullover maneuver and the dragging of the pedestrian. As if that weren't bad enough, the third-party findings state:

Cruise leadership was fixated on correcting the inaccurate media narrative that the Cruise AV, not the Nissan, had caused the Accident. This myopic focus led Cruise to convey the information about the Nissan hit-and-run driver having caused the Accident to the media, regulators and other government officials, but to omit other important information about the Accident. Even after obtaining the Full Video, Cruise did not correct the public narrative but continued instead to share incomplete facts and video about the Accident with the media and the public.

The report says the failings came about due to "poor leadership, mistakes in judgment, lack of coordination, an 'us versus them' mentality with regulators, and a fundamental misapprehension of Cruise’s obligations of accountability and transparency to the government and the public." 

Prior to the crash, Cruise was facing other problems, including reports of its autonomous vehicles (AVs) failing to recognize children and questions about how often human operators had to take control. According to former CEO Kyle Vogt, human drivers needed to intervene every four to five miles.

Cruise had its license to operate suspended in California back in October. The company also laid off 24 percent of its workforce late last year, following the resignation of co-founder Daniel Kan and the departure of its CEO Kyle Vogt. On top of the two federal investigations, the company is also facing a lawsuit from the city of San Francisco. 

FAA grounds roughly 171 Boeing 737 Max 9 planes after a cabin panel blew out during flight

The Federal Aviation Administration (FAA) has ordered airlines to temporarily ground some Boeing 737 Max 9 planes for safety inspections after an Alaska Airlines plane lost a cabin panel during a flight on Friday with about 180 people on board. The plane, which had only been in service since November, according to the New York Times, was able to safely land back at Portland International Airport in Oregon, where it had taken off from. There were no major injuries, though the Alaska division of the Association of Flight Attendants said workers described “explosive” decompression in the cabin and reported one flight attendant sustained minor injuries.

“The FAA is requiring immediate inspections of certain Boeing 737 Max 9 planes before they can return to flight,” FAA Administrator Mike Whitaker said. “Safety will continue to drive our decision-making as we assist the NTSB’s investigation into Alaska Airlines Flight 1282.” 

Immediately following the incident, Alaska Airlines CEO Ben Minicucci put out a statement saying the company would be grounding its fleet of 65 Boeing 737-9 aircraft for what it expects to be a few days as it conducts safety checks. “Each aircraft will be returned to service only after completion of full maintenance and safety inspections,” Minicucci said. The FAA order extends the grounding to “approximately 171 airplanes worldwide” that are either operated by US airlines or in US territory.

Minicucci also said that the National Transportation Safety Board is investigating what happened with Flight 1282 and “we will fully support their investigation.” The plane had been on its way to Ontario, California. Reuters, citing FlightRadar24, reported that the blowout occurred at around 16,000 feet. In social media posts shared with Reuters and the NYT, passengers can be seen sitting right next to the gaping hole and the fully exposed sky.

Boeing's 737 Max was previously grounded for almost two years after fatal crashes in 2018 and 2019. All 189 people on board the plane were killed in the 2018 crash in Indonesia, and another 157 died in the 2019 crash in Ethiopia. In 2021, Boeing agreed to pay $2.5 billion in a settlement with the Department of Justice to avoid criminal charges over the crashes.

Waze will now warn you if a road has a history of crashes

Waze's latest feature focuses on safety and will give you the knowledge needed to make an informed choice about the route you're taking. The Google-owned navigation app has launched crash history alerts, which will send you a notification if you're driving along a crash-prone road. Waze will display a prompt that says "history of crashes" in-app before you reach, say, a curve that's particularly tricky to navigate. That way, you can slow down and watch out for anything that could throw your vehicle off course.

The app decides whether to show you a notification based on reports from the Waze community and an AI analysis of your route, such as its traffic levels, its elevation and whether it's a highway or a smaller local road. It will not show you crash alerts for routes you usually take in order to minimize distractions, which suggests that its main purpose is to give you a heads up if you should drive with more caution than usual in places you're not familiar with. 

Over the years, Waze has released several features intended to keep you safe on the route you're planning to take. A few years ago, it started sending out real-time accident data so that you can take an alternate route if needed and first responders can get to accident sites sooner. In 2020, it also rolled out guidance prompts telling you to get in the right spot for an upcoming merge or exit before you reach it.

Tesla's Autopilot was not to blame for fatal 2019 Model 3 crash, jury finds

A California jury has found that Tesla was not at fault for a fatal 2019 crash that allegedly involved its Autopilot system, in the first US trial over a claim that the software directly caused a death. The lawsuit alleged Tesla knowingly shipped out cars with a defective Autopilot system, leading to a crash that killed a Model 3 owner and severely injured two passengers, Reuters reports.

Per the lawsuit, 37-year-old Micah Lee was driving his Tesla Model 3 on a highway outside of Los Angeles at 65 miles per hour when it turned sharply off the road and slammed into a palm tree before catching fire. Lee died in the crash. The company was sued for $400 million plus punitive damages by Lee’s estate and the two surviving victims, including a boy who was 8 years old at the time and was disemboweled in the accident, according to an earlier report from Reuters.

Lawyers for the plaintiffs argued that Tesla sold Lee defective, “experimental” software when he bought a Model 3 in 2019 that was billed to have full self-driving capability. The FSD system was and still is in beta. In his opening statement, their attorney Jonathan Michaels also said that the “excessive steering command is a known issue at Tesla.”

Tesla’s defense argued that there was no such defect, and that an analysis cited by the plaintiffs’ lawyers identifying a steering issue was actually looking for problems that were theoretically possible. A fix to prevent it from ever happening was engineered as a result of that analysis, according to the company. Tesla blamed human error for the crash, pointing to tests that showed Lee had consumed alcohol before getting in the car, and argued that there’s no certainty Autopilot was in use at the time.

The jury ultimately found there was no defect, and Tesla was cleared on Tuesday. Tesla has faced lawsuits over its Autopilot system in the past, but this is the first involving a fatality. It’s scheduled to go on trial for several others in the coming months, and today's ruling is likely to set the tone for those ahead.

A pedestrian was pinned under a Cruise robotaxi after another car’s hit-and-run

A Cruise autonomous vehicle (AV) was reportedly involved in a horrific accident in San Francisco on Monday evening. A pedestrian crossing a street was hit by a car, which sped off. The impact hurled her in front of a Cruise driverless taxi, which stopped on top of her leg as she screamed in pain. According to the San Francisco Chronicle, the woman was still in critical condition at 9:30AM ET on Tuesday.

The pedestrian was reportedly walking in a crosswalk at Market and Fifth in San Francisco when she was hit by a green car, which fled the scene. A witness allegedly told investigators that he watched the first car strike the woman, causing her to roll off its side and into the path of the Cruise car. As the autonomous taxi proceeded through the green light, it ran over her and came to a complete stop, pinning her leg under its rear axle and tire. Cruise says there weren’t any passengers in the AV, which was in autonomous mode.

The SF Chronicle says that it viewed a video recording of the incident provided by Cruise to confirm the sequence of events. The company offered to make the video available to Engadget, but we declined.

A bicycle delivery person reportedly tried to reassure the woman that an ambulance was coming and that it would be okay. “She was just screaming,” the cyclist reportedly told the SF Chronicle. City firefighters arrived and used the jaws of life to lift the car off the woman, who was transported to San Francisco General Hospital with “multiple traumatic injuries,” according to fire captain Justin Schorr. He said the car appeared programmed to stop and turn on its hazard lights after sensing an obstruction (in this case, a human being) beneath it.

“At approximately 9:30 pm on October 2, a human-driven vehicle struck a pedestrian while traveling in the lane immediately to the left of a Cruise AV,” Cruise communications manager Hannah Lindow wrote in a statement to Engadget. “The initial impact was severe and launched the pedestrian directly in front of the AV. The AV then braked aggressively to minimize the impact. The driver of the other vehicle fled the scene, and at the request of the police the AV was kept in place. Our heartfelt concern and focus is the wellbeing of the person who was injured and we are actively working with police to help identify the responsible driver.”

The nightmarish incident occurred as driverless taxis have expanded their reach in the city. Cruise and Waymo got approval from California regulators this year to operate and charge fares for fully autonomous cars in San Francisco at any time of the day. However, the state’s DMV asked the company in August to reduce its fleet of driverless taxis by half, pending an investigation into crashes involving the AVs. Cruise agreed to operate no more than 50 autonomous taxis during the day and no more than 150 of them at night.

California DMV is investigating a Cruise robotaxi's collision with a fire truck

Cruise will temporarily deploy fewer autonomous vehicles in San Francisco while investigators look into "recent concerning incidents" involving its fleet. According to The New York Times and TechCrunch, the California Department of Motor Vehicles asked the company to cut its fleet in half after one of Cruise's robotaxis collided with a fire truck at an intersection. The fire truck had its sirens and red lights on and was responding to an emergency at the time, while the robotaxi had passengers onboard who sustained non-life-threatening injuries. In another, perhaps less controversial, incident a few days before that, a Cruise vehicle got stuck in wet concrete.

The DMV said in a statement that its primary focus is "the safe operation of autonomous vehicles and safety of the public who share the road with these vehicles." It also added that it "reserves the right, following investigation of the facts, to suspend or revoke testing and/or deployment permits" if it determines that a company's vehicles are a threat to public safety. The agency has asked Cruise to limit its driverless vehicles in operation to 50 during daytime and 150 at night, at least until the investigation is done.

In an explanation of the collision posted on the company's website, Cruise's general manager for San Francisco, Greg Dieterich, said the robotaxi identified the emergency vehicle as soon as it came into view. It was also able to pick out the fire truck's siren "as soon as it was distinguishable from the background noise." However, it wasn't possible to see vehicles coming around the corner "until they are physically very close to the intersection" where the incident happened. Further, the autonomous vehicle had trouble predicting the fire truck's path, because it moved into the "oncoming lane of traffic" to bypass a red light. Dieterich said Cruise's AV identified the risk of a collision and braked to reduce its speed, but those conditions meant it couldn't avoid the crash entirely.

The DMV's request comes just a few days after the California Public Utilities Commission (CPUC) voted in favor of allowing both Cruise and Waymo to charge fares for fully driverless rides any time of the day in San Francisco. Before that, Cruise could only offer fared rides with no safety driver onboard in limited areas of the city between 10PM and 6AM. The only commissioner who voted against the companies' paid ride expansion argued that the CPUC didn't have enough information to accurately evaluate the impact of autonomous vehicles on first responders.

Tesla faces fresh safety probe following fatal accident

Regulators with the National Highway Traffic Safety Administration (NHTSA) are opening a probe into a fatal crash involving a Tesla Model Y. In the July 19 accident, a Tesla struck a tractor-trailer truck in Virginia, fatally injuring the Tesla's driver. Regulators believe that the 57-year-old driver was relying on the company’s advanced driver assistance features at the time of the accident, according to a report by Reuters.

The Fauquier County Sheriff's Office provided more details on the accident, saying that the tractor-trailer attempted to turn onto a highway from a truck stop when the Tesla struck its side and slid underneath the trailer. The Tesla driver was pronounced dead at the scene. As for the truck driver, authorities issued a summons for reckless driving.

The summons indicates that authorities blame the truck’s driver for the incident, but Tesla’s assistance features are supposed to account for mistakes by other people on the road, hence the NHTSA investigation. The safety regulator has opened more than three dozen investigations into crashes involving Tesla vehicles and their advanced driver assistance software. All told, the agency suspects the system has been involved in 23 deaths since 2016.

In 2021, the National Transportation Safety Board (NTSB) urged the NHTSA to issue stricter regulations for autonomous driving, stating in its letter that “Tesla is testing on public roads a highly automated AV technology but with limited oversight or reporting requirements.”

Tesla’s proprietary Autopilot technology is intended to steer, accelerate and brake within the vehicle’s lane, while an enhanced system assists with changing lanes on highways. Tesla says the system isn’t truly automated and requires active human supervision. The company hasn’t responded to a request for comment from Reuters regarding this latest accident and the newly opened probe.

Uber safety driver involved in fatal self-driving car crash pleads guilty

The Uber safety driver at the wheel during the first known fatal self-driving car crash involving a pedestrian has pleaded guilty to and been sentenced for an endangerment charge. Rafaela Vasquez will serve three years of probation for her role in the 2018 Tempe, Arizona collision that killed Elaine Herzberg while she was jaywalking at night. The sentence honors the prosecutors' demands and is stiffer than the six months the defense team requested.

The prosecution maintained that Vasquez was ultimately responsible. While an autonomous car was involved, Vasquez was supposed to concentrate on the road and take over if necessary. The modified Volvo XC90 in the crash was operating at Level 3 autonomy and could be hands-free in limited conditions, but it required the driver to take over at a moment's notice. The vehicle detected Herzberg but didn't respond to her presence.

The defense case hinged on partly blaming Uber. Executives at the company thought it was just a matter of time before a crash occurred, according to supposedly leaked conversations. The National Transportation Safety Board's (NTSB) collision findings also noted that Uber had disabled the emergency braking system on the XC90, so the vehicle couldn't come to an abrupt stop.

Tempe police maintained that Vasquez had been watching a show on Hulu and wasn't paying attention during the crash. Defense attorneys have insisted that Vasquez was paying attention and had only been momentarily distracted.

The plea and sentencing could influence how other courts handle similar cases. There's long been a question of liability surrounding mostly driverless cars — is the human responsible for a crash, or is the manufacturer at fault? This outcome suggests humans will still face penalties if they can take control, even if the punishment isn't as stiff as it would be in a conventional crash.

Fatal crashes involving autonomy aren't new. Tesla has been at least partly blamed for collisions that occurred while Full Self-Driving was active. The pedestrian case is unique, though, and looms in the background of more recent Level 4 (fully driverless in limited situations) offerings and tests from Waymo and GM's Cruise. While the technology has evolved since 2018, there are still calls to freeze robotaxi rollouts over fears the machines could pose safety risks.
