Autonomous & Self-Driving Vehicle News: Ushr, Hyperspec AI, SOSLAB, Smart Eye, Quanergy, Eyeris & Continental


In autonomous and self-driving vehicle news are Ushr, Hyperspec AI, Policygenius, SOSLAB, Smart Eye, Quanergy, Eyeris and Continental.

Ushr Inc. has expanded its high-definition map database for General Motors to cover more than 400,000 miles of roads in the United States and Canada, enabling the automaker to double the road network for Super Cruise*, the industry’s first true hands-free driver assistance system.

For this expansion, Ushr utilized precise LiDAR data and proprietary algorithms to map primary roads. These include undivided high-speed roads, such as state highways, where only painted lines separate the directions of travel. Hands-free driving on undivided roads requires precise vehicle localization, and this driving scenario demands a high level of HD map accuracy that only Ushr provides across North America.

“Primary roads are far more complex than highways,” said Dr. David K. Johnson, chief scientist at Ushr. “They contain a higher diversity of road objects, density of crossings, pavement types and lane types.”

LiDAR map data is critical to the safe deployment of advanced driver assistance systems (ADAS) at General Motors, as it is a key component in “sensor fusion,” where precision LiDAR map data, real-time cameras, radars and GPS create a sensory field around the vehicle that assists in keeping it centered in the lane while elevating the driver’s comfort and convenience.
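To make the fusion idea concrete, below is a minimal sketch of how a map-based position estimate and a camera-based lane estimate might be blended to keep a car centered. The weighting scheme, function names and gain are invented for illustration; this is not Ushr’s or GM’s actual implementation.

```python
# Minimal, hypothetical sketch of sensor fusion for lane centering: blend an
# HD-map-based estimate of lateral offset with a camera-based one, then derive
# a proportional steering correction. Weights and gains are invented examples.

def fuse_lateral_offset(map_offset_m: float,
                        camera_offset_m: float,
                        map_weight: float = 0.7) -> float:
    """Weighted blend of two estimates of the car's offset from lane center.

    map_offset_m:    offset implied by GPS position against the HD map (m)
    camera_offset_m: offset measured by the lane-detection camera (m)
    map_weight:      trust placed in the map-based estimate (0..1)
    """
    w = min(max(map_weight, 0.0), 1.0)
    return w * map_offset_m + (1.0 - w) * camera_offset_m


def steering_correction(offset_m: float, gain_rad_per_m: float = 0.1) -> float:
    """Simple proportional controller: steer against the lateral offset."""
    return -gain_rad_per_m * offset_m


fused = fuse_lateral_offset(map_offset_m=0.32, camera_offset_m=0.28)
print(f"fused offset {fused:.2f} m -> correction {steering_correction(fused):+.3f} rad")
```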

“When ADAS is done right, it’s safe for everyone on the road,” said Chris Thibodeau, chief executive officer of Ushr. “For many drivers, this is their introduction to hands-free driving and it’s important to present it to the public in a deliberate way. The industry needs to get it right.”

*Always pay attention while driving and when using Super Cruise. Do not use a hand-held device. Requires active Super Cruise plan or trial. Terms apply. Visit cadillac.com/supercruise for compatible roads and full details.

Hyperspec AI, an artificial intelligence startup, released a new tool for developers working on ADAS-enabled and autonomous vehicles (AVs). The company has developed a unified platform called RoadMentor that allows users to create, train, and deploy machine-learning (ML) models for real-time mapping. Hyperspec integrates the map into the ML training loop so that real-time mapping models can be developed, giving ADAS-enabled and autonomous vehicles the ability to perform outside of the HD map geofence. This expands navigable roads from less than 5% to over 95% for any vehicle, so autonomous systems can learn from that ubiquitous exposure.

The ML development process is fragmented, with no integrated pipeline for data collection, data management, model training, verification and validation, deployment, and fleet learning. Each step requires another data transfer, leading to inefficiency and a lack of true visibility. RoadMentor enables the industry to scale deep learning by consolidating the training loop into one optimized infrastructure designed specifically for autonomous driving.
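As a rough illustration of what such consolidation means in practice, the sketch below models the loop as stages that all read from and write to one shared store, so each iteration feeds fleet data back into training without per-step data transfers. All class and function names are hypothetical; this is not RoadMentor’s actual API.

```python
# Hypothetical sketch of a consolidated training loop: data collection,
# training, validation, deployment and fleet learning all operate on one
# shared store instead of handing data off between separate systems.
# Names and interfaces are invented for illustration only.

from dataclasses import dataclass, field


@dataclass
class DataStore:
    """Single shared store; stages pass references instead of copies."""
    frames: list = field(default_factory=list)

    def ingest(self, batch):
        self.frames.extend(batch)


def train(store: DataStore) -> dict:
    # Placeholder for training a real-time mapping model on the store.
    return {"name": "mapping-net", "trained_on": len(store.frames)}


def validate(model: dict) -> bool:
    # Placeholder acceptance gate (verification & validation) before deploy.
    return model["trained_on"] > 0


def deploy_and_collect(model: dict, store: DataStore):
    # Deployed fleet vehicles surface hard cases back into the same store.
    store.ingest([f"edge-case-from-{model['name']}-{model['trained_on']}"])


store = DataStore()
store.ingest(["frame-0", "frame-1"])        # initial data collection
for _ in range(3):                          # three fleet-learning iterations
    model = train(store)
    if validate(model):
        deploy_and_collect(model, store)
print(f"{len(store.frames)} frames in the store after 3 iterations")
```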

“We wanted to create a product that focuses on our customers’ pain points. RoadMentor streamlines data flow, drastically reducing processing time; standardizes data throughout the cycle; and moves data access and control in-house rather than to a third party,” says Sravan Puttagunta, CEO and co-founder of Hyperspec.

Autonomous driving data is largely skewed toward highway and arterial road domains. The release of RoadMentor increases test coverage from less than 5% to over 95% of all roads, enabling an edge-case library to be built out across the long tail of scenarios. ADAS functionality and autonomous driving usage and coverage can now develop further, paving the way toward Level 3, 4 and 5 autonomy.

“We’re very excited about the progress we have made in recent months,” says Puttagunta. “Right now it’s all about collecting usable test miles and collating good data to improve the system. Solving the long-tail problem is something we have been thinking about for a long time.”

RoadMentor is offered as a freemium SaaS product, so users can process a certain amount of data at no cost. The company invites developers to sign up for exclusive beta access to RoadMentor through its developer program; limited seats are available. Attendees of the International Auto Show Tech Days will be able to learn first-hand how RoadMentor improves the release cadence of autonomous driving technology development.

The Hyperspec team is pushing the boundaries of superhuman perception.

Despite the rise of autonomous vehicle technology, a new survey from insurtech leader Policygenius finds widespread apprehension about self-driving cars.

The survey found a majority of Americans (76%) say they would feel less safe driving or riding in cars with self-driving features. Similarly, 73% of people would feel less safe knowing others on the road are traveling in cars with self-driving features.

Americans are also skeptical about the potential of this technology, with one third (33%) saying that even a car with “full self-driving capability” would require constant attention. Nearly 80% of people say they would not pay more to own a car with self-driving features.

“Whether because of road rage, reckless driving, or car accidents, it’s understandable that many people are wary of taking their eyes off the road and relying on a self-driving car,” Rachael Brennan, a licensed property and casualty insurance expert at Policygenius, said. “As advances in autonomous vehicle technology continue, auto companies and insurance companies will need to resolve a number of challenges, from helping people feel safe on the road to navigating new insurance implications, like who is at fault in an autonomous vehicle incident.”

Other findings from the Policygenius 2022 Self-Driving Cars Survey are available in the full report.

Policygenius commissioned Google Surveys to poll a nationally representative sample of 1,500 adults aged 18 and older. The average margin of error for responses is +/- 6.1%.

SOSLAB, a self-driving technology startup that has won recognition for its global-standard LiDAR technology, announced on the 15th the launch of its next-generation 3D solid-state LiDAR, the ML-X.

ML-X Driving Test: https://youtube.com/watch?v=ZbEhTwk71Cc

SOSLAB’s new 3D solid-state LiDAR, the ML-X, is characterized by its compact size, along with a measuring distance and resolution more than double those of previous products.

SOSLAB refined the angular resolution of the ML-X in each axis, from 0.5° to 0.208° at a 120° FOV. In addition, through the application of an SoC developed exclusively for laser control in the transmission unit, the overall product size and weight have been drastically reduced to 9.5 × 5.0 × 10.2 cm and 860 g, respectively. The new product also maximizes user convenience, as it does not require any additional external module for LiDAR operation.
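As a quick sanity check on those figures, the quoted angular resolutions imply the following point counts across the 120° field of view, computed only from the numbers above:

```python
# Back-of-the-envelope check using only the figures quoted above: horizontal
# measurement points implied by each angular resolution over a 120-degree FOV.

fov_deg = 120.0
old_res_deg, new_res_deg = 0.5, 0.208

old_points = fov_deg / old_res_deg   # 240 points across the FOV
new_points = fov_deg / new_res_deg   # ~577 points across the FOV

print(f"old: {old_points:.0f} points, new: {new_points:.0f} points "
      f"({old_res_deg / new_res_deg:.1f}x finer)")
```

That works out to roughly 2.4 times finer angular sampling in each axis.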

SOSLAB is scheduled to unveil the new ML-X at IAA TRANSPORTATION 2022, to be held in Hannover, Germany, from September 20 to 25. Organized by the German Association of the Automotive Industry, the event provides up-to-date information on global technology trends through the participation of the world’s leading commercial and special vehicle makers, and it draws a rapidly increasing number of visitors each year.

In Korea, SOSLAB plans to display the ML-X at KES 2022, to be held at COEX in Seoul from October 4 to 7, giving visitors an opportunity to experience the new LiDAR product.

In addition to jointly developing LiDAR for mobile robots with Hyundai Motor Group, SOSLAB continues to cooperate with global automotive OEMs and top-tier suppliers by delivering ML-X samples. It has also attracted Series B investment of KRW 19.3 billion (about USD 13.8 million) and is preparing for an initial public offering (IPO), with the goal of a KOSDAQ listing as a special-technology company in the second half of 2023.

Smart Eye, the global leader in Human Insight AI, announced a collaboration with ams OSRAM, a global leader in optical solutions, to deliver a new technology that will allow driver monitoring systems (DMS) and occupant monitoring systems (OMS) to detect driver and passenger status and position more accurately than ever before. The ICARUS: Structured Light Evaluation Kit is a proof of concept that leverages existing architecture inside a vehicle to efficiently and cost-effectively provide high-performance 3D sensing capabilities for DMS and OMS.

Driver monitoring continues to grow in importance as a key factor in helping to create a safer road environment. By monitoring the driver’s actions, technology in the vehicle can determine whether the driver is tired or distracted and send a warning. In addition, it can provide more driving comfort, which in turn often leads to less distraction.

ICARUS combines Smart Eye’s interior sensing software with ams OSRAM’s dot illumination technology to generate a more accurate depth map of the driver using a structured-light method. ams OSRAM’s dot illuminator installs on top of conventional flood illumination mechanisms, and when integrated with Smart Eye’s Human Insight AI-driven software, the DMS can support valuable new high-precision features, including augmented reality head-up display (AR-HUD), secure driver authentication, and advanced body-position sensing that further enhances road safety by delivering inputs to airbag deployment decisions and pre-crash safety measures.
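The depth recovery behind a structured-light system is classic triangulation: a projected dot observed at a pixel offset (disparity) from its expected position yields depth from the projector-camera baseline. The sketch below shows that relationship with invented parameter values; it is not the ICARUS kit’s actual processing pipeline.

```python
# Illustrative triangulation for structured-light depth: Z = f * B / d,
# where f is the focal length in pixels, B the projector-camera baseline in
# meters, and d the observed dot disparity in pixels. Values are examples.

def depth_from_disparity(focal_px: float,
                         baseline_m: float,
                         disparity_px: float) -> float:
    """Depth of a matched dot via similar triangles."""
    if disparity_px <= 0:
        raise ValueError("dot not matched or effectively at infinity")
    return focal_px * baseline_m / disparity_px


# 800 px focal length and a 5 cm baseline: a dot shifted by 50 px sits at
# 0.8 m, roughly a driver's head distance from a dashboard-mounted camera.
print(f"{depth_from_disparity(800.0, 0.05, 50.0):.2f} m")
```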

ICARUS is simple, easy to install, and serves as a low-cost upgrade for existing near-infrared-based DMS hardware; it only requires the addition of an ams OSRAM dot illuminator to function. By building the evaluation kit to be compatible with existing 2D architecture, Smart Eye and ams OSRAM give automotive OEMs and Tier 1 suppliers maximum value without overhauling an entire system.

Quanergy Systems, Inc. (NYSE: QNGY), a leading provider of LiDAR sensors and smart 3D solutions, announced that the company will partner with Fabrinet, a leading provider of advanced precision optical and electronic manufacturing services, for the production of Quanergy’s LiDAR sensors.

Fabrinet is a trusted partner of the world’s most demanding original equipment manufacturers (OEMs). Its proven track record of customer service, flexibility and skill in managing complex operations aligns with Quanergy’s commitment to maintaining high quality and industry standards in manufacturing. With Fabrinet, Quanergy will be able to expand its global manufacturing and scale as demand increases, delivering greater efficiency for customers.

Quanergy’s customers can be confident that their sensors are high quality and built by a trusted, seasoned manufacturer with extensive, relevant certifications, including ISO 9001, IATF 16949 and ISO 14001.

Eyeris Technologies, Inc., a world leader in automotive in-cabin sensing AI, announced today that it will introduce its latest monocular 3D sensing AI software solution at the InCabin event at Autoworld in Brussels on September 15, 2022. Eyeris will hold by-invitation meetings and demonstrations to showcase its latest in-cabin sensing AI solutions, including the new monocular 3D sensing AI models, which predict depth-aware vehicle interior monitoring features in 3D from a single RGBIR image sensor to further improve in-vehicle safety, comfort and convenience.

Eyeris’ Monocular 3D Sensing AI Solution Addresses Increasing In-cabin 3D Sensing Feature Requirements from Carmakers

3D sensing has been slow to gain adoption in car interiors because of well-known high costs, as well as interference and resolution limitations; it has also historically demanded more computation than 2D. Over the last year, however, car OEMs have sharply increased their requirements for in-cabin 3D features rather than 2D ones, demanding the use of a single RGBIR image sensor, rather than stereo cameras, ToF sensors or other hardware, because of its low cost and high image quality.

Eyeris uses proprietary technology that accurately regresses depth information, producing 3D output from 2D image sensors, and this applies to all in-cabin features. It is achieved through rigorous collection of naturalistic in-cabin 3D data used to train compute-efficient depth inference models that run on AI-enabled processors. The data generated can be used to map the interior of a car, for example, and to accurately determine in three dimensions the location of occupants’ faces, bodies and hands, objects and everything else inside the car.
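Eyeris’ models are proprietary, but the general technique described here, regressing a per-pixel depth map from a single image, can be sketched with a toy network trained under the scale-invariant log loss of Eigen et al. (2014), a common choice for monocular depth. Everything below is a generic illustration, not Eyeris’ architecture or data.

```python
# Generic monocular depth regression sketch (not Eyeris' models): a tiny
# encoder-decoder maps a single image to a per-pixel depth map, trained with
# the scale-invariant log loss, which penalizes relative depth errors.

import torch
import torch.nn as nn


class TinyDepthNet(nn.Module):
    """Minimal encoder-decoder from a 3-channel image to a depth map."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Softplus(),
        )

    def forward(self, x):
        return self.decode(self.encode(x))


def scale_invariant_loss(pred, target, eps=1e-6):
    """Eigen et al. (2014): penalize relative, not absolute, depth errors."""
    d = torch.log(pred + eps) - torch.log(target + eps)
    return (d ** 2).mean() - 0.5 * d.mean() ** 2


net = TinyDepthNet()
image = torch.rand(1, 3, 64, 64)           # stand-in for one RGBIR frame
depth_gt = torch.rand(1, 1, 64, 64) + 0.1  # stand-in ground-truth depth map
loss = scale_invariant_loss(net(image), depth_gt)
loss.backward()                            # gradients flow; training would loop
print(f"loss: {loss.item():.4f}")
```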

Eyeris outlined a range of features and benefits of its monocular 3D sensing at the event.

“We’re proud to introduce our monocular 3D technology that optimizes sensing performance beyond 2D, with added depth estimation of the entire in-cabin space,” said Modar Alaoui, founder and chief executive of Eyeris.

“This 3D sensing technology provides automotive OEMs with a host of new features that were never available before, and it further enhances safety and comfort by leveraging additional depth data about the driver, occupants, objects and surfaces using a single 2D RGBIR image sensor. Additionally, this newly introduced technology will help improve the accuracy of the recently mandated driver monitoring and child presence detection features,” added Alaoui.

Eyeris was also nominated by a panel of industry experts as a finalist for the InCabin event’s Most Novel Research award. Highlighting the exceptional innovation work realized by the Eyeris team across industry organizations and academia, the nomination is a testament to the impact and significant technology advancements that Eyeris is contributing to the in-cabin industry globally.

“Congratulations to Eyeris on their nomination as a finalist in the Most Novel Research category. Their work on monocular 3D sensing is impressive, and we are also proud to be working with them as a sponsor of the inaugural InCabin conference taking place this month,” said Robert Stead, managing director of Sense Media Group.

“Spun out of our successful AutoSens series, this launch InCabin event gives a platform to the key technology developers that are making an impact in automotive safety for drivers and occupants. We are proud to play a role in saving lives and in building a community of experts that will drive this technology forward.”

Continental is presenting an innovative sensor solution for commercial vehicles at this year’s IAA TRANSPORTATION in Hanover (September 20 to 25). A significant increase in transport volumes, enormous growth in the equipment rates of trucks with new assistance systems and increasingly complex fleet management require innovative technologies and products for efficient technology integration in the commercial vehicle business. The multi-sensor system, also known as the Continental Sensor Array, can be mounted on the vehicle above the windshield. Continental is thus presenting a compact, integrated solution for the growing number of intelligent and automated driving systems in commercial vehicles. Furthermore, higher levels of automation such as L4 require a larger number of different sensors that must be installed intelligently, quickly and safely. In the comprehensive solution, all integrated sensors (lidar, radar, cameras) for adaptive cruise control, emergency braking and blind-spot assist, as well as for automated driving functions, are pre-calibrated and coordinated with each other. This significantly simplifies the installation of many sensors, their integration into the vehicle architecture and the complex process of calibration. In addition, the effort for maintenance, and thus vehicle downtime, is reduced. Continental is thus helping to make commercial vehicles and logistics companies fit for the mobility of tomorrow.

The market for automated driving is growing substantially, and commercial vehicles play a key role in it. Especially over long distances between logistics centers (“hub to hub”), innovative assistance systems are already contributing to significantly increased safety and will continue to revolutionize the transport business, right up to autonomous trucks on the motorways. Sensors, software and intelligent connectivity concepts are the basis for this. As a leading system expert in radar, lidar and camera solutions for assisted and automated driving, Continental sees the intelligent combination of these different technologies as decisive added value in terms of safety. “With the Continental Sensor Array, we are demonstrating a tailor-made system solution approach for commercial vehicles,” says Claudia Fründt, Head of Market Trucks in the Autonomous Mobility business area at Continental. “We see great potential in automated and ultimately autonomous driving in the commercial vehicle sector, especially in view of the high amount of vehicle downtime. Highly complex sensor systems are necessary for essential, holistic environment perception. Our solution offers uncomplicated assembly and calibration of multiple sensor systems in a single, compact module.”

With the new multi-sensor system, Continental offers everything from a single source when it comes to safety-relevant sensor technology: sensors, central computing units for control, calibration of the individual sensors with one another, and calibration of the overall solution when mounted on the vehicle. “A multi-sensor system in a compact solution offers vehicle manufacturers and users many advantages,” explains Dr. Andree Hohm, Head of Innovation Line Driverless in the Autonomous Mobility business area at Continental. “Radar, lidar and cameras are pre-calibrated and matched to each other. The effort for the final calibration of the sensor solution during assembly, for the assembly of the sensor systems themselves and, finally, for the maintenance of the vehicles is significantly reduced. When trucks drive autonomously and are on the road around the clock, our sensor array can be easily replaced for maintenance work, for example. In this way, costly downtimes can be avoided.” In addition, the electronic architecture and wiring harness in commercial vehicles are significantly streamlined, and later updates can be implemented quickly. Sensors for new assistance systems can be easily integrated into the compact solution. The overall sensor system does not require any fundamental modifications to the driver’s cab or vehicle body for future generations of trucks. It can also be mounted on existing generations of modern trucks, provided that the on-board electrical system is prepared for this.
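The installation benefit described above can be made concrete with a little pose algebra: if each sensor’s pose within the array is fixed at the factory, mounting requires measuring only one array-to-vehicle transform, and every sensor’s vehicle-frame pose follows by composing transforms. All matrix values below are invented examples, not Continental’s calibration data.

```python
# Why a pre-calibrated array simplifies installation: sensor poses relative
# to the array frame are fixed at the factory, so only ONE transform
# (vehicle -> array) must be measured on the vehicle. Values are examples.

import numpy as np


def pose(tx, ty, tz, yaw_rad=0.0):
    """Homogeneous 4x4 transform: yaw rotation about z plus translation."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [tx, ty, tz]
    return T


# Factory calibration: each sensor's pose in the array frame (done once).
array_T_lidar = pose(0.00, 0.0, 0.05)
array_T_radar = pose(0.10, 0.0, 0.00)
array_T_camera = pose(-0.05, 0.0, 0.02, yaw_rad=0.02)

# Installation: measure a single vehicle-to-array transform, e.g. for a unit
# mounted above the windshield.
vehicle_T_array = pose(2.0, 0.0, 3.1)

# Every sensor's vehicle-frame pose follows by composition; no per-sensor
# on-vehicle calibration is required.
for name, array_T_sensor in [("lidar", array_T_lidar),
                             ("radar", array_T_radar),
                             ("camera", array_T_camera)]:
    vehicle_T_sensor = vehicle_T_array @ array_T_sensor
    print(name, np.round(vehicle_T_sensor[:3, 3], 3))
```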

Cleaning and air conditioning of the sensor units are also simplified: Continental has an automatic “Camera and Sensor Cleaning, Cooling and Heating” system in its portfolio. All sensors independently monitor their degree of soiling. Camera lenses are automatically cleaned with a water jet. In addition, intelligent thermal management ensures the flawless use of the sensor technology in all weather conditions.

The combination of different sensor systems and their redundancy is crucial for the reliable use of driver assistance systems and autonomous driving. All sensors must be calibrated individually and coordinated with one another in combination. Continental sees the joint use of the three sensor systems (camera, lidar and radar) as the ideal solution for reliable recognition of objects and holistic detection of the vehicle’s surroundings. The technology company has more than 25 years of experience in the development and integration of individually tailored, safe and robust sensor solutions, from individual components to complete systems. To date, Continental has brought more than 150 million sensors for assisted and automated driving functions onto the road.

At IAA TRANSPORTATION 2022, visitors to the Continental booth (Hall 12, Booth C29) will not only experience the multi-sensor solution for commercial vehicles, but also many other solutions for the major challenges of the mobile future in freight transport.