Why Autonomous Vehicles?
Anyone who has been following the evolution of the automotive industry and its progress toward autonomous vehicles (AVs) understands the importance of accurate and reliable sensors such as LiDAR, RADAR and cameras. We also realize the importance of machine learning, deep learning and other artificial intelligence (AI) techniques needed to translate data from these sensors into the appropriate actions required by the vehicle’s control system. More importantly, we have seen tremendous progress in all of these areas over the past few years from automotive OEMs, their partners and other niche players in this growing ecosystem. Given this progress, there is now light at the end of the tunnel in reducing the millions of deaths and disabilities that occur each year as a result of human error in operating vehicles.
Related: Who's Winning the Self-Driving Car Race
Current Challenges
However, we have also seen that these very sophisticated technologies, built on extremely complex sensors and software algorithms, are still far from perfect. In certain situations and configurations, or as a result of human oversight, these intricate systems can perform worse than humans and result in death. The Uber tragedy in Tempe, Arizona, is an example of this rare, but possible, occurrence. I say rare because we see far more success than failure in this endeavor; however, nearly everyone is running trials in the Phoenix area and in places where conditions are almost always ideal. What will happen as these trials and deployments expand northward into winter weather, or into other environments that are far from ideal? This is not only the problem that everyone working on sensor, AI and AV technology is trying to solve, it is also the problem that will limit and delay the broad deployment of this life-saving technology. Can more be done to solve it than just better sensors and AI algorithms?
Related: Uber in Fatal Crash Detected Pedestrian but had Emergency Braking Disabled
Key Components
For an autonomous vehicle to get from point A to point B, while keeping passengers and everyone else along the journey safe, it requires many different technologies and data sources. I have already pointed out the need for accurate and reliable sensors, such as solid-state LiDAR from companies like Quanergy. I also alluded to the need for sophisticated AI, not only to interpret the data (e.g., point clouds) from these sensors, but also to take the correct action based on all the split-second data being ingested from every part of the vehicle control system. Among the most notable inputs are GPS navigation data and high-definition maps, such as those from Telenav, required to accurately guide the vehicle to its destination. Beyond the better sensors and AI algorithms mentioned above, can better vehicle position information help make these AVs even safer?
Related: Rising to the Challenge of Driverless Cars
Can Better GPS Help?
The Global Positioning System (GPS) used in most vehicles, smartphones and other devices today refers to the US system developed in the 1970s by several notable scientists and engineers from the USAF, including one of Telenav’s co-founders, Dr. Bob Rennard. There have been many enhancements to this system over the past 30 years, and other Global Navigation Satellite Systems (GNSS), such as GLONASS, Galileo and BeiDou, have been deployed by other countries. However, the original GPS still has significant limitations in accuracy (i.e., 2-20 meters depending on conditions) and reliability (e.g., multipath errors, obstructions), so there remains considerable room for improvement. In fact, to deal with these inaccuracies, AV engineers have had to rely more heavily on sensor data and even more complex AI algorithms to compensate for the shortcomings. Unfortunately, more complexity means more things that can go wrong, even with the best machine learning training data and significant training time. As most scientists, engineers and even business people will tell you, simplicity is best. When you can distill complexity into a much simpler form, you will usually be more successful, which is why simplicity is one of Proxcent's core values.
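To put those error figures in perspective, here is a minimal back-of-the-envelope sketch (my own illustration, not from any vendor); the lane width and error values are assumptions chosen only to show why meter-level GPS cannot, by itself, place a vehicle in its lane.

```python
# Back-of-the-envelope sketch (illustrative only): comparing typical GPS error
# against the roughly 3.7 m width of a US highway lane. The lane width and
# error figures are assumptions, not measurements from any specific system.

LANE_WIDTH_M = 3.7  # assumed typical US highway lane width


def can_resolve_lane(horizontal_error_m: float) -> bool:
    """Return True if the error radius is small enough to stay within one lane."""
    # To know which lane the vehicle occupies, the error radius must be well
    # under half a lane width.
    return horizontal_error_m < LANE_WIDTH_M / 2


for label, error_m in [("GPS, open sky", 2.0), ("GPS, urban canyon", 20.0)]:
    verdict = "lane-level" if can_resolve_lane(error_m) else "road-level only"
    print(f"{label}: about +/-{error_m} m -> {verdict}")
```

Under these assumed numbers, even the best-case GPS fix only tells you which road you are on, which is exactly the gap that sensors and AI are being asked to fill today.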
Related: Will I Arrive Alive?
RTK and Other GNSS Techniques Can Make AVs Safer!
Given this, it seems evident that better vehicle position data can help make AV control systems simpler and thus safer. With ongoing upgrades such as the new L5 GPS signal, which offers better coding and signal strength, and technologies like Real-Time Kinematic (RTK) positioning from companies like Swift Navigation, which corrects the vehicle's measurements against a nearby reference station, position error can be reduced to under 1 foot. These systems can also use multi-frequency (i.e., L1, L2 and L5 signals) and multi-constellation (e.g., GPS, GLONASS, Galileo) techniques to further improve accuracy and reliability. Most importantly, these new approaches to more accurate and reliable positioning will allow AV engineers to focus their sensors and associated AI on deciphering the objects and hazards around the vehicle, rather than developing significant amounts of specialized code just to figure out where the vehicle really is. Therefore, as we improve the quality of the GNSS data being ingested by the AV system, we can reduce the complexity of the software controlling the vehicle. As a result, AV technologists should be able to make this life-saving technology even safer and widely available sooner. I can only wonder what else is just around the corner to better enable the future of transportation.
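To illustrate the simplification argument, here is a hedged sketch (my own, not Swift Navigation's or any production AV stack): a one-dimensional Kalman-style update showing how much of the position correction a fusion filter takes from a GNSS fix as that fix improves. The variances below are assumed values for illustration only.

```python
# Hedged sketch (my illustration, not any vendor's algorithm): a one-dimensional
# Kalman-style update showing how much weight a fusion filter gives a GNSS fix
# as that fix improves. All variances below are assumed for illustration.


def kalman_gain(prediction_var: float, gnss_var: float) -> float:
    """Fraction of the position correction taken from the GNSS fix (0 to 1)."""
    return prediction_var / (prediction_var + gnss_var)


# Assume the dead-reckoned position estimate has drifted to about 1 m std dev.
prediction_var = 1.0 ** 2

for label, sigma_m in [("standard GPS", 5.0),
                       ("SBAS-corrected GNSS", 1.0),
                       ("RTK fix", 0.03)]:
    gain = kalman_gain(prediction_var, sigma_m ** 2)
    print(f"{label} (sigma = {sigma_m} m): weight on GNSS fix = {gain:.3f}")

# With a 5 m fix the filter largely ignores GNSS, so lane-level localization
# must come from cameras, LiDAR and map matching; with a ~3 cm RTK fix the
# weight is close to 1, and the positioning layer of the stack gets simpler.
```

The point of the sketch is the trend, not the exact numbers: the tighter the GNSS fix, the less specialized localization code the rest of the stack has to carry.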
Related: Building an Autonomous Future Through Smart Vehicle Positioning