Cameras looking down at road surface – new localization in autonomous stack
Project Idea Metadata
- Project Idea Name: Cameras looking down at road surface – new localization in autonomous stack
- Date: 9/19/2025 2:53:24 PM
- Administrators:
Project Idea Description
This project's innovation is to give vehicles a reliable source of localization information based on the appearance of the road surface, and to study how the autonomous stack of technologies for robots and cars would change as a result. Road-surface localization provides centimeter precision in situations where GPS, lidars, or cameras struggle, such as bad weather, tunnels, and parking garages. We would like to study how the autonomous technology stack could be made more reliable, simpler, more redundant, and cheaper. We expect to increase acceptance of autonomous driving by contributing in the areas of 1) Technical & Safety, 2) Regulations and Legal, and 5) Federalism.
Problems
The autonomous stack in cars and mobile robots defines the overall route with GPS, avoids obstacles with cameras and lidars, and performs localization by fusing GPS, cameras, and lidars. The localization functionality (SLAM) helps, for example, to precisely follow a lane or corner on a street. In bad weather, however, lidars provide unreliable information. In good weather, lidars and cameras localize by observing and recognizing objects around the car; if the distance to those objects is large, the precision of that localization is low. And if the appearance of those objects varies little, as with tunnel walls, lidars and cameras cannot tell viewpoints apart. All of this means that autonomous driving has corner cases not covered by existing technologies, and failures in those cases cast doubt on the performance of the technology as a whole.
Technical solution
The appearance of asphalt, concrete, or stone on roads, sidewalks, driveways, walls, and ceilings varies considerably at the cm and mm scale due to the way it is produced (a tar-and-pebble mix for asphalt). This appearance is unique in roughly 10x10 cm patches across at least a city. This means the appearance of asphalt road surfaces and concrete walls could be used as a robust source of localization information that is always at hand underneath the car. This localization is, above all, useful where the usual sensors fail: areas without GPS (tunnels, parking garages), bad weather where lidars struggle, and environments where cameras cannot see a visual difference between two viewpoints a few meters apart. Besides complementing existing sensors in their failure zones, the new localization could enable new functionalities.
Technically, the appearance of the road surface has to be acquired in advance with cameras (mounted on robots, for example), registered with GPS coordinates, and stored in a database that an autonomous vehicle can later download. The vehicle would be equipped with downward-pointing cameras capturing sections of the road surface; it would compare each captured section against the stored database, find a match, and obtain its exact geographic coordinates with mm precision.
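The capture-then-match loop described above can be sketched as follows. This is a minimal illustration, not the proposed system: the descriptor (zero-mean cosine similarity), the match threshold, and the in-memory database are all illustrative assumptions.

```python
import numpy as np

def patch_descriptor(patch):
    """Zero-mean, unit-norm flattening, so matching tolerates lighting changes."""
    v = patch.astype(float).ravel()
    v -= v.mean()
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def build_database(patches, coords):
    """Precompute descriptors for surveyed patches and their GPS coordinates."""
    return [(patch_descriptor(p), c) for p, c in zip(patches, coords)]

def localize(query_patch, database, min_score=0.8):
    """Return the coordinates of the best-matching stored patch, or None
    if no stored patch is similar enough to the camera's current view."""
    q = patch_descriptor(query_patch)
    best_score, best_coord = -1.0, None
    for desc, coord in database:
        score = float(np.dot(q, desc))  # cosine similarity in [-1, 1]
        if score > best_score:
            best_score, best_coord = score, coord
    return best_coord if best_score >= min_score else None
```

A real system would need a spatial index instead of a linear scan, and invariance to rotation and scale, but the lookup principle stays the same.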
Contribution
The first contribution to autonomous-driving acceptance is clearly on the Technology and Safety side. Localization with cm precision in all conditions would be a great addition to the existing autonomous technology stack for cars and robots. It would close the gap where existing technologies fail (even temporarily) and cover many corner cases where performance drops and where autonomous systems are criticized.
On the Regulations and Legal side, because the cameras look downward, the approach avoids the privacy concerns of photographing surrounding buildings and private homes. On the federal level, building a detailed surface map of the highways, secondary roads, streets, parking areas, indoor spaces, and walls of an entire country would create a national road-surface map infrastructure. It could serve as a standard reference coordinate system for comparing the localization capabilities of different autonomous vehicles. Millimeter-precision information would provide a ground truth for the positioning systems of various car manufacturers and act as a homologation and performance-comparison benchmark under identical conditions.
Project Workplan
In this project we aim to define how the current autonomous technology stack could be made more robust, simpler, and cheaper.
1) Re-validate the concept (that was proven to work in other contexts)
- Take pictures of a 1 m² asphalt section and label patches of 10x10 cm
- Train a few network architectures to classify the patches and measure their performance
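The experiment in step 1 can be prototyped before any network training. The sketch below is a hypothetical stand-in: it cuts a surface image into labelled patches and measures how often a noisy re-observation of each patch is matched back to the right label with a simple nearest-correlation classifier (not one of the network architectures the workplan mentions); the image size and noise level are illustrative.

```python
import numpy as np

def split_into_patches(surface, patch_px):
    """Cut a square surface image into non-overlapping square patches,
    each labelled by its grid position (row, col)."""
    patches, labels = [], []
    for i in range(0, surface.shape[0] - patch_px + 1, patch_px):
        for j in range(0, surface.shape[1] - patch_px + 1, patch_px):
            patches.append(surface[i:i + patch_px, j:j + patch_px])
            labels.append((i // patch_px, j // patch_px))
    return patches, labels

def nearest_patch_accuracy(patches, labels, noise_std, rng):
    """Re-observe every patch with additive sensor noise, classify it by
    correlation against the stored patches, and return the accuracy."""
    stored = np.array([p.ravel() - p.mean() for p in patches])
    stored /= np.linalg.norm(stored, axis=1, keepdims=True)
    correct = 0
    for patch, label in zip(patches, labels):
        noisy = patch + rng.normal(0.0, noise_std, patch.shape)
        q = noisy.ravel() - noisy.mean()
        q /= np.linalg.norm(q)
        if labels[int(np.argmax(stored @ q))] == label:
            correct += 1
    return correct / len(patches)
```

If this toy classifier already separates the patches, the planned network architectures mainly have to add robustness to rotation, lighting, and wear.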
2) Study autonomous stack technologies for cars
- Study how down-looking cameras integrate into the sensor layer of the autonomous technology stack
- Study failure cases of autonomous driving performance
- Trace failures to sensors, missing data, or software malfunction
- Attempt to reconstruct (with autonomous robot) situations of failures
- Complement incomplete localization with surface information
- Judge if newly available localization information can fill the “data gap” that caused failure
- Estimate how the autonomous stack (firmware, sensor fusion, mapping) can be simplified with this new technology
- Identify the lowest level of autonomous driving at which this setup can be introduced, and the resulting cost gain
- Study what complementarity this localization source would bring to existing autonomous systems
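One way the complementarity studied in step 2 can be made concrete is to treat the surface match as one more position measurement in a standard inverse-variance fusion. The sketch below assumes independent Gaussian estimates with illustrative variances; a production stack would use a full filter (e.g. an EKF) instead.

```python
import numpy as np

def fuse_estimates(estimates):
    """Inverse-variance weighted fusion of independent position estimates.
    `estimates` is a list of (position_xy, variance) pairs; returns the
    fused position and its (smaller) variance."""
    weights = np.array([1.0 / var for _, var in estimates])
    positions = np.array([pos for pos, _ in estimates])
    fused = weights @ positions / weights.sum()
    fused_var = 1.0 / weights.sum()
    return fused, fused_var
```

A centimeter-accurate surface fix dominates a meter-accurate GPS fix in this weighting, and when GPS drops out entirely (tunnel, parking garage) the surface estimate is simply the only term left in the sum.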
3) Functionalities in mobile robotics
- Study how new localization integrates into the autonomous stack of robots
- Evaluate how precise positioning and orientation improve localization
- Assess the increase in motion speed as more precise and faster information becomes available
This project is planned to be conducted with HSLU professor Björn Jensen, HSLU I <bjoern.jensen@hslu.ch> (we had some issues adding the person and organization).