The General Robotics Lab at Duke University has introduced WildFusion, a robotic system designed to navigate rugged environments using a combination of vision, touch, and sound. The multisensory mapping platform integrates RGB cameras, LiDAR, contact microphones near its footpads, tactile sensors on its limbs, and an inertial measurement unit (IMU). These sensors allow WildFusion not just to see its surroundings but to feel and hear them, detecting subtle differences in the sound and pressure of surfaces to guide its next step.
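In practical terms, each decision the robot makes draws on a bundle of synchronized readings across these modalities. The sketch below is a hypothetical illustration of such a bundle; the field names, shapes, and channel counts are assumptions for clarity, not WildFusion's actual interface.

```python
# Hypothetical bundle of one time-step of multimodal readings. Field names,
# array shapes, and sensor counts are illustrative assumptions only.
from dataclasses import dataclass
import numpy as np

@dataclass
class MultimodalObservation:
    rgb: np.ndarray            # (H, W, 3) camera frame
    lidar_points: np.ndarray   # (N, 3) point cloud in the robot's body frame
    contact_audio: np.ndarray  # (4, T) waveforms from footpad contact microphones
    tactile: np.ndarray        # (L, D) per-limb tactile sensor readings
    imu: np.ndarray            # (6,) angular velocity + linear acceleration
    timestamp: float           # seconds since boot
```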
The data from these sensors are processed through a deep learning model known as an implicit neural representation, which lets the robot construct continuous, detailed maps of its environment rather than a coarse grid of measurements. This allows it to detect obstacles hidden by vegetation and anticipate unstable terrain. By fusing sensory inputs, WildFusion responds more intelligently to the ground beneath it, adjusting its movements to avoid slipping, tripping, or getting stuck, even when visual cues are limited.
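To make the idea concrete, here is a minimal sketch of an implicit neural representation for terrain: a small network that, conditioned on fused sensor features, maps any continuous 3D query point to an occupancy probability and a traversability score. The layer sizes, fusion scheme, and output heads are illustrative assumptions, not the published WildFusion architecture.

```python
# Minimal implicit-neural-representation sketch: continuous 3D coordinates in,
# occupancy and traversability out. Architecture details are assumptions.
import torch
import torch.nn as nn

class ImplicitTerrainMap(nn.Module):
    def __init__(self, fused_feature_dim: int = 256, hidden: int = 128):
        super().__init__()
        # The fused embedding of camera, LiDAR, audio, tactile, and IMU
        # features is assumed to come from per-modality encoders upstream.
        self.mlp = nn.Sequential(
            nn.Linear(3 + fused_feature_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.occupancy_head = nn.Linear(hidden, 1)       # is this point solid?
        self.traversability_head = nn.Linear(hidden, 1)  # is it safe to step on?

    def forward(self, xyz: torch.Tensor, fused_features: torch.Tensor):
        # xyz: (B, 3) continuous query coordinates; fused_features: (B, F)
        h = self.mlp(torch.cat([xyz, fused_features], dim=-1))
        occupancy = torch.sigmoid(self.occupancy_head(h))
        traversability = torch.sigmoid(self.traversability_head(h))
        return occupancy, traversability

# Shape check with random inputs: a batch of query points yields per-point
# occupancy and traversability estimates.
model = ImplicitTerrainMap()
occ, trav = model(torch.rand(8, 3), torch.rand(8, 256))
```

Because the map is a function rather than a fixed grid, the robot can query it at any resolution it needs, for example densely around a candidate foothold and sparsely elsewhere.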
In field tests at Eno River State Park in North Carolina, WildFusion successfully traversed complex terrain, including dense foliage, creek beds, and uneven trails. Its ability to identify stable footholds and adapt on the fly makes it a strong candidate for natural disaster response, where navigating debris is critical, and for fieldwork in remote environments that are difficult to reach.
Designed with modularity in mind, WildFusion can also incorporate tools like thermal imagers or chemical sensors. This adaptability positions it as a versatile platform for environmental monitoring, precision agriculture, and exploration in areas too hazardous or remote for humans.
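One hypothetical way to picture that modularity in software is a registry where a new sensor payload plugs an encoder into the fusion stage without changes to the rest of the pipeline. The registry pattern and names below are illustrative assumptions, not WildFusion's actual plugin interface.

```python
# Hypothetical sensor-encoder registry: new modalities (e.g. thermal) register
# an encoder that feeds the fusion stage. Names and pattern are assumptions.
from typing import Callable, Dict
import numpy as np

SENSOR_ENCODERS: Dict[str, Callable[[np.ndarray], np.ndarray]] = {}

def register_sensor(name: str):
    """Decorator that registers an encoder for an additional sensor modality."""
    def wrap(encoder: Callable[[np.ndarray], np.ndarray]):
        SENSOR_ENCODERS[name] = encoder
        return encoder
    return wrap

@register_sensor("thermal")
def encode_thermal(frame: np.ndarray) -> np.ndarray:
    # Placeholder encoder: flatten and normalize a thermal image patch.
    return (frame.astype(np.float32).ravel() - frame.mean()) / (frame.std() + 1e-6)

# Usage: any registered modality can be encoded through the same lookup.
features = SENSOR_ENCODERS["thermal"](np.random.rand(32, 32))
```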


