Conventional sensor systems record information about directly visible objects, whereas occluded scene components are considered lost in the measurement process. Non-line-of-sight (NLOS) methods try to recover such hidden objects from their indirect reflections -- faint signal components traditionally treated as measurement noise. Existing NLOS approaches struggle to record these low-signal components outside the lab and do not scale to the large outdoor scenes and high-speed motion typical of automotive scenarios. In particular, optical NLOS capture is fundamentally limited by the quartic intensity falloff of diffuse indirect reflections. In this work, we depart from visible-wavelength approaches and demonstrate detection, classification, and tracking of hidden objects in large-scale dynamic environments using Doppler radars that can be manufactured at low cost in series production. To untangle noisy indirect and direct reflections, we learn from temporal sequences of Doppler velocity and position measurements, which we fuse over time in a joint NLOS detection and tracking network. We validate the approach on in-the-wild automotive scenes, including sequences with parked cars or house facades as relay surfaces, and demonstrate low-cost, real-time NLOS detection and tracking in dynamic automotive environments.
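Two textbook relations make these claims concrete; the symbols below (wall-to-object distance r, carrier wavelength \lambda, Doppler shift f_D) are our notation and are not defined in the abstract. A diffuse optical NLOS return scatters over a hemisphere at each of the two diffuse bounces between relay wall and hidden object, costing a factor of 1/r^2 per bounce,
\[
  I_{\text{return}} \;\propto\; \frac{1}{r^{2}} \cdot \frac{1}{r^{2}} \;=\; \frac{1}{r^{4}},
\]
so doubling the hidden object's distance from the relay wall attenuates the optical signal sixteenfold. A Doppler radar, in contrast, measures the radial velocity of a scatterer directly from the frequency shift of the returned carrier,
\[
  v_r \;=\; \frac{\lambda\, f_D}{2}.
\]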
Validation and Training Dataset Acquisition and Statistics.
(a) Prototype vehicle with measurement setup. To acquire training data in an automated fashion, we use GNSS and IMU sensors for full pose estimation of both the ego-vehicle and the hidden vulnerable road users.
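As a minimal sketch of how such pose estimates can be turned into labels automatically -- all names below are illustrative assumptions, not the paper's actual pipeline -- the label for a hidden road user is its GNSS position expressed in the ego-vehicle frame:

import numpy as np

def global_to_ego(p_global, ego_xy, ego_yaw):
    """Map a 2D point from the global (GNSS) frame into the ego-vehicle
    frame, given the ego pose from GNSS/IMU fusion (illustrative only)."""
    c, s = np.cos(ego_yaw), np.sin(ego_yaw)
    # Inverse rigid transform: translate to the ego origin, rotate by -yaw.
    R_inv = np.array([[c, s], [-s, c]])
    return R_inv @ (np.asarray(p_global) - np.asarray(ego_xy))

# A pedestrian 12 m north of an ego-vehicle heading north ends up
# 12 m straight ahead in the ego frame.
print(global_to_ego([0.0, 12.0], [0.0, 0.0], np.deg2rad(90.0)))  # ~[12., 0.]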
[Figure: example scenarios, labeled left to right and top to bottom -- Single Car, Van, Three Cars, Three Cars, Guard Rail; Mobile Office, Utility Access, Garage Doors, Curbstone, Marble Wall; House Corner, Garden Wall, House Facade, House Facade, Building Exit.]
NLOS training and evaluation dataset for large outdoor scenarios.
We capture a total of 100 sequences of in-the-wild automotive scenes across 21 different scenarios. We split the dataset into non-overlapping training and validation sets; the validation set consists of four scenes with 20 sequences and 3063 frames.
Joint detection and tracking results.
Results for automotive scenes, with a different relay wall type and object class in each row. The first column shows the observer vehicle's front-facing camera view. The next three columns plot BEV radar and lidar point clouds together with ground-truth and predicted bounding boxes. NLOS velocity is plotted as a line segment from the predicted box center: red and green correspond to motion towards and away from the vehicle, respectively.
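This red/green convention reduces to the sign of the object's radial velocity with respect to the observer. A minimal plotting sketch under that assumption (function and variable names are ours, not from the paper):

import numpy as np
import matplotlib.pyplot as plt

def draw_velocity_segment(ax, center, velocity, scale=1.0):
    """Draw a BEV velocity segment from a predicted box center, with the
    observer vehicle assumed at the origin: red = towards it, green = away."""
    # Radial component: projection of velocity onto the observer-to-box direction.
    radial = np.dot(velocity, center) / (np.linalg.norm(center) + 1e-9)
    color = "red" if radial < 0.0 else "green"
    end = center + scale * velocity
    ax.plot([center[0], end[0]], [center[1], end[1]], color=color)

fig, ax = plt.subplots()
draw_velocity_segment(ax, np.array([5.0, 10.0]), np.array([0.8, -1.2]))  # towards: red
plt.show()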
Tracking trajectories for both the training and testing datasets.
We show nine scenes in total; the top-middle scene and the last three scenes are from the testing dataset. For each scene, the first row shows the trajectory and the second row the front-facing vehicle camera view. The scenes cover a variety of wall types, trajectories, and observer-vehicle viewpoints. Predictions consist of segments, each corresponding to a different tracking ID and visualized in a different color.