Range-Doppler imaging uses Doppler shift frequencies to achieve cross-range resolution beyond the diffraction limit. The method forms a 2D image by combining range and Doppler information obtained from a single detector. It is particularly effective for radar systems because their beam width is typically much larger than that of LiDAR: the wide beam illuminates a scene spanning a broad spread of Doppler shifts, and it is this spread that provides cross-range resolution beyond the diffraction limit.
To form a range-Doppler image larger than a single pixel, a broader area must be illuminated and a detector array is needed; otherwise, the field of view (FOV) of the LiDAR is limited to the angular subtense of the diffraction limit. With a detector angular subtense (DAS) equal to the diffraction limit, Doppler imaging can provide elevation information beyond the diffraction limit and achieve finer resolution than a conventional, DAS-limited system. For long-range imaging, such as viewing from an aerial vehicle, this capability is highly advantageous.
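For a sense of scale, the short sketch below estimates the diffraction-limited angular subtense λ/D and the corresponding single-pixel cross-range spot at several ranges. The 1.55 µm wavelength and 10 cm aperture are illustrative assumptions, not values given in the text.

```python
# Minimal sketch: diffraction-limited angular subtense and single-pixel spot size.
# The wavelength and aperture below are assumed, illustrative values.
wavelength_m = 1.55e-6   # assumed LiDAR operating wavelength
aperture_m = 0.10        # assumed receive-aperture diameter

# Small-angle diffraction limit: theta ~ lambda / D.
theta_rad = wavelength_m / aperture_m
print(f"diffraction-limited angle ~ {theta_rad * 1e6:.1f} urad")

for range_m in (1e3, 10e3, 100e3):
    spot_m = theta_rad * range_m  # cross-range extent of one diffraction-limited pixel
    print(f"range {range_m / 1e3:6.0f} km -> single-pixel spot ~ {spot_m:.2f} m")
```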
Microwave radars can generate 2D images using a single detector, but LiDAR systems, with their narrower FOV and diffraction-limited spot size, offer inherently finer angular resolution. When a LiDAR platform moves relative to a target, the velocity component toward the target, V_T, can be expressed as:

V_T = V · cos(ϑ) · cos(φ),
where V is the platform speed, ϑ is the elevation angle, and φ is the azimuth angle. If the LiDAR travels in a fixed direction at constant velocity, the Doppler shift therefore varies with angle across the illuminated scene. This allows the system to distinguish scatterers at different angles and provides improved cross-range resolution.
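As a rough illustration of this angle dependence, the sketch below evaluates V_T = V·cos(ϑ)·cos(φ) and the corresponding two-way Doppler shift f_D = 2·V_T/λ for scatterers at slightly different azimuths. The platform speed and wavelength are assumed values chosen only to make the numbers concrete.

```python
import math

# Angle-dependent Doppler shift for a platform moving at constant velocity.
# Assumes V_T = V * cos(elevation) * cos(azimuth) and the two-way Doppler
# relation f_D = 2 * V_T / lambda; speed and wavelength are illustrative.
V = 50.0                 # platform speed, m/s (assumed)
WAVELENGTH_M = 1.55e-6   # operating wavelength (assumed)

def doppler_shift_hz(elevation_deg: float, azimuth_deg: float) -> float:
    """Two-way Doppler shift for a scatterer at the given angles."""
    v_toward = V * math.cos(math.radians(elevation_deg)) * math.cos(math.radians(azimuth_deg))
    return 2.0 * v_toward / WAVELENGTH_M

# Scatterers at the same range but slightly different azimuths return
# distinct Doppler shifts -- this is what yields cross-range discrimination.
for az_deg in (0.0, 0.5, 1.0):
    print(f"azimuth {az_deg:3.1f} deg -> f_D = {doppler_shift_hz(0.0, az_deg) / 1e6:.4f} MHz")
```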
In summary, range-Doppler imaging in LiDAR systems, while more challenging than in radar because of the narrower beam width, provides strong long-range and elevation capability. By combining a diffraction-limited DAS with a detector array, these systems deliver imaging resolution beyond the conventional diffraction limit, making them well suited to aerial surveillance and monitoring applications.
Doppler Shift and Velocity Table

| Velocity (µm/s) | Doppler Shift Frequency (Hz) |
|---|---|
| 1 | 1.29 |
| 10 | 12.9 |
| 100 | 129.03 |
| 1000 | 1290.32 |
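The entries above are consistent with the two-way Doppler relation f_D = 2·v/λ evaluated at a 1.55 µm wavelength; the wavelength is inferred from the tabulated numbers rather than stated. A minimal check:

```python
# Reproduce the Doppler-shift table, assuming a 1.55 um wavelength
# (inferred from the tabulated values, not stated in the text).
WAVELENGTH_UM = 1.55

for v_um_s in (1, 10, 100, 1000):
    f_d_hz = 2.0 * v_um_s / WAVELENGTH_UM   # (um/s) / um -> 1/s
    print(f"{v_um_s:5d} um/s -> {f_d_hz:8.2f} Hz")
```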
Doppler Frequencies from Surface Vibration

| Vibration Frequency (Hz) | Vibration Amplitude (µm) | Max. Velocity (µm/s) | Max. Doppler Shift Frequency (Hz) | Required Sample Time (ms) |
|---|---|---|---|---|
| 10 | 1 | 63 | 81 | 24.7 |
| 50 | 1 | 314 | 405 | 4.9 |
| 100 | 1 | 628 | 810 | 2.5 |
| 200 | 1 | 1257 | 1622 | 1.2 |
| 10 | 0.1 | 6 | 8 | 250.0 |
| 50 | 0.1 | 31 | 40 | 50.0 |
| 100 | 0.1 | 63 | 81 | 24.7 |
| 200 | 0.1 | 126 | 163 | 12.3 |
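The rows above can be reproduced, to within rounding, by treating the surface as a sinusoidal vibrator: peak velocity v_max = 2π·f·A, peak two-way Doppler shift f_D = 2·v_max/λ at an assumed 1.55 µm wavelength, and a required sample time of roughly two Doppler cycles (T ≈ 2/f_D). Both the wavelength and the two-cycle rule are inferred from the tabulated values, not stated explicitly.

```python
import math

# Reproduce the vibration table under the stated assumptions:
#   v_max = 2*pi*f*A, f_D = 2*v_max/lambda, T ~ 2/f_D (about two Doppler cycles).
WAVELENGTH_UM = 1.55  # assumed operating wavelength

rows = [(10, 1.0), (50, 1.0), (100, 1.0), (200, 1.0),
        (10, 0.1), (50, 0.1), (100, 0.1), (200, 0.1)]

for f_vib_hz, amp_um in rows:
    v_max_um_s = 2.0 * math.pi * f_vib_hz * amp_um     # peak surface velocity
    f_doppler_hz = 2.0 * v_max_um_s / WAVELENGTH_UM    # peak two-way Doppler shift
    t_sample_ms = 2.0 / f_doppler_hz * 1e3             # ~two Doppler cycles
    print(f"{f_vib_hz:4d} Hz, {amp_um:3.1f} um -> v_max {v_max_um_s:7.0f} um/s, "
          f"f_D {f_doppler_hz:6.0f} Hz, T {t_sample_ms:6.1f} ms")
```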
Cross-range resolution refers to the ability of a radar, LiDAR, or imaging system to distinguish between two separate objects or points that are at the same distance (range) from the sensor but have different positions perpendicular to the sensor’s line of sight. Essentially, it is the resolution across the plane that is orthogonal to the direction of propagation of the sensor signal.
Key Points:
- Orthogonal Measurement: Cross-range resolution measures the ability to resolve objects in a direction perpendicular to the sensor’s line of sight. In radar or LiDAR imaging, this is typically across the width of the image or scene being captured.
- Dependent on Beamwidth and Frequency: Cross-range resolution is influenced by the sensor’s beamwidth, wavelength, and aperture size. A narrower beamwidth or higher frequency (shorter wavelength) typically results in better (finer) cross-range resolution; a numeric comparison follows this list.
- Improvement through Doppler Shift: For moving platforms or targets, the Doppler shift can be used to improve cross-range resolution by distinguishing objects based on their velocity or position changes over time, beyond what the spatial (beamwidth-limited) resolution alone allows.
- Applications: Good cross-range resolution is crucial in applications like synthetic aperture radar (SAR), where detailed imaging of terrain or structures is required, or in automotive LiDAR, where distinguishing closely spaced objects (such as pedestrians or vehicles) is necessary for safety and navigation.
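As a rough numeric illustration of the beamwidth and wavelength dependence noted above, the sketch below compares the real-aperture cross-range resolution δx ≈ λ·R/D of a notional X-band radar and a notional 1.55 µm LiDAR at the same range; all of the parameters are illustrative assumptions.

```python
# Real-aperture cross-range resolution, delta_x ~ lambda * R / D.
# Both sensor configurations below are notional, illustrative examples.
SENSORS = {
    "X-band radar (3 cm wavelength, 1 m antenna)":  (3e-2, 1.0),
    "LiDAR (1.55 um wavelength, 10 cm aperture)":   (1.55e-6, 0.10),
}

RANGE_M = 10e3  # assumed common range of 10 km

for name, (wavelength_m, aperture_m) in SENSORS.items():
    delta_x_m = wavelength_m * RANGE_M / aperture_m
    print(f"{name}: ~{delta_x_m:.3g} m cross-range resolution at {RANGE_M / 1e3:.0f} km")
```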
In summary, cross-range resolution is a measure of how finely a sensor can distinguish objects that are side-by-side or separated horizontally (or in any direction perpendicular to the sensor’s view), and it is a critical parameter for achieving high-quality spatial imaging in various sensing technologies.