The lidar sensor helps the AV understand its environment. Simply put, the lidar detector counts detection events according to the brightness of the sensed light. Lidar imaging measures depth by calculating the delay between an emitted light pulse and its detected return, i.e. by time of flight (TOF). Lidar is a non-contact, active rangefinder that illuminates a target and detects the reflected or backscattered signal; by processing these distance measurements it can build a 3D point cloud of a section of the surrounding environment. In the first, pulsed approach, the round-trip delay of the light wave travelling to the target and back is used to compute the target distance R (R = cΔt/2, where Δt is the round-trip delay and c the speed of light).

In the second, continuous-wave approach, the ambiguity distance set by the modulation frequency is short, so accuracy comparable to the pulsed approach can be reached at modest distances. Because the emission is continuous, its amplitude must stay below the eye-safety limit at all times, so the signal reflected from distant objects is weaker than with pulsed emission. Additionally, digitizing the intensity of the back-reflection is challenging at long distances.

In the third approach, frequency-modulated continuous-wave (FMCW) techniques modulate and demodulate the signal directly in the frequency domain, so that the superposition (beat) of the emitted and received waves can be detected. FMCW provides two major advantages over the other methods: its primary advantage is that velocity is measured simultaneously with range via the Doppler effect, closing that data gap, and over long distances it can measure ranges of up to 150 m with an accuracy of 1 m [10].
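To make the two ranging principles concrete, the sketch below computes a pulsed-TOF range from a round-trip delay (R = cΔt/2) and recovers range and radial velocity from the up- and down-sweep beat frequencies of a triangular FMCW chirp, using the standard relations f_r = 2RB/(cT) and f_d = 2v/λ. It is a minimal illustration only: the chirp parameters, beat frequencies, and helper names (FmcwChirp, fmcw_range_velocity) are assumptions for this example, not values or code from [10].

```python
from dataclasses import dataclass

C = 299_792_458.0  # speed of light in vacuum (m/s)


def pulsed_tof_range(round_trip_delay_s: float) -> float:
    """Pulsed TOF: target distance follows from the round-trip delay, R = c * dt / 2."""
    return C * round_trip_delay_s / 2.0


@dataclass
class FmcwChirp:
    bandwidth_hz: float   # swept bandwidth B of one chirp (assumed value below)
    duration_s: float     # duration T of one up- or down-sweep
    wavelength_m: float   # optical carrier wavelength (e.g. 1550 nm)


def fmcw_range_velocity(chirp: FmcwChirp,
                        beat_up_hz: float,
                        beat_down_hz: float) -> tuple[float, float]:
    """Triangular-chirp FMCW: the up/down beat frequencies combine the range
    term f_r = 2*R*B/(c*T) and the Doppler term f_d = 2*v/lambda, so range and
    radial velocity are recovered from their sum and difference."""
    f_range = (beat_up_hz + beat_down_hz) / 2.0      # range-induced beat frequency
    f_doppler = (beat_down_hz - beat_up_hz) / 2.0    # Doppler shift (closing target)
    distance = C * chirp.duration_s * f_range / (2.0 * chirp.bandwidth_hz)
    velocity = chirp.wavelength_m * f_doppler / 2.0
    return distance, velocity


if __name__ == "__main__":
    # A 500 ns round-trip delay corresponds to a target about 75 m away.
    print(f"pulsed TOF range: {pulsed_tof_range(500e-9):.1f} m")

    # Illustrative chirp and beat frequencies (assumed, not from [10]):
    # they correspond to a target roughly 30 m away closing at about 5 m/s.
    chirp = FmcwChirp(bandwidth_hz=1e9, duration_s=10e-6, wavelength_m=1550e-9)
    r, v = fmcw_range_velocity(chirp, beat_up_hz=13.55e6, beat_down_hz=26.45e6)
    print(f"FMCW range: {r:.1f} m, radial velocity: {v:.2f} m/s")
```

With these example numbers the FMCW branch reports a target about 30 m away closing at roughly 5 m/s, illustrating how the beat signal yields velocity alongside range in a single measurement.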