1. INTRODUCTION

The use of sensors in AI is increasing. Sensors collect data and help AI make sense of the world around it. They serve many purposes, such as monitoring the environment, gathering information about people or objects, and navigating vehicles. Sensors are used not only to collect data but also to control the actions of a machine or system, and they can trigger an event when a particular condition is met. Today, most robots need three main types of senses: sight, hearing, and touch. For AI to be able to "see," "listen," or "feel" physical objects, it has to use some kind of sensor. This paper aims to explore the history of the sensors and technologies that allow computers and AI to sense their environment. The technologies used for seeing include cameras and LiDAR sensors; a combination of microphones and speech recognition allows AI to "listen," and tactile sensors are used to "feel" physical objects. AI can also aid humans directly: for example, AI can analyze the sounds in an environment and drive active noise cancellation, which can help prevent hearing loss.

2. SIGHT

2.1 LiDAR

Computers and AI need some form of sensation to map and understand their environment. Sensation, the raw collection of signals by a sensor, differs from perception, the interpretation of those signals. In 1960, a scientist named Theodore Maiman created the first laser. He and his colleagues produced it by shining an intense flash of light onto a ruby rod, which then emitted a coherent beam of light. The laser became an important component of distance-measuring devices because it offers high brightness, resolution, and precision. One of the main applications of lasers is a technology called LiDAR, which stands for "Light Detection and Ranging." LiDAR scans and maps the environment by emitting laser pulses at objects and timing the reflected returns, from which the distance between the sensor and each object is computed (a minimal sketch of this calculation appears at the end of this subsection). By sending thousands of beams, a LiDAR sensor can create a 3D map of the environment. LiDAR is important because it gives a machine information about its 3D surroundings, which helps it navigate and complete tasks.

In 1961, laser technology captured the interest of the U.S. military, which began applying lasers to different technologies. Ten years later, the U.S. military created the first generation of rangefinder, a sensor that measures the distance to an object, called the AN/GVS-3. It paired a photomultiplier detector with a ruby laser as its light source. This first-generation rangefinder was heavy, bulky, and consumed a great deal of power.

The U.S. military then built a second generation, which used a near-infrared neodymium (Nd:YAG) laser with a PIN or avalanche photodiode detector. The second-generation rangefinder was smaller and consumed less power. Six years after developing the first rangefinder, the U.S. military produced the AN/GVS-5, the first rangefinder small enough to be held in one hand, weighing approximately 2 kg (4.4 pounds). However, these early rangefinders were dangerous because their lasers could damage the eyes, and they were expensive, costing thousands of dollars. Because of the safety concerns and the price, the second generation of rangefinders was confined mainly to military use and scientific research.

With the development of electronic technology, a new generation of rangefinder emerged that was safer, smaller, and consumed less power. The price also dropped from thousands of dollars to hundreds, making the technology accessible to universities, institutions, and individual researchers. The third generation came in several types: single-beam devices that measure distance along one line, two-dimensional rangefinders that scan a plane, and three-dimensional rangefinders that scan a volume and recover the coordinates of objects. The technology evolved to become reliable, fast, and inexpensive.

After 1995, the development of safe, precise, and inexpensive rangefinders accelerated across universities, institutions, and research groups. These rangefinders operated at near-infrared wavelengths of roughly 800 nm to 900 nm and consumed around 10 watts.

In the 1990s, commercial products started to appear. One of the main companies was Bushnell, which released a rangefinder called the "400 LD" in 1996 that could measure distances up to 400 meters. This advance was recognized as one of the "top 100 important scientific and technological advancements in the world." Two years later, Tasco developed a rangefinder equipped with both a camera and lasers; thanks to this combination, it could measure distances up to 800 meters. (1)

Humans can interpret 3D scenes and environments intuitively; however, they cannot measure the exact distance between objects. LiDAR sensors can build 3D models of the surroundings that include those exact distances.
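All of the devices above rely on the same time-of-flight principle: a laser pulse travels to the target and back, and the distance is half the round-trip time multiplied by the speed of light. The sketch below is an idealized illustration of that calculation (not code from any of the systems described), plus the conversion of one ranged return and its beam angles into a 3D point:

```python
import math

C = 299_792_458.0  # speed of light in m/s


def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the target, from the laser pulse's round-trip time."""
    return C * round_trip_seconds / 2.0


def return_to_point(distance_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one LiDAR return (range plus beam angles) into an (x, y, z) point.

    Sweeping the beam over thousands of angle pairs and collecting these
    points is what produces the 3D map described above.
    """
    x = distance_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = distance_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = distance_m * math.sin(elevation_rad)
    return (x, y, z)


# A pulse that returns after roughly 667 nanoseconds has hit a target
# about 100 meters away.
d = tof_distance(667e-9)
print(f"range: {d:.1f} m")
print("point:", return_to_point(d, math.radians(30.0), math.radians(5.0)))
```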
2.2 CAMERAS AND COMPUTER VISION

The use of cameras built into devices goes back to the 1990s, when built-in cameras first appeared in computers and cellphones. In the 1990s, cameras were used only to capture and store images on the device. In the 2000s, photos could be transmitted and shared through channels such as social media, and AI was used to compress images to reduce bandwidth usage. Today, the combination of 5G connectivity, greater computational power, and new image-recognition methods allows cameras to recognize and count objects, map environments, and help robots and autonomous vehicles with navigation, transportation, and tracking. (2)

In the 1960s, researchers were extremely optimistic about the possibilities of artificial intelligence and received both private and public funding to develop it. However, they were soon confronted with the genuine difficulty of the problem and were disappointed. In 1966, Gerald Sussman was given the task of connecting a camera to a computer and having the computer describe what the camera saw. After multiple attempts, Sussman could not complete the task because of how difficult it was. In 1963, Larry Roberts, also known as "The Father of Computer Vision," argued in his thesis that it is possible to extract the 3D geometry of polyhedral objects from 2D images. In 1978, David Marr used a bottom-up approach to detect edges and segment images, an approach known as "low-level" vision.

In the 1970s, investors, including institutions and governments, began to cut funding because researchers were not delivering what had been promised. The period that followed was called the "AI Winter." Thanks to a combination of greater computational power, better neural-network algorithms, and access to large amounts of data through the internet, a team from the University of Toronto entered the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) with a deep neural network called "AlexNet," which was able to detect objects with 83.6 percent accuracy. (3)
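AlexNet-style image classification is now available off the shelf. The sketch below is a minimal example using torchvision's reimplementation of the architecture (an assumption of this illustration, not the original 2012 code); the input file name is a placeholder.

```python
# A minimal sketch of AlexNet-style image classification, assuming
# torchvision >= 0.13 and its reimplementation of the 2012 architecture.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing: resize, center-crop, normalize.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
model.eval()  # inference mode

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)            # add a batch dimension

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

top_prob, top_class = probs.max(dim=1)
print(f"ImageNet class {top_class.item()} with probability {top_prob.item():.2f}")
```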
3. HEARING

3.1 SPEECH RECOGNITION

Speech recognition is software that translates spoken words into written words. It also lets robots and autonomous systems receive spoken instructions from humans through microphones. In 1956, RCA Laboratories developed a program that could distinguish 10 syllables. Three years later, at University College in England, software was built to recognize 4 vowels and 9 consonants, and in the same year MIT Lincoln Laboratory developed software that recognized 10 vowels.

In the 1960s, researchers at RCA Laboratories developed a system that could detect when speech started and ended. In the 1970s, researchers at CMU, in a project funded by the Defense Advanced Research Projects Agency (DARPA), developed a program called "Hearsay I" that could recognize continuous speech, handling a vocabulary of 1,011 words with good accuracy. Its successor, "Hearsay II," used a parallel asynchronous processing approach, which processed information faster than traditional methods.

In the 1980s, a new method called statistical modeling was created, and it is still in use today. Statistical modeling uses probability and mathematics to select the most likely result. During the same decade, IBM focused on modeling language: it used statistical rules to predict patterns in word sequences and produce the most accurate result, a method called the n-gram model (a toy version is sketched at the end of this subsection). Neural networks had been introduced to speech recognition in the 1950s, but they suffered from multiple problems and were not yet useful; in the 1980s they were reintroduced because they were better understood and their strengths and weaknesses had become clear.

In the 1990s, multiple methods were developed, including Maximum Likelihood Linear Regression (MLLR), Model Decomposition, Parallel Model Combination (PMC), and Structural Maximum a Posteriori (SMAP), to reduce the effects of background noise and of variation between speakers, microphones, and transmission channels.

In the 2000s, the DARPA program continued, funding a project called Effective Affordable Reusable Speech-to-Text (EARS). EARS was developed to detect sentence boundaries and fillers, and it recognized natural human speech with improved accuracy. However, spontaneous speech was still difficult to recognize accurately, and multiple projects were launched to address this problem. For example, the Japanese national project "Spontaneous Speech: Corpus and Processing Technology" produced the Corpus of Spontaneous Japanese (CSJ), which supplied roughly 7 million words and 700 hours of speech and successfully improved the accuracy of spontaneous-speech recognition. (4)
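To illustrate the statistical, n-gram idea referenced above, here is a toy bigram model (n = 2); the training corpus and the candidate transcriptions are invented for the example and are not from any system described in this section.

```python
# A toy bigram language model in the spirit of the n-gram approach:
# count adjacent word pairs in a corpus, then prefer the candidate
# transcription whose word sequence is most probable.
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()  # invented training text

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_prob(prev: str, word: str) -> float:
    """P(word | prev), estimated from raw counts (no smoothing, for clarity)."""
    return bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0

def sequence_prob(words: list[str]) -> float:
    """Probability of a whole word sequence under the bigram model."""
    p = 1.0
    for prev, word in zip(words, words[1:]):
        p *= bigram_prob(prev, word)
    return p

# Two acoustically similar candidates; the language model breaks the tie.
candidates = [["the", "cat", "sat"], ["the", "mat", "sat"]]
best = max(candidates, key=sequence_prob)
print(best, sequence_prob(best))  # ['the', 'cat', 'sat'] 0.333...
```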
3.2 NOISE CANCELLATION

One of the modern uses of AI is noise cancellation. Today, our lives are much busier, resulting in much longer exposure to loud sounds, which can cause some degree of hearing loss. In response, companies have developed noise-cancelling headphones to protect our ears; noise cancellation is also used by pilots to shield them from loud engine noise.

In 1933, Paul Lueg developed a theory containing the principles of noise-cancellation technology. In the 1950s, Dr. Fogel invented noise-cancelling headphones for pilots that reduced the amount of sound a pilot experiences, helping prevent hearing loss. In 1989, the founder of Bose, Dr. Amar Bose, introduced one of the first commercial noise-cancelling headphones. In 2013, a technology company called Kokoon Technology was founded in the United Kingdom; it develops noise-cancelling devices that track and improve sleep. Apple also entered the field alongside Bose and Kokoon Technology: in 2019 it released its first AirPods Pro with noise-cancellation technology. (5)
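The core of Lueg's principle is destructive interference: playing a copy of the incoming noise with its phase inverted cancels it. The sketch below is an idealized illustration (real headphones must also model the microphone-to-ear acoustic path, which is omitted here); the 220 Hz hum is an invented example signal.

```python
# An idealized sketch of active noise cancellation by phase inversion.
import numpy as np

sample_rate = 44_100                        # audio samples per second
t = np.arange(0, 0.01, 1.0 / sample_rate)   # 10 ms of audio
noise = 0.8 * np.sin(2 * np.pi * 220 * t)   # a 220 Hz hum "picked up" by the mic

anti_noise = -noise                         # phase-inverted signal the speaker plays
residual = noise + anti_noise               # what reaches the ear

print(f"peak level before cancellation: {np.max(np.abs(noise)):.3f}")
print(f"peak level after cancellation:  {np.max(np.abs(residual)):.3f}")  # ~0.000
```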
4. TACTILE SENSING

One way humans identify objects around them is through the sense of touch, which reveals an object's hardness, flexibility, and texture. Robots can likewise use tactile sensors to understand their environment, through a channel different from computer vision or speech recognition.

In 1982, Harmon designed a tactile sensor intended for use in a humanoid robot; however, such designs were not widely implemented until the late 1990s and 2000s. In 2008, a humanoid robot called the "iCub" was developed. The iCub used piezoelectric sensors integrated into its fingertips and could recognize varying hardness, textures, and forces, which allowed it to handle objects with precision and infer the shape of what it touched. In 2013, a robot called "PUMA" fitted with a planar tactile sensor array could recognize the edges of an object as well as its orientation. In the same year, a KUKA robotic arm was developed that used the same approach as the PUMA; like the iCub, the KUKA arm could infer what an object looked like and handle it with precision. (6)

5. CONCLUSION

For the past 60 years, scientists from all over the world have been trying to develop robots and artificial intelligence that can sense and understand their environments, and progress has been made since the 1960s. AI can now understand its surroundings using multiple types of sensors and software. LiDAR, cameras, computer vision, speech recognition, and tactile sensing are all examples of sensors and software that were developed to help AI make sense of its environment. Scientists are still perfecting these sensors and algorithms; however, the current technologies are viable and already allow robots to sense the world. Computer vision is still not as capable as human recognition, yet computers can measure the exact distance between objects, something humans cannot do. So, could robots one day surpass human sensation and provide even more information about the world?

6. BIBLIOGRAPHY

(1) Xin Wang et al. (2020). IOP Conference Series: Earth and Environmental Science, 502, 012008.

(2) Suo, J., Zhang, W., Gong, J., Yuan, X., Brady, D. J., & Dai, Q. (2021). Computational imaging and artificial intelligence: The next revolution of mobile vision. arXiv preprint arXiv:2109.08880.

(3) "A History of Computer Vision & How It Lead to 'Vertical Ai' Image Recognition." Pulsar Platform, 26 Mar. 2019.

(4) Furui, S. (2005, November). 50 years of progress in speech and speaker recognition. In Proc. SPECOM (pp. 1-9).

(5) Pascua, Dionne. "The Fascinating History of Noise-Cancelling Headphones." Headphonesty, 9 June 2022.

(6) Uriel Martinez-Hernandez (2015). Tactile Sensors. Scholarpedia, 10(4):32398.