Ward's World + McGraw Hill's AccessScience


Issue link: https://wardsworld.wardsci.com/i/1471429


3. Relative location systems calculate the vehicle's position, speed, and direction of motion relative to a defined starting point. An inertial navigation system (INS) is a commonly used relative location system that measures the angular velocity and the acceleration of the vehicle using a gyroscope and an accelerometer; when these measurements are combined with counts of wheel rotations, an onboard computer can calculate the distance and direction the vehicle has traveled. Absolute location systems instead use an external signal to locate the vehicle's position. One common approach is to use satellite positioning systems, such as GPS, GLONASS, Galileo, and China's BeiDou. Relative location and absolute location systems, however, each have their own weaknesses, so in practice hybrid location, which combines the two methods, is the common approach. Besides the location system, another important technology is the digital map database. The map information in a digital map database includes road location information that can be matched with satellite position information, as well as more detailed information such as geographical features, road conditions, buildings, traffic signs, and roadside facilities.

Environment perception

Environment perception is a key technology of automated driving that enables a vehicle to sense the surrounding environment. It involves road tracking as well as obstacle detection and recognition. As shown in Fig. 3, an environment perception system generally includes a laser-sensing system, a visual-sensing system, and radar ranging. In the future, roads might communicate environmental information directly to vehicles; at present, however, automated driving systems must be able to operate without such intelligence. Therefore, radar, laser-sensing systems, and visual-sensing systems are the key technologies that enable the vehicle to sense the environment.
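The dead-reckoning calculation an INS performs can be illustrated with a short sketch. This is a simplified, hypothetical example, not code from an actual vehicle: it assumes noise-free 2D motion, a fixed sampling interval, a yaw rate supplied by the gyroscope, and a speed estimate derived from wheel rotations, and it simply integrates these to track position relative to the starting point.

```python
import math

def dead_reckon(start_x, start_y, start_heading, samples, dt):
    """Estimate position relative to a known starting point by integrating
    gyroscope yaw rate (rad/s) and wheel-derived speed (m/s).
    `samples` is a list of (yaw_rate, speed) readings taken every `dt` seconds."""
    x, y, heading = start_x, start_y, start_heading
    for yaw_rate, speed in samples:
        heading += yaw_rate * dt             # update direction of motion
        x += speed * math.cos(heading) * dt  # advance along current heading
        y += speed * math.sin(heading) * dt
    return x, y, heading

# Drive due east (heading 0) at 10 m/s for 5 s, sampled every 0.1 s:
x, y, heading = dead_reckon(0.0, 0.0, 0.0, [(0.0, 10.0)] * 50, 0.1)
# → roughly (50.0, 0.0, 0.0): 50 m east of the start
```

Because each step only adds to the previous estimate, small sensor errors accumulate over time, which is exactly why real systems combine this relative method with absolute fixes such as GPS in a hybrid location scheme.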
Current research is focused on building a representation of the environment based on data collected by lidar (light detection and ranging), a camera, or both, in a process known as simultaneous localization and mapping (SLAM). Lidar can match reflections off roadside objects with a "point cloud" of reflections collected in advance and thereby obtain a 3D representation of the environment without being affected by ambient light, which is vital for the implementation of self-driving cars. A camera is much less expensive, but it requires more computing power to recognize objects and calculate distances, and it is also much less accurate. Therefore, a key to automated driving is the system's ability to reconstruct the environment from data supplied by multiple sensors. In addition, deep learning, a type of machine learning, is becoming the mainstream approach to environment reconstruction.

Fig. 3: The main sensors of a car's automated driving system.
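The idea of matching a live lidar scan against a point cloud collected in advance can be sketched in a few lines. This is a toy illustration rather than a production SLAM algorithm: it performs a single translation-only alignment step in the spirit of the classic iterative closest point (ICP) method, pairing each scan point with its nearest map point and averaging the offsets. The 2D points and function names here are invented for the example.

```python
def estimate_offset(scan, map_points):
    """One translation-only scan-matching step: pair each live scan point
    with its nearest neighbor in the prestored map point cloud, then
    average the displacements. Points are (x, y) tuples; a real system
    would use 3D points, a k-d tree for the search, and iterate to
    convergence while also estimating rotation."""
    dx_total = dy_total = 0.0
    for sx, sy in scan:
        # Brute-force nearest neighbor in the map point cloud.
        mx, my = min(map_points, key=lambda p: (p[0] - sx) ** 2 + (p[1] - sy) ** 2)
        dx_total += mx - sx
        dy_total += my - sy
    n = len(scan)
    return dx_total / n, dy_total / n  # estimated vehicle shift vs. the map

# Map built in advance; the live scan appears shifted 0.5 m in x,
# so the matcher recovers an offset of about (0.5, 0.0).
landmarks = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
scan = [(x - 0.5, y) for x, y in landmarks]
offset = estimate_offset(scan, landmarks)
```

Repeating this matching step as the vehicle moves, and fusing the result with camera and radar data, is the essence of how a SLAM pipeline keeps the vehicle localized within its 3D map.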
