Appendix V: Development of Data Acquisition System (DAS)

In evaluating the data acquisition needs, a survey of existing DASs was conducted. The study determined that no commercially available or government-developed system (including DASCAR) would meet all of the performance requirements. PATH therefore designed a system composed of two distinct subsystems: one records engineering data and the other records video data.

The engineering data is recorded with a PC-based computer, an Industrial Computer Systems 9300-series benchtop machine using ISA/PCI architecture, which records the output from a variety of sensors. The sensors selected by PATH to capture the environment around the bus include commercially available monopulse millimeter-wave radars and scanning infrared lasers. Both the radar and the scanning laser measure distance and azimuth angle for multiple targets. The radar units are mounted on the front bumper, one on each end, pointing forward. Ultrasonic sensors were originally used as corner sensors, but they did not work well for two reasons: first, the ground was picked up as a target when the sensitivity was set high; second, the ultrasonic transceiver surface was not waterproof, making the sensors unsuitable for exterior corner mounting. Denso LIDAR sensors were judged better for this role, so several were acquired from Denso. Three lidar units are mounted on the bumper: the units at each end point outward 20 degrees, and the one near the center points straight ahead.

Other sensors record the driver's inputs to the bus, such as steering wheel angle, brake line pressure, throttle position, and turn signal activation; an accelerometer and a GPS receiver round out the set. The radar, lidar, and GPS data are recorded over the RS-232 communication protocol. The remaining sensors are recorded through an analog-to-digital board with anti-aliasing filters.
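As an illustration of the engineering half of the system, the following sketch shows how one serial-attached sensor stream might be timestamped and logged. The port name, baud rate, and record layout are assumptions for illustration only, not the actual PATH configuration.

    import csv
    import time

    import serial  # pyserial

    # Illustrative settings; the actual ports and baud rates of the radar,
    # lidar, and GPS units on the bus are not given in this document.
    PORT = "/dev/ttyS0"
    BAUD = 9600

    def log_serial_sensor(port=PORT, baud=BAUD, outfile="sensor_log.csv"):
        """Read line-oriented RS-232 sensor output and log it with host timestamps."""
        with serial.Serial(port, baud, timeout=1.0) as ser, \
                open(outfile, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["host_time_s", "raw_record"])
            while True:
                raw = ser.readline()   # one record, if the device is line-oriented
                if not raw:            # timeout: no data this cycle
                    continue
                writer.writerow([f"{time.time():.3f}",
                                 raw.decode("ascii", errors="replace").strip()])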


Fig. 1 Sensors installed on a bus

Video data is recorded using a commercially available digital video system. The first digital video recording system implemented saved the video as a series of still images in an encrypted proprietary format. This limited the level of compression and allowed only three days of data to be collected before the removable hard disks had to be changed. It also required that the video data first be converted to a standard still-picture format and then to a standard moving-picture format (MPEG-1), a very time-consuming manual process. The video recorder was also unreliable, crashing the flash-ROM system several times.

A Loronix video system was found that offered several improvements over the previous system. This system records video in a standard format (AVI) and allows for automated conversion to MPEG-1 format, so much less time is required to convert the video data. The system also has greater storage capacity than the previous one, allowing one week of data collection before the removable hard disks need to be changed. It was retrofitted on the first bus and has proven to be much more reliable and easier to use. The video cameras in the originally developed system were too obtrusive and were easily damaged or moved by passengers, so a different style of video camera was selected to replace them. These cameras have a form factor that allowed them to be installed in the destination window of the bus, which makes them less obtrusive and prevents them from being tampered with. The system records up to six cameras in AVI format onto a PC hard drive. Four miniature "board cameras" capture video images around the bus: the front road scene, the left and right front corner road scenes, and the passenger compartment. The video streams from the four cameras are combined into one stream by a quad image combiner to extend the hard drive storage capacity.

Synchronization between engineering and video data is very important for later playback. The first item of information for synchronization is the time stamp recorded in each video frame as a title. This time stamp is generated by a title generator that receives the clock time from the engineering computer, and it allows for manual synchronization. The engineering computer also sends three synchronization signals to the video recorder through the alarm inputs. These signals and their triggering time stamps are recorded separately by both the engineering computer and the video recorder. The signals are triggered every minute, every 15 minutes, and every 60 minutes, respectively. By matching the signal records in the engineering data with the records of alarms in the video recorder, the time difference between the two computers can be determined. Once the computer time difference is known, the video clips can be synchronized with the engineering data streams.

The synchronization occurs as part of the process of transferring the data from the removable hard disks to a permanent database storage system, which is composed of a Redundant Array of Inexpensive Disks (RAID). Once the database has been synchronized and broken into small data clips, each set of data clips is saved in one folder for easy access.
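A minimal sketch of the offset-matching step, assuming each machine logs its alarm events as (signal id, local timestamp) pairs; the record format and field names are illustrative, not the actual PATH file layout.

    from statistics import mean

    def clock_offset(engineering_events, video_events):
        """Estimate the video-clock minus engineering-clock offset in seconds.

        Each argument is a list of (signal_id, local_time_s) tuples, one entry
        per synchronization signal (the 1-, 15-, and 60-minute alarms), as
        recorded by that machine.
        """
        eng = dict(engineering_events)
        diffs = [t_video - eng[sig]
                 for sig, t_video in video_events
                 if sig in eng]
        if not diffs:
            raise ValueError("no common synchronization signals found")
        return mean(diffs)

    # Example: the video recorder's clock runs about 2.4 s ahead.
    eng_log = [("min_1", 100.0), ("min_15", 900.0), ("min_60", 3600.0)]
    vid_log = [("min_1", 102.4), ("min_15", 902.4), ("min_60", 3602.5)]
    offset = clock_offset(eng_log, vid_log)   # ~2.43 s
    # video_time - offset now lives on the engineering computer's clock.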

Fig. 2 System layout on the bus (figure labels: a corner lidar and a radar at each bumper end, with the center lidar and the computer enclosure between them)

The data acquisition system has been installed on three buses in the SamTrans fleet. A fourth system has been prepared for installation on a yet-to-be-determined bus from another agency in the Bay Area. The first system started collecting data in August 2000, and the second in April 2001. After the second system started running, the first system was updated with the new design. The third bus started collecting data in January 2002.

Calibration of DAS

The location and direction of some sensors influence the system performance. Before the bus is sent out to collect data, the sensors and the entire system must be calibrated. The calibration process involves the following three tasks: 1) measure the location and direction of the sensors, 2) correct the location and direction of some sensors, and 3) examine the system alignment.

This section describes the calibration process of the first DAS on the first bus and gives the results. The 1st section gives the measurements of sensor location and direction. The 2nd section describes the laser radar calibration procedure and results. The 3rd section describes the calibration approaches for cameras. Calibration of the system alignment is given in the 4th section, and calibration of the other sensors in the 5th section. The DAS design was changed after the first DAS was calibrated; however, the calibration process and techniques presented in this document were used to calibrate all the systems. For convenience, the following abbreviations are used.


Table 1 DAS calibration abbreviations

    Sensor name                               Abbreviation
    passenger side corner camera              P-CAM
    front-looking camera                      F-CAM
    driver side corner camera                 D-CAM
    passenger side upper ultrasonic sensor    UP-SONAR
    passenger side lower ultrasonic sensor    LP-SONAR
    passenger side radar                      P-RADAR
    laser radar                               LIDAR
    front-looking ultrasonic sensor           F-SONAR
    driver side radar                         D-RADAR
    driver side upper ultrasonic sensor       UD-SONAR
    driver side lower ultrasonic sensor       LD-SONAR
    interior-looking camera                   I-CAM
    rear-looking camera                       R-CAM
    rear radar                                R-RADAR
    global positioning system                 GPS

Sensor position

Coordinate systems

To locate the sensors, two reference frames were defined on the bus: the Front Coordinate System (FCS) and the Rear Coordinate System (RCS). Locations of the front sensors, including P-CAM, F-CAM, D-CAM, UP-SONAR, LP-SONAR, P-RADAR, LIDAR, F-SONAR, D-RADAR, UD-SONAR, LD-SONAR, and I-CAM, are measured in the FCS. Locations of the rear sensors, including R-CAM, R-RADAR, and GPS, are measured in the RCS. The reference points of the coordinate systems and the positions of the sensors are illustrated in the following figures. The positive x-axis points horizontally to the left, the positive y-axis vertically upward, and the positive z-axis horizontally forward. The basic dimensions of the bus are: length = 12200 mm, width = 2750 mm.
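A small sketch of how these frames might be represented in analysis code. The class and the ground-height helper, which uses the reference-point heights given in the following subsections (585 mm for the FCS, 790 mm for the RCS), are illustrative conventions, not part of the PATH software.

    from dataclasses import dataclass

    # Heights of the frame reference points above the ground, from the
    # measurements reported below (FCS: 585 mm, RCS: 790 mm).
    FRAME_REF_HEIGHT_MM = {"FCS": 585.0, "RCS": 790.0}

    @dataclass
    class SensorLocation:
        frame: str   # "FCS" or "RCS"
        x_mm: float  # positive to the left
        y_mm: float  # positive upward
        z_mm: float  # positive forward

        def height_above_ground_mm(self):
            # y is measured from the frame reference point, so the sensor's
            # ground height is the reference height plus its y offset.
            return FRAME_REF_HEIGHT_MM[self.frame] + self.y_mm

    # Example: the LIDAR entry from Table 2 (FCS, x=-836, y=-195, z=78).
    lidar = SensorLocation("FCS", -836.0, -195.0, 78.0)
    print(lidar.height_above_ground_mm())   # 390.0 mm above the ground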


Front Sensors

The reference point of the FCS and the locations of the front sensors are illustrated in Fig. 3.

Fig. 3 FCS and front sensors

The reference point is at the front center of the bus, 585 mm above the ground. The coordinates of the front sensors are listed in the following table.

Table 2 Front sensor locations

    Sensor      x (mm)   y (mm)   z (mm)   Angle (deg)
    LIDAR         -836     -195       78    N.A. (1)
    P-RADAR      -1050     -132       70    N.A. (1)
    UP-SONAR     -1201      -97       64    -36 (2)
    LP-SONAR     -1201     -176       64    -26 (2)
    D-RADAR        985     -135       67    N.A. (1)
    UD-SONAR      1190      -95       64     35 (2)
    LD-SONAR      1190     -175       64     26 (2)
    F-SONAR        790     -161       61    N.A. (1)
    D-CAM          396      991      -80     14 (3)
    F-CAM          -69     1653      -61     13 (3)
    P-CAM         -109     1563      -95     25 (3)
    I-CAM         -409     2186     -365    N.A. (1)

    Notes: (1) N.A. = not available; (2) azimuth angle; (3) tilting angle.


Rear Sensors

The reference point of the RCS and the locations of the rear sensors are illustrated in Fig. 4.

Fig. 4 RCS and rear sensors

The reference point is at the rear center of the bus, 790 mm above the ground. The coordinates of the rear sensors are listed in the following table.

Table 3 Rear sensor locations

    Sensor     x (mm)   y (mm)   z (mm)   Angle (deg)
    R-RADAR      950     -154      -39    N.A. (1)
    GPS          590     2220      800    N.A. (1)
    R-CAM        500     1500      140     16 (2)

    Notes: (1) N.A. = not available; (2) tilting angle.

LIDAR calibration

Optical axis orientation

The LIDAR beam scans in 2D by rotating a hexagonal mirror. The equivalent detection scope is 16 degrees in the horizontal direction and 4.4 degrees in the vertical direction. The equivalent optical axis is defined to originate from the LIDAR lens and extend to the center of the detection scope, i.e. eight degrees from both the left and the right margins and 2.2 degrees from both the top and the bottom margins. Two adjustable screws on the front face of the LIDAR can be rotated to adjust the optical axis in 2D (both horizontal and vertical directions). As the LIDAR is mounted on the passenger side of the 1st bus, to calibrate the LIDAR we must first adjust the optical axis to an appropriate direction [1].

The LIDAR optical axis is set horizontally to point at the bus's longitudinal center line 50 meters ahead of the bus front reference point, and vertically 2.2 degrees up with respect to the horizontal plane. The geometric relationship is illustrated in Fig. 5.

Fig. 5 LIDAR calibration geometry (reflector on the bus longitudinal center line at R = 50 m; 16° horizontal by 4.4° vertical detection scope, optical axis 2.2° up)
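Since the LIDAR sits off the bus center line (x = -836 mm in Table 2), aiming it at a point on the center line 50 m ahead implies a small yaw toward the center. A quick check of that angle, purely as a worked example:

    import math

    lateral_offset_m = 0.836   # LIDAR x-position from Table 2, in meters
    aim_range_m = 50.0         # aim point on the center line, 50 m ahead

    # Yaw needed for the optical axis to cross the center line at 50 m.
    yaw_deg = math.degrees(math.atan2(lateral_offset_m, aim_range_m))
    print(f"{yaw_deg:.2f} deg")   # about 0.96 deg toward the bus center line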

LIDAR calibration procedure

LIDAR calibration was done by the following procedure.

1. Measure the LIDAR lens vertical position (height above the ground): H = __0.425__ m.

2. Measure R = __50__ m from the bus front reference point along the longitudinal direction.

3. Set the reflector at R = 50 m with vertical position = H.

4. Adjust both the lower and the upper screws simultaneously until the reported "lateral position" = __0__. Change the lateral position to check the adjustment:

Table 4 LIDAR lateral position test

    Actual lateral position   Expected report number   LIDAR report (5th col)
    6 m left                  -60 * 0.1 m              __-61__
    3 m left                  -30 * 0.1 m              __-30__
    3 m right                  30 * 0.1 m               __30__
    6 m right                  60 * 0.1 m               __61__

5. Adjust the lower screw so that the reported "vertical position" increases from smaller to larger numbers through __12__.

6. Turn the lower screw in the "-" direction by __0.3-0.5__ revolutions, making sure that the LIDAR keeps detecting the reflector.

7. Change the distance to check the adjustment:

Table 5 LIDAR range test

    Actual distance   Expected report number       LIDAR report (1st-2nd cols)
    40 m              31 * 1.28 m + 32 * 0.01 m    __31__ * 1.28 m  __98__ * 0.01 m
    30 m              23 * 1.28 m + 56 * 0.01 m    __24__ * 1.28 m  __14__ * 0.01 m
    20 m              15 * 1.28 m + 80 * 0.01 m    __16__ * 1.28 m  __48__ * 0.01 m
    10 m               7 * 1.28 m + 104 * 0.01 m    __8__ * 1.28 m  __46__ * 0.01 m

8. Put the reflector at R = 10 m and, with the vertical position changing, check the adjustment:


Table 6 LIDAR vertical position test

    Actual vertical position   Expected report number   LIDAR report (9th col)
    H + 0.76 m                 2                        __2__
    H + 0.57 m                 3-4                      __4__
    H + 0.38 m                 6-7                      __5__
    H + 0.19 m                 9-10                     __6__
    H + 0 m                    12                       __8__
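Tables 5 and 6 imply that the LIDAR reports range as a coarse count in 1.28 m steps plus a fine count in 0.01 m steps (e.g. 40 m = 31 * 1.28 m + 32 * 0.01 m). A hypothetical decoder and encoder for that convention, useful for checking expected report numbers:

    def expected_range_counts(distance_m):
        """Split a distance into (coarse, fine) LIDAR report counts, assuming
        1.28 m coarse steps and 0.01 m fine steps as implied by Table 5."""
        coarse = int(distance_m // 1.28)
        fine = round((distance_m - coarse * 1.28) / 0.01)
        return coarse, fine

    def range_from_counts(coarse, fine):
        """Reconstruct the measured range from the two report columns."""
        return coarse * 1.28 + fine * 0.01

    print(expected_range_counts(40.0))   # (31, 32), matching Table 5
    print(range_from_counts(31, 98))     # 40.66 m from the actual report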

Camera calibration

Rough adjustment

Three focal length options are available: 3 mm, 4 mm, and 7.5 mm. Lenses with different focal lengths were fitted on the camera heads; by comparing the fields of view and selecting the lens that best matches the area of interest around the bus, the optimal focal length was chosen for each camera, as listed in the following table.

Table 7 Focal length of cameras

    Camera   Focal length
    D-CAM    4 mm
    F-CAM    7.5 mm
    P-CAM    4 mm
    I-CAM    4 mm
    R-CAM    7.5 mm
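For intuition about these choices, the horizontal field of view follows from the focal length and the imager width via FOV = 2 atan(w / 2f). The sensor width used below (4.8 mm, a common 1/3-inch imager) is an assumption for illustration; the actual imager size of these cameras is not given here.

    import math

    SENSOR_WIDTH_MM = 4.8   # assumed 1/3" imager width; not from this report

    def horizontal_fov_deg(focal_length_mm, sensor_width_mm=SENSOR_WIDTH_MM):
        """Horizontal field of view of a pin-hole camera, in degrees."""
        return math.degrees(2 * math.atan(sensor_width_mm /
                                          (2 * focal_length_mm)))

    for f in (3.0, 4.0, 7.5):
        print(f"{f} mm lens -> {horizontal_fov_deg(f):.1f} deg")
    # Under this assumption: 3 mm ~ 77 deg, 4 mm ~ 62 deg, 7.5 mm ~ 35 deg,
    # i.e. the shorter lenses trade reach for wider coverage near the bus.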

The image plane rotation and optical axis direction of each camera were roughly adjusted by monitoring the video output. The factors of interest while adjusting are range coverage, the azimuthal direction of interest, and consistency between adjacent cameras. The tilting angle of each camera was measured with a level and an angle measure.

Intrinsic and extrinsic parameter calculation

Control points

To calibrate the cameras, 20 control points, arranged in 4 lines with 5 points in each line, were made on a vertically standing black screen. Adjacent lines are 50 centimeters apart, and the distance between adjacent points in each line is also 50 centimeters. The screen was placed in front of each camera with the points facing the camera, and a picture was taken and stored in the computer. The screen was then moved 25 centimeters (for F-CAM and R-CAM) or 20 centimeters (for D-CAM and P-CAM) closer to the camera. This process was repeated until five pictures were taken for each camera. Every time a picture was taken, the position of the screen in the bus coordinate system was marked on the ground and measured later to calculate the control point coordinates.


The pictures were opened in Microsoft Photo Editor to read the image coordinates of the control points. This gives the coordinates of the control points in the bus coordinate system and their corresponding image coordinates in the picture. Each control point and its image are called a calibration pair. By substituting the coordinates of each calibration pair into the camera model described below, two equations per pair were obtained. The unknown camera parameters can then be solved from the equations for all pairs in the Least Square Error (LSE) sense.

Camera model

Let P = [X, Y, Z]^T represent the coordinates of a point in the bus coordinate system (FCS or RCS), P_C = [X_C, Y_C, Z_C]^T represent the coordinates of the point in the camera coordinate system, (x_U, y_U) and (x_D, y_D) represent the undistorted and distorted image coordinates of the point respectively, and (i, j) represent the coordinates read in Microsoft Photo Editor, i.e. the pixel location with respect to the top-left corner of the image, viz. the computer image coordinates. The relationship between the bus coordinate system and the camera coordinate system is given by [2]:

    P_C = R P + T                                                        (1)

where R = [r_ij] is a 3x3 ortho-normal rotation matrix defining the camera orientation and T = [t_1, t_2, t_3]^T is a translation vector defining the camera position. The camera coordinate system is transformed to the undistorted image coordinate (2D) system according to the pin-hole model:

    x_U = f X_C / Z_C
    y_U = f Y_C / Z_C                                                    (2)

where f is the focal length. The distortion of the image coordinates can be modeled by [4]:

    delta_x = 2 p_1 x_U y_U + p_2 (r^2 + 2 x_U^2) + k_1 x_U r^2
    delta_y = p_1 (r^2 + 2 y_U^2) + 2 p_2 x_U y_U + k_1 y_U r^2          (3)

where r^2 = x_U^2 + y_U^2, p_1 and p_2 are the coefficients of tangential distortion, and k_1 is the coefficient of radial distortion. The distorted image coordinates are then obtained as:

    x_D = x_U + delta_x
    y_D = y_U + delta_y                                                  (4.1)

or

    x_D = x_U + 2 p_1 x_U y_U + p_2 (r^2 + 2 x_U^2) + k_1 x_U r^2
    y_D = y_U + p_1 (r^2 + 2 y_U^2) + 2 p_2 x_U y_U + k_1 y_U r^2        (4.2)

The relationship between the distorted image coordinates and the computer image coordinates is given by:

    x_D / sigma_x = i - i_0
    y_D / sigma_y = j - j_0                                              (5)

where sigma_x and sigma_y are the distances between adjacent imaging sensor elements in rows and columns, respectively, and (i_0, j_0) is the computer image coordinate of the principal point of the image coordinate system.

The model itself is a nonlinear one. The unknown parameters can be categorized into intrinsic and extrinsic, or linear and nonlinear, parameters as follows:

Table 8 Parameter table

                 Linear                              Nonlinear
    Intrinsic    f, sigma_x, sigma_y, (i_0, j_0)     k_1, p_1, p_2
    Extrinsic    R = [r_ij] (i,j = 1,2,3),           (none)
                 T = [t_1, t_2, t_3]^T
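The following sketch implements the forward model, equations (1) through (5), mapping a bus-frame point to computer image coordinates. The numeric parameter values are placeholders for illustration, not calibrated values from this report.

    import numpy as np

    def project(P, R, T, f, k1, p1, p2, sigma_x, sigma_y, i0, j0):
        """Map a bus-frame point P (3-vector) to computer image coords (i, j)
        using equations (1)-(5) of the camera model."""
        Pc = R @ P + T                        # (1) bus frame -> camera frame
        xu = f * Pc[0] / Pc[2]                # (2) pin-hole projection
        yu = f * Pc[1] / Pc[2]
        r2 = xu**2 + yu**2
        dx = 2*p1*xu*yu + p2*(r2 + 2*xu**2) + k1*xu*r2   # (3) distortion
        dy = p1*(r2 + 2*yu**2) + 2*p2*xu*yu + k1*yu*r2
        xd, yd = xu + dx, yu + dy             # (4) distorted image coords
        return xd/sigma_x + i0, yd/sigma_y + j0          # (5) pixel coords

    # Placeholder parameters (illustrative only): camera looking down +z.
    R = np.eye(3)
    T = np.zeros(3)
    i_px, j_px = project(np.array([0.5, 0.25, 20.0]), R, T,
                         f=7.5, k1=1e-4, p1=1e-5, p2=1e-5,
                         sigma_x=0.01, sigma_y=0.01, i0=320.0, j0=240.0)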

Calibration procedure

It is hard to solve for all the parameters simultaneously from the complete nonlinear camera model. However, if the nonlinear distortion can be neglected, the model becomes linear, and once the linear parameters are known, the nonlinear parameters can be solved from the linear equations (3). These properties of the camera model simplify the calibration procedure into the following steps [3]:

Step 1: Assume no distortion and calculate the linear model parameters.
Step 2: Calculate the distortion using the linear parameters estimated in Step 1.
Step 3: Calculate the nonlinear parameters using the distortion and the linear parameters estimated in Step 2.
Step 4: Calculate the distortion using the linear and nonlinear parameters estimated in Steps 2 and 3.
Step 5: Subtract the distortion estimated in Step 4 from the image coordinates, then loop to Step 1 or terminate.

The procedure terminates when it has converged. As noise exists in the calibration pair coordinates, the distortion used in Step 5 was multiplied by a positive fraction to ensure convergence; the fraction used in our calculation is 0.999.
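As an illustration of Step 1, the linear (distortion-free) model reduces to a projective mapping whose 3x4 matrix can be estimated from the calibration pairs by homogeneous least squares. This is a minimal sketch of that standard DLT formulation, not the PATH implementation; the decomposition of M into f, R, T, and the principal point is omitted.

    import numpy as np

    def dlt_projection_matrix(world_pts, image_pts):
        """Step 1 sketch: fit the 3x4 projection matrix M of the linear
        (no-distortion) camera model so that [i, j, 1]^T ~ M [X, Y, Z, 1]^T,
        in the least-square-error sense over all calibration pairs.

        world_pts: (n, 3) control point coords in the bus frame (n >= 6)
        image_pts: (n, 2) computer image coords (i, j) read from the pictures
        """
        rows = []
        for (X, Y, Z), (i, j) in zip(world_pts, image_pts):
            rows.append([X, Y, Z, 1, 0, 0, 0, 0, -i*X, -i*Y, -i*Z, -i])
            rows.append([0, 0, 0, 0, X, Y, Z, 1, -j*X, -j*Y, -j*Z, -j])
        A = np.asarray(rows, dtype=float)
        # The smallest right singular vector of A gives M up to scale.
        _, _, Vt = np.linalg.svd(A)
        return Vt[-1].reshape(3, 4)

    # Steps 2-5 would then estimate (k_1, p_1, p_2) from the residuals via the
    # linear equations (3), subtract 0.999 times the estimated distortion from
    # the image coordinates, and repeat until convergence.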

Calibration results

Control point images

Control point image coordinates estimated with the linear-only and the linear-plus-nonlinear models, together with the actual image coordinates read in Photo Editor, are illustrated in the following plots, where the 'o' signs represent the actual images read in Photo Editor and the '+' signs represent the
