Keywords
Lane Departure Warning, Data Fusion, Image Processing, Fuzzy Logic, Vehicle Dynamics
According to [1], single-vehicle road departure incidents account for the majority of road accidents. The frequency of fatal car accidents has become one of today's most significant issues: lane departures cause the bulk of highway fatalities, resulting in hundreds of deaths, thousands of injuries, and billions of dollars in damages each year. Malaysia has been rated as the nation with the greatest number of road-related deaths per 100,000 inhabitants every year since 1996 [2]. According to World Health Organization statistics from 2013, released in [3], Malaysia placed third among developing countries for hazardous roads, behind only Thailand and South Africa. As a consequence, vehicle safety systems such as lane departure warning (LDW) systems have proven essential for avoiding lane departure.
According to [2], the geographical distribution of the 750,000 annual fatalities due to road accidents in 1999 put almost half of them in Asia. Furthermore, the statistical data on road deaths published in [4] show an increase in worldwide traffic fatalities, which is consistent with the predicted future rise in road fatalities in different geographical regions revealed in [5]. The total number of road fatalities predicted for 2020 is over 3.5 times the total recorded in 1990, with South Asia bearing the brunt of the increase. The data trend displayed in [5] also shows that the overall number of victims of road accidents in developing nations is increasing, whereas in high-income nations there has been a continuous decline over the past 20 years. The reduction in road deaths in high-income nations was mostly driven by legislative enforcement, such as the requirement that LDW systems be installed in all vehicles sold in the country.
It is therefore important to report the proportion of road deaths from motorised four-wheeled vehicles compared with motorcyclists, bicyclists, and pedestrians in terms of the World Health Organization sub-region categorisation, as shown in [6]. That work displays a breakdown of road traffic fatalities by road user group in World Health Organization sub-regions, as well as the global average breakdown by road user group. The breakdown of road user groups presented in [6] was based on publicly available and unpublished information on country-specific road traffic injuries from 1999 to 2006. According to [6], motorised four-wheeled vehicles are on average expected to account for almost half of worldwide road traffic deaths (45%), followed by the pedestrian, motorcyclist, and bicycle user groups at 31%, 18%, and 7%, respectively.
In 2013, a similar distribution of road traffic deaths by road user type was found, with four-wheeled vehicles accounting for 35%, followed by pedestrians, motorised two- or three-wheeled vehicles, cyclists, and other road users at 31%, 22%, 11%, and 1%, respectively. Based on these statistical data, four-wheeled vehicles remain the leading contributor to worldwide road deaths compared with other road user categories. Moreover, [7] revealed that approximately 37.4% of all fatal vehicle crashes in the United States are caused by single-vehicle lane departure. According to related research [8], single-vehicle lane departure accidents accounted for the majority of road traffic fatalities, caused by drifting into oncoming traffic, adjacent traffic, or off the highway.
Most road casualties are closely connected with driver behaviour, such as unintended steering wheel motions, dozing, negligence, fatigue, drowsiness, intoxication, or use of mobile phones [9]. As a result, automobile safety has become a concern for road users, as the majority of road fatalities result from a driver's faulty judgement of the vehicle's route [10]. Over the past decade, automobile safety has received considerable attention, with numerous researchers working to improve car safety and comfort, according to [11]. One of the main efforts by researchers has been to use a calculated risk indicator to provide a triggered warning signal to the driver right before an accident occurs in order to prevent road casualties [12], such as the LDW discussed in this article.
The present LDW framework is mostly made up of the environment detection component of the vision sensor, which detects the lane border, lane marker, and road contour. The determination of lane location is a critical component of any LDW application, and it is paramount to evaluate how the lane is detected and to determine its accuracy with applicable metrics under various environmental conditions [13]. As a result, LDW is often only used on roads with well-defined lane markers, and the systems may be degraded by erroneous activity and circumstances on the road; an example of erroneous activity is failing to engage a turn signal before making a lane change. Performance assessment criteria for a general LDW system therefore include the lane detection rate, false-positive rate, and false-negative rate, which numerous prior studies have examined and reported [14].
Image quality difficulties, low-visibility circumstances, and a variety of lane conditions are problems for LDW, according to [15]. LDW performance decreases as a result of flaws brought on by environmental constraints; examples of such lane detection challenges are roadway lane markings in the daytime and at night. Because LDW constraints caused by environmental circumstances make it difficult to identify correct lanes [16], a new framework for LDW development is needed to improve the system's resilience in coping with these difficulties. Most developed LDW techniques are based on a vision sensor, and some integrate a global positioning system (GPS) [17]; still, the lane departure detection results then depend on the reliability of the link between the GPS receiver and the satellites.
This research aims to design a data fusion-based LDW framework that improves lane departure detection in daytime and night-time driving environments. The main motivation is to investigate frameworks that combine vision data from vision-based LDW with the vehicle dynamical state from model-based vehicle dynamics, so that the effectiveness of data fusion-based LDW can be enhanced in solving lane departure detection problems. Given that various disturbances exist in a vision system, the lane departure detection performance of vision-based LDW can be severely degraded when vision disturbances appear in vision-based lane detection. It is therefore desirable to design a data fusion-based LDW that enhances lane departure detection by combining the yaw acceleration from model-based vehicle dynamics with the lateral offset ratio from vision-based LDW, while accounting for the effect of vision disturbances in daytime and night-time driving environments.
As part of an intelligent transportation system, LDW plays a vital role in reducing road fatalities by warning the driver about any accidental lane departure. Prior to lane departure, the driver or monitoring system detects one lane boundary moving horizontally towards the centre of the front view. A lane departure may be recognised by analysing the horizontal position of each detected lane boundary, which corresponds to the X-coordinates of the bottom end-points of the lane borders in the image plane [18]. The system issues a warning message when the vehicle reaches a set distance from the lane boundary. An LDW warns that a vehicle is about to leave its current lane or cross a lane boundary.
Figure 1 shows the overall data fusion-based LDW framework. The vision-based lane detection framework found in [19] is extended by determining the lateral offset ratio based on the detected X12 and X22 coordinates. The model-based vehicle dynamics framework can be found in [20]. Lane departure detection is based on the pre-defined vision-based LDW lane departure identification for the lateral offset ratio.
However, all vision-based LDW systems encounter performance constraints [21]. Undetectable lane boundary markings limit the performance of vision-based LDW systems and their supporting algorithms; limitations include environmental conditions, highway condition, and other marker occlusions. Hence, many research communities are finding new ways to improve the LDW system. In this article, data fusion between vision data and vehicle data enhances the LDW results. The lateral offset ratio and yaw acceleration are calculated using a combination of vision-based LDW and model-based vehicle dynamics. These two signals are then utilised as the input variables for fuzzy logic. The computed fuzzy output variable of LDW, f(u), is then used for detecting lane departure based on the defined fuzzy logic rules for LDW.
In [22], the vehicle's lateral offset in relation to the lane centre was utilised to forecast lane departure. However, existing techniques depend on camera calibration to obtain the lateral offset, whereas vision-based LDW does not require any intrinsic or extrinsic camera parameters [23]. In this article, both lane boundary X-coordinates (X12 and X22) are analysed for each frame to compute the lateral offset ratio (LOR).
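Consistent with the description below and with the stated behaviour of the ratio (0.25 when the vehicle is centred in the lane, 0 at the warning threshold, and −1 when a lane boundary is crossed), the lateral offset ratio can be expressed as:

$$\mathrm{LOR} = \frac{\min\left(\left|X_m - X_{12}\right|,\ \left|X_{22} - X_m\right|\right) - TH \cdot X_m}{TH \cdot X_m}$$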
where X12 is the detected left bottom end-point of the left lane boundary, X22 is the detected right bottom end-point of the right lane boundary, Xm is one-half of the image plane's horizontal width, and TH is the LDW threshold, which is set to a constant value of 0.8. ISO 17361:2007 [24] makes no provision regarding how early before crossing the lane the warning threshold should be set; however, it does provide a warning threshold reference of approximately 80% of the lane width from the centre of the lane.
|Xm − X12| is the absolute value of the horizontal pixel distance between the detected Xm and X12 on the image plane, and |X22 − Xm| is the absolute value of the horizontal pixel distance between the detected X22 and Xm. The min function selects, for each frame, the minimum of |Xm − X12| and |X22 − Xm|. The difference between this minimum and TH·Xm gives the horizontal pixel distance between the warning threshold and the detected X12 or X22, and the lateral offset ratio is obtained by normalising this difference by the warning threshold distance, TH·Xm. The projection of a path in the image plane for the lateral offset ratio calculation is shown in Figure 2.
Assume the car is travelling in the middle of the lane, parallel to the lane borders. The lateral offset ratio, computed from the left bottom end-point of the left lane boundary, X12, the right bottom end-point of the right lane boundary, X22, one-half the horizontal width of the image plane, Xm, and the warning threshold, TH, is then constant and equal to 0.25. Now assume that the car is travelling parallel to the lane borders but has left the lane's centre. The lateral offset ratio is again constant in this instance, but it is less than 0.25; because the vehicle does not appear to be leaving its lane, no LDW signal should be activated. As the vehicle approaches the lane border, the lateral offset ratio decreases from 0.25 towards -1; because the vehicle appears to have strayed from its lane, an LDW signal should be activated, and the text 'Lane Departure' appears in the series of detected lane departure frame pictures.
Lane departure detection utilising the lateral offset ratio in vision-based LDW is summarised in Table 1, with values ranging from -1 to 0.25. The lane departure zone corresponds to -1 ≤ lateral offset ratio ≤ 0; a lateral offset ratio of zero indicates that the vehicle is crossing the warning threshold, and the text 'Lane Departure' is shown on the frame picture. The no lane departure zone corresponds to 0 < lateral offset ratio ≤ 0.25, with the highest value indicating that the vehicle is in the middle of the lane; in this zone the text 'Lane Departure' is not shown. When the lateral offset ratio is equal to -1, the vehicle is crossing one lane border, and the text 'Lane Departure' is shown on the frame picture.
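For illustration, the lateral offset ratio computation and the Table 1 decision zones could be sketched in Python as follows (the function names and example pixel coordinates are illustrative assumptions, not taken from the original implementation):

```python
def lateral_offset_ratio(x12, x22, xm, th=0.8):
    """Lateral offset ratio (LOR) from the detected bottom end-points of the
    left (x12) and right (x22) lane boundaries and the horizontal image
    centre xm, with warning threshold th (0.8, per ISO 17361)."""
    nearest = min(abs(xm - x12), abs(x22 - xm))   # pixel distance to the closer boundary
    return (nearest - th * xm) / (th * xm)        # 0.25 at lane centre, 0 at threshold, -1 at boundary

def lane_departure_from_lor(lor):
    """Classify a frame according to Table 1: LOR <= 0 is the lane departure zone."""
    return lor <= 0.0

# Example: a 640-pixel-wide frame (xm = 320) with the vehicle close to the left boundary.
lor = lateral_offset_ratio(x12=300, x22=620, xm=320)
print(lor, lane_departure_from_lor(lor))   # small |xm - x12| -> LOR near -1 -> departure
```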
Analysing only the value of the lateral offset ratio using vision-based LDW is not adequate to detect the likelihood of lane departure, because the vision-based method tends to produce more false warnings when road conditions deteriorate. The computed lateral offset ratio from vision-based LDW is usually noisy, caused by road signs, a leading vehicle in the next lane, occluded lane boundaries, or poor condition of road painting [23]. These interferences are amplified when the lateral offset ratio is computed.
Fuzzy logic is used to intelligently determine the LDW and eliminate such large fluctuations in the computed lateral offset ratio. In this article, the lateral offset ratio and yaw acceleration are chosen as the input variables for the fuzzy logic, reflecting the strictly dynamic characteristic of the output variable [26]. It is crucial to analyse vehicle dynamics responses such as yaw acceleration during lane departure activity; it has been shown that the yaw acceleration response can provide earlier insight in predicting a forthcoming lane departure event than other vehicle dynamics responses. The fuzzification, rule-based inference engine [27], and defuzzification used in data fusion-based LDW are presented in Figure 3.
The lateral offset ratio input and the LDW, f(u), output variables are each divided into two membership function (MF) levels, whereas the yaw acceleration input variable is divided into three MF levels. They are defined as positive (PO) and negative (NE) for the lateral offset ratio; lane departure (LD) and no lane departure (NLD) for the LDW output, f(u); and positive (PO), zero (ZE), and negative (NE) for yaw acceleration. Gaussian MFs are chosen because only the centre and spread parameters need to be updated, which keeps fuzzification fast, and because Gaussian MFs are well known and widely used. The lateral offset ratio and yaw acceleration ranges are (-1, 0.25) and (-0.1, 0.1), respectively. The output variable LDW, f(u), is a singleton within the range -5 to 0.6, and its value can lie anywhere within this range for various road conditions. The average gravity centre method is adopted for defuzzification.
Table 2 tabulates the centre and spread parameters used in the lateral offset ratio and yaw acceleration input MFs to compute the output variable LDW, f(u). For the lateral offset ratio input variable, the PO Gaussian MF has a centre of 0.25 and a spread of 0.087; two Gaussian MFs are chosen in line with the vision-based LDW lane departure identification for the lateral offset ratio described in Table 1. For the yaw acceleration input variable, the PO Gaussian MF has a centre of 0.1 and a spread of 0.03538, the ZE Gaussian MF has a centre of 0 and a spread of 0.03538, and the NE Gaussian MF has a centre of -0.1 and a spread of 0.03538; three Gaussian MFs are chosen to provide full coverage of the yaw acceleration range.
LOR, lateral offset ratio; PO, positive; NE, negative; ZE, zero.
Input variable | MF | Centre | Spread |
---|---|---|---|
LOR | PO | 0.25 | 0.087 |
LOR | NE | -0.5 | 0.17 |
Yaw acceleration | PO | 0.1 | 0.03538 |
Yaw acceleration | ZE | 0 | 0.03538 |
Yaw acceleration | NE | -0.1 | 0.03538 |
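A minimal sketch of how these Gaussian membership functions could be evaluated, using the Table 2 parameters (the helper names and example values are illustrative):

```python
import numpy as np

def gaussmf(x, centre, spread):
    """Gaussian membership function with the given centre and spread."""
    return np.exp(-((x - centre) ** 2) / (2.0 * spread ** 2))

# Input membership functions with the centre/spread parameters of Table 2.
LOR_MF = {"PO": (0.25, 0.087), "NE": (-0.5, 0.17)}
YAW_ACC_MF = {"PO": (0.1, 0.03538), "ZE": (0.0, 0.03538), "NE": (-0.1, 0.03538)}

def fuzzify(value, mf_table):
    """Return the degree of membership of `value` in each labelled MF."""
    return {label: gaussmf(value, c, s) for label, (c, s) in mf_table.items()}

# Example: a vehicle drifting towards a boundary (LOR = -0.3) while yawing slightly.
print(fuzzify(-0.3, LOR_MF))        # higher membership in NE than in PO
print(fuzzify(0.02, YAW_ACC_MF))    # highest membership in ZE
```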
Data fusion-based LDW can be controlled intelligently according to the fuzzy rules established from the control scheme shown in Table 3, which tabulates the fuzzy rule matrix used in data fusion-based LDW, consisting of six defined rules. Rule 1 defines that if the input variable lateral offset ratio is NE and the input variable yaw acceleration is PO, then the output variable LDW, f(u), is LD. Rule 2 defines that if the lateral offset ratio is NE and the yaw acceleration is ZE, then the LDW, f(u), is NLD. Rule 3 defines that if the lateral offset ratio is NE and the yaw acceleration is NE, then the LDW, f(u), is LD.
PO, positive; NE, negative; ZE, zero; NLD, no lane departure; LD, lane departure.
Input variable | Lateral offset ratio: PO | Lateral offset ratio: NE |
---|---|---|
Yaw acceleration: PO | NLD (Rule 4) | LD (Rule 1) |
Yaw acceleration: ZE | NLD (Rule 5) | NLD (Rule 2) |
Yaw acceleration: NE | NLD (Rule 6) | LD (Rule 3) |
Rule 4 defines that if the input variable lateral offset ratio is PO and the input variable yaw acceleration is PO, then the output variable LDW, f(u), is NLD. Rule 5 defines that if the lateral offset ratio is PO and the yaw acceleration is ZE, then the LDW, f(u), is NLD. Rule 6 defines that if the lateral offset ratio is PO and the yaw acceleration is NE, then the LDW, f(u), is NLD.
Table 4 tabulates the lane departure identification in data fusion-based LDW, which uses the output of the fuzzy logic, LDW, f(u). The LDW, f(u), ranges between -5 and 0.6, with values in -5 ≤ f(u) ≤ 0 corresponding to lane departure. A value of LDW, f(u), equal to zero indicates that the vehicle is crossing the warning threshold; hence, the 'Lane Departure' text is presented in the frame image. The no lane departure zone falls within 0 < f(u) ≤ 0.6, with the maximum value of f(u) indicating that the vehicle is located at the centre of the lane; hence, no 'Lane Departure' text is presented in the frame image. The minimum value, f(u) = -5, indicates that the vehicle is crossing one lane boundary, and the 'Lane Departure' text is presented in the frame image.
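The six rules and the final departure decision could be prototyped along the following lines. This is a sketch only: the output singleton values of -5 for LD and 0.6 for NLD, the minimum operator for rule conjunction, and the weighted-average defuzzification are assumptions inferred from the stated f(u) range and the average gravity centre method, not parameters reported here.

```python
import numpy as np

def gaussmf(x, centre, spread):
    return np.exp(-((x - centre) ** 2) / (2.0 * spread ** 2))

# Input MFs from Table 2.
LOR_MF = {"PO": (0.25, 0.087), "NE": (-0.5, 0.17)}
YAW_ACC_MF = {"PO": (0.1, 0.03538), "ZE": (0.0, 0.03538), "NE": (-0.1, 0.03538)}

# Six rules of Table 3: (LOR label, yaw acceleration label) -> output label.
RULES = {
    ("NE", "PO"): "LD",   # Rule 1
    ("NE", "ZE"): "NLD",  # Rule 2
    ("NE", "NE"): "LD",   # Rule 3
    ("PO", "PO"): "NLD",  # Rule 4
    ("PO", "ZE"): "NLD",  # Rule 5
    ("PO", "NE"): "NLD",  # Rule 6
}

# Assumed output singletons placed at the end-points of the stated f(u) range (-5 to 0.6).
OUTPUT_SINGLETON = {"LD": -5.0, "NLD": 0.6}

def ldw_fuzzy_output(lor, yaw_acc):
    """Weighted-average (centre-of-gravity style) defuzzification of the six rules."""
    mu_lor = {k: gaussmf(lor, c, s) for k, (c, s) in LOR_MF.items()}
    mu_yaw = {k: gaussmf(yaw_acc, c, s) for k, (c, s) in YAW_ACC_MF.items()}
    weights, values = [], []
    for (lor_lbl, yaw_lbl), out_lbl in RULES.items():
        weights.append(min(mu_lor[lor_lbl], mu_yaw[yaw_lbl]))  # rule firing strength (AND via min)
        values.append(OUTPUT_SINGLETON[out_lbl])
    return float(np.dot(weights, values) / np.sum(weights))

# Lane departure is flagged when f(u) <= 0 (Table 4).
f_u = ldw_fuzzy_output(lor=-0.6, yaw_acc=0.05)
print(f_u, "Lane Departure" if f_u <= 0 else "No Lane Departure")
```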
Data fusion-based LDW is simulated with real-life datasets in MATLAB and Simulink version R2019a installed on a laptop running the Windows 10 operating system with an Intel i5 1.60 GHz processor and 4 GB RAM. Alternatively, Scilab's Xcos [28] is suggested as an open-source alternative to MATLAB and Simulink. To simulate the model, the real-life datasets (clips #5–#27) are input sequentially into the Simulink model. The Simulink model and its parameters are shared via Zenodo and GitHub [25].
The testbed used for the experiment is shown in Figure 4, and the equipment used in the experimental testbed is listed in Table 5. The real-life datasets were generated using a camera capturing the look-ahead road footage and rotary encoders capturing the steering wheel angle and vehicle speed responses. The real-life datasets of road footage, steering wheel angle responses, and vehicle speed responses were acquired off-line at a rate of 30 Hz and trimmed into the corresponding clip numbers presented in Table 6 and Table 7 for the daytime and night-time driving environments, respectively. Data trimming is applied to the real-life datasets before they are transferred to a laptop for running the data fusion-based LDW simulation, as illustrated in Figure 4. Trimming is required to ensure that all responses acquired from the various sensors are synchronised to the same time-stamp; in practice, outlier or out-of-sync responses from the sensors during the first seconds of acquisition were discarded to match the reference time-stamp.
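The trimming and synchronisation step could be sketched as follows (a hypothetical illustration: the stream names and helper function are not part of the authors' toolchain):

```python
import numpy as np

def trim_to_common_window(streams):
    """Trim time-stamped sensor streams to their common time window so that
    all responses share the same reference time-stamps.
    `streams` maps a sensor name to a (timestamps, values) pair of arrays."""
    start = max(t[0] for t, _ in streams.values())   # latest common start time
    stop = min(t[-1] for t, _ in streams.values())   # earliest common stop time
    trimmed = {}
    for name, (t, v) in streams.items():
        keep = (t >= start) & (t <= stop)            # drop out-of-sync leading/trailing samples
        trimmed[name] = (t[keep], v[keep])
    return trimmed

# Example with synthetic 30 Hz streams that start at slightly different times.
t0 = np.arange(0.0, 10.0, 1.0 / 30)
t1 = np.arange(0.5, 10.0, 1.0 / 30)
streams = {"steering_wheel_angle": (t0, np.zeros_like(t0)),
           "vehicle_speed": (t1, np.full_like(t1, 60.0))}
print({k: len(t) for k, (t, _) in trim_to_common_window(streams).items()})
```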
Clip no. | No. of frames | Real-life dataset |
---|---|---|
5 | 7049 | 29 |
6 | 2400 | 30 |
7 | 8400 | 31 |
8 | 2550 | 32 |
9 | 3750 | 33 |
10 | 2100 | 34 |
11 | 2100 | 35 |
12 | 2520 | 36 |
13 | 539 | 37 |
Clip no. | No. of frames | Real-life dataset |
---|---|---|
14 | 12719 | 38 |
15 | 510 | 39 |
16 | 1829 | 40 |
17 | 1079 | 41 |
18 | 329 | 42 |
19 | 1739 | 43 |
20 | 239 | 44 |
21 | 869 | 45 |
22 | 630 | 46 |
23 | 2070 | 47 |
24 | 900 | 48 |
25 | 2550 | 49 |
26 | 720 | 50 |
27 | 1079 | 51 |
To validate the effectiveness of the proposed data fusion-based LDW framework, real-life datasets with variations in driving environment (daytime and night-time), road structure (straight and curving roads), and outlier road features (occluded lane markers and arrow signs printed on the road surface) were considered. The author spent many hours manoeuvring the instrumented car to generate real-life datasets specifically for use in the current study.
The experimental testbed used for acquiring road footage for data fusion-based LDW is also described in [19], which proposes a vision-based lane departure warning framework for lane departure detection under daytime and night-time driving environments. The traffic flow and road surface conditions for both urban roads and highways in the city of Malacca are analysed there in terms of lane detection rate and false positive rate. The proposed vision-based lane departure warning framework includes lane detection followed by computation of a lateral offset ratio. The lane detection is composed of two stages: pre-processing and detection. In the pre-processing stage, colour space conversion, region of interest extraction, and lane marking segmentation are carried out; in the subsequent detection stage, the Hough transform is used to detect lanes. Lastly, the lateral offset ratio is computed to yield a lane departure warning based on the detected X-coordinates of the bottom end-points of each lane boundary in the image plane.
Ethical approval was obtained from Multimedia University with approval number EA1902021. The authors submitted a self-declaration form on 17/05/2021 stating that the research was conducted from 01/02/2012 to 30/04/2020. Ethical approval was required by the institution prior to the disclosure of the article.
The number of lane departure frames, the number of identified lane departure frames, and the number of false-positive frames were manually tallied frame by frame for the performance assessment of lane departure detection. The total number of lane departure frames detected was computed as

$$N_{TL} = N_{DL} + N_{FL}$$

the lane departure detection rate as

$$\text{Lane departure detection rate}\ (\%) = \frac{N_{DL}}{N_{TL}} \times 100$$

and the false positive rate as

$$\text{False positive rate}\ (\%) = \frac{N_{FL}}{N_{TL}} \times 100$$

where

NTL - total number of lane departure frames detected

NDL - total number of correctly detected lane departure frames

NFL - total number of false positive lane departure frames
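A minimal helper for these two rates, following the relationships above (the function name and example counts are illustrative, not the article's tallies):

```python
def lane_departure_metrics(n_dl, n_fl):
    """Detection and false-positive rates from correctly detected (n_dl)
    and false-positive (n_fl) lane departure frame counts."""
    n_tl = n_dl + n_fl                        # total lane departure frames detected
    detection_rate = 100.0 * n_dl / n_tl      # percentage of correct detections
    false_positive_rate = 100.0 * n_fl / n_tl
    return detection_rate, false_positive_rate

# Illustrative counts only:
print(lane_departure_metrics(n_dl=811, n_fl=189))   # -> (81.1, 18.9)
```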
All three test situations, namely straight road, curving road, and false alarms, were included in the real-life datasets presented in Table 6 and Table 7. The effectiveness of data fusion-based LDW in lane departure detection is assessed in both daytime and night-time driving situations, and the efficacy of the data fusion-based LDW concept is shown by comparing the lane departure detection results of vision-based LDW and data fusion-based LDW. Table 8 and Table 9 show the lane departure detection results for vision-based LDW in daytime and night-time driving situations, respectively, while Table 10 and Table 11 show the corresponding results for data fusion-based LDW.
No public or known datasets containing road footage, vehicle speed responses, and steering wheel angle responses for lane departure identification were found that would allow a fair comparison between data fusion-based LDW and vision-based LDW. The lane detection findings of the data fusion-based and vision-based LDWs reported above are therefore compared using the collected real-life datasets. The real-life datasets for clips 5–27 consist of 58,670 road video frames for the lane identification study. As the real-life datasets consist of 23 different clips, each clip has an average lane detection rate and false-positive rate for lane departure. Of all road video frames, 15,636 and 2,702 frames were LDW frames in vision-based LDW and data fusion-based LDW, respectively.
The lane departure detection of vision-based LDW and data fusion-based LDW is compared in Table 12. For the lane detection and false positive rates in the lane detection analyses, both daytime and night-time circumstances were taken into consideration for vision-based LDW and data fusion-based LDW. Using the real-life datasets (clips 5–27), vision-based LDW achieved an average lane departure detection rate of 81.13% and a false positive rate of 18.87% in the daytime driving scenario. In the night-time driving scenario, vision-based LDW gave an average lane departure detection rate of 83.73% and a false positive rate of 16.27%. Although the results show that vision-based LDW is consistent in handling disturbances in a variety of driving environments, its false positive rate for lane departure detection is still more than 16% due to vision system limitations in overcoming disturbances such as worn lane markings, low illumination, and other road markings. Consequently, data fusion-based LDW is included in Table 12 for the comparison of lane departure detection using the real-life datasets.
Data fusion-based LDW obtained substantial lane detection results in both daytime and night-time driving situations, in particular by decreasing the false positive rate of lane detection under unfavourable circumstances such as worn lane markings, poor light, occluded lane markings, and other road signs. Data fusion-based LDW achieved an average lane departure detection rate of 99.96% and a false positive rate of 0.04% using the real-life datasets in the daytime driving scenario. In the night-time driving scenario, it obtained an average lane departure detection rate of 98.95% and a false positive rate of 1.05%. The integration of a vision system, in the form of vision-based LDW, with the vehicle's dynamic conditions substantially decreased the false positive rate for lane recognition and increased lane detection precision by eliminating superfluous LDW frames.
In real-world situations, low illumination and road surface interference are often encountered in our daily driving. Thus, a standalone vision-based system may be unreliable under complex driving environments and road surface conditions. The examples of complex driving environments and road surface conditions encountered in the experiments were low illumination at night, worn lane markings, arrow signs, and occluded lane markings. As part of an intelligent transportation system, lane departure detection performance can be further enhanced by combining the vision data with the vehicle’s dynamical data like steering wheel angle and vehicle speed.
A data fusion-based LDW system is presented, made up of vision-based LDW and model-based vehicle dynamics with multi-input-single-output fuzzy logic in between. The lateral offset ratio is used to determine whether the 'Lane Departure' text should be shown on the frame picture; this calculation is based on the identified X-coordinates of each lane boundary's bottom end-points in the image plane. The LDW, f(u), is intelligently computed using multi-input-single-output fuzzy logic based on the lateral offset ratio input from vision-based LDW and the vehicle's yaw acceleration response from model-based vehicle dynamics. To assess performance in lane identification and lane departure detection, road video from urban roads and a highway in Malacca was gathered.
The false-positive rate and detection rate were investigated. In daytime and night-time driving situations, lane detection rates of 94.60% and 95.36%, respectively, were obtained, with false positive detection rates of 5.40% and 4.64%, respectively. The findings of the experiments indicate that vision-based lane identification is successful in identifying lanes under difficult driving situations, both in daytime and at night-time. The performance assessment of this technique for lane identification revealed that neither the driving environment nor the traffic flow is the most important element influencing the performance of vision-based lane detection; instead, road surface characteristics were shown to be the major contributor to the false positive rate, especially deteriorated lane markers.
In daytime and night-time driving situations, detection rates of 81.13% and 83.73%, respectively, were achieved in the assessment of lane departure detection utilising vision-based LDW, with false-positive rates of 18.87% and 16.27%, respectively. The lane departure detection assessment utilising data fusion-based LDW yielded detection rates of 99.96% and 98.95% in daytime and night-time driving situations, respectively, with false-positive rates of 0.04% and 1.05%. The findings indicate that data fusion-based LDW is successful in identifying lane departures in both daytime and night-time driving situations.
Although lane departure detection utilising data fusion-based LDW works better in a daytime driving environment, poor illumination during night-time driving led to a slight decrease in the lane departure detection rate, as shown in clips 25 and 26. Nonetheless, based on the findings of the experiments, data fusion-based LDW worked well throughout the day without being hampered by road surface conditions. Each frame was processed in about 18.7 milliseconds during the testing of data fusion-based LDW. Low light and road surface interference are common occurrences in real-world driving conditions; as a result, vision-based systems, particularly vision-based lane detection and vision-based LDW, are unreliable in challenging driving situations and road surface conditions. Low lighting at night, faded lane markings, arrow signs, and obscured lane markings were all instances of the difficult driving situations and road surface characteristics observed in the tests. Future study should focus on all-weather test settings. The performance of data fusion-based LDW may be improved further by adaptive tuning of the fuzzy rules and MF parameters as part of an intelligent transportation system.
Mendeley Data: Clip #5. https://doi.org/10.17632/f24x2p6b5h.3 [29].
This project contains the following underlying data:
clip5.avi
steering wheel angle and vehicle speed.mat
Clip 5_vehicle speed.xls
Clip 5_steering wheel angle.xls
Mendeley Data: Clip #6. https://doi.org/10.17632/xskxs82mz6.3 [30].
This project contains the following underlying data:
clip6.avi
steering wheel angle and vehicle speed.mat
Clip 6_vehicle speed.xls
Clip 6_steering wheel angle.xls
Mendeley Data: Clip #7. https://doi.org/10.17632/dppstzh8n6.4 [31].
This project contains the following underlying data:
clip7.avi
steering wheel angle and vehicle speed.mat
Clip 7_vehicle speed.xls
Clip 7_steering wheel angle.xls
Mendeley Data: Clip #8. https://doi.org/10.17632/hgt5whhj6n.3 [32].
This project contains the following underlying data:
clip8.avi
steering wheel angle and vehicle speed.mat
Clip 8_vehicle speed.xls
Clip 8_steering wheel angle.xls
Mendeley Data: Clip #9. https://doi.org/10.17632/bvbykc4hxf.4 [33].
This project contains the following underlying data:
clip9.avi
steering wheel angle and vehicle speed.mat
Clip 9_vehicle speed.xls
Clip 9_steering wheel angle.xls
Mendeley Data: Clip #10. https://doi.org/10.17632/g98zzcn6nr.3 [34].
This project contains the following underlying data:
clip10.avi
steering wheel angle and vehicle speed.mat
Clip 10_vehicle speed.xls
Clip 10_steering wheel angle.xls
Mendeley Data: Clip #11. https://doi.org/10.17632/z3yjbd4567.3 [35].
This project contains the following underlying data:
clip11.avi
steering wheel angle and vehicle speed.mat
Clip 11_vehicle speed.xls
Clip 11_steering wheel angle.xls
Mendeley Data: Clip #12. https://doi.org/10.17632/ytn823rw8j.3 [36].
This project contains the following underlying data:
clip12.avi
steering wheel angle and vehicle speed.mat
Clip 12_vehicle speed.xls
Clip 12_steering wheel angle.xls
Mendeley Data: Clip #13. https://doi.org/10.17632/946jzttn7n.3 [37].
This project contains the following underlying data:
clip13.avi
steering wheel angle and vehicle speed.mat
Clip 13_vehicle speed.xls
Clip 13_steering wheel angle.xls
Mendeley Data: Clip #14. https://doi.org/10.17632/cww75348bj.3 [38].
This project contains the following underlying data:
clip14.avi
steering wheel angle and vehicle speed.mat
Clip 14_vehicle speed.xls
Clip 14_steering wheel angle.xls
Mendeley Data: Clip #15. https://doi.org/10.17632/k74tdgbhjm.3 [39].
This project contains the following underlying data:
clip15.avi
steering wheel angle and vehicle speed.mat
Clip 15_vehicle speed.xls
Clip 15_steering wheel angle.xls
Mendeley Data: Clip #16. https://doi.org/10.17632/hps9jsjwxp.4 [40].
This project contains the following underlying data:
clip16.avi
steering wheel angle and vehicle speed.mat
Clip 16_vehicle speed.xls
Clip 16_steering wheel angle.xls
Mendeley Data: Clip #17. https://doi.org/10.17632/bxmmttx535.3 [41].
This project contains the following underlying data:
clip17.avi
steering wheel angle and vehicle speed.mat
Clip 17_vehicle speed.xls
Clip 17_steering wheel angle.xls
Mendeley Data: Clip #18. https://doi.org/10.17632/smx7tbx29p.3 [42].
This project contains the following underlying data:
clip18.avi
steering wheel angle and vehicle speed.mat
Clip 18_vehicle speed.xls
Clip 18_steering wheel angle.xls
Mendeley Data: Clip #19. https://doi.org/10.17632/kcxpm835gw.3 [43].
This project contains the following underlying data:
clip19.avi
steering wheel angle and vehicle speed.mat
Clip 19_vehicle speed.xls
Clip 19_steering wheel angle.xls
Mendeley Data: Clip #20. https://doi.org/10.17632/m25z57438h.3 [44].
This project contains the following underlying data:
clip20.avi
steering wheel angle and vehicle speed.mat
Clip 20_vehicle speed.xls
Clip 20_steering wheel angle.xls
Mendeley Data: Clip #21. https://doi.org/10.17632/cjptbmddpk.4 [45].
This project contains the following underlying data:
clip21.avi
steering wheel angle and vehicle speed.mat
Clip 21_vehicle speed.xls
Clip 21_steering wheel angle.xls
Mendeley Data: Clip #22. https://doi.org/10.17632/yhd2j7ddxc.3 [46].
This project contains the following underlying data:
clip22.avi
steering wheel angle and vehicle speed.mat
Clip 22_vehicle speed.xls
Clip 22_steering wheel angle.xls
Mendeley Data: Clip #23. https://doi.org/10.17632/5zjf62drv7.3 [47].
This project contains the following underlying data:
clip23.avi
steering wheel angle and vehicle speed.mat
Clip 23_vehicle speed.xls
Clip 23_steering wheel angle.xls
Mendeley Data: Clip #24. https://doi.org/10.17632/r8vm7nbgvm.3 [48].
This project contains the following underlying data:
clip24.avi
steering wheel angle and vehicle speed.mat
Clip 24_vehicle speed.xls
Clip 24_steering wheel angle.xls
Mendeley Data: Clip #25. https://doi.org/10.17632/642n3xx8s6.3 [49].
This project contains the following underlying data:
clip25.avi
steering wheel angle and vehicle speed.mat
Clip 25_vehicle speed.xls
Clip 25_steering wheel angle.xls
Mendeley Data: Clip #26. https://doi.org/10.17632/wmymrk79tg.3 [50].
This project contains the following underlying data:
clip26.avi
steering wheel angle and vehicle speed.mat
Clip 26_vehicle speed.xls
Clip 26_steering wheel angle.xls
Mendeley Data: Clip #27. https://doi.org/10.17632/wb4hgnr6k3.3 [51].
This project contains the following underlying data:
clip27.avi
steering wheel angle and vehicle speed.mat
Clip 27_vehicle speed.xls
Clip 27_steering wheel angle.xls
Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).
Zenodo: realone84/Data_Fusion-based_LDW. https://doi.org/10.5281/zenodo.5241451 [25].
This project contains the following extended data:
Data are available under the terms of the MIT License (MIT).
The principal author would like to thank the supervisors for their continuous supervision and support throughout the tenure of this research.