Research Article

Data fusion based lane departure warning framework using fuzzy logic

[version 1; peer review: awaiting peer review]
PUBLISHED 07 Sep 2021

This article is included in the Research Synergy Foundation gateway.

Abstract

Background: Lane detection is a difficult problem because of varying lane conditions. It plays an important part in advanced driver assistance systems, which provide information such as the lane structure and the host vehicle's position relative to the lane centre. Lane departure warning (LDW) is used to warn the driver about an unplanned exit from the original lane. The objective of this study was to develop a data fusion-based LDW framework to improve the rate of detection of lane departure during daylight and at night.
Methods: Vision-based LDW is a comprehensive framework that extends vision-based lane detection with a lateral offset ratio computed from the detected X12 and X22 coordinates. The computed lateral offset ratio is used to detect lane departure according to predefined lane departure identification criteria for vision-based LDW. Data fusion-based LDW was developed using a multi-input-single-output fuzzy logic controller, fusing the lateral offset ratio from the vision-based LDW framework with the yaw acceleration response from the model-based vehicle dynamics framework. Real-life datasets were generated for simulation under the MATLAB Simulink platform.
Results: Experimental results showed that data fusion-based LDW achieved average lane departure detection rates of 99.96% and 98.95% with false positive rates (FPR) of 0.04% and 1.05% using road footage clips #5–#27 in daytime and night-time, respectively. The average FPR of data fusion-based LDW was 18.83 and 15.22 percentage points lower than that of vision-based LDW in daytime and night-time, respectively.
Conclusions: Data fusion-based LDW is a novel way of reducing false lane departure detection by fusing two modalities to determine the correct lane departure information. A limitation is the constant warning threshold value used in the current vision-based LDW implementation. An adaptive warning threshold that takes various road structures into account could be developed to further improve lane departure detection.

Keywords

Lane Departure Warning, Data Fusion, Image Processing, Fuzzy Logic, Vehicle Dynamics

Introduction

According to 1, single-vehicle road departure incidents account for the majority of road accidents. The frequency of deadly car accidents has become one of today's most significant issues. Lane departures cause the bulk of highway fatalities, resulting in hundreds of deaths, thousands of injuries, and billions of dollars in damages each year. According to 2, Malaysia has been rated as the nation with the greatest number of road-related deaths per 100,000 inhabitants every year since 1996. According to World Health Organization statistics from 2013, released in 3, Malaysia placed third among developing countries for hazardous roads, behind only Thailand and South Africa. As a consequence, car safety systems such as lane departure warning (LDW) systems have proven essential for avoiding lane departure.

According to 2, the geographical distribution of the 750,000 annual fatalities due to road accidents in 1999 put almost half of them in Asia. Furthermore, the statistical data on road deaths published in 4 show an increase in worldwide traffic fatalities, which is consistent with the predicted future rise in road fatalities in different geographical regions revealed in 5. The total number of road fatalities predicted for 2020 is over 3.5 times the total recorded in 1990, with South Asia bearing the brunt of the increase. The data trend displayed in 5 also shows that the overall number of victims of road accidents in developing nations is increasing, whereas in high-income nations there has been a continuous decline over the past 20 years. The reduction in road deaths in high-income nations was mostly driven by legislative enforcement, such as the requirement that LDW systems be installed in all vehicles sold in the country.

As a result, it is important to report the proportion of road deaths from motorised four-wheeled vehicles compared to motorcyclists, bicycles, and pedestrians in terms of the World Health Organization sub-region categorisation, as shown in 6. This article displays a breakdown of road traffic fatalities by road user group in World Health Organization sub-regions, as well as the global average breakdown for each road user group. The breakdown of road user groups presented in 6 was based on published and unpublished information on country-specific road traffic injuries from 1999 to 2006. According to 6, worldwide road traffic deaths among motorised four-wheeled vehicles are on average expected to make up almost half of all fatalities (45%), followed by the pedestrian, motorcyclist, and bicycle user groups at 31%, 18%, and 7%, respectively.

In 2013, a similar distribution of road traffic deaths by road user type was found, with four-wheeled vehicles accounting for 35%, followed by pedestrians, motorised two- or three-wheel vehicles, cyclists, and other road users accounting for 31%, 22%, 11%, and 1%, respectively. Based on these statistical data, four-wheeled vehicles continue to be the leading contributor to worldwide road deaths when compared to other road user categories. Moreover, 7 revealed that approximately 37.4% of all fatal vehicle crashes in the United States are caused by single-vehicle lane departure. According to related research8, single-vehicle lane departure accidents accounted for the majority of road traffic fatalities caused by drifting into oncoming traffic, adjacent traffic, or off the highway.

Most road casualties are closely connected with the driver’s behaviour, such as unintended steering wheel motions, dozing, negligence, fatigue, drowsiness, intoxication, or use of cell phones9. As a result, automobile safety has become a worry for road users, as the majority of road fatalities are the result of a driver’s faulty judgement of the vehicle route10. Over the past decade, automobile safety has received a lot of attention, with numerous researchers working to improve car safety and comfort, according to 11. One of the main attempts by researchers has been to use a calculated risk indicator to provide a warning signal to the driver just before an accident occurs, in order to prevent road casualties12, such as the LDW discussed in this article.

The present LDW framework is mostly made up of the environment detection component of the vision sensor, which detects the lane border, lane markers, and road contour. The determination of lane location is a critical component of LDW application. It is paramount to evaluate how the lane is detected and to determine its accuracy with applicable metrics in various environmental conditions13. As a result, LDW is often only used on roads with well-defined lane markers, and the systems may be impaired by erroneous activity and circumstances on the road. An example of erroneous activity on the road is failing to engage a turn signal before making a lane change. Consequently, performance assessment criteria for the general LDW system include lane detection rate, false positive rate, and false negative rate, which numerous prior studies have examined and reported on14.

Image quality difficulties, low-visibility circumstances, and a variety of lane conditions are problems for LDW, according to 15. LDW performance decreases as a result of flaws brought on by environmental constraints; examples of such lane detection challenges are the visibility of roadway lane markings in the daytime and at night. Because environmental circumstances make it difficult to identify correct lanes16, a new framework for LDW development is needed to improve the system’s resilience in coping with these difficulties. Most existing LDW techniques are based on the vision sensor, and some integrate a global positioning system (GPS)17; still, their lane departure detection results depend on the reliability of the link between the GPS receiver and the satellites.

This research aims to design a data fusion-based LDW framework that improves lane departure detection in daytime and night-time driving environments. The main motivation is to investigate frameworks that combine vision data from vision-based LDW and the vehicle's dynamical state from model-based vehicle dynamics, so that the effectiveness of data fusion-based LDW in solving lane departure detection problems can be enhanced. Considering that various disturbances exist in a vision system, the lane departure detection performance of vision-based LDW could be severely degraded when such disturbances appear in vision-based lane detection. It is thus desirable to design a data fusion-based LDW that is capable of enhancing lane departure detection through the combination of yaw acceleration from model-based vehicle dynamics and the lateral offset ratio from vision-based LDW, while accounting for the effect of vision disturbances in daytime and night-time driving environments.

Methods

As part of an intelligent transportation system, LDW plays a vital role in reducing road fatalities by giving a warning to the driver about any accidental lane departure. Prior to lane departure, the driver or monitoring system detects one lane boundary moving horizontally towards the centre of the front view. A lane departure may be recognised by analysing the horizontal position of each detected lane boundary, which corresponds to the X-coordinates of the bottom end-points of the lane borders in the image plane18. The technology issues a warning message when the vehicle reaches a set distance from the lane boundary. An LDW warns that a vehicle is about to leave its current lane or is about to cross a lane boundary.

Data fusion based lane departure warning framework

Figure 1 shows the overall data fusion-based LDW framework. The vision-based lane detection framework found in 19 is extended by determining the lateral offset ratio from the detected X12 and X22 coordinates. The model-based vehicle dynamics framework can be found in 20. Lane departure detection is based on the vision-based LDW framework's pre-defined lane departure identification criteria for the lateral offset ratio.


Figure 1. Data fusion-based LDW framework25.

However, all vision-based LDW systems encounter performance constraints21. Undetectable lane boundary markings limit the performance of vision-based LDW systems and their supporting algorithms; limitations include environmental conditions, highway conditions, and marker occlusions. Hence, many research communities are finding new ways to improve the LDW system. In this article, data fusion between vision data and vehicle data enhances the LDW results. The lateral offset ratio and yaw acceleration are calculated using a combination of vision-based LDW and model-based vehicle dynamics. These two signals are then utilised as the input variables for fuzzy logic. The computed fuzzy output variable of LDW, f (u), is then used for detecting lane departure based on the defined fuzzy logic rules for LDW.

Lateral offset ratio

In 22, the vehicle’s lateral offset in relation to the lane centre was utilised to forecast lane departure. However, existing techniques depend on camera calibration to obtain the lateral offset, while vision-based LDW does not require any intrinsic or extrinsic camera parameters23. In this article, both lane boundary X-coordinates (X12 and X22) are analysed for each frame to compute the lateral offset ratio (LOR).

$$LOR = \frac{\min\left(\,\left|X_{22}-X_{m}\right|,\ \left|X_{m}-X_{12}\right|\,\right) - TH \cdot X_{m}}{TH \cdot X_{m}}$$

where X12 is the detected left bottom end-point of the left lane border, X22 is the detected right bottom end-point of the right lane boundary, Xm is one-half of the image plane's horizontal width, and TH is the LDW threshold, which is set to a constant value of 0.8. ISO 17361:2007 contains no provision for how early before crossing the lane the warning threshold should be placed24; however, it does provide a warning threshold reference of approximately 80% of the lane width from the centre of the lane.

|Xm - X12| is the absolute value of the horizontal pixel distance between Xm and the detected X12 in the image plane, and |X22 - Xm| is the absolute value of the horizontal pixel distance between the detected X22 and Xm. For each frame, the min function selects the smaller of |Xm - X12| and |X22 - Xm|. The difference between this minimum and TH·Xm gives the horizontal pixel distance between the warning threshold and the detected X12 or X22, and dividing this difference by TH·Xm yields the lateral offset ratio. The projection of a path in the image plane for the lateral offset ratio calculation is shown in Figure 2.
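As a concrete illustration, the computation above can be written as a short function. This is a minimal sketch in Python (not the authors' Simulink implementation); the variable names and the example frame width are illustrative only.

```python
def lateral_offset_ratio(x12, x22, x_m, th=0.8):
    """Lateral offset ratio (LOR) from the detected bottom end-point X-coordinates.

    x12 : X-coordinate of the left lane boundary's bottom end-point (pixels)
    x22 : X-coordinate of the right lane boundary's bottom end-point (pixels)
    x_m : one-half of the image plane's horizontal width (pixels)
    th  : warning threshold, 0.8 (about 80% of the lane width per ISO 17361:2007)
    """
    # Horizontal pixel distances from the image-plane centre to each boundary point.
    d_left = abs(x_m - x12)
    d_right = abs(x22 - x_m)
    # Offset the nearer boundary distance by the warning threshold and normalise,
    # so LOR = 0.25 at the lane centre and LOR = -1 when a boundary reaches Xm.
    return (min(d_right, d_left) - th * x_m) / (th * x_m)


# Example for a 640-pixel-wide frame (x_m = 320) with the vehicle centred in the lane.
print(lateral_offset_ratio(x12=0, x22=640, x_m=320))  # -> 0.25
```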


Figure 2. Image plane projection.

Lane departure identification in vision based lane departure warning framework

Assume the car is travelling in the middle of the lane, parallel to the lane borders. The lateral offset ratio is therefore constant and equal to 0.25 as a function of the left bottom end-point of the left lane boundary, X12, the right bottom end-point of the right lane boundary, X22, one-half the horizontal width of the image plane, Xm, and the warning threshold, TH. Assume that the car is travelling parallel to the lane borders and has left the lane’s centre. The lateral offset ratio is constant in this instance, but it is less than 0.25. Because the vehicle does not seem to be leaving its lane, no LDW signal should be activated. As the vehicle approaches the lane border, the lateral offset ratio will decrease from 0.25 to -1. Because the vehicle seems to have strayed from its lane, an LDW signal should be activated. As a result, the word "Lane Departure" appears in the series of detected lane departure frame pictures.

Lane departure detection utilising lateral offset ratio in vision-based LDW is shown in Table 1, with values ranging from -1 to 0.25. For a lane departure zone, the lateral offset ratio range is -1 ≤ lateral offset ratio ≤ 0. Furthermore, a lateral offset ratio of zero indicates that the vehicle has crossed the alert threshold. As a result, the phrase ’Lane Departure’ appears on the frame picture. For no lane departure zone, the lateral offset ratio ranges over 0 < lateral offset ratio ≤ 0.25, with the highest value indicating that the vehicle is in the middle of the lane. As a result, the phrase ’Lane Departure’ is not visible on the frame picture. When the lateral offset ratio is equal to -1, the vehicle is crossing one lane border. As a result, the phrase ’Lane Departure’ appears on the frame picture.

Table 1. Lane departure identification in vision-based lane departure warning (LDW).

Lateral offset ratio | Lane departure identification
LOR = 0.25 | No deviation from lane
0 < LOR < 0.25 | No deviation from lane
LOR = 0 | Deviation from lane
-1 < LOR < 0 | Deviation from lane
LOR = -1 | Deviation from lane
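The identification criteria of Table 1 reduce to a simple threshold test on the LOR. A minimal sketch (function name illustrative):

```python
def vision_ldw_decision(lor):
    """Vision-based LDW decision per Table 1: the 'Lane Departure' text is shown
    whenever the LOR lies in the lane departure zone (-1 <= LOR <= 0)."""
    return "Lane Departure" if -1.0 <= lor <= 0.0 else "No deviation from lane"
```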

Fuzzy logic controller

Analysing only the value of the lateral offset ratio from vision-based LDW is not adequate to detect the likelihood of lane departure, because the vision-based method tends to produce more false warnings when road conditions deteriorate. The lateral offset ratio computed from vision-based LDW is usually noisy, caused by road signs, a leading vehicle in the next lane, occluded lane boundaries, or poorly painted road markings23. These interferences are particularly amplified when computing the lateral offset ratio.

Fuzzy logic is used to determine the LDW intelligently and to eliminate such large fluctuations in the computed lateral offset ratio. In this article, the lateral offset ratio and yaw acceleration are chosen as the input variables for the fuzzy logic, reflecting the strictly dynamic characteristic of the output variable26. It is crucial to analyse vehicle dynamics responses such as yaw acceleration during lane departure activity, as the yaw acceleration response has been shown to provide earlier insight into a forthcoming lane departure event than other vehicle dynamics responses. The fuzzification, rule-based inference engine27, and defuzzification used in data fusion-based LDW are presented in Figure 3.


Figure 3. Fuzzy logic blocks in data fusion-based lane departure warning.

The lateral offset ratio input and the LDW, f (u), output variables are each divided into two membership function (MF) levels, whereas the yaw acceleration input variable is divided into three MF levels. They are defined as positive (PO) and negative (NE) for the lateral offset ratio; lane departure (LD) and no lane departure (NLD) for the LDW output, f (u); and positive (PO), zero (ZE), and negative (NE) for yaw acceleration. Gaussian MFs are chosen to improve fuzzification speed, since only the centre and spread need to be updated, and because they are well known and widely used. The lateral offset ratio and yaw acceleration ranges are (-1, 0.25) and (-0.1, 0.1), respectively. The output variable LDW, f (u), is a singleton within the range -5 to 0.6, and can take any value within that range for various road conditions. The average gravity centre method is adopted for defuzzification.

Table 2 tabulates the centre and spread parameters used in the lateral offset ratio and yaw acceleration input MFs to compute the output variable of LDW, f (u). The PO Gaussian MF of the lateral offset ratio input variable has a centre of 0.25 and a spread of 0.087. Two Gaussian MFs are chosen for this input because of the vision-based LDW's lane departure identification for the lateral offset ratio, as described in Table 1. For the yaw acceleration input variable, the PO Gaussian MF has a centre of 0.1 and a spread of 0.03538, the ZE Gaussian MF a centre of 0 and a spread of 0.03538, and the NE Gaussian MF a centre of -0.1 and a spread of 0.03538. Three Gaussian MFs are chosen to fully cover the range of yaw acceleration.

Table 2. Centre and spread of the input membership function (MF).

LOR, lateral offset ratio; PO, positive; NE, negative; ZE, zero.

Input variable | MF | Centre | Spread
LOR | PO | 0.25 | 0.087
LOR | NE | -0.5 | 0.17
Yaw acceleration | PO | 0.1 | 0.03538
Yaw acceleration | ZE | 0 | 0.03538
Yaw acceleration | NE | -0.1 | 0.03538
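Read together with Table 2, the fuzzification step can be sketched as follows. This is an illustrative Python re-expression of what the article builds with MATLAB fuzzy logic blocks, not the authors' code; only the centre/spread values are taken from Table 2.

```python
import numpy as np

def gaussmf(x, centre, spread):
    """Gaussian membership function defined by its centre and spread."""
    return np.exp(-0.5 * ((x - centre) / spread) ** 2)

# Input membership functions with the centre/spread values of Table 2.
LOR_MF = {"PO": (0.25, 0.087), "NE": (-0.5, 0.17)}
YAW_MF = {"PO": (0.1, 0.03538), "ZE": (0.0, 0.03538), "NE": (-0.1, 0.03538)}

def fuzzify(value, mfs):
    """Degree of membership of a crisp input in each labelled MF."""
    return {label: gaussmf(value, c, s) for label, (c, s) in mfs.items()}
```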

Data fusion-based LDW is controlled according to the fuzzy rules established from the control scheme shown in Table 3. Table 3 tabulates the fuzzy rule matrix used in data fusion-based LDW, consisting of six defined rules. Rule 1 defined that if the input variable lateral offset ratio is NE and the input variable yaw acceleration is PO, then the output variable LDW, f (u), is LD. Rule 2 defined that if the lateral offset ratio is NE and the yaw acceleration is ZE, then the output variable LDW, f (u), is NLD. Rule 3 defined that if the lateral offset ratio is NE and the yaw acceleration is NE, then the output variable LDW, f (u), is LD.

Table 3. Fuzzy rule matrix.

PO, positive; NE, negative; ZE, zero; NLD, no lane departure; LD, lane departure.

Yaw acceleration \ Lateral offset ratio | PO | NE
PO | NLD (Rule 4) | LD (Rule 1)
ZE | NLD (Rule 5) | NLD (Rule 2)
NE | NLD (Rule 6) | LD (Rule 3)

Rule 4 defined that if the input variable lateral offset ratio is PO and input variable yaw acceleration is PO, then the output variable LDW, f (u), is NLD. Rule 5 defined that if the input variable lateral offset ratio is PO and input variable yaw acceleration is ZE, then the output variable LDW, f (u), is NLD. Rule 6 defined that if the input variable lateral offset ratio is PO and input variable yaw acceleration is NE, then the output variable LDW, f (u), is NLD.
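Continuing the fuzzification sketch above (reusing gaussmf, fuzzify, LOR_MF and YAW_MF), the six rules of Table 3 and the weighted-average (centre of gravity) defuzzification can be combined into one inference step. The output singletons at -5 (LD) and 0.6 (NLD) are an assumption chosen to span the stated f (u) range; the article does not list the exact output MF parameters.

```python
# Fuzzy rule matrix of Table 3: (LOR label, yaw acceleration label) -> output label.
RULES = {
    ("NE", "PO"): "LD",   # Rule 1
    ("NE", "ZE"): "NLD",  # Rule 2
    ("NE", "NE"): "LD",   # Rule 3
    ("PO", "PO"): "NLD",  # Rule 4
    ("PO", "ZE"): "NLD",  # Rule 5
    ("PO", "NE"): "NLD",  # Rule 6
}

# Assumed output singletons spanning the stated f(u) range of -5 to 0.6.
OUTPUT_SINGLETON = {"LD": -5.0, "NLD": 0.6}

def ldw_output(lor, yaw_acc):
    """Crisp LDW output f(u) by weighted-average defuzzification over all rules."""
    mu_lor = fuzzify(lor, LOR_MF)
    mu_yaw = fuzzify(yaw_acc, YAW_MF)
    num = den = 0.0
    for (lor_label, yaw_label), out_label in RULES.items():
        w = min(mu_lor[lor_label], mu_yaw[yaw_label])  # rule firing strength
        num += w * OUTPUT_SINGLETON[out_label]
        den += w
    return num / den if den > 0 else OUTPUT_SINGLETON["NLD"]
```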

Lane departure identification in data fusion based lane departure warning framework

Table 4 tabulates the lane departure identification in data fusion-based LDW, which uses the output of the fuzzy logic, LDW, f (u). The LDW, f (u), has a range of values between -5 and 0.6, with the values of f (u) for lane departure falling in -5 ≤ f (u) ≤ 0. In addition, a value of LDW, f (u), equal to zero indicates the vehicle is crossing the warning threshold; hence, the ’Lane Departure’ text is presented in the frame image. The values of LDW, f (u), for the no lane departure zone fall in 0 < f (u) ≤ 0.6, with the maximum value of f (u) indicating the vehicle is located at the centre of the lane; hence, no ’Lane Departure’ text is presented in the frame image. The minimum value, f (u) = -5, indicates that the vehicle is crossing one lane boundary; hence, the ’Lane Departure’ text is presented in the frame image.

Table 4. Lane departure identification in data fusion-based lane departure warning (LDW).

LDW, f (u) | Lane departure identification
f (u) = 0.6 | No deviation from lane
0 < f (u) < 0.6 | No deviation from lane
f (u) = 0 | Deviation from lane
-5 ≤ f (u) < 0 | Deviation from lane
f (u) = -5 | Deviation from lane
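As with Table 1, the fused decision of Table 4 is a threshold test, here on f (u); a short illustrative usage of the inference sketch above (the example input values are arbitrary):

```python
def fused_ldw_decision(f_u):
    """Data fusion-based LDW decision per Table 4: warn when f(u) is at or below 0."""
    return "Lane Departure" if -5.0 <= f_u <= 0.0 else "No deviation from lane"

# Example: LOR in the lane departure zone combined with a positive yaw acceleration.
f_u = ldw_output(lor=-0.3, yaw_acc=0.08)
print(fused_ldw_decision(f_u))  # -> Lane Departure
```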

Experimentation setup

Data fusion-based LDW with real-life datasets is simulated with MATLAB and Simulink version R2019a installed on a laptop with a Windows 10 operating system, an Intel i5 1.60 GHz processor, and 4 GB RAM. Alternatively, Scilab's Xcos28 is suggested as an open-source alternative to MATLAB and Simulink. In order to simulate the model, the real-life datasets (clips #5–#27) are input sequentially into the Simulink model. The Simulink model and its parameters are shared via Zenodo and GitHub25.

The testbed used for the experiment is shown in Figure 4, and the equipment used in the experimental testbed is listed in Table 5. The real-life datasets were generated by using a camera capturing the look-ahead road footage and rotary encoders capturing the steering wheel angle and vehicle speed responses. The real-life datasets of road footage, steering wheel angle responses, and vehicle speed responses were acquired off-line at a rate of 30 Hz and trimmed into the corresponding clip numbers presented in Table 6 and Table 7 for the daytime and night-time driving environments, respectively. Data trimming was applied to the real-life datasets before they were transferred to a laptop for running the data fusion-based LDW simulation, as illustrated in Figure 4. Data trimming is required to ensure all the responses acquired from the various sensors are synced to the exact same time-stamp. In this case, outlier/out-of-sync responses from the sensors in the early seconds of acquisition were usually discarded in order to match the reference time-stamp.
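A minimal sketch of this trimming step, assuming each sensor stream has already been loaded as a 1-D array sampled at 30 Hz; the number of seconds discarded and the placeholder arrays are illustrative, not values from the article.

```python
import numpy as np

FS = 30  # acquisition rate in Hz for all sensors

def trim_and_align(streams, skip_seconds=2.0):
    """Drop the first few (typically out-of-sync) seconds of every stream and
    truncate all streams to a common length, so each sample index maps to the
    same time-stamp. `streams` is a dict of 1-D NumPy arrays sampled at FS."""
    skip = int(skip_seconds * FS)
    trimmed = {name: x[skip:] for name, x in streams.items()}
    n = min(len(x) for x in trimmed.values())
    return {name: x[:n] for name, x in trimmed.items()}

# Illustrative use with placeholder arrays standing in for the per-clip recordings.
streams = {
    "steering_wheel_angle": np.random.randn(7100),
    "vehicle_speed": np.random.randn(7080),
}
aligned = trim_and_align(streams, skip_seconds=2.0)
```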


Figure 4. Real-life dataset acquisition flow for data fusion-based lane departure warning.

Table 5. List of equipment used in real-life dataset acquisition for data fusion-based lane departure warning.

Test bed equipment | Quantity | Description
Laptop | 1 | Intel i5 1.60 GHz processor and 4 GB RAM
Desktop PC | 1 | Intel Core i3 processor, Intel B360 chipset motherboard, and 4 GB RAM
National Instruments PCIe-6321 | 1 | Data acquisition (DAQ) card
National Instruments CB-68LPR Connector Block | 1 | Pinout board
National Instruments SHC68-68-EPM Cable | 1 | 1 m cable connecting pinout board and DAQ card
24 inch LCD monitor | 1 | Output device
Wireless mouse | 1 | Input device
Wireless keyboard | 1 | Input device
APC Back-UPS 1400VA | 1 | Power source for desktop PC
Encoder Model TR1 | 2 | Acquisition sensors for steering wheel angle and vehicle speed datasets
Logitech C525 Camera | 1 | Acquisition sensor for road footage datasets

Table 6. Real-life datasets for daytime driving environment.

Clip no. | No. of frames | Real-life dataset
5 | 7049 | 29
6 | 2400 | 30
7 | 8400 | 31
8 | 2550 | 32
9 | 3750 | 33
10 | 2100 | 34
11 | 2100 | 35
12 | 2520 | 36
13 | 539 | 37

Table 7. Real-life datasets for night-time driving environment.

Clip no. | No. of frames | Real-life dataset
14 | 12719 | 38
15 | 510 | 39
16 | 1829 | 40
17 | 1079 | 41
18 | 329 | 42
19 | 1739 | 43
20 | 239 | 44
21 | 869 | 45
22 | 630 | 46
23 | 2070 | 47
24 | 900 | 48
25 | 2550 | 49
26 | 720 | 50
27 | 1079 | 51

In order to validate the effectiveness of the proposed data fusion-based LDW framework, real-life datasets with variations in driving environment (daytime and night-time), road structure (straight and curving roads), and outlier road features (occluded lane markers and arrow signs printed on the road surface) were considered. The author spent many hours manoeuvring the instrumented car to generate real-life datasets specifically for use in the current study.

The experimental testbed used for acquiring road footage for data fusion-based LDW is also described in 19, which proposes a vision-based lane departure warning framework for lane departure detection under daytime and night-time driving environments. The traffic flow and conditions of the road surface for both urban roads and highways in the city of Malacca are analysed in terms of lane detection rate and false positive rate. The proposed vision-based lane departure warning framework comprises lane detection followed by computation of a lateral offset ratio. The lane detection is composed of two stages: pre-processing and detection. In the pre-processing stage, colour space conversion, region of interest extraction, and lane marking segmentation are carried out. In the subsequent detection stage, the Hough transform is used to detect lanes. Lastly, the lateral offset ratio is computed to yield a lane departure warning based on the detected X-coordinates of the bottom end-points of each lane boundary in the image plane.
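For orientation only, the pre-processing and detection stages summarised above can be approximated with standard OpenCV calls. This sketch is not the algorithm of reference 19: the lower-half region of interest, the use of Canny edges for lane-marking segmentation, and the probabilistic Hough parameters are assumptions.

```python
import cv2
import numpy as np

def detect_lane_segments(frame_bgr):
    """Pre-processing (colour conversion, region of interest, segmentation)
    followed by Hough-transform line detection on a single road frame."""
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)   # colour space conversion
    h, w = grey.shape
    roi = grey[h // 2:, :]                               # keep the lower half as the ROI
    edges = cv2.Canny(roi, 50, 150)                      # segment lane-marking edges
    # Probabilistic Hough transform returns candidate lane-boundary segments.
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                           minLineLength=40, maxLineGap=20)
```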

Ethical approval

Ethical approval was obtained from Multimedia University with approval number EA1902021. The authors submitted a self-declaration form on 17/05/2021 stating that the research was conducted from 01/02/2012 to 30/04/2020. Ethical approval is required by the institution before the article can be published.

Performance assessment

The number of lane departure frames, the number of identified lane departure frames, and the number of false positive frames were manually tallied frame by frame for the performance assessment of lane departure detection. The formula used for the total number of lane departure frames was:

$$N_{TL} = N_{DL} + N_{FL}$$

The formula used for the lane departure detection rate was:

$$\text{Lane departure detection rate} = \frac{N_{DL}}{N_{TL}} \times 100\%$$

The formula used for the false positive rate was:

$$\text{False positive rate} = \frac{N_{FL}}{N_{TL}} \times 100\%$$

where

NTL - total number of lane departure frames detected

NDL - total number of correctly detected lane departure frames

NFL - total number of false positive lane departure frames
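These three formulas translate directly into code; a small sketch using the clip 5 counts of Table 8 as a worked example (function name illustrative):

```python
def departure_metrics(n_dl, n_fl):
    """Lane departure detection rate and false positive rate (in %) from the
    numbers of correctly detected (N_DL) and false positive (N_FL) frames."""
    n_tl = n_dl + n_fl  # total number of lane departure frames detected
    return 100.0 * n_dl / n_tl, 100.0 * n_fl / n_tl

# Clip 5, vision-based LDW (Table 8): 1746 correct and 274 false positive frames.
rate, fpr = departure_metrics(1746, 274)
print(round(rate, 2), round(fpr, 2))  # -> 86.44 13.56
```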

Results

All three test situations, namely straight road, curving road, and false alarms, were included in the real-life datasets presented in Table 6 and Table 7. In both daylight and night-time driving situations, the effectiveness of data fusion-based LDW in lane departure detection is assessed. The efficacy of the data fusion-based LDW concept is shown by comparing the lane departure detection results for vision-based LDW and data fusion-based LDW. Table 8 and Table 9 show the results of lane departure detection based on vision-based LDW in daylight and night-time driving situations, respectively. Table 10 and Table 11 show the results of lane departure detection based on data fusion-based LDW in daylight and night-time driving situations, respectively.

Table 8. Lane departure detection results for vision-based lane departure warning in a daytime driving environment.

Clip no. | No. of detected lane departure frames | No. of correctly detected lane departure frames | Lane departure detection rate, % | No. of false positive lane departure frames | False positive rate, %
5 | 2020 | 1746 | 86.44 | 274 | 13.56
6 | 789 | 754 | 95.56 | 35 | 4.44
7 | 3166 | 2969 | 93.78 | 197 | 6.22
8 | 80 | 49 | 61.25 | 31 | 38.75
9 | 469 | 457 | 97.44 | 12 | 2.56
10 | 20 | 0 | 0.00 | 20 | 100.00
11 | 831 | 831 | 100.00 | 0 | 0.00
12 | 806 | 771 | 95.66 | 35 | 4.34
13 | 359 | 359 | 100.00 | 0 | 0.00
Total | 8540 | 7936 | 81.13 | 604 | 18.87

Table 9. Lane departure detection results for vision-based lane departure warning in a night-time driving environment.

Clip no. | No. of detected lane departure frames | No. of correctly detected lane departure frames | Lane departure detection rate, % | No. of false positive lane departure frames | False positive rate, %
14 | 2295 | 1846 | 80.44 | 449 | 19.56
15 | 107 | 97 | 90.65 | 10 | 9.35
16 | 498 | 438 | 87.95 | 60 | 12.05
17 | 148 | 136 | 91.89 | 12 | 8.11
18 | 153 | 143 | 93.46 | 10 | 6.54
19 | 692 | 385 | 55.64 | 307 | 44.36
20 | 121 | 114 | 94.21 | 7 | 5.79
21 | 188 | 181 | 96.28 | 7 | 3.72
22 | 328 | 315 | 96.04 | 13 | 3.96
23 | 818 | 741 | 90.59 | 77 | 9.41
24 | 170 | 149 | 87.65 | 21 | 12.35
25 | 905 | 618 | 68.29 | 287 | 31.71
26 | 273 | 205 | 75.09 | 68 | 24.91
27 | 400 | 256 | 64.00 | 144 | 36.00
Total | 7096 | 5624 | 83.73 | 1472 | 16.27

Table 10. Lane departure detection results for data fusion-based lane departure warning in a daytime driving environment.

Clip no. | No. of detected lane departure frames | No. of correctly detected lane departure frames | Lane departure detection rate, % | No. of false positive lane departure frames | False positive rate, %
5 | 504 | 502 | 99.60 | 2 | 0.40
6 | 0 | 0 | 100.00 | 0 | 0.00
7 | 209 | 209 | 100.00 | 0 | 0.00
8 | 2 | 2 | 100.00 | 0 | 0.00
9 | 25 | 25 | 100.00 | 0 | 0.00
10 | 0 | 0 | 100.00 | 0 | 0.00
11 | 175 | 175 | 100.00 | 0 | 0.00
12 | 40 | 40 | 100.00 | 0 | 0.00
13 | 59 | 59 | 100.00 | 0 | 0.00
Total | 1014 | 1012 | 99.96 | 2 | 0.04

Table 11. Lane departure detection results for data fusion-based lane departure warning in a night-time driving environment.

Clip no. | No. of detected lane departure frames | No. of correctly detected lane departure frames | Lane departure detection rate, % | No. of false positive lane departure frames | False positive rate, %
14 | 482 | 480 | 99.59 | 2 | 0.41
15 | 42 | 42 | 100.00 | 0 | 0.00
16 | 38 | 38 | 100.00 | 0 | 0.00
17 | 68 | 68 | 100.00 | 0 | 0.00
18 | 86 | 86 | 100.00 | 0 | 0.00
19 | 140 | 140 | 100.00 | 0 | 0.00
20 | 21 | 21 | 100.00 | 0 | 0.00
21 | 56 | 56 | 100.00 | 0 | 0.00
22 | 83 | 83 | 100.00 | 0 | 0.00
23 | 126 | 126 | 100.00 | 0 | 0.00
24 | 15 | 15 | 100.00 | 0 | 0.00
25 | 394 | 382 | 96.95 | 12 | 3.05
26 | 62 | 55 | 88.71 | 7 | 11.29
27 | 75 | 75 | 100.00 | 0 | 0.00
Total | 1688 | 1667 | 98.95 | 21 | 1.05

No public or known datasets of road footage, vehicle speed responses, and steering wheel angle responses for lane departure identification were found that would allow a fair comparison between data fusion-based LDW and vision-based LDW; the lane departure detection findings of data fusion-based and vision-based LDW reported above are therefore compared using the real-life datasets. The real-life datasets for clips 5–27 consist of 58,670 road video frames for the lane departure identification study. As the real-life datasets consist of 23 different clips, each clip has its own average lane departure detection rate and false positive rate. Of all road video frames, 15,636 and 2,702 were detected as lane departure frames by vision-based LDW and data fusion-based LDW, respectively.

Lane departure detection by vision-based LDW and data fusion-based LDW is compared in Table 12 in terms of lane departure detection rate and false positive rate, with both daytime and night-time circumstances taken into consideration. Using the real-life datasets (clips 5–27), vision-based LDW achieved an average lane departure detection rate of 81.13% with a false positive rate of 18.87% in the daytime driving scenario, and an average detection rate of 83.73% with a false positive rate of 16.27% in the night-time driving scenario. Although the results show that vision-based LDW is consistent across a variety of driving environments, its false positive rate for lane departure detection remains above 16% owing to the vision system's limitations in overcoming disturbances such as worn lane markings, low illumination, and other road markings. Consequently, the lane departure detection results of data fusion-based LDW are also shown in Table 12 for comparison using the real-life datasets.

Table 12. Comparison of lane departure detection results using real-life datasets.

Methods | Average day-time lane departure detection rate, % | Average night-time lane departure detection rate, % | Average day-time false positive rate, % | Average night-time false positive rate, % | Environment | Runtime, ms
Vision-based LDW | 81.13 | 83.73 | 18.87 | 16.27 | 4 cores @ 1.6 GHz | 5.1
Data fusion-based LDW | 99.96 | 98.95 | 0.04 | 1.05 | 4 cores @ 1.6 GHz | 18.7

Data fusion-based LDW obtained substantially better lane departure detection results in both daytime and night-time driving situations, in particular by decreasing the false positive rate under unfavourable circumstances such as worn lane markings, poor light, obscured lane markings, and other road signs. Data fusion-based LDW achieved an average lane departure detection rate of 99.96% with a 0.04% false positive rate using the real-life datasets in the daytime driving scenario. In the night-time driving scenario, data fusion-based LDW obtained an average lane departure detection rate of 98.95% with a 1.05% false positive rate using the real-life datasets. The integration of the vision system, i.e. vision-based LDW, with the vehicle's dynamic states substantially decreased the false positive rate and increased lane departure detection precision by eliminating superfluous LDW frames.

Discussion

In real-world situations, low illumination and road surface interference are often encountered in our daily driving. Thus, a standalone vision-based system may be unreliable under complex driving environments and road surface conditions. The examples of complex driving environments and road surface conditions encountered in the experiments were low illumination at night, worn lane markings, arrow signs, and occluded lane markings. As part of an intelligent transportation system, lane departure detection performance can be further enhanced by combining the vision data with the vehicle’s dynamical data like steering wheel angle and vehicle speed.

A data fusion-based LDW system is presented, which is made up of vision-based LDW and model-based vehicle dynamics with multi-input-single-output fuzzy logic in between. The lateral offset ratio is used to determine whether the ’Lane Departure’ text should be shown on the frame picture; this calculation is based on the identified X-coordinates of each lane boundary’s bottom end-points in the picture plane. The LDW, f (u), is intelligently computed using multi-input-single-output fuzzy logic based on the lateral offset ratio input from vision-based LDW and the vehicle’s yaw acceleration response from model-based vehicle dynamics. To assess performance in lane identification and lane departure detection, road video from urban roads and a highway in Malacca was gathered.

The false-positive rate and detection rate were investigated. In daylight and night-time driving situations, lane detection rates of 94.60% and 95.36%, respectively, were obtained. The false positive detection rates were 5.40% and 4.64%, respectively. The findings of the experiments indicate that vision-based lane identification is successful in identifying lanes under difficult driving situations, such as in daylight and at night-time. The performance assessment of this technique for lane identification revealed that neither the driving environment nor traffic flow is the most important element influencing the performance of vision-based lane detection. Instead, road surface characteristics were shown to be the major contributor to the false positive rate, especially for deteriorated lane markers.

In daylight and night-time driving situations, detection rates of 81.13% and 83.73%, respectively, were achieved in the assessment of lane departure detection utilising vision-based LDW. The false-positive rates were 18.87% and 16.27%, respectively. In daylight and night-time driving situations, the lane departure detection assessment utilising data fusion-based LDW yielded detection rates of 99.96% and 98.95%, respectively. The false-positive rates were 0.04% and 1.05%, respectively. The findings indicate that data fusion-based LDW is successful in identifying lane deviations in both daylight and night-time driving situations.

Although lane departure detection utilising data fusion-based LDW works better in a daylight driving environment, poor illumination during night-time driving has led to a slight decrease in lane departure detection rate, as shown in clips 25 and 26. Nonetheless, based on the findings of the experiments, data fusion-based LDW worked well throughout the day without being hampered by road surface conditions. Each frame was processed in about 18.7 milliseconds during the testing of data fusion-based LDW. Low light and road surface interference are common occurrences in real-world driving conditions. As a result, vision-based systems, particularly vision-based lane detection and vision-based LDW, are unreliable in challenging driving situations and road surface conditions. Low lighting at night, faded lane markings, arrow signs, and obscured lane markings were all instances of difficult driving situations and road surface characteristics observed in the tests. The focus of future study should be on all-weather test settings. The performance of data fusion-based LDW may be improved further by using adaptive tuning of fuzzy rules and MF parameters as part of an intelligent transportation system.

Data availability

Underlying data

Mendeley Data: Clip #5. https://doi.org/10.17632/f24x2p6b5h.329.

This project contains the following underlying data:

  • clip5.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 5_vehicle speed.xls

  • Clip 5_steering wheel angle.xls

Mendeley Data: Clip #6. https://doi.org/10.17632/xskxs82mz6.330.

This project contains the following underlying data:

  • clip6.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 6_vehicle speed.xls

  • Clip 6_steering wheel angle.xls

Mendeley Data: Clip #7. https://doi.org/10.17632/dppstzh8n6.431.

This project contains the following underlying data:

  • clip7.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 7_vehicle speed.xls

  • Clip 7_steering wheel angle.xls

Mendeley Data: Clip #8. https://doi.org/10.17632/hgt5whhj6n.332.

This project contains the following underlying data:

  • clip8.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 8_vehicle speed.xls

  • Clip 8_steering wheel angle.xls

Mendeley Data: Clip #9. https://doi.org/10.17632/bvbykc4hxf.433.

This project contains the following underlying data:

  • clip9.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 9_vehicle speed.xls

  • Clip 9_steering wheel angle.xls

Mendeley Data: Clip #10. https://doi.org/10.17632/g98zzcn6nr.334.

This project contains the following underlying data:

  • clip10.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 10_vehicle speed.xls

  • Clip 10_steering wheel angle.xls

Mendeley Data: Clip #11. https://doi.org/10.17632/z3yjbd4567.335.

This project contains the following underlying data:

  • clip11.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 11_vehicle speed.xls

  • Clip 11_steering wheel angle.xls

Mendeley Data: Clip #12. https://doi.org/10.17632/ytn823rw8j.336.

This project contains the following underlying data:

  • clip12.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 12_vehicle speed.xls

  • Clip 12_steering wheel angle.xls

Mendeley Data: Clip #13. https://doi.org/10.17632/946jzttn7n.337.

This project contains the following underlying data:

  • clip13.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 13_vehicle speed.xls

  • Clip 13_steering wheel angle.xls

Mendeley Data: Clip #14. https://doi.org/10.17632/cww75348bj.338.

This project contains the following underlying data:

  • clip14.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 14_vehicle speed.xls

  • Clip 14_steering wheel angle.xls

Mendeley Data: Clip #15. https://doi.org/10.17632/k74tdgbhjm.339.

This project contains the following underlying data:

  • clip15.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 15_vehicle speed.xls

  • Clip 15_steering wheel angle.xls

Mendeley Data: Clip #16. https://doi.org/10.17632/hps9jsjwxp.440.

This project contains the following underlying data:

  • clip16.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 16_vehicle speed.xls

  • Clip 16_steering wheel angle.xls

Mendeley Data: Clip #17. https://doi.org/10.17632/bxmmttx535.341.

This project contains the following underlying data:

  • clip17.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 17_vehicle speed.xls

  • Clip 17_steering wheel angle.xls

Mendeley Data: Clip #18. https://doi.org/10.17632/smx7tbx29p.342.

This project contains the following underlying data:

  • clip18.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 18_vehicle speed.xls

  • Clip 18_steering wheel angle.xls

Mendeley Data: Clip #19. https://doi.org/10.17632/kcxpm835gw.343.

This project contains the following underlying data:

  • clip19.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 19_vehicle speed.xls

  • Clip 19_steering wheel angle.xls

Mendeley Data: Clip #20. https://doi.org/10.17632/m25z57438h.344.

This project contains the following underlying data:

  • clip20.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 20_vehicle speed.xls

  • Clip 20_steering wheel angle.xls

Mendeley Data: Clip #21. https://doi.org/10.17632/cjptbmddpk.445.

This project contains the following underlying data:

  • clip21.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 21_vehicle speed.xls

  • Clip 21_steering wheel angle.xls

Mendeley Data: Clip #22. https://doi.org/10.17632/yhd2j7ddxc.346.

This project contains the following underlying data:

  • clip22.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 22_vehicle speed.xls

  • Clip 22_steering wheel angle.xls

Mendeley Data: Clip #23. https://doi.org/10.17632/5zjf62drv7.347.

This project contains the following underlying data:

  • clip23.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 23_vehicle speed.xls

  • Clip 23_steering wheel angle.xls

Mendeley Data: Clip #24. https://doi.org/10.17632/r8vm7nbgvm.348.

This project contains the following underlying data:

  • clip24.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 24_vehicle speed.xls

  • Clip 24_steering wheel angle.xls

Mendeley Data: Clip #25. https://doi.org/10.17632/642n3xx8s6.349.

This project contains the following underlying data:

  • clip25.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 25_vehicle speed.xls

  • Clip 25_steering wheel angle.xls

Mendeley Data: Clip #26. https://doi.org/10.17632/wmymrk79tg.350.

This project contains the following underlying data:

  • clip26.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 26_vehicle speed.xls

  • Clip 26_steering wheel angle.xls

Mendeley Data: Clip #27. https://doi.org/10.17632/wb4hgnr6k3.351.

This project contains the following underlying data:

  • clip27.avi

  • steering wheel angle and vehicle speed.mat

  • Clip 27_vehicle speed.xls

  • Clip 27_steering wheel angle.xls

Data are available under the terms of the Creative Commons Attribution 4.0 International license (CC-BY 4.0).

Extended data

Zenodo: realone84/Data_Fusion-based_LDW. https://doi.org/10.5281/zenodo.524145125.

This project contains the following extended data:

  • LICENSE

  • README.md

  • clip20.avi

  • model.slx

  • parameter.mat

Data are available under the terms of the MIT License (MIT).

How to cite this article: Ping EP, Hossen J and Wong EK. Data fusion based lane departure warning framework using fuzzy logic [version 1; peer review: awaiting peer review]. F1000Research 2021, 10:896 (https://doi.org/10.12688/f1000research.67209.1)