Final Report
CHAPTER 1
INTRODUCTION
A critical aspect of many Advanced Driver Assistance System (ADAS) functions, including Lane Departure Warning (LDW) and Lane Keeping Assist (LKA), is the accurate and
timely detection of road lane lines or boundaries. This capability is essential for ensuring that
the vehicle remains within its designated lane and operates safely alongside other traffic. As
ADAS technology advances, future functionalities like Collision Avoidance, Automated
Highway Driving (Autopilot), Automated Parking, and Cooperative Maneuvering are being
developed. These advancements demand even faster and more reliable road boundary
detection systems, which constitute one of the most complex and challenging tasks in the field
of autonomous driving. Road boundary detection functionality involves several key processes,
including the localization of the road, determining the relative position between the vehicle
and the road, and analyzing the vehicle's heading direction. Computer vision techniques serve
as the primary tools for achieving these capabilities, enabling vehicles to sense and interpret
their surrounding environments accurately. These techniques facilitate the detection,
identification, and tracking of road lane-lines, which are typically characterized by specific
patterns and features such as lane markings on painted road surfaces. However, the
effectiveness of vision-based lane detection can be hindered by various real-world challenges.
1.1 Objective
The objective of this project is to detect road lane lines accurately and in real time. To
achieve this objective, the project focuses on several key aspects. Firstly, it aims for high
accuracy in lane detection, which involves precisely identifying lane markings such as solid
lines, dashed lines, or unpainted boundaries. This accuracy is critical for ensuring that the
vehicle's navigation decisions are based on reliable lane information. Secondly, it targets
real-time performance, so that lane information is available quickly enough to support
driving decisions.
The existing systems for autonomous vehicles are predominantly manufactured in foreign
countries, which poses several challenges for their implementation in our country. One primary
issue is the difference in driving orientation; many of these systems are designed for right-lane
driving, while we use left-lane driving. Autonomous driving cars are among the most disruptive
innovations in AI. Fuelled by deep learning algorithms, they are continuously driving our
society forward and creating new opportunities in the mobility sector. An autonomous car can
go anywhere a traditional car can go and does everything that an experienced human driver
does. But it’s very essential to train it properly. One of the many steps involved during the
training of an autonomous driving car is lane detection, which is the preliminary step.
This project presents an advanced lane detection technology to improve the efficiency and
accuracy of real-time lane detection. The lane detection module is usually divided into two
steps: (1) image preprocessing and (2) the establishment and matching of the lane line detection
model. Figure 1.1 shows the overall diagram of our proposed system where lane detection
blocks are the main contributions of this paper. The first step is to read the frames in the video
stream. The second step is to enter the image preprocessing module. What is different from
others is that in the preprocessing stage we not only process the image itself but also do colour
feature extraction and edge feature extraction [17]. In order to reduce the influence of noise in
the process of motion and tracking, after extracting the colour features of the image, we need
to use a Gaussian filter to smooth the image. Then, the image is obtained by binary threshold
processing and morphological closure. These are the preprocessing methods mentioned in this
project.
Dept. of CSE, Vemana IT 2023-24
Road Lane Detection
CHAPTER 2
LITERATURE SURVEY
A literature survey in a project report represents the study done to assist the completion of a
project. It also surveys previously existing material on the topic of the report. Its purpose is
to create familiarity with current thinking and research on a particular topic, and it may
justify future research into a previously overlooked or understudied area.
1. Robust Lane Detection for Complicated Road Environment
Based on Normal Map.
• Author: Hui Chen and Yanyan Xu.
• Objective: Due to complex illumination and interference such as vehicles and
shadows in the real driving environment, lane detection is still a challenging task today.
• To address these issues, a robust method for road segmentation and lane detection based
on a normal map is proposed.
• Advantage: Depth information to segment the road pavement.
• Disadvantage: Techniques involving normal maps can be computationally intensive,
which might not be suitable for real-time applications or deployment on
resource-constrained devices.
CHAPTER 3
• Real-Time Processing: The system must process video streams from the vehicle's
cameras in real-time to detect and track lane lines continuously.
• Accurate Lane Detection: The system should accurately identify and differentiate
lane lines, including solid and dashed lines, under various conditions like different
lighting and weather.
• Lane Departure Warning: The system must issue alerts if the vehicle drifts out of
its lane without signaling, to enhance driver safety.
• Lane Change Assistance: The system should assist with safe lane changes by detecting
adjacent lane conditions and identifying gaps in traffic.
3.5.2 Matplotlib
Matplotlib is a plotting library for the Python programming language and its numerical
mathematics extension NumPy. It provides an object-oriented API for embedding plots into
applications using general-purpose GUI toolkits. There is also a procedural "pylab" interface
based on a state machine, designed to closely resemble that of MATLAB, though its use is
discouraged. SciPy makes use of Matplotlib.
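A minimal sketch of the object-oriented Matplotlib API described above (the output file name `sine.png` is an arbitrary choice, and the `Agg` backend is selected so no GUI window is needed):

```python
import matplotlib
matplotlib.use("Agg")          # non-interactive backend: render to a file
import matplotlib.pyplot as plt
import numpy as np

# Object-oriented API: create a Figure and an Axes explicitly,
# rather than relying on the stateful pylab interface.
fig, ax = plt.subplots()
x = np.linspace(0, 2 * np.pi, 100)
ax.plot(x, np.sin(x), label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("sin(x)")
ax.legend()
fig.savefig("sine.png")        # save instead of showing a window
```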
3.5.3 NumPy
NumPy open-source software and a library for the Python programming language, adding
support for large, multi-dimensional arrays and matrices, along with a large collection of high-
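A minimal sketch of the array support described above (the values are illustrative):

```python
import numpy as np

# A 2-D array (matrix) and vectorized arithmetic without Python loops.
a = np.array([[1, 2, 3],
              [4, 5, 6]])
print(a.shape)        # (2, 3)
print(a * 2)          # element-wise multiplication
print(a.sum(axis=0))  # column sums -> [5 7 9]
print(a.T @ a)        # matrix product of the transpose with a
```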
3.5.4 Pandas
Pandas is a software library written for the Python programming language for data manipulation
and analysis. It offers data structures and operations for manipulating numerical tables and time
series.
• Tools for reading and writing data between in-memory data structures and different
file formats.
• Label-based slicing, fancy indexing, and subsetting of large data sets.
• Time series functionality: date range generation and frequency conversion, moving
window statistics, moving window linear regressions, date shifting and lagging.
• Data filtration. The library is highly optimised for performance, with critical
code paths written in Cython or C.
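The time series features listed above can be sketched as follows (the dates and values are illustrative):

```python
import pandas as pd

# Date-range generation and moving-window statistics.
idx = pd.date_range("2024-01-01", periods=6, freq="D")
s = pd.Series([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], index=idx)

roll = s.rolling(window=3).mean()   # 3-day moving average
shifted = s.shift(1)                # date shifting / lagging by one day

print(roll.iloc[2])                 # (1 + 2 + 3) / 3 = 2.0
print(shifted.iloc[1])              # yesterday's value: 1.0
```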
CHAPTER 4
DESIGN
Fig. 4.1 shows the design process, which involves conceptualising and planning. In the initial
stages of the road lane detection project, it is crucial to clearly define the problem and establish
its scope. This involves articulating the specific objective, such as developing a system to
accurately detect lane boundaries from images or video footage. The scope should outline the
environments in which the system will operate—whether it's highways, urban roads, or both—
and the conditions it must accommodate, such as varying light levels and weather conditions.
Once the problem and scope are well-defined, the next step involves gathering and preparing the
necessary data. This includes acquiring a diverse dataset of road images or videos that encompass
the specified environments and conditions. Each piece of data should ideally be annotated to
indicate the locations of lane markings, facilitating supervised learning algorithms' training
processes later on. By meticulously defining the problem and scope and curating an appropriate
dataset, the project lays a solid foundation for subsequent stages, such as algorithm
development and model training.
• Capturing and decoding video file: We capture the video using a VideoFileClip
object, and after capturing has been initialised, every video frame is decoded (i.e.
converted into a sequence of images).
• Grayscale conversion of image: The video frames are in RGB format; RGB is converted
to grayscale because processing a single-channel image is faster than processing a
three-channel colour image.
• Reduce noise: Noise can create false edges, therefore before going further, it’s imperative
to perform image smoothening. Gaussian blur is used to perform this process. Gaussian
blur is a typical image filtering technique for lowering noise and enhancing image
characteristics. The weights are selected using a Gaussian distribution, and each pixel is
subjected to a weighted average that considers the pixels surrounding it. By reducing high-
frequency elements and improving overall image quality, this blurring technique creates
softer, more visually pleasant images.
• Region of Interest: This step is to take into account only the region covered by the road
lane. A mask is created here, which is of the same dimension as our road image.
Furthermore, bitwise AND operation is performed between each pixel of our canny image
and this mask. It ultimately masks the canny image and shows the region of interest traced
by the polygonal contour of the mask.
• Draw lines on the Image or Video: After identifying lane lines in our field of interest
using the Hough Line Transform, we overlay them on our visual input (video stream/image).
CHAPTER 5
IMPLEMENTATION
5.1 CODE:
import numpy as np
import cv2
from moviepy import editor

def process_video(test_video, output_video):
    """Read the input video stream and produce a video file with detected lane lines.

    Parameters:
        test_video: location of the input video file
        output_video: location where the output video file is to be saved
    """
    input_video = editor.VideoFileClip(test_video, audio=False)
    processed = input_video.fl_image(frame_processor)
    processed.write_videofile(output_video, audio=False)

def frame_processor(image):
    """Process an input frame to detect lane lines.

    Parameters:
        image: image of a road where one wants to detect lane lines
        (we will be passing frames of video to this function)
    """
    # Grayscale conversion: a single channel is faster to process.
    grayscale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Gaussian blur to suppress noise before edge detection.
    kernel_size = 5
    blur = cv2.GaussianBlur(grayscale, (kernel_size, kernel_size), 0)
    # Canny edge detection with a double threshold.
    low_t = 50
    high_t = 150
    edges = cv2.Canny(blur, low_t, high_t)
    region = region_selection(edges)
    hough = hough_transform(region)
    result = draw_lane_lines(image, lane_lines(image, hough))
    return result

def region_selection(image):
    """Mask everything outside a polygon covering the road area."""
    mask = np.zeros_like(image)
    rows, cols = image.shape[:2]
    # Trapezoid roughly covering the lane ahead of the vehicle.
    vertices = np.array([[(cols * 0.1, rows * 0.95),
                          (cols * 0.4, rows * 0.6),
                          (cols * 0.6, rows * 0.6),
                          (cols * 0.9, rows * 0.95)]], dtype=np.int32)
    cv2.fillPoly(mask, vertices, 255)
    # Bitwise AND keeps only the edges inside the polygon.
    return cv2.bitwise_and(image, mask)

def hough_transform(image):
    """Probabilistic Hough transform.

    Parameter:
        image: grayscale image which should be an output from the edge detector
    """
    rho = 1                   # distance resolution in pixels
    theta = np.pi / 180       # angular resolution in radians
    threshold = 20            # minimum number of votes
    minLineLength = 20        # minimum length of a detected line
    maxLineGap = 500          # maximum gap between joinable segments
    return cv2.HoughLinesP(image, rho=rho, theta=theta, threshold=threshold,
                           minLineLength=minLineLength, maxLineGap=maxLineGap)

def average_slope_intercept(lines):
    """Average the Hough segments into one left and one right lane line."""
    if lines is None:
        return None, None
    left_lines, left_weights = [], []      # (slope, intercept), (length,)
    right_lines, right_weights = [], []    # (slope, intercept), (length,)
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x1 == x2:      # ignore vertical segments (undefined slope)
                continue
            slope = (y2 - y1) / (x2 - x1)
            intercept = y1 - slope * x1
            length = np.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
            if slope < 0:     # negative slope: left lane line
                left_lines.append((slope, intercept))
                left_weights.append(length)
            else:             # positive slope: right lane line
                right_lines.append((slope, intercept))
                right_weights.append(length)
    # Length-weighted average, so longer segments count more.
    left_lane = (np.dot(left_weights, left_lines) / np.sum(left_weights)
                 if left_weights else None)
    right_lane = (np.dot(right_weights, right_lines) / np.sum(right_weights)
                  if right_weights else None)
    return left_lane, right_lane

def pixel_points(y1, y2, line):
    """Convert a (slope, intercept) line into endpoint pixel coordinates."""
    if line is None:
        return None
    slope, intercept = line
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return ((x1, int(y1)), (x2, int(y2)))

def lane_lines(image, lines):
    """Compute full-length left and right lane lines for the frame."""
    left_lane, right_lane = average_slope_intercept(lines)
    y1 = image.shape[0]       # bottom of the frame
    y2 = y1 * 0.6             # slightly below the horizon
    left_line = pixel_points(y1, y2, left_lane)
    right_line = pixel_points(y1, y2, right_lane)
    return left_line, right_line

def draw_lane_lines(image, lines, color=(255, 0, 0), thickness=12):
    """Overlay the detected lane lines on the original frame."""
    line_image = np.zeros_like(image)
    for line in lines:
        if line is not None:
            cv2.line(line_image, *line, color, thickness)
    return cv2.addWeighted(image, 1.0, line_image, 1.0, 0.0)

process_video('input.mp4', 'output.mp4')
CHAPTER 6
METHODOLOGY
6.1 Pre-Processing
Preprocessing is an important part of image processing and an important part of lane
detection. Preprocessing can help reduce the complexity of the algorithm, thereby reducing
subsequent program processing time. The video input is an RGB-based colour image sequence
obtained from the camera. In order to improve the accuracy of lane detection, many
researchers employ different image preprocessing techniques. Smoothing and filtering
graphics is a common image preprocessing technique. The main purpose of filtering is to
eliminate image noise and enhance the effect of the image. Low-pass or high pass filtering
operation can be performed for 2D images, low-pass filtering (LPF) is advantageous for
denoising, and image blurring and high-pass filtering (HPF) are used to find image boundaries.
In order to perform the smoothing operation, an average, median, or Gaussian filter could be
used. In order to preserve detail and remove unwanted noise, Xu and Li first use a median
filter on the image and then use an image histogram to enhance the grayscale image.
6.2 Colour Transform
Colour model transform is an important part of machine vision, and it is also an indispensable
part of lane detection in this paper. The actual road traffic environment and light intensity all
produce noise that interferes with colour identification, making it difficult to separate
white lines, yellow lines, and vehicles from the background. The RGB colour space used
in the video stream is extremely sensitive to light intensity, and processing under different
lighting conditions is not ideal. In this paper, the RGB frames in the video stream are
colour-converted into the HSV colour space. HSV represents hue, saturation, and value;
the white and yellow colours are very bright in the V-component compared with other
colours and are easily extracted, providing a good basis for subsequent colour extraction.
Experiments show that the colour processing performed in the HSV space is more robust to
detecting specific targets.
6.3 Basic Preprocessing
A large number of frames in the video will be preprocessed. The images are individually
grayscaled and blurred; the X-gradient, Y-gradient, and global gradient are calculated; and
the frame is thresholded and morphologically closed. In order to cater for different lighting conditions,
an adaptive threshold is implemented during the preprocessing phase. Then, we remove the
spots in the image obtained from the binary conversion and perform the morphological closing
operation. Basic preprocessing alone cannot remove noise very well: the results after
morphological closure show that, although preliminary lane information can be obtained, a
large amount of noise remains.
6.4 Adding Colour Extraction in Preprocessing
In order to improve the accuracy of lane detection, we add a feature extraction module in the
preprocessing stage. The purpose of feature extraction is to keep any features that may be lane
and remove features that may be non-lane. This project mainly performs feature extraction
on colour. After graying the image and converting the colour model, we add white feature
extraction and then carry out the conventional preprocessing operations in turn.
6.5 Adding Edge Detection in Preprocessing
This project carries out edge detection twice in succession; the first pass performs a
wide-range edge detection over the entire frame image. In the second pass, edge detection is
performed again on the lane region after ROI selection, which further improves the accuracy
of lane detection. This section mainly performs the overall edge detection on the frame
image, using the improved Canny edge detection algorithm. The concrete steps of Canny edge
detection are as follows: first, we use a Gaussian filter to smooth the (preprocessed) image,
and then we use the Sobel operator to calculate the gradient magnitude and direction. The
next step is to suppress non-maximal values of the gradient magnitude. Finally, we use a
double-threshold algorithm to detect and connect edges.
6.6 ROI Selection
After edge detection by Canny edge detection, we can see that the obtained edge not only
includes the required lane line edges, but also includes other unnecessary lanes and the edges
of the surrounding fences. The way to remove these extra edges is to determine the visual area
of a polygon and only leave the edge information of the visible area. The basis is that the
camera is fixed relative to the car, and the relative position of the car with respect to the lane
is also fixed, so that the lane basically stays in a fixed area of the camera image. In order
to lower image redundancy and reduce algorithm complexity, we can set an adaptive region of
interest (ROI) on the image. We process the input image only within the ROI area, and this
method can increase the detection speed.
CHAPTER 7
SOFTWARE TESTING
• Grayscale conversion: testing verifies that each RGB frame is correctly converted to a
single-channel grayscale image, since the later stages (blurring, edge detection) assume
single-channel input. Frames with known pixel values are converted and the outputs are
compared against the expected intensities.
• Gaussian blur: Photographers and designers choose Gaussian functions for several
purposes. If you take a photo in low light and the resulting image has a lot of noise,
Gaussian blur can mute that noise. Because a photograph is two-dimensional, Gaussian blur
uses two mathematical functions (one for the x-axis and one for the y-axis) to create a
third function, known as a convolution. This third function creates a normal distribution
of the pixel values, smoothing out some of the randomness.
• Distortion correction: The Distortion Correction feature automatically corrects optical
distortion in images. Distortion is a form of optical aberration that causes a geometrical
imaging error where straight lines do not appear straight in an image. Distortion is
calculated simply by relating the actual distance (AD) to the predicted distance (PD) of
the image using Equation 1. This is done by using a pattern such as a dot target. Note that
while distortion runs negative or positive in a lens, it is not necessarily linear across the
image.
• Edge detection: edge test cases are scenarios that are possible but represent unknown or
accidental aspects of the requirements. Boundary testing, in which testers validate the
extreme ends of a range of inputs, is a good way to find edge cases when dealing with
specific, calculated value fields.
• Lane fitting: Lane Fitting is a computer vision task that involves identifying the
boundaries of driving lanes in a video or image of a road scene. The goal is to accurately
locate and track the lane markings in real-time, even in challenging conditions such as
poor lighting, glare, or complex road layouts.
• Shadow testing: Shadow testing is a technique used to test new features or changes in a
real-time production environment without affecting actual users. It involves routing a
portion of the production traffic to the new feature or change while keeping the majority
of the traffic unaffected.
CHAPTER 8
RESULTS
Conclusion
In this project we proposed new lane detection preprocessing and ROI selection methods to
design a lane detection system. The main idea is to add white-colour extraction before the
conventional basic preprocessing. Edge extraction has also been added during the
preprocessing stage to improve lane detection accuracy. We also placed the ROI selection
after the proposed preprocessing; compared with selecting the ROI in the original image,
this reduced the non-lane parameters and improved the accuracy of lane detection. Currently,
we only use the Hough transform to detect straight lanes and an EKF to track them, and do
not develop advanced lane detection methods. In the future, we will exploit a more advanced
lane detection approach to improve the performance.
Future Enhancement
As one of the most important tasks in autonomous driving systems, ego-lane detection
has been extensively studied and has achieved impressive results in many scenarios. However,
ego-lane detection in the missing feature scenarios is still an unsolved problem. To address this
problem, previous methods have been devoted to proposing more complicated feature extraction
algorithms, but they are very time-consuming and cannot deal with extreme scenarios. Different
from others, this paper exploits prior knowledge contained in digital maps, which has a strong
capability to enhance the performance of detection algorithms. Specifically, we employ the road
shape extracted from OpenStreetMap as the lane model, which is highly consistent with the real
lane shape and irrelevant to lane features. In this way, only a few lane features are needed to
eliminate the position error between the road shape and the real lane, and a search-based
optimization algorithm is proposed. Experiments show that the proposed method can be applied
to various scenarios and can run in real-time at a frequency of 20 Hz. At the same time, we
evaluated the proposed method on the public KITTI Lane dataset, where it achieves
state-of-the-art performance. Moreover, our code will be open-sourced after publication.
REFERENCES
10. Q. Lin, Y. Han, and H. Hahn, "Real-Time Lane Departure Detection Based on Extended
Edge-Linking Algorithm," in Proceedings of the 2010 Second International Conference on
Computer Research and Development, pp. 725–730, Kuala Lumpur, Malaysia, May 2010.
11. C. Mu and X. Ma, "Lane detection based on object segmentation and piecewise fitting,"
TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 12, no. 5, pp. 3491–3500, 2014.
12. J.-G. Wang, C.-J. Lin, and S.-M. Chen, "Applying fuzzy method to vision-based lane
detection and departure warning system," Expert Systems with Applications, vol. 37, no. 1,
pp. 113–126, 2010.
BIBLIOGRAPHY
BOOKS REFERRED:
[1]. "Introduction to Machine Learning with Python" by Andreas C. Müller and Sarah
Guido.
[2]. "Recommendation Systems: An Introduction" by Michael Schrage.
[3]. "Flask Web Development with Python" by Pradeep Gohil.
WEBSITES REFERRED: