Road Lane Detection

CHAPTER 1
INTRODUCTION

The integration of Advanced Driving Assistance Systems (ADAS) in modern automobiles
represents a significant leap forward in automotive technology, driven primarily by the
objectives of increasing safety, reducing road accidents, and enhancing overall driving
experience. Over the past few decades, major car manufacturers have introduced a plethora of
sophisticated ADAS functionalities such as Lane Departure Warning (LDW), Lane Keep
Assist (LKA), Electronic Stability Control (ESC), and Anti-lock Brake System (ABS), among
others. These systems leverage cutting-edge technology to assist drivers in various aspects of
vehicle control and navigation.

A critical aspect of many ADAS functions, including LDW and LKA, is the accurate and
timely detection of road lane lines or boundaries. This capability is essential for ensuring that
the vehicle remains within its designated lane and operates safely alongside other traffic. As
ADAS technology advances, future functionalities like Collision Avoidance, Automated
Highway Driving (Autopilot), Automated Parking, and Cooperative Maneuvering are being
developed. These advancements demand even faster and more reliable road boundary
detection systems, which constitute one of the most complex and challenging tasks in the field
of autonomous driving.

Road boundary detection functionality involves several key processes,
including the localization of the road, determining the relative position between the vehicle
and the road, and analyzing the vehicle's heading direction. Computer vision techniques serve
as the primary tools for achieving these capabilities, enabling vehicles to sense and interpret
their surrounding environments accurately. These techniques facilitate the detection,
identification, and tracking of road lane-lines, which are typically characterized by specific
patterns and features such as lane markings on painted road surfaces. However, the
effectiveness of vision-based lane detection can be hindered by various real-world challenges.

1.1 Objective
The objective of this project is to develop a vision-based system that detects road lane
lines accurately and in real time. To achieve this objective, the project focuses on several key
aspects. Firstly, it aims for high accuracy in lane detection, which involves precisely identifying
lane markings such as solid lines, dashed lines, or unpainted boundaries. This accuracy is
critical for ensuring that the vehicle's navigation decisions are based on reliable lane
information. Secondly, real-time
performance is essential to meet the instantaneous responsiveness required in dynamic driving
environments. The system must process incoming data quickly and efficiently to provide
timely feedback to the vehicle's control systems. Moreover, the project emphasizes robustness
against various challenges that can affect lane detection performance. These challenges
include occlusions caused by other vehicles, changes in lighting conditions throughout the
day, shadows cast by roadside objects, and distortions in lane markings due to road curvature
or surface imperfections. Developing algorithms that can adapt to and mitigate these
challenges ensures that the lane detection system maintains reliable operation under diverse
real-world conditions.

1.2 Existing System

The existing systems for autonomous vehicles are predominantly manufactured in foreign
countries, which poses several challenges for their implementation in our country. One primary
issue is the difference in driving orientation: many of these systems are designed for right-lane
driving, while we use left-lane driving. The autonomous driving car is one of the most disruptive
innovations in AI. Fuelled by deep learning algorithms, autonomous cars are continuously driving our
society forward and creating new opportunities in the mobility sector. An autonomous car can
go anywhere a traditional car can go and does everything that an experienced human driver
does, but it is essential to train it properly. Lane detection is the preliminary step among the
many steps involved in training an autonomous driving car.

1.3 Proposed System

This project presents an advanced lane detection technique to improve the efficiency and
accuracy of real-time lane detection. The lane detection module is usually divided into two
steps: (1) image preprocessing and (2) the establishment and matching of the lane detection
model. Figure 1.1 shows the overall diagram of our proposed system, where the lane detection
blocks are the main contributions of this project. The first step is to read the frames in the video
stream. The second step is to enter the image preprocessing module. What differs from other
approaches is that in the preprocessing stage we not only process the image itself but also
perform colour feature extraction and edge feature extraction [17]. In order to reduce the
influence of noise during motion and tracking, after extracting the colour features of the image,
we use a Gaussian filter to smooth the image. Then, the image is obtained by binary threshold
processing and morphological closure. These are the preprocessing methods used in this
project.

Figure 1.1 Proposed System


CHAPTER 2

LITERATURE SURVEY
A literature survey in a project report represents the study done to assist the completion of a
project. It also surveys previously existing material on the topic of the report. Its purpose is to
create familiarity with current thinking and research on a particular topic, and it may justify
future research into a previously overlooked or understudied area.
1. Robust Lane Detection for Complicated Road Environment Based on Normal Map.
• Author: Hui Chen and Yanyan Xu.
• Objective: Due to the complex illumination and interferences such as vehicles and
shadows in the real driving environment, lane detection is still a challenging task today.
• To address these issues, a robust method for road segmentation and lane detection based
on a normal map is proposed.
• Advantage: Uses depth information to segment the road pavement.
• Disadvantage: Techniques involving normal maps can be computationally intensive,
which might not be suitable for real-time applications or deployment on resource-
constrained devices.

2. Lane Detection and Road Surface Reconstruction Based on Multiple Vanishing Point & Symposia.
• Author: Jian Zhou, Yi Cai, JinSheng Xiao
• Objective: A lane detection algorithm based on a monocular camera is one of the most
popular methods in recent years, and it can meet the real-time and robustness requirements
of autonomous vehicles.
• Advantage: Combining lane detection with road surface reconstruction can provide a
more holistic understanding of the driving environment
• Disadvantage: The combined approach might be more complex to implement and
require more computational resources.


3. Enhanced Lane Tracking Algorithm Using Ego-Motion Estimator for Fail-Safe Operation.
• Author: Kyongsu Y
• Objective: In this paper, a method to ensure lateral control safety for the lane detection
function is proposed, addressing failures of the image-sensor-based automated driving
system, which is considered the most likely system to reach practical mass production.
• Advantage: Enhances the accuracy of lane tracking by providing precise information
about the vehicle's movement and orientation.
• Disadvantage: Requires more sophisticated hardware and software.

4. A Survey of FPGA-Based Vision Systems for Autonomous Cars.
• Author: David Castells-Rufas
• Objective: FPGA platforms are often used to implement a category of latency-critical
algorithms that demand maximum performance and energy efficiency.
• Since self-driving car computer vision functions fall into this category, one could expect
wide adoption of FPGAs in autonomous cars.
• Advantage: FPGAs can still offer some advantages when used in conjunction with host
CPUs as accelerators.
• Disadvantage: Relying on multiple sensors increases the potential points of failure.
• If one sensor fails or provides incorrect data, it could compromise the entire system's
accuracy and reliability.


CHAPTER 3

SYSTEM REQUIREMENTS SPECIFICATION


This chapter gives information regarding the analysis done for the proposed system. System
analysis is done to capture the requirements of the users of the proposed system. It also provides
information regarding the existing system and the need for the proposed system. The key
features of the proposed system and its requirement specifications are discussed below.

3.1 Software Requirements


3.1.1 Operating system:
• Windows 7, Windows 8 or Windows 10
• Debian, Fedora, Ubuntu
3.1.2 Language:
• Python 2.7, or Python 3.5 or newer
3.1.3 IDE:
• Jupyter Notebook/Google Colab
3.1.4 Browsers:
• Mozilla Firefox
• Internet Explorer
• Google Chrome
• Opera

3.2 Hardware Requirements


• 64-bit versions of Microsoft Windows 10, 8, 7
• 4 GB RAM minimum, 8 GB RAM recommended
• 1.5 GB hard disk space + at least 1 GB for caches
• 1024 x 768 minimum screen resolution


3.3 Functional Requirements

• Real-Time Processing: The system must process video streams from the vehicle's
cameras in real-time to detect and track lane lines continuously.

• Accurate Lane Detection: The system should accurately identify and differentiate
lane lines, including solid and dashed lines, under various conditions like different
lighting and weather.

• Lane Departure Warning: The system must issue alerts if the vehicle drifts out of
its lane without signaling, to enhance driver safety.

• Lane Change Assistance: The system should assist with safe lane changes by detecting
adjacent lane conditions and identifying gaps in traffic.

• Robustness to Occlusions: The system needs to maintain reliable lane detection


performance even when lane lines are partially obscured by other vehicles or objects.

3.4 Non-Functional Requirements

• Performance: The system must process live video feeds from vehicle cameras with low
enough latency that lane lines are detected and tracked continuously in real time.

• Accuracy: Detection must remain dependable for various types of lane markings, such
as solid and dashed lines, across different lighting and weather conditions.

• Timeliness: Lane departure alerts must reach the driver quickly enough to act on when
the vehicle deviates from its lane without signaling, enhancing safety.

• Reliability: Lane change assistance must analyze adjacent lane conditions dependably
before indicating that it is safe to move into another lane.


3.5 Software Description

3.5.1 Jupyter Notebook


Jupyter Notebook is a web-based interactive computational environment for creating Jupyter
notebook documents. The term notebook can colloquially refer to several different entities,
mainly the Jupyter web application, the Jupyter Python web server, or the Jupyter document
format, depending on context. A Jupyter Notebook document is a JSON document following a
versioned schema and containing an ordered list of input/output cells which can contain code,
text, mathematics, plots and rich media, usually ending with the ".ipynb" extension. A Jupyter
Notebook can be converted to a number of open standard output formats (HTML, presentation
slides, LaTeX, PDF, reStructuredText, Markdown, Python) through "Download As" in the web
interface, via the nbconvert library, or with the "jupyter nbconvert" command-line interface in a
shell. To simplify visualisation of Jupyter notebook documents on the web, the nbconvert
library is provided as a service through NbViewer, which can take a URL to any publicly
available notebook document, convert it to HTML on the fly and display it to the user. A Jupyter
kernel is a program responsible for handling various types of requests (code execution, code
completion, inspection) and providing a reply. Kernels talk to the other components of Jupyter
using ZeroMQ over the network and thus can be on the same or remote machines. Unlike many
other notebook-like interfaces, in Jupyter, kernels are not aware that they are attached to a
specific document and can be connected to many clients at once. Usually kernels allow
execution of only a single language, but there are a couple of exceptions. By default, Jupyter
ships with IPython as its default kernel, with a reference implementation via the ipykernel
wrapper. Kernels of varying quality and features are available for many languages.
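
The conversion described above can also be driven from Python instead of the command line. The following is a minimal sketch using the nbconvert API; the notebook filename is an illustrative assumption.

import nbconvert

# Convert a notebook document to HTML programmatically (same result as
# "jupyter nbconvert --to html analysis.ipynb" on the command line).
exporter = nbconvert.HTMLExporter()
body, resources = exporter.from_filename("analysis.ipynb")  # hypothetical notebook
with open("analysis.html", "w", encoding="utf-8") as f:
    f.write(body)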

3.5.2 Matplotlib
Matplotlib is a plotting library for the Python programming language and its numerical
mathematics extension NumPy. It provides an object-oriented API for embedding plots into
applications using general-purpose GUI toolkits. There is also a procedural "pylab" interface
based on a state machine, designed to closely resemble that of MATLAB, though its use is
discouraged. SciPy makes use of Matplotlib.
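
As a small illustration of how Matplotlib is used in this project's context, the sketch below displays a road frame and an intermediate edge map side by side; the filename is an assumption.

import cv2
import matplotlib.pyplot as plt

frame = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)  # hypothetical frame
edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY), 50, 150)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.imshow(frame)               # original frame
ax1.set_title("Input frame")
ax2.imshow(edges, cmap="gray")  # Canny edge map
ax2.set_title("Canny edges")
plt.show()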

3.5.3 NumPy
NumPy is open-source software and a library for the Python programming language, adding
support for large, multi-dimensional arrays and matrices, along with a large collection of
high-level mathematical functions to operate on these arrays. It incorporated features of the
competing Numarray into Numeric, with extensive modifications. NumPy targets the CPython
reference implementation of Python, which is a non-optimizing bytecode interpreter.
Mathematical algorithms written for this version of Python often run much slower than
compiled equivalents. NumPy addresses the slowness problem partly by providing
multidimensional arrays and functions and operators that operate efficiently on arrays;
this requires rewriting some code, mostly inner loops, using NumPy. Using NumPy in Python
gives functionality comparable to MATLAB since they are both interpreted, and they both
allow the user to write fast programs as long as most operations work on arrays or matrices
instead of scalars. NumPy is intrinsically integrated with Python, a more modern and complete
programming language, and complementary Python packages are available: SciPy is a library
that adds more MATLAB-like functionality and Matplotlib is a plotting package that provides
MATLAB-like plotting functionality. Internally, both MATLAB and NumPy rely on BLAS
and LAPACK for efficient linear algebra computations. Python bindings of the widely used
computer vision library OpenCV utilize NumPy arrays to store and operate on data. Since
images with multiple channels are simply represented as three-dimensional arrays, indexing,
slicing or masking with other arrays are very efficient ways to access specific pixels of an
image.
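
The last point above is exactly how the frames in this project are manipulated. A minimal sketch (the image shape and coordinates are illustrative):

import numpy as np

# An OpenCV colour frame is a (rows, cols, 3) uint8 NumPy array.
frame = np.zeros((720, 1280, 3), dtype=np.uint8)

pixel = frame[400, 640]        # B, G, R values of a single pixel
lower_half = frame[360:, :]    # slicing: crop the lower half (a view, not a copy)
blue = frame[:, :, 0]          # one colour channel as a 2-D array
bright = frame[frame > 200]    # boolean masking selects matching values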

3.5.4 Pandas
Pandas is a software library written for the Python programming language for data manipulation
and analysis. It offers data structures and operations for manipulating numerical tables and time
series.

• DataFrame object for data manipulation with integrated indexing.

• Tools for reading and writing data between in-memory data structures and different
file formats.

• Data alignment and integrated handling of missing data.

• Reshaping and pivoting of data sets.

• Label-based slicing, fancy indexing, and subsetting of large data sets.

• Data structure column insertion and deletion.

• Group-by engine allowing split-apply-combine operations on data sets.

• Data set merging and joining.

• Hierarchical axis indexing to work with high-dimensional data in a lower-dimensional
data structure.

• Time series functionality: date range generation and frequency conversion, moving
window statistics, moving window linear regressions, date shifting and lagging.

• Data filtration. The library is highly optimized for performance, with critical
code paths written in Cython or C.
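
As a small illustration of the split-apply-combine operations mentioned above, the sketch below groups hypothetical per-frame detection results; the column names and values are illustrative assumptions.

import pandas as pd

# Hypothetical per-frame lane detection log.
df = pd.DataFrame({
    "lane": ["left", "right", "left", "right"],
    "slope": [-0.72, 0.68, -0.70, 0.66],
    "segments": [14, 11, 16, 9],
})

# Split-apply-combine: average slope and total segment count per lane.
summary = df.groupby("lane").agg({"slope": "mean", "segments": "sum"})
print(summary)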


CHAPTER 4
DESIGN
Fig 4.1 shows the design process, which involves conceptualizing and planning. In the initial
stages of the road lane detection project, it is crucial to clearly define the problem and establish
its scope. This involves articulating the specific objective, such as developing a system to
accurately detect lane boundaries from images or video footage. The scope should outline the
environments in which the system will operate, whether highways, urban roads, or both,
and the conditions it must accommodate, such as varying light levels and weather conditions.
Once the problem and scope are well-defined, the next step involves gathering and preparing the
necessary data. This includes acquiring a diverse dataset of road images or videos that encompass
the specified environments and conditions. Each piece of data should ideally be annotated to
indicate the locations of lane markings, facilitating the training of supervised learning
algorithms later on. By meticulously defining the problem and scope and curating an appropriate
dataset, the project lays a solid foundation for subsequent stages, such as algorithm
development and model training.

4.1 Data Flow Diagram

• Capturing and decoding video file: We capture the video using the VideoFileClip
object, and after capturing has been initialized, every video frame is decoded (i.e.
converted into a sequence of images).

• Grayscale conversion of image: The video frames are in RGB format, RGB is converted
to grayscale because processing a single channel image is faster than processing a three-
channel colored image.

• Reduce noise: Noise can create false edges, therefore before going further, it is imperative
to perform image smoothing. Gaussian blur is used to perform this process. Gaussian
blur is a typical image filtering technique for lowering noise and enhancing image
characteristics. The weights are selected using a Gaussian distribution, and each pixel is
subjected to a weighted average that considers the pixels surrounding it. By reducing high-
frequency elements and improving overall image quality, this blurring technique creates
softer, more visually pleasant images (see the kernel sketch after this list).

• Canny Edge Detector: It computes the gradient in all directions of our blurred image and
traces the edges with large changes in intensity.

• Region of Interest: This step is to take into account only the region covered by the road
lane. A mask is created here, which is of the same dimension as our road image.
Furthermore, bitwise AND operation is performed between each pixel of our canny image
and this mask. It ultimately masks the canny image and shows the region of interest traced
by the polygonal contour of the mask.

• Hough Line Transform: In image processing, the Hough transform is a feature
extraction method used to find basic geometric objects like lines and circles. By
converting the image space into a parameter space, it makes it possible to identify shapes
by accumulating voting points. We use the probabilistic Hough line transform in our
algorithm, an extension of the standard Hough transform that addresses its computational
complexity. In order to speed up processing while preserving accuracy in shape detection,
it randomly chooses a subset of image points and applies the Hough transform solely to
those points.

• Draw lines on the Image or Video: After identifying lane lines in our region of interest
using the Hough line transform, we overlay them on our visual input (video stream/image).
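
The following minimal sketch (kernel size assumed to be 5) shows the Gaussian weighting referred to in the noise-reduction step above: a 1-D Gaussian is built for each axis and their outer product forms the 2-D kernel that cv2.GaussianBlur applies as a weighted average around every pixel.

import cv2
import numpy as np

k = cv2.getGaussianKernel(5, 0)   # 1-D Gaussian weights (sigma derived from ksize)
kernel_2d = k @ k.T               # outer product: one function per axis -> 2-D kernel
print(kernel_2d.round(4))         # weights sum to 1; the centre pixel is weighted most

# Convolving with this kernel is equivalent to cv2.GaussianBlur(img, (5, 5), 0):
# each output pixel becomes the kernel-weighted average of its 5x5 neighbourhood.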

Fig 4.1 Dataflow Diagram


CHAPTER 5
IMPLEMENTATION

5.1 CODE:

import numpy as np
import cv2
from moviepy import editor


def region_selection(image):
    """
    Determine and cut the region of interest in the input image.
    Parameters:
        image: output from Canny, where we have identified edges in the frame
    """
    mask = np.zeros_like(image)
    # Fill the mask polygon with white (one value per channel if multi-channel).
    if len(image.shape) > 2:
        channel_count = image.shape[2]
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
    rows, cols = image.shape[:2]
    # Trapezoid roughly covering the lane area ahead of the vehicle.
    bottom_left = [cols * 0.1, rows * 0.95]
    top_left = [cols * 0.4, rows * 0.6]
    bottom_right = [cols * 0.9, rows * 0.95]
    top_right = [cols * 0.6, rows * 0.6]
    vertices = np.array([[bottom_left, top_left, top_right, bottom_right]], dtype=np.int32)
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    # Keep only the edge pixels inside the polygon (bitwise AND with the mask).
    masked_image = cv2.bitwise_and(image, mask)
    return masked_image


def hough_transform(image):
    """
    Detect line segments using the probabilistic Hough transform.
    Parameters:
        image: masked edge image, i.e. the output from region_selection
    """
    rho = 1                # distance resolution of the accumulator in pixels
    theta = np.pi / 180    # angle resolution of the accumulator in radians
    threshold = 20         # minimum number of votes for a line
    minLineLength = 20     # minimum accepted segment length
    maxLineGap = 500       # maximum allowed gap between points on the same line
    return cv2.HoughLinesP(image, rho=rho, theta=theta, threshold=threshold,
                           minLineLength=minLineLength, maxLineGap=maxLineGap)


def average_slope_intercept(lines):
    """
    Average the Hough segments into a single left and right lane line.
    Parameters:
        lines: output from the Hough transform
    """
    left_lines = []     # (slope, intercept)
    left_weights = []   # (length,)
    right_lines = []    # (slope, intercept)
    right_weights = []  # (length,)
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x1 == x2:
                continue  # skip vertical segments: slope is undefined
            slope = (y2 - y1) / (x2 - x1)
            intercept = y1 - (slope * x1)
            length = np.sqrt(((y2 - y1) ** 2) + ((x2 - x1) ** 2))
            # Negative slope belongs to the left lane; weight by segment length
            # so that longer segments influence the average more.
            if slope < 0:
                left_lines.append((slope, intercept))
                left_weights.append(length)
            else:
                right_lines.append((slope, intercept))
                right_weights.append(length)
    left_lane = (np.dot(left_weights, left_lines) / np.sum(left_weights)
                 if len(left_weights) > 0 else None)
    right_lane = (np.dot(right_weights, right_lines) / np.sum(right_weights)
                  if len(right_weights) > 0 else None)
    return left_lane, right_lane


def pixel_points(y1, y2, line):
    """
    Convert a (slope, intercept) line into pixel end points.
    Parameters:
        y1: y-value of the line's starting point.
        y2: y-value of the line's end point.
        line: The slope and intercept of the line.
    """
    if line is None:
        return None
    slope, intercept = line
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return ((x1, int(y1)), (x2, int(y2)))


def lane_lines(image, lines):
    """
    Create full-length lane lines from the averaged Hough segments.
    Parameters:
        image: The input test image.
        lines: The output lines from Hough Transform.
    """
    left_lane, right_lane = average_slope_intercept(lines)
    y1 = image.shape[0]  # bottom of the frame
    y2 = y1 * 0.6        # draw up to 60% of the frame height
    left_line = pixel_points(y1, y2, left_lane)
    right_line = pixel_points(y1, y2, right_lane)
    return left_line, right_line


def draw_lane_lines(image, lines, color=[255, 0, 0], thickness=12):
    """
    Draw the detected lane lines onto the frame.
    Parameters:
        image: The input test image (video frame in our case).
        lines: The lane lines computed by lane_lines().
        color (Default = red): Line color.
        thickness (Default = 12): Line thickness.
    """
    line_image = np.zeros_like(image)
    for line in lines:
        if line is not None:
            cv2.line(line_image, *line, color, thickness)
    # Blend the drawn lines with the original frame.
    return cv2.addWeighted(image, 1.0, line_image, 1.0, 0.0)


def frame_processor(image):
    """
    Process the input frame to detect lane lines.
    Parameters:
        image: image of a road where one wants to detect lane lines
        (we will be passing frames of video to this function)
    """
    grayscale = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    kernel_size = 5
    blur = cv2.GaussianBlur(grayscale, (kernel_size, kernel_size), 0)
    low_t = 50
    high_t = 150
    edges = cv2.Canny(blur, low_t, high_t)
    region = region_selection(edges)
    hough = hough_transform(region)
    result = draw_lane_lines(image, lane_lines(image, hough))
    return result


def process_video(test_video, output_video):
    """
    Read input video stream and produce a video file with detected lane lines.
    Parameters:
        test_video: location of input video file
        output_video: location where output video file is to be saved
    """
    input_video = editor.VideoFileClip(test_video, audio=False)
    processed = input_video.fl_image(frame_processor)
    processed.write_videofile(output_video, audio=False)


process_video('input.mp4', 'output.mp4')


CHAPTER 6

METHODOLOGY
6.1 Pre-Processing
Preprocessing is an important part of image processing and of lane detection. Preprocessing
can help reduce the complexity of the algorithm, thereby reducing subsequent program
processing time. The video input is an RGB-based colour image sequence obtained from the
camera. In order to improve the accuracy of lane detection, many researchers employ different
image preprocessing techniques. Smoothing and filtering images is a common image
preprocessing technique. The main purpose of filtering is to eliminate image noise and enhance
the effect of the image. Low-pass or high-pass filtering can be performed on 2D images:
low-pass filtering (LPF) is advantageous for denoising and image blurring, while high-pass
filtering (HPF) is used to find image boundaries. In order to perform the smoothing operation,
an average, median, or Gaussian filter could be used. In order to preserve detail and remove
unwanted noise, Xu and Li first use a median filter to filter the image and then use an image
histogram to enhance the grayscale.
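
A minimal sketch of that median-filter-then-histogram idea, assuming a 5x5 kernel and an illustrative filename; the exact parameters used by Xu and Li are not specified here.

import cv2

gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)
denoised = cv2.medianBlur(gray, 5)      # median filter: removes noise, preserves detail
enhanced = cv2.equalizeHist(denoised)   # histogram equalization enhances the grayscale
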
6.2 Colour Transform
Colour model transformation is an important part of machine vision, and it is also an
indispensable part of lane detection in this project. The actual road traffic environment and
light intensity all produce noise that interferes with the identification of colour, making it
difficult to separate white lines, yellow lines, and vehicles from the background. The RGB
colour space used in the video stream is extremely sensitive to light intensity, and the effect of
processing light at different times is not ideal. In this project, the RGB sequence frames in the
video stream are colour-converted into the HSV colour space. HSV represents hue, saturation,
and value; the values of the white and yellow colours are very bright in the V-component
compared to other colours and are easily extracted, providing a good basis for the next colour
extraction step. Experiments show that colour processing performed in the HSV space is more
robust for detecting specific targets.
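
A minimal sketch of this HSV extraction follows; the threshold bounds below are illustrative assumptions, not tuned constants from this project.

import cv2
import numpy as np

frame = cv2.imread("frame.jpg")                # BGR frame from the video stream
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)   # RGB/BGR -> HSV colour space

# White markings: any hue, low saturation, high value (bright in the V-component).
white = cv2.inRange(hsv, np.array([0, 0, 200]), np.array([180, 40, 255]))
# Yellow markings: hue roughly 15-35 (OpenCV uses a 0-180 hue range).
yellow = cv2.inRange(hsv, np.array([15, 80, 120]), np.array([35, 255, 255]))

lane_mask = cv2.bitwise_or(white, yellow)      # candidate lane-colour pixels
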
6.3 Basic Preprocessing
A large number of frames in the video must be preprocessed. Each frame is individually
grayscaled and blurred; its X-gradient, Y-gradient, and global gradient are calculated; and the
frame is then thresholded and morphologically closed. In order to cater for different lighting
conditions, an adaptive threshold is implemented during the preprocessing phase. Then, we
remove the spots in the image obtained from the binary conversion and perform the
morphological closing operation. The basic preprocessing alone is not very good at removing
noise. It can be seen from the results after the morphological closure that although preliminary
lane information can be obtained, there is still a large amount of noise.
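
A sketch of this basic preprocessing chain, assuming illustrative parameter values (kernel sizes, block size) rather than this project's exact settings:

import cv2
import numpy as np

gray = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)

gx = cv2.Sobel(blur, cv2.CV_64F, 1, 0)               # X-gradient
gy = cv2.Sobel(blur, cv2.CV_64F, 0, 1)               # Y-gradient
grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))    # global gradient magnitude

# Adaptive threshold caters for lighting that varies across the frame.
binary = cv2.adaptiveThreshold(grad, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, 11, 2)
# Morphological closing fills small gaps and removes spots in the binary image.
kernel = np.ones((3, 3), np.uint8)
closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
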
6.4 Adding Colour Extraction in Preprocessing
In order to improve the accuracy of lane detection, we add a feature extraction module to the
preprocessing stage. The purpose of feature extraction is to keep any features that may belong
to a lane and remove features that may be non-lane. This project mainly performs feature
extraction on colour. After graying the image and converting the colour model, we add white
feature extraction and then carry out the conventional preprocessing operations in turn.
6.5 Adding Edge Detection in Preprocessing
This project carries out edge detection twice in succession: the first pass performs a wide-range
edge detection over the entire frame image, and the second performs edge detection again after
ROI selection. This second detection further improves the accuracy of lane detection. This
section mainly performs the overall edge detection on the frame image, using the improved
Canny edge detection algorithm. The concrete steps of Canny edge detection are as follows:
first, we use a Gaussian filter to smooth the (preprocessed) image, and then we use the Sobel
operator to calculate the gradient magnitude and direction. The next step is to suppress the
non-maximal values of the gradient magnitude. Finally, we use a double-threshold algorithm
to detect and connect edges.
6.6 ROI Selection
After Canny edge detection, we can see that the obtained edges include not only the required
lane line edges, but also other unnecessary lanes and the edges of the surrounding fences. The
way to remove these extra edges is to determine a polygonal visible area and keep only the
edge information inside it. The basis is that the camera is fixed relative to the car, and the
relative position of the car with respect to the lane is also fixed, so that the lane basically stays
in a fixed area of the camera image. In order to lower image redundancy and reduce algorithm
complexity, we can set an adaptive region of interest (ROI) on the image. We process only the
ROI area of the input image, and this method increases the speed and accuracy of the system.
In this project, we use the standard KITTI road database. We divide the image of each frame
in the running video of the vehicle into two parts, and the lower half of the image frame serves
as the ROI area. The images of the four different sample frames are able to substantially display
the lane information after being processed by the proposed preprocessing method, but a lot of
non-lane noise is present in the upper half of the image in addition to the lane information. So
we cut out the lower half of the image as the ROI area.
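
A minimal sketch of this lower-half ROI selection (the filename is an assumption; NumPy slicing performs the crop):

import cv2

frame = cv2.imread("kitti_frame.png")   # one frame from the KITTI road database
rows = frame.shape[0]
roi = frame[rows // 2:, :]              # keep only the lower half as the ROI
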
6.7 Lane Detection
The lane detection module is mainly divided into lane edge detection and linear lane detection. This
section implements the basic functions of lane detection and performs lane detection based on
improved preprocessing and the proposed ROI selection.
6.8 Edge Detection
Feature extraction is very important for lane detection. There are many common methods used for
edge detection, such as the Canny, Sobel, and Laplacian transforms [18, 24]. We selected the
Canny transform, which performs best among these, and applied Canny edge detection after the
proposed ROI selection.
6.9 Lane Detection
The methods of lane detection include feature-based methods and model-based methods. The
feature-based method is used in this project to detect the colour and edge features of lanes in
order to improve the accuracy and efficiency of lane detection. There are two methods to
achieve straight lane detection. One is to use the Hough line detection function encapsulated
by the OpenCV library, commonly used for image processing, and draw the lane lines in
the corresponding area of the original image. The other is a self-programmed implementation
in which the ROI area is traversed to perform line detection for a specific range of angles. Both
methods can be reflected in the video, and the first method runs faster. Since this project
focuses on the accuracy and efficiency of lane detection, we chose the faster first method (the
Hough line function in the OpenCV library) for straight-line detection. Moreover, because the
Hough transform is insensitive to noise and can process straight lines well, the Hough transform
is used to extract lane line parameters in each frame of the image sequence for lane detection.
In image processing, the Hough transform is used to detect any shape that can be expressed
by a mathematical formula, even if the shape is broken or somewhat distorted, and it handles
noise better than other methods. The classic Hough transform is often used to detect lines,
circles, ellipses, etc.


6.10 Lane Tracking Using Extended Kalman Filter

After completing lane detection, the next step is to track the lane, which is also a key technology
for smart and automated vehicles (SAV). Image edge detection and linear lane detection are used
to detect the lane; then an EKF is used to track the lane parameters one by one. In this way, the
tracking of lane lines is converted into the tracking of lane line parameters, which not only
improves the tracking speed, but also introduces Kalman tracking to improve the tracking
accuracy.
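
The text above specifies an EKF. Since a (slope, intercept) state with a constant-velocity motion model is linear, the hedged sketch below uses OpenCV's plain linear Kalman filter to illustrate the same predict-correct idea; the state layout and all noise values are illustrative assumptions, not this project's tuned settings.

import cv2
import numpy as np

# State: [slope, intercept, d_slope, d_intercept]; measurement: [slope, intercept].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)   # constant-velocity model
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)  # we observe slope, intercept
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-4
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-2

def track_lane(slope, intercept):
    """Predict the lane parameters for this frame, then correct with the detection."""
    predicted = kf.predict()
    kf.correct(np.array([[slope], [intercept]], np.float32))
    return predicted[0, 0], predicted[1, 0]   # smoothed slope and intercept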


CHAPTER 7
SOFTWARE TESTING

• Grayscale conversion: Grey scales can be used for assessing colour change and staining
during colour fastness testing. The colour change scale consists of nine pairs of grey
coloured chips, from grades 1 to 5 (with four half steps). Grade 5 represents no change
and grade 1 depicts severe change in some standards.
• Gaussian blur: Photographers and designers choose Gaussian functions for several
purposes. If you take a photo in low light and the resulting image has a lot of noise,
Gaussian blur can mute that noise. Because a photograph is two-dimensional, Gaussian
blur uses two mathematical functions (one for the x-axis and one for the y-axis) to create
a third function, also known as a convolution. This third function creates a normal
distribution of those pixel values, smoothing out some of the randomness.
• Distortion correction: The distortion correction feature automatically corrects optical
distortion in images. Distortion is a form of optical aberration that causes a geometrical
imaging error where straight lines do not appear straight in an image. Distortion is
calculated simply by relating the actual distance (AD) to the predicted distance (PD) of
the image, using a pattern such as a dot target. Note that while distortion runs negative
or positive in a lens, it is not necessarily linear across the image.
• Edge detection: Edge test case scenarios are those that are possible, but unknown or
accidental features of the requirements. Boundary testing, in which testers validate
between the extreme ends of a range of inputs, is a great way to find edge cases when
testers are dealing with specific and calculated value fields.
• Lane fitting: Lane Fitting is a computer vision task that involves identifying the
boundaries of driving lanes in a video or image of a road scene. The goal is to accurately
locate and track the lane markings in real-time, even in challenging conditions such as
poor lighting, glare, or complex road layouts.
• Shadow testing: Shadow testing is a technique used to test new features or changes in a
real-time production environment without affecting the actual users. It involves routing a
portion of the production traffic to the new feature or change while keeping the majority
of the traffic unaffected. The shadow-test approach allows us to do anything that is
possible for automated assembly of fixed test forms, but in an adaptive format; it can
even produce the same test content in any possible format.
• Test case specification: It specifies the purpose of a specific test, identifies the required
inputs and expected results, provides step-by-step procedures for executing the test, and
outlines the pass/fail criteria for determining acceptance. Test case specification has to
be done separately for each unit.

TABLE 7.1 TEST CASE SPECIFICATION


CHAPTER 8
RESULTS

8.1 Input image

8.2 Processed grayscale image


8.3 Edge detection of the image using Canny edge detection

8.4 Selecting the region of interest (ROI)


8.5 Hough-transformed image

8.6 Adding the plotted lane lines to the image


8.7 Detecting lanes on roads


CONCLUSION & FUTURE ENHANCEMENT

Conclusion

In this project, we proposed new lane detection preprocessing and ROI selection methods to
design a lane detection system. The main idea is to add white extraction before the conventional
basic preprocessing. Edge extraction has also been added during the preprocessing stage to
improve lane detection accuracy. We also placed the ROI selection after the proposed
preprocessing; compared with selecting the ROI in the original image, this reduces the non-lane
parameters and improves the accuracy of lane detection. Currently, we only use the Hough
transform to detect straight lanes and the EKF to track lanes, and do not develop advanced lane
detection methods. In the future, we will exploit a more advanced lane detection approach to
improve the performance.

Future Enhancement

As one of the most important tasks in autonomous driving systems, ego-lane detection
has been extensively studied and has achieved impressive results in many scenarios. However,
ego-lane detection in missing-feature scenarios is still an unsolved problem. To address this
problem, previous methods have proposed ever more complicated feature extraction algorithms,
but these are very time-consuming and cannot deal with extreme scenarios. A promising future
direction is to exploit prior knowledge contained in digital maps, which has a strong capability
to enhance the performance of detection algorithms. Specifically, the road shape extracted from
OpenStreetMap can be employed as the lane model, which is highly consistent with the real
lane shape and irrelevant to lane features. In this way, only a few lane features are needed to
eliminate the position error between the road shape and the real lane, using a search-based
optimization algorithm. Work in the literature reports that such a method can be applied to
various scenarios, can run in real time at a frequency of 20 Hz, and achieves state-of-the-art
performance on the public KITTI Lane dataset.


REFERENCES

1. D. Pomerleau, "RALPH: rapidly adapting lateral position handler," in Proceedings of the
Intelligent Vehicles '95 Symposium, pp. 506–511, Detroit, MI, USA, 1995.
2. J. Navarro, J. Deniel, E. Yousfi, C. Jallais, M. Bueno, and A. Fort, "Influence of lane departure
warnings onset and reliability on car drivers' behaviors," Applied Ergonomics, vol. 59, pp.
123–131, 2017.
3. P. N. Bhujbal and S. P. Narote, "Lane departure warning system based on Hough transform
and Euclidean distance," in Proceedings of the 3rd International Conference on Image
Information Processing, ICIIP 2015, pp. 370–373, India, December 2015.
4. V. Gaikwad and S. Lokhande, "Lane Departure Identification for Advanced Driver
Assistance," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 2, pp. 910–
918, 2015.
5. H. Zhu, K.-V. Yuen, L. Mihaylova, and H. Leung, "Overview of Environment Perception for
Intelligent Vehicles," IEEE Transactions on Intelligent Transportation Systems, vol. 18, no.
10, pp. 2584–2601, 2017.
6. F. Yuan, Z. Fang, S. Wu, Y. Yang, and Y. Fang, "Real-time image smoke detection using
staircase searching-based dual threshold AdaBoost and dynamic analysis," IET Image
Processing, vol. 9, no. 10, pp. 849–856, 2015.
7. P.-C. Wu, C.-Y. Chang, and C. H. Lin, "Lane-mark extraction for automobiles under complex
conditions," Pattern Recognition, vol. 47, no. 8, pp. 2756–2767, 2014.
8. M.-C. Chuang, J.-N. Hwang, and K. Williams, "A feature learning and object recognition
framework for underwater fish images," IEEE Transactions on Image Processing, vol. 25, no.
4, pp. 1862–1872, 2016.
9. Y. Saito, M. Itoh, and T. Inagaki, "Driver Assistance System with a Dual Control Scheme:
Effectiveness of Identifying Driver Drowsiness and Preventing Lane Departure Accidents,"
IEEE Transactions on Human-Machine Systems, vol. 46, no. 5, pp. 660–671, 2016.
10. Q. Lin, Y. Han, and H. Hahn, "Real-Time Lane Departure Detection Based on Extended
Edge-Linking Algorithm," in Proceedings of the 2010 Second International Conference on
Computer Research and Development, pp. 725–730, Kuala Lumpur, Malaysia, May 2010.
11. C. Mu and X. Ma, "Lane detection based on object segmentation and piecewise fitting,"
TELKOMNIKA Indonesian Journal of Electrical Engineering, vol. 12, no. 5, pp. 3491–3500,
2014.
12. J.-G. Wang, C.-J. Lin, and S.-M. Chen, "Applying fuzzy method to vision-based lane
detection and departure warning system," Expert Systems with Applications, vol. 37, no. 1,
pp. 113–126, 2010.
