Developing an Autonomous All-Terrain Vehicle – Control and Perception
James M. Conrad
Agenda
• Initial problem definition, motivation, and background
• Phase 1: Conversion of an ATV
• Phase 2: Computer control
• Phase 3: Autonomous driving on paths in forests
• Phase 4: Autonomously following a target
• Phase 5: Autonomous driving in rugged terrain
• Discussion & Conclusion
• Video Phase 1&2: https://youtu.be/rbLYwmMTUjo
• Video Phase 3: https://youtu.be/hS-RCrkh1bs
• Video Phase 4: https://youtu.be/W7jHCyE4_BI
• Video Phase 4: ATV_Demo_Video_Conrad.mp4
Initial Problem Definition
A company wanted a rugged vehicle to drive through a field looking for unexploded bombs underground.
Solution: an autonomous all-terrain vehicle (ATV). It will drive on a predetermined path, pull a trailer with equipment, and avoid trees, rocks, and ditches.
Phase 1: Convert ATV
• Required modifying several vehicle sub-systems (electrical and mechanical)
• The vehicle must be drivable manually as well as by computer, with only minor changes needed to switch between the two modes
• Make minimal physical changes to the vehicle
Phase 1: Systems Changed
• Throttle: Added a servo to the air intake valve, allowing the Engine Control Unit to accelerate the vehicle.
• Braking: Added a linear actuator to depress the foot brake.
• Steering: "Spoofed" the steering-assist module with new CAN bus control signals (illustrated below).
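As a rough illustration of CAN-based steering control (not the project's actual frames), a minimal python-can sketch is shown below; the arbitration ID, payload layout, and sign convention are all assumptions.

```python
# Minimal sketch of commanding steering over CAN (hypothetical ID and payload).
# Requires python-can: pip install python-can
import struct
import can

STEER_CMD_ID = 0x18F  # hypothetical arbitration ID for the steering-assist module

def send_steer_command(bus: can.BusABC, angle_deg: float) -> None:
    """Encode a steering angle as a signed 16-bit value in 0.1-degree units."""
    raw = int(angle_deg * 10)
    data = struct.pack(">h6x", raw)  # 2-byte angle + 6 pad bytes = 8-byte frame
    msg = can.Message(arbitration_id=STEER_CMD_ID, data=data, is_extended_id=False)
    bus.send(msg)

if __name__ == "__main__":
    # SocketCAN channel name is platform-specific; "can0" is an assumption.
    with can.Bus(channel="can0", interface="socketcan") as bus:
        send_steer_command(bus, -5.0)  # steer 5 degrees left (sign convention assumed)
```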
Phase 2: Change in Direction
The new direction is to have the autonomous ATV either:
1. Follow a path in the forest
2. Follow a breadcrumb path in the forest
3. Travel toward a specific GPS waypoint in a forest
Applications: military, search and rescue
All implementations require the vehicle to avoid obstacles.
Phase 2: Overall System Overview
• Multiple nodes
• Distributed processors
  • Reduce the processing power needed per node
  • Reduce delay
• Whole system connected via CAN bus
  • Signal lines
  • Power lines
Overall System Overview
• Actuators:
  • Steering module
  • Braking system
  • Throttle control
• Sensors:
  • GPS
  • IMU (compass)
  • 2 LIDARs
• Brain:
  • Synchronization
Phase 2: Overall System Overview
• All modules are implemented using the MSP430 F series (except steering)
• CAN connectivity is provided by MCP2515 CAN shields
• The brain also uses an MSP430 connected to a CAN shield
Phase 3: Implementation Goals
• Provide an autonomous ATV that can follow a paved or gravel path using a camera and AI models
• Provide a method to localize the ATV with respect to the path using a camera, semantic segmentation, and an image processing model
• Provide a new dataset, gathered on a paved walking trail in a forested environment at The University of North Carolina at Charlotte, that can support further machine learning training and other types of research
Preparing Base ATV Architecture
Data Gathering
• Visual data: Intel D435i with a wide-angle lens, 16 megapixels (41,056 frames)
• Angular data: an encoder connected to the steering axis, read via an ADC (see the sketch below)
• For measuring performance: gathered 6 different tracks spanning good and bad driving
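Since the steering encoder is read through an ADC, raw counts must be mapped to an angle before being logged with each frame. A minimal linear-calibration sketch follows; the count and angle endpoints are made-up placeholders.

```python
# Map raw ADC counts from the steering encoder to an angle in degrees.
# The calibration endpoints below are illustrative placeholders, not measured values.
ADC_FULL_LEFT, ADC_FULL_RIGHT = 410, 3685        # raw counts at the steering stops
ANGLE_FULL_LEFT, ANGLE_FULL_RIGHT = -35.0, 35.0  # degrees at the same stops

def adc_to_angle(counts: int) -> float:
    """Linearly interpolate ADC counts to a steering angle in degrees."""
    frac = (counts - ADC_FULL_LEFT) / (ADC_FULL_RIGHT - ADC_FULL_LEFT)
    return ANGLE_FULL_LEFT + frac * (ANGLE_FULL_RIGHT - ANGLE_FULL_LEFT)

print(adc_to_angle(2048))  # roughly 0 degrees near mid-travel
```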
Path Detection and Semantic Segmentation
• Canny edge detector (high-frequency filter)
• Color filters
• Combining both (sketched after this list)
• Cityscapes ENet semantic segmentation
• SegNet with ResNet-18
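A rough OpenCV sketch of the classical combination above (Canny edges plus a color mask); the HSV thresholds are illustrative, and tuned values would differ by surface.

```python
# Classical path detection: combine Canny edges with a color mask (OpenCV).
# HSV thresholds are illustrative; real values depend on the path surface.
import cv2
import numpy as np

def path_mask(frame_bgr: np.ndarray) -> np.ndarray:
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)  # high-frequency (edge) response

    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Low-saturation, mid-brightness pixels as a crude "gray asphalt" filter.
    color = cv2.inRange(hsv, np.array([0, 0, 80]), np.array([180, 60, 220]))

    # Keep color-mask pixels that are not on strong edges (path boundaries).
    return cv2.bitwise_and(color, cv2.bitwise_not(edges))
```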
Path Detection and Semantic Segmentation

                          ENet                    SegNet (Low Res)         SegNet (High Res)
Database                  Cityscapes              DeepScene                DeepScene
Network                   Efficient Network       ResNet18                 ResNet18
Real-time performance     0.71 fps (1.4 s/frame)  24 fps (45-50 ms/frame)  14 fps (84-90 ms/frame)
Distance at 10 mph        6 meters                18 cm                    31 cm
Compared to ground truth  In the 90s percentile   In the 90s percentile    In the 80s percentile
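The "Distance at 10 mph" row is simply vehicle speed divided by frame rate, i.e., how far the ATV travels between processed frames; a quick check of the table's numbers:

```python
# Sanity-check the "Distance at 10 mph" row: distance per frame = speed / fps.
MPH_TO_MPS = 0.44704
speed_mps = 10 * MPH_TO_MPS  # ~4.47 m/s

for name, fps in [("ENet", 0.71), ("SegNet (Low Res)", 24), ("SegNet (High Res)", 14)]:
    print(f"{name}: {speed_mps / fps:.2f} m traveled per processed frame")
# ENet ~6.3 m; SegNet low-res ~0.19 m; SegNet high-res ~0.32 m, matching the table
```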
Preparation for ML Model
• For training, the ML model needs the ATV's current position, its trajectory (projection), and the current steering angle.
• The projection is important because it gives the ML model hints about whether it needs to stay along the edge or turn (see the sketch below).
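One simple way to extract those inputs from a segmentation mask is to compare where the path sits in near and far bands of the frame. This is a hedged sketch of the idea, not the exact image processing model used.

```python
# Sketch: derive lateral position and a heading hint from a binary path mask.
# Approximates the idea on the slide, not the project's exact model.
import numpy as np

def pose_from_mask(mask: np.ndarray) -> tuple[float, float]:
    """Return (lateral offset, projected turn hint) in normalized units [-1, 1]."""
    h, w = mask.shape
    center = w / 2

    def band_offset(rows: slice) -> float:
        cols = np.nonzero(mask[rows])[1]
        if cols.size == 0:
            return 0.0  # no path pixels in this band
        return (cols.mean() - center) / center

    near = band_offset(slice(2 * h // 3, h))      # bottom third: current position
    far = band_offset(slice(h // 3, 2 * h // 3))  # middle third: where the path goes
    return near, far - near  # offset now, and the projected change (turn hint)
```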
ML Training
• Least Squares and Least Mean Squares models were created for each semantic segmentation model
• Models were trained on the 33K-entry training set
• The 7.7K-entry set was used for testing
• Final results: YouTube video: https://youtu.be/hS-RCrkh1bs
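A least-squares steering model of this kind fits in a few lines of numpy; the feature choice here (position offset, projection, current angle) is an assumption based on the earlier slide.

```python
# Sketch of the Least Squares fit: steering angle as a linear function of the
# pose features. The exact feature set is assumed, not taken from the project.
import numpy as np

def fit_least_squares(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """X: (n, k) feature rows; y: (n,) recorded steering angles. Returns weights."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict(w: np.ndarray, features: np.ndarray) -> float:
    """Predict a steering angle for one feature row."""
    return float(np.append(features, 1.0) @ w)

# e.g. fit on the ~33K-entry training set, then score on the ~7.7K-entry test set.
```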
Phase 4: Rework!!!
• With the existing ATV, rewired the system (simplified).
• Also added our own wheel encoder to get more accurate distance measurements.
• Speed adjustment based on the wheel encoder reading, not the ATV Engine Control Unit (sketched after this list).
• New objective: follow a target (person, car, ATV).
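Speed from the added wheel encoder reduces to ticks per interval times distance per tick; a small sketch follows, where ticks-per-revolution and wheel size are placeholder values.

```python
# Speed from the added wheel encoder: ticks per interval -> meters per second.
# Ticks-per-revolution and wheel circumference below are placeholder values.
import math

TICKS_PER_REV = 100
WHEEL_CIRCUMFERENCE_M = math.pi * 0.56  # ~22-inch ATV tire diameter (assumed)

def speed_mps(delta_ticks: int, delta_t_s: float) -> float:
    revs = delta_ticks / TICKS_PER_REV
    return revs * WHEEL_CIRCUMFERENCE_M / delta_t_s

print(speed_mps(delta_ticks=75, delta_t_s=0.5))  # ~2.6 m/s (~5.9 mph)
```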
Objective: Following
• Steering actuation is similar, but now directed by the output of a YOLOv8 assessment of the target (bounding box).
• Steering adjusted (turning) based on the center of the image versus the center of the target.
• Speed adjusted based on the size of the bounding box (sketched after this list).
• Video: https://youtu.be/W7jHCyE4_BI
• Video: ATV_Demo_Video_Conrad.mp4
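The steering and speed rules above can be sketched around the ultralytics YOLOv8 API; the gains, target box area, and choice of the first detection as the target are hypothetical.

```python
# Sketch of the following controller around YOLOv8 detections (ultralytics API).
# Gains, target box area, and target-selection policy are hypothetical values.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
K_STEER, K_SPEED, TARGET_AREA_FRAC = 0.8, 4.0, 0.05

def follow_commands(frame):
    """Return (steer, speed) commands in [-1, 1] / [0, 1] from one camera frame."""
    result = model(frame, verbose=False)[0]
    if len(result.boxes) == 0:
        return 0.0, 0.0  # no target detected: stop
    x1, y1, x2, y2 = result.boxes.xyxy[0].tolist()  # first detection as the target
    h, w = result.orig_shape

    # Steering: offset between the box center and the image center.
    steer = K_STEER * (((x1 + x2) / 2) - w / 2) / (w / 2)

    # Speed: slow down as the bounding box grows (target getting closer).
    area_frac = (x2 - x1) * (y2 - y1) / (w * h)
    speed = max(0.0, min(1.0, K_SPEED * (TARGET_AREA_FRAC - area_frac) + 0.5))
    return steer, speed
```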
Phase 5: Future Work
• Electric ATV testing
• More testing to identify further limitations (e.g., snow)
• Integrate the ML model with the breadcrumb research to handle forks
• A stronger GPU to gather more data while moving
• Train the SS model on local data and test whether it helps with the noise
• Add more classes to the SS model
• Integrate obstacle avoidance
Contact Information
James M. Conrad, Professor
Dept. of Electrical and
Computer Engineering
UNC-Charlotte
9201 University City Boulevard
Charlotte, NC 28223
Phone: +1-704-687-8597
Fax: +1-704-687-5588
[email protected]*
http://webpages.charlotte.edu/~jmconrad (or Google me!)
* Preferred method of contact
YouTube Channel: stiquitojmconrad
AI Model Offline Testing & Results
[Plots: Least Squares (LS) model with ENet segmentation]
Real-Time Implementation
• A frame-by-frame pipeline (outlined below)
• The semantic segmentation input is the camera frames
• The state estimation uses the semantic segmentation output
• The ML model runs in inference mode, using only the weights from training
• The GPU sends the ML output (the angle value) over the CAN bus to the steering module
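Put together, the frame-by-frame loop is short. The outline below passes in callables standing in for the earlier sketches; it is an outline of the pipeline described above, not the team's exact implementation.

```python
# Outline of the real-time loop: camera -> segmentation -> state -> angle -> CAN.
def control_loop(camera, segment, pose_from_mask, predict_angle, send_steer):
    """Each argument is a callable standing in for one of the earlier sketches."""
    while True:
        frame = camera.read()                      # grab the next camera frame
        mask = segment(frame)                      # semantic segmentation (inference)
        offset, projection = pose_from_mask(mask)  # state from the SS output
        angle = predict_angle((offset, projection))
        send_steer(angle)                          # GPU -> CAN bus -> steering module
```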
Real-Time Results – Data Augmentation and Correction Factor
Solutions:
• A new model was implemented that amplifies the deviation from the center value (a correction factor)
• Another ML model was trained with data augmentation, repeating samples from the abnormal cases to grow the dataset from 33K to 128K entries (an unbiased dataset); a sketch follows
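Growing the dataset from ~33K to ~128K entries by repeating rare samples takes only a few numpy lines; the "abnormal case" criterion below (a steering-angle threshold) is an assumption for illustration.

```python
# Sketch of the augmentation: repeat under-represented (large-angle) samples
# until the dataset grows from ~33K to ~128K entries.
import numpy as np

def oversample(X: np.ndarray, y: np.ndarray, target_n: int, thresh_deg: float = 10.0):
    """Duplicate sharp-turn samples (the assumed minority) up to target_n entries."""
    rare = np.abs(y) > thresh_deg
    extra_needed = target_n - len(y)
    idx = np.random.choice(np.nonzero(rare)[0], size=extra_needed, replace=True)
    return np.vstack([X, X[idx]]), np.concatenate([y, y[idx]])

# X_aug, y_aug = oversample(X_33k, y_33k, target_n=128_000)
```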
Analysis and Fix
• ATV speed reduced from 6 mph to 2.9 mph
• Real-time SS checks to understand what the ATV is seeing
• Noise introduced by:
  • Path cracks
  • Environmental inclusions (leaves, light reflections, etc.)
• The image processing model was adjusted to locate the position using 80% of the frame instead of specific locations.
Final results: YouTube Video: https://youtu.be/hS-RCrkh1bs
System Limitations
• The system functions normally in:
  • Light fog, haze, light rain, cloudy weather, sunny weather
  • Asphalt and gravel surfaces
• The ATV was able to perform a turn at a curve radius of 10 m with an angle of 20° (sharpest turn) and 23 m with an angle of 60° (longest turn)
• ATV speed ranges between:
  • 2.9 mph (the minimum average achievable with the new speed controller)
  • and 9 mph (the maximum in first gear is 10 mph, but it would overheat the engine)
System Limitations
• The system performs poorly in:
  • Low-light conditions, near sunset (within 1 hour before sunset) and after sunset
  • Heavy rain, dense fog
• The system was not tested in snowy conditions
• The system is not designed to handle forks
• The SS model classifies everything as path, trees, or vegetation:
  • Tiles, gravel, and asphalt are all recognized as path
Accomplishments
• An internal-combustion-engine Honda ATV was transformed into an autonomous ATV that can follow a paved path.
• An AI model on a GPU controls the embedded systems via CAN bus messages.
• The GPU reads camera frames, applies SS, then an image processing model to find the ATV's pose and projection with respect to the path.
• A linear regression (LR) ML model calculates the steering angle to keep the ATV on track.
Accomplishments
• The main difference from common autonomous road vehicles: those detect the road using lane markings and other traffic schemes.
• It is feasible to define the path using only a camera and a trained SS model.
• Real-time testing showed the ATV was able to maintain the ideal center pose with 91.5% accuracy compared to a human driver.
• Smooth state transitions, with a 1.08-state standard deviation from the center of the path out of 8 different states.