
Unit – 4

Designing and Implementing Game Mechanics

What are Game Mechanics?

Game mechanics are rules and systems that dictate a game’s function and how players interact
with it. How a game works and how it affects the player are among the most important factors in
the design and creation of a video game.

Players who wonder what game mechanics are have likely experienced them without realizing it, as they are present in every video game we play.

History of Game Mechanics

Game design mechanics have evolved throughout the decades, from the early fundamentals of the 1950s and 1960s to the far more complex systems of today.


How do Game Mechanics Work?

Pinpointing a definition of game mechanics can be tricky because of the many intricacies and factors involved in video games. Mechanics can affect a game in many different ways, from visuals and controls to audio and story.

The Structure of Video Games Can Be Defined By Five Elements

Games should allow players to interact with their systems, like moving a character, selecting
answers, or solving puzzles. These actions are usually controlled by buttons or commands.

Mechanics and their application will vary depending on the genre and the intended gameplay. In
the end, a game’s mechanics should seek to enhance the experience.

Types of Game Mechanics


For years, developers have debated what exactly defines mechanics in gaming, and a consensus
has yet to be reached. However, we have compiled a few common factors.

Here are the most common types of mechanics in games:

Quantity

Quantity refers to mechanics that are represented by a numerical value. They serve to inform the
player.

Type     | Examples                       | Application
Resource | Health, Stamina, Shields, Mana | Players use these to measure their level of success as they progress.
Currency | Gold, Shards, Skill points     | Players earn these as a reward and can spend them on items for personal benefit.
Abstract | Time                           | Continuously passes to help progress the game or help players plan ahead.
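
In code, quantity mechanics typically reduce to plain numeric state tracked by the game. The following C++ sketch is only an illustration; every name in it is hypothetical:

```cpp
// Hypothetical sketch: quantity mechanics as plain numeric state.
struct PlayerQuantities
{
    // Resources: measure the player's level of success as they progress.
    float health  = 100.0f;
    float stamina = 100.0f;
    float mana    = 50.0f;

    // Currency: earned as a reward and spent for personal benefit.
    int gold        = 0;
    int skillPoints = 0;

    // Abstract: time passes continuously to drive progression and planning.
    float elapsedSeconds = 0.0f;
};
```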

Spatial

Spatial refers to video game mechanics that fill or affect physical space and influence how
players interact with the game.

Type       | Examples                   | Application
World      | Objects                    | Assets that dictate how players navigate or change the world around them.
Tangible   | Characters, Items, Contact | These allow the game to respond to the player's actions or desires.
Intangible | Inventory, Storage         | Allows players to manage items they encounter and determine individual value.

State

State refers to the mechanics of a game that change or apply additional rules. They serve to
create an effect or reaction to the player’s actions.

Type   | Examples                                   | Application
Action | Airborne, Swimming, Grounded, Zero Gravity | Environmental influences change how players move or interact within the game world.
Effect | Poison, Slow, Speed boost, Dead            | Factors that take away resources or add new conditions to empower or weaken the player.
Game   | Victory, Lobby, Loading                    | Stages that begin, end, and progress a game in response to player input.
Object | On/Off, Open/Close                         | Objects in-game change in response to player action to give a sense of agency in the world.
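
Since state mechanics change which rules apply, they are often modeled as explicit states plus the rule changes they trigger. A minimal C++ sketch, with all names hypothetical:

```cpp
// Hypothetical sketch: state mechanics as explicit states that switch rules.
enum class MovementState { Grounded, Airborne, Swimming, ZeroGravity };
enum class GameStage     { Lobby, Loading, Playing, Victory };

// Effects add or remove conditions that empower or weaken the player.
struct ActiveEffects
{
    bool poisoned   = false; // drains a resource over time
    bool slowed     = false; // weakens movement
    bool speedBoost = false; // empowers movement
};

// Entering a different state applies a different rule, e.g. gravity.
float GravityFor(MovementState state)
{
    switch (state)
    {
    case MovementState::Airborne:    return -9.8f; // full gravity
    case MovementState::Swimming:    return -1.0f; // buoyancy-damped
    case MovementState::ZeroGravity: return  0.0f; // no gravity applied
    default:                         return  0.0f; // grounded: none needed
    }
}
```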

Action

Action refers to mechanics that force or influence change. They allow the player to interact with
the game.

Type     | Examples                                             | Application
Resource | Health regen, Reload ammo, Respawn, Ability recharge | Factors that change over time as players continue to influence the game.
World    | Running, Jumping, Dodging, Teleporting               | Actions that take effect in direct response to player input to further interact with the world.
Object   | Open door, Unlock chest, Enter portal, Climb ladder  | Forcing a change or status to achieve a goal through player input.

Popular Game Mechanics

When you look through the games available on the current market, you can see common mechanics shared by certain games; these shared mechanics are how games are classified into different genres.

First Person Shooters (FPS)

First-person shooters require aiming weapons, eliminating foes, and managing ammo.

Aiming and Shooting: Players aim their weapons and shoot at enemies to damage them.

Cover System: Players can hide behind objects to avoid enemy fire and recover health.

Weapon Upgrades: Customizing or upgrading weapons to improve performance.

Ammo Management: Players must manage a limited supply of ammunition through reloads.

Multiplayer Modes: Team-based or free-for-all combat against other players online.


Examples include Call of Duty, Overwatch, Apex Legends, and Counter-Strike.


Role-Playing Game (RPG)

Leveling Up: Characters gain experience points (XP) to improve stats and abilities.

Skill Trees: Players unlock and upgrade abilities through a branching system.

Quest Systems: Players complete the main story or side quests to earn rewards.

Turn-Based Combat: Players and enemies take turns making moves in combat.

Dialogue Choices: Interacting with NPCs (non-playable characters) through conversation.

Examples include Fallout, The Witcher, Elden Ring, and Cyberpunk 2077.

Platformers
Platformers feature obstacles players must overcome through the movement of a character.

Jumping and Movement: Precision jumps across platforms are necessary to progress.

Power-Ups: Items that grant temporary abilities to empower or change gameplay.

Environments: Can feature environmental changes such as flying or swimming.

Timed Challenges: Completing levels within a time limit to pose a challenge.

Checkpoints: Players restart from certain points when they die or fail.

Examples include Super Mario, Sonic, Crash Bandicoot, and Shovel Knight.

Fighting Games
Combo Systems: Precisely executing combo strings to maximize damage.

Blocking and Parrying: Defending against enemy attacks by blocking or countering.

Special Moves: Powerful, character-specific moves that require precise inputs to execute.

Stamina or Special Meter: A gauge that fills over time to spend on special attacks.

Rounds/Matches: The first player to win a certain number of rounds claims victory.

Examples include Street Fighter, Tekken, Mortal Kombat, and Guilty Gear.


4 Ways to Improve Game Mechanic Performance

You can improve the performance of your game mechanics in four simple ways.

Use a trusted VPN to ensure safe and secure online play.

Enable Game Mode on your PC to push the performance of your system for higher frame rates.

Use a wired connection instead of Wi-Fi for the most consistent online performance.
Upgrade RAM, SSD, or graphics card for a smoother gaming experience.

We also recommend researching guides online and practicing in any featured training/practice mode to gain a better understanding of how games function.

Game Mechanics Pros and Cons

Gameplay mechanics make games more advanced and exciting to play, but developers need to tread carefully. Some mechanics can elevate the experience, while others can hinder it.

Pros

Well-designed mechanics keep players engaged and motivated.

Mechanics create a sense of achievement by balancing difficulty with rewards.

Unique or dynamic mechanics can encourage replayability.

Mechanics deepen immersion by making gameplay more interactive.

Cons

Some mechanics are poorly balanced.

Overused mechanics can quickly become stale.

Too many mechanics can often overwhelm players.

Some mechanics create a high learning curve.

The Bottom Line

Game mechanics are the rules that shape a video game and dictate how players interact with its
systems. These mechanics are implemented in a variety of ways in order to create unique
experiences to stand out from competitors and incentivise players to engage with the game.
The definition of game mechanics is too broad to narrow down to a single, simple explanation, but this article breaks down the core fundamentals that affect players the most, from how players navigate a game to how they progress through gameplay.

Introduction to AI

Creating Artificial Intelligence (AI) for characters or other entities in your projects in Unreal
Engine 4 (UE4) is accomplished through multiple systems working together. From a Behavior
Tree that is branching between different decisions or actions, running a query to get information
about the environment through the Environment Query System (EQS), to using the AI
Perception system to retrieve sensory information such as sight, sound, or damage information;
all of these systems play a key role in creating believable AI in your projects. Additionally, all of
these tools can be debugged with the AI Debugging tools, giving you insight into what the AI is
thinking or doing at any given moment.

When crafting AI in UE4 and using each of these systems, a good way to think about building
your AI is that the decision making process is handled by Behavior Trees, stimuli from the
environment (such as sensory information) is sent to Behavior Trees from the AI Perception
system, and queries about the environment itself are handled through EQS.

Behavior Trees assets in Unreal Engine 4 (UE4) can be used to create artificial intelligence (AI)
for non-player characters in your projects. While the Behavior Tree asset is used to execute
branches containing logic, to determine which branches should be executed, the Behavior Tree
relies on another asset called a Blackboard which serves as the "brain" for a Behavior Tree.

The Blackboard contains several user defined Keys that hold information used by the Behavior
Tree to make decisions. For example, you could have a Boolean Key called Is Light On which
the Behavior Tree can reference to see if the value has changed. If the value is true, it could
execute a branch that causes a roach to flee. If it is false, it could execute a different branch where the roach moves randomly around the environment. Behavior Trees can be as
simplistic as the roach example given, or as complex as simulating another human player in a
multiplayer game that finds cover, shoots at players, and looks for item pickups.

If you are new to Behavior Trees in UE4, it is recommended that you go through the Behavior
Tree Quick Start guide to quickly get an AI character up and running. If you are already familiar
with the concept of Behavior Trees from other applications, you may want to check out the
Essentials section which contains an overview of how Behavior Trees work in UE4, a User
Guide to working with Behavior Trees and Blackboards, as well as reference pages for the
different types of nodes available within Behavior Trees.

By the end of this guide, you will have an understanding of the following systems:

Blueprint Visual Scripting

AI Controllers

Blackboards

Behavior Trees

Behavior Tree Services

Behavior Tree Decorators

Behavior Tree Tasks

1 - Required Project Setup

In this first step, we set up our project with the assets we'll need for our AI character to get
around the environment.

For this guide we are using a new Blueprint Third Person Template project.

Expand the Sources panel, then right-click on the ThirdPersonBP folder and create a New Folder
called AI.
In the ThirdPersonBP > Blueprints folder, drag the ThirdPersonCharacter onto the AI folder and
select Copy Here.

In the AI folder, create a new Blueprint Class based on the AIController class.
Name the AIController Blueprint Enemy_Controller and the ThirdPersonCharacter Blueprint
Enemy_Character.
Open Enemy_Character, then delete all the script from the graph.

Select the Character Movement component then set Max Walk Speed in the Details panel to
120.0.


This reduces the speed of our AI Character movement around the environment when patrolling
and not chasing the Player.
Select Class Defaults from the Toolbar, then in the Details panel, assign the Enemy_Controller
as the AI Controller Class.


We are going to place our AI in the world. If you spawn the AI after the world is loaded, change
the Auto Possess AI setting to Spawned.

From the Content Browser, drag the Enemy_Character into the Level.

From the Place Actors panel, drag a Nav Mesh Bounds Volume into the Level.
With the Nav Mesh Bounds Volume selected, press R and scale the volume to encapsulate the
entire Level.
This will generate a Navigation Mesh that enables our AI character to move around the
environment. You can press the P key to toggle the display of the Nav Mesh in the Viewport
(areas that are green indicate possible navigation locations).

During gameplay, you can use the Show Navigation console command to toggle the display of
the Nav Mesh on/off.
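
For reference, the same defaults can be set natively. The sketch below is a rough C++ equivalent of the editor steps above, assuming hypothetical AEnemyCharacter and AEnemyController classes that mirror the Blueprint assets:

```cpp
// Rough C++ equivalent of the editor setup (class names are assumptions).
#include "GameFramework/CharacterMovementComponent.h"
#include "EnemyCharacter.h"   // hypothetical header for AEnemyCharacter
#include "EnemyController.h"  // hypothetical header for AEnemyController

AEnemyCharacter::AEnemyCharacter()
{
    // Slow walk speed while patrolling and not chasing the Player.
    GetCharacterMovement()->MaxWalkSpeed = 120.0f;

    // Equivalent of assigning Enemy_Controller as the AI Controller Class.
    AIControllerClass = AEnemyController::StaticClass();

    // Possess the pawn whether it is placed in the Level or spawned later.
    AutoPossessAI = EAutoPossessAI::PlacedInWorldOrSpawned;
}
```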

Our project setup is complete. In the next step, we will set up our Blackboard asset.

2 - Blackboard Setup

In this step, we create our Blackboard asset, which is essentially the brain of our AI. Anything
we want our AI to know about will have a Blackboard Key that we can reference. We’ll create
keys for keeping track of the Player, whether or not the AI has line of sight to the Player, and a
location where the AI can move to when it is not chasing the Player.

In the Content Browser, click Add New and under Artificial Intelligence, select Blackboard and
call it BB_Enemy.
Inside the BB_Enemy Blackboard, click the New Key button and select Object.
The Blackboard asset consists of two panels: the Blackboard, which enables you to add and keep
track of your Blackboard Keys (variables to monitor), and Blackboard Details, which enables
you to name and specify the type of Keys.

For the Object key, enter EnemyActor as the Entry Name and Actor as the Base Class.
Add another Key with the Key Type set to Bool called HasLineOfSight.
This will be used to keep track of whether or not the AI has line of sight to the Player.

Add another Key, with the Key Type set to Vector called PatrolLocation.
This will be used to keep track of a location in the Level where the AI can move when it is not
chasing the Player.
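
The Keys themselves are created in the editor, but they are read and written at runtime through a UBlackboardComponent. A minimal sketch of how the Keys above might be accessed from C++ (the FName strings must match the Entry Names exactly):

```cpp
#include "BehaviorTree/BlackboardComponent.h"

void UpdateEnemyBlackboard(UBlackboardComponent* Blackboard, AActor* Player)
{
    // Store the perceived Player and mark that the AI can see them.
    Blackboard->SetValueAsObject(TEXT("EnemyActor"), Player);
    Blackboard->SetValueAsBool(TEXT("HasLineOfSight"), true);
}
```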

Our Blackboard is set up with the things we need to track. In the next step, we will lay out our
Behavior Tree.

3 - Behavior Tree Layout

In this step, we will lay out the flow of our Behavior Tree and the states that we want our AI to
enter. Laying out your Behavior Tree with the states you anticipate your AI could be in as a
visual flow will give you an idea of what type of logic and rules you will need to create to enter
those states.

In the Content Browser, click Add New and under Artificial Intelligence, select Behavior Tree
and call it BT_Enemy.
Naming conventions may vary, but it's generally good practice to add an acronym of the asset
type to the name.

Open the BT_Enemy and assign the BB_Enemy as the Blackboard Asset.
If you do not see the Blackboard Keys we created, clear the Blackboard Asset by clicking the yellow arrow, then re-assign BB_Enemy to refresh the keys.

The Behavior Tree consists of three panels: the Behavior Tree graph, where you visually layout
the branches and nodes that define your behaviors, the Details panel, where properties of your
nodes can be defined, and the Blackboard, which shows your Blackboard Keys and their current
values when the game is running and is useful for debugging.

In the graph, left-click and drag off the Root and add a Selector node.
Composites are a form of flow control and determine how the child branches that are connected
to them execute.

Composite | Description
Selector | Executes branches from left to right and is typically used to select between subtrees. Selectors stop moving between subtrees when they find a subtree they successfully execute. For example, if the AI is successfully chasing the Player, it will stay in that branch until its execution is finished, then go up to the Selector's parent composite to continue the decision flow.
Sequence | Executes branches from left to right and is more commonly used to execute a series of children in order. Unlike Selectors, the Sequence continues to execute its children until it reaches a node that fails. For example, if we had a Sequence to move to the Player, check if they are in range, then rotate and attack, and the check-if-they-are-in-range portion failed, the rotate and attack actions would not be performed.
Simple Parallel | Simple Parallel has two "connections". The first one is the Main Task, and it can only be assigned a Task node (meaning no Composites). The second connection (the Background Branch) is the activity that is supposed to be executed while the Main Task is still running. Depending on the properties, the Simple Parallel may finish as soon as the Main Task finishes, or wait for the Background Branch to finish as well.

For the Selector node, in the Details panel, change the Node Name to AI Root.

Renaming nodes in the graph is a good way to easily identify, from a high-level, what the node
accomplishes. In this example, we name it AI Root as this is the real "Root" of our Behavior
Tree which will switch between our child branches. The default Root node that is automatically
added when creating a Behavior Tree is used to configure properties of the Behavior Tree as well
as assign the Blackboard asset it's using.
Left-click and drag off AI Root and add a Sequence node named Chase Player.


We use a Sequence node here as we plan to tell the AI to do a sequence of actions: rotate towards
the player, change their movement speed, then move to and chase the Player.

Left-click and drag off the AI Root node and add a Sequence node called Patrol.

For our AI, we will use the Sequence node to find a random location in the map, move to that
location, then wait there for a period of time before repeating the process of finding a new
location to move to.

You may also notice the numbers in the upper-right corner of the nodes:

This indicates the order of operation. Behavior Trees execute from left-to-right and top-down, so
the arrangement of your nodes is important. The most important actions for the AI should usually
be placed to the left, while the less important actions (or fall back behaviors) are placed to the
right. Child branches execute in the same fashion and should any child branch fail, the entire
branch will stop executing and will fail back up the tree. For example, if Chase Player failed, it
would return back up to AI Root before moving on to Patrol.
Drag off AI Root then add a Wait Task to the right of Patrol with Wait Time set to 1.0.


You will notice that this node is purple, indicating that it is a Task node. Task nodes are the actions that you want the Behavior Tree to perform. The Wait Task acts as a catch-all in the event that the Behavior Tree fails both Chase Player and Patrol.
Drag off the Chase Player and add a Rotate to Face BB Entry node.


This particular Task enables you to designate a Blackboard Entry that you want to rotate and
face, in our case the Enemy Actor (Player). Once you add the node, if you look in the Details
panel, the Blackboard Key will automatically be set to EnemyActor because it filters for the
Actor blackboard variable and it is the first one in the list. You can adjust the Precision option if
you want to tune the success condition range as well as change the Node Name.

From the Toolbar, click the New Task button.

In addition to using the built in Tasks, you can create and assign your own custom Tasks that
have additional logic that you can customize and define. This Task will be used to change the
movement speed of the AI so that it runs after the Player. When you create a new Task, a new
Blueprint will automatically be created and opened.

In the Content Browser, rename the new asset as BTT_ChasePlayer.

It’s a good practice to immediately rename any newly created Tasks, Decorators or Services
when you create them. Proper naming conventions would be to prefix the name of the asset with
the type of asset you create such as: BTT for Behavior Tree Tasks, BTD for Behavior Tree
Decorators, or BTS for Behavior Tree Services.
Inside the BT_Enemy, add the BTT_ChasePlayer Task followed by a Move To.


Our new Task has no logic in it yet, but we will come back and add the logic for changing the movement speed of our AI character, after which the AI will Move To the EnemyActor (Player).
Create a new Task and rename it BTT_FindRandomPatrol, then connect it to Patrol.



Add a Move To Task and set the Blackboard Key to PatrolLocation.


This will instruct the AI to Move To the PatrolLocation which will be set inside the
BTT_FindRandomPatrol Task.
Add a Wait Task following Move To with Wait Time set to 4.0 and Random Deviation set to 1.0.


This instructs the AI to wait at PatrolLocation for 3-5 seconds (Random Deviation adds + or - a
second to Wait Time).

The framework for our Behavior Tree is complete. In the next step, we will add the logic for
changing the movement speed of the AI, finding a random location to navigate to when the AI is
patrolling, and the logic for determining when the AI should be chasing the player or patrolling.

4 - Task Setup - Chase Player

In this step, we set up our Chase Player Task to change the movement speed when chasing the
Player.

Inside BTT_ChasePlayer, right-click and add an Event Receive Execute AI node.


The Event Receive Execute AI node is fired when this Task is activated inside the Behavior
Tree.

You should always select the AI version of Event Receive Execute, Event Receive Abort, and
Event Receive Tick if the Agent is an AI Controller. If both generic and AI event versions are
implemented, only the more suitable one will be called, meaning the AI version is called for AI,
and the generic one otherwise.

Off the Controlled Pawn pin, use a Cast to Enemy_Character node.


Here, we are accessing the Character Blueprint for our AI called Enemy_Character by using a
Cast node.

Inside the Content Browser, open the Enemy_Character Blueprint and add a Function called
Update Walk Speed.

This function will be called from our Behavior Tree and will be used to change the AI's
movement speed.

Technically we could access the Character Movement Component off the Cast node in our Chase
Player Task and adjust the movement speed from within the Task, however having the Behavior
Tree directly change properties of sub-objects is not a recommended best practice. Instead, we
will have the Behavior Tree call a function inside the Character which will then make the
modifications we need.

In the Details panel for the Update Walk Speed function, add a Float input called
NewWalkSpeed.
Drag the CharacterMovement Component off the Components tab, then use Set Max Walk
Speed and connect as shown below.

When we call this function from the Behavior Tree, we can pass through a value to be used as
the new speed.
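
In native code the whole function is essentially one assignment; a sketch, assuming the same hypothetical AEnemyCharacter class:

```cpp
#include "GameFramework/CharacterMovementComponent.h"

void AEnemyCharacter::UpdateWalkSpeed(float NewWalkSpeed)
{
    // The Behavior Tree passes in the desired speed; the Character applies
    // it to its own movement component.
    GetCharacterMovement()->MaxWalkSpeed = NewWalkSpeed;
}
```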

Back inside the BTT_ChasePlayer Task, from the As Enemy Character node, call Update Walk
Speed set to 500.0 and connect as shown.
Don't see the Update Walk Speed function you created? You may need to Compile the
Enemy_Character Blueprint before trying to add it in the Chase Player Task.

Following Update Walk Speed, add two Finish Execute nodes and connect as shown below.

Here we mark the Task as successfully finishing when we successfully cast to the
Enemy_Character. In the event that the controlled Pawn is not Enemy_Character, we need to
handle this case so we mark the Task as unsuccessful which will abort the Task.

Right-click the New Walk Speed pin, then promote it to a variable and call it ChaseSpeed.
For ChaseSpeed, make sure to enable Instance Editable.

By promoting this to an Instance Editable variable, the value of Max Walk Speed can be set from
outside of this Blueprint and will be available as a property inside our Behavior Tree.

We can now easily change the value of Chase Speed that is being sent to the Enemy_Character Blueprint, enabling us to tune and tweak how fast our AI chases the Player.
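
For comparison, here is a hedged native sketch of what this Task does, assuming a UBTTask_ChasePlayer class with an instance-editable float ChaseSpeed property and the UpdateWalkSpeed function shown earlier:

```cpp
#include "BehaviorTree/BTTaskNode.h"
#include "AIController.h"

EBTNodeResult::Type UBTTask_ChasePlayer::ExecuteTask(
    UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory)
{
    // Equivalent of Event Receive Execute AI plus Cast to Enemy_Character.
    AAIController* Controller = OwnerComp.GetAIOwner();
    AEnemyCharacter* Enemy =
        Controller ? Cast<AEnemyCharacter>(Controller->GetPawn()) : nullptr;

    if (Enemy == nullptr)
    {
        return EBTNodeResult::Failed;   // cast failed: abort the Task
    }

    Enemy->UpdateWalkSpeed(ChaseSpeed); // e.g. 500.0, editable per instance
    return EBTNodeResult::Succeeded;    // Finish Execute with Success
}
```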

Our Chase Player Task is complete. In the next step, we will set up the Find Random Patrol Task logic to get a random location for the AI to move to.
5 - Task Setup - Find Random Patrol

In this step, we set up our Find Random Patrol Task so our AI moves to a random location when
it is not chasing the Player.

Implementing a Blueprint Behavior Tree Task is a clever way to quickly iterate, but, if
performance is a concern, you may decide to move to a native Behavior Tree Task.

Inside BTT_FindRandomPatrol, use Event Receive Execute AI and Cast to Enemy_Character.

Off As Enemy Character, call Update Walk Speed and promote New Walk Speed to a variable
called Patrol Speed with the following settings:


- Variable Name to PatrolSpeed
- Instance Editable to Enabled
- Patrol Speed (Default Value) to 125.0

Here we are lowering the enemy movement speed while patrolling.

Off Controlled Pawn, Get Actor Location then GetRandomReachablePointInRadius with the
Return Value connected to a Branch.

Promote the Radius on GetRandomReachablePointInRadius to a variable with the following settings:

- Variable Name to PatrolRadius
- Instance Editable to Enabled
- Patrol Radius (Default Value) to 1000.0

Here we are finding a random location within 1000 units of the enemy character's current
location. We are also using a Branch node to handle the edge case that a random point to move to
is not found.

Off the Random Location pin, use Set Blackboard Value as Vector with the Key promoted to a
variable called PatrolLocation.

Use another Set Blackboard Value as Vector node with the Value coming from Get Actor
Location.

Continuing from the previous step, connect as shown below with both nodes resulting in Finish
Execute marked Success.

If the enemy finds a random position to move to, it will be stored in the Blackboard as the location to move to. If a location is not found, it will use its current location and stay put before trying a new location. We still need to handle the edge case that the Controlled Pawn is not Enemy_Character.

Off the Cast Failed pin of the Cast node, use Finish Execute with Success disabled.


If the Controlled Pawn is not Enemy_Character, this Task will be marked as unsuccessful and
will be aborted.
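
As before, here is a hedged native sketch of the same Task, assuming a UBTTask_FindRandomPatrol class with instance-editable PatrolSpeed and PatrolRadius properties:

```cpp
#include "BehaviorTree/BTTaskNode.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "NavigationSystem.h"
#include "AIController.h"

EBTNodeResult::Type UBTTask_FindRandomPatrol::ExecuteTask(
    UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory)
{
    AAIController* Controller = OwnerComp.GetAIOwner();
    AEnemyCharacter* Enemy =
        Controller ? Cast<AEnemyCharacter>(Controller->GetPawn()) : nullptr;

    if (Enemy == nullptr)
    {
        return EBTNodeResult::Failed;        // not Enemy_Character: abort
    }

    Enemy->UpdateWalkSpeed(PatrolSpeed);     // e.g. 125.0 while patrolling

    const FVector Origin = Enemy->GetActorLocation();
    FNavLocation Result(Origin);             // fall back to staying put

    if (UNavigationSystemV1* NavSys =
            UNavigationSystemV1::GetCurrent(OwnerComp.GetWorld()))
    {
        // Find a reachable point within PatrolRadius (e.g. 1000) units.
        NavSys->GetRandomReachablePointInRadius(Origin, PatrolRadius, Result);
    }

    OwnerComp.GetBlackboardComponent()->SetValueAsVector(
        TEXT("PatrolLocation"), Result.Location);
    return EBTNodeResult::Succeeded;
}
```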

Our Find Random Patrol Task is complete. In the next step, we will learn more about Decorators
and how they can be used as conditionals as well as set up our AI Controller.

6 - AI Controller Setup
In this step, we do a little bit of work inside the AI Controller in preparation for the final step,
setting up a Decorator to determine which branch of our Behavior Tree to enter.

In the Content Browser, open the Enemy_Controller Blueprint and add an Event On Possess
node.

Off Event On Possess, add a Run Behavior Tree node with BTAsset set to BT_Enemy.

Run Behavior Tree is a contextual function call that targets AI Controller Class Blueprints and enables you to execute the assigned Behavior Tree asset.

In the Components window, click + Add Component and search for and add an AIPerception
Component.
The AI Perception Component is used to create a stimuli listener within the AI Perception
System and gathers registered stimuli (in our case, we can use Sight) that you can respond to.
This will give us the ability to determine when the AI actually sees the Player and can react
accordingly.

In the Details panel for the AIPerception Component, add an AI Sight config and enable Detect
Neutrals.

The Detection by Affiliation properties enable you to set up team-based AI that fight alongside teammates of the same affiliation and attack members of the opposing affiliation. Actors by default are not assigned an affiliation and are considered neutral.

Currently, you cannot assign affiliation through Blueprint, therefore we are enabling the Detect
Neutral flag in order to detect the Player. As an alternative, we are going to use Actor Tagging to
determine which Character is the Player and force AI Character(s) to only chase Actors tagged as
Player.
In the Events section for AIPerception, click the + sign next to On Target Perception Updated.

Off On Target Perception Updated in the graph, add an Actor Has Tag node with Tag set to
Player.

Off the Stimulus pin, add a Break AIStimulus node.

Add a Branch node with the Condition shown below.

Here we are checking if the Actor was successfully Sensed and if that Actor has the Tag of
Player.
You can select the Break AIStimulus node and in the Details panel use Hide Unconnected Pins
to hide all pins that are not connected so your graph looks similar to the one above.

Off the False of the Branch, use Set Timer by Event with Time set to 4.0.

Right-click and promote Time to a variable and call it Line Of Sight Timer.


This Variable and the value assigned will determine how long before the AI gives up chasing the Player, at which point the attached Event will execute.

Right-click on the Return Value of Set Timer by Event and promote it to a Variable called
EnemyTimer.


This stores a reference to the Timer by way of a Handle. This Handle can be called upon through script to invalidate itself and clear any associated Events (preventing the associated Event from being executed). We can use this later in the event the AI sees the Player again before the Line Of Sight Timer runs out, which stops the AI from losing sight of the Player and giving up the chase.
Create a Custom Event and call it StartEnemyTimer and connect it to the Event pin of Set Timer
by Event.

Right-click, then under Variables > AI, add a Get Blackboard node.

Off Blackboard, use Set Value as Bool and Set Value as Object and connect as shown below.
This enables us to update the Blackboard Keys defined with new Values.

Right-click and promote both Key Names to Variables called HasLineOfSight and EnemyActor
respectively.

Compile the Blueprint and set the Default Values for both Key Names to HasLineOfSight and
EnemyActor respectively.

Off the True of the Branch, use Get EnemyTimer then Clear and Invalidate Timer by Handle.



When the AI sees the Player, it will clear the Line Of Sight Timer until it loses sight of the
Player again (where a new Line Of Sight Timer will start).

Copy and Paste the Blackboard node, Set Value as and Key Name nodes as shown.


On the Set Value as Bool node, enable the Bool Value and drag the Actor pin to the Object
Value as shown.


This sets the Blackboard Key Values for Has Line Of Sight to True and EnemyActor to the
Actor we perceived (which we have set up to only trigger if it is the Player).
Click Compile to compile the Blueprint, then close it.


The final graph should look similar to above.
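
For comparison, here is a condensed C++ sketch of the same controller logic, assuming an AEnemyController class with a UBehaviorTree* BTAsset property, an FTimerHandle EnemyTimer, and a float LineOfSightTimer defaulting to 4.0:

```cpp
#include "AIController.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "Perception/AIPerceptionComponent.h"

void AEnemyController::OnPossess(APawn* InPawn)
{
    Super::OnPossess(InPawn);
    RunBehaviorTree(BTAsset);  // Event On Possess -> Run Behavior Tree

    // Equivalent of binding On Target Perception Updated in the editor.
    PerceptionComponent->OnTargetPerceptionUpdated.AddDynamic(
        this, &AEnemyController::OnTargetPerceptionUpdated);
}

void AEnemyController::OnTargetPerceptionUpdated(AActor* Actor, FAIStimulus Stimulus)
{
    if (Stimulus.WasSuccessfullySensed() && Actor->ActorHasTag(TEXT("Player")))
    {
        // True branch: the Player is seen, so cancel the give-up timer.
        GetWorldTimerManager().ClearTimer(EnemyTimer);
        GetBlackboardComponent()->SetValueAsBool(TEXT("HasLineOfSight"), true);
        GetBlackboardComponent()->SetValueAsObject(TEXT("EnemyActor"), Actor);
    }
    else
    {
        // False branch: give up the chase after LineOfSightTimer seconds.
        GetWorldTimerManager().SetTimer(EnemyTimer, this,
            &AEnemyController::StartEnemyTimer, LineOfSightTimer);
    }
}

void AEnemyController::StartEnemyTimer()
{
    // Equivalent of the StartEnemyTimer Custom Event: clear the Keys.
    GetBlackboardComponent()->SetValueAsBool(TEXT("HasLineOfSight"), false);
    GetBlackboardComponent()->SetValueAsObject(TEXT("EnemyActor"), nullptr);
}
```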

7 - Decorator and Final Setup

In this final section, we adjust a few settings on the Player Character and Enemy Character
Blueprints. We also set up our Decorator in our Behavior Tree which will determine what branch
we can enter based on a specified condition.

Inside the Content Browser under Content > ThirdPersonBP > Blueprints, open the
ThirdPersonCharacter Blueprint.

In the Details panel, search for and add a Tag set to Player.

By adding this Tag of Player, the AI can now perceive and react to the Player.
Open up the Enemy_Character Blueprint inside your AI folder.

In the Details panel, search for Rotation and enable Use Controller Rotation Yaw.

This will cause the AI to rotate properly when the Rotate to Face BB Entry is called from our
Behavior Tree.

Don't see the Pawn options? You may need to click the Class Defaults button from the Toolbar
first.

Open up the BT_Enemy and right-click on Chase Player, then under Add Decorator..., select Blackboard.

When you right-click on a node in a Behavior Tree, you can add subnodes that provide
additional functionality:

Subnode | Description
Decorator | Also known as conditionals. These attach to another node and make decisions on whether or not a branch in the tree, or even a single node, can be executed.
Service | These attach to both Task and Composite nodes, and will execute at their defined frequency as long as their branch is being executed. These are often used to make checks and to update the Blackboard. These take the place of traditional Parallel nodes in other Behavior Tree systems.

We are going to use the Blackboard Decorator to determine the value of a Blackboard Key, which, when valid, is going to allow this branch to execute.

Select the Blackboard Based Condition that was added and set the following settings in the
Details panel.


Observer aborts to Both

Blackboard Key to HasLineOfSight

Node Name to Has Line of Sight?


Here we are stating that when the HasLineOfSight value Is Set (or is true), execute this Chase
Player branch. The Observer aborts setting of Both states that when the Blackboard Key we
assigned changes, abort our self (Chase Player) and any lower priority Tasks. This means, when
the value of HasLineOfSight changes and is not set, abort self (Chase Player), at which point the
next branch (Patrol) will execute. When the HasLineOfSight value becomes Is Set again, the observer will abort lower priority Tasks as well, enabling the Chase Player branch to be executed again.
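
Should you later move this to native code, a custom Decorator expresses the same condition in CalculateRawConditionValue; a minimal sketch (the built-in Blackboard Decorator used above already covers this case):

```cpp
#include "BehaviorTree/BTDecorator.h"
#include "BehaviorTree/BlackboardComponent.h"

bool UBTDecorator_HasLineOfSight::CalculateRawConditionValue(
    UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory) const
{
    // The Chase Player branch may execute only while the Key is set (true).
    const UBlackboardComponent* Blackboard = OwnerComp.GetBlackboardComponent();
    return Blackboard && Blackboard->GetValueAsBool(TEXT("HasLineOfSight"));
}
```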

Compile and close the Behavior Tree then Play in the Editor.

In addition to Behavior Trees which can be used to make decisions on which logic to execute,
and the Environmental Query System (EQS) used to retrieve information about the environment;
another tool you can use within the AI framework which provides sensory data for an AI is the
AI Perception System. This provides a way for Pawns to receive data from the environment,
such as where noises are coming from, if the AI was damaged by something, or if the AI sees
something. This is accomplished with the AI Perception Component that acts as a stimuli listener
and gathers registered Stimuli Sources.

When a stimuli source is registered, the event On Perception Updated (or On Target Perception
Updated for target selection) is called which you can use to fire off new Blueprint Script and (or)
update variables that are used to validate branches in a Behavior Tree.

AI Perception Component

The AI Perception Component is a type of Component that can be added to a Pawn's AIController Blueprint from the Components window and is used to define what senses to listen for, the parameters for those senses, and how to respond when a sense has been detected. You can also use several different functions to get information about what was sensed, what Actors were sensed, or even disable or enable a particular type of sense.

To add the AI Perception Component, click the +Add Component button in your Blueprint and
select AIPerception.
Once the AI Perception Component has been added, you can access its properties inside the
Details panel.

AI Perception Properties

In addition to the common properties available within the Details panel for the AI Perception
Component, you can add the type of Senses to perceive under the AI Perception and Senses
Config section. Depending on the type of Sense, different properties are available to adjust how
the Sense is perceived.

The Dominant Sense property can be used to assign a Sense that should take precedence over
other senses when determining a sensed Actor's location. This should be set to one of the senses
configured in your Senses Config section or set to None.
AI Damage

If you want your AI to react to damage events such as Event Any Damage, Event Point Damage,
or Event Radial Damage, you can use the AI Damage Sense Config. The Implementation
property (which defaults to the engine class AISense_Damage) can be used to determine how
damage events are handled, however you can create your own damage classes through C++
code.

Property | Description
Implementation | The AI Sense Class to use for this entry (defaults to AISense_Damage).
Debug Color | When using the AI Debugging tools, what color to draw the debug lines.
Max Age | Determines the duration in which the stimuli generated by this sense becomes forgotten (0 means never forgotten).
Starts Enabled | Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.

AI Hearing

The AI Hearing sense can be used to detect sounds generated by a Report Noise Event; for example, a projectile hits something and generates a sound, which can be registered with the AI Hearing sense.
Property | Description
Implementation | The AI Sense Class to use for this entry (defaults to AISense_Hearing).
Hearing Range | The distance in which this sense can be perceived by the AI Perception system.
LoS Hearing Range | This is used to display a different radius in the debugger for Hearing Range.
Detection by Affiliation | Determines if Enemies, Neutrals, or Friendlies can trigger this sense.
Debug Color | When using the AI Debugging tools, what color to draw the debug lines.
Max Age | Determines the duration in which the stimuli generated by this sense becomes forgotten (0 means never forgotten).
Starts Enabled | Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.
AI Prediction

This asks the Perception System to supply Requestor with PredictedActor's predicted location in
PredictionTime seconds.

Property | Description
Debug Color | When using the AI Debugging tools, what color to draw the debug lines.
Max Age | Determines the duration in which the stimuli generated by this sense becomes forgotten (0 means never forgotten).
Starts Enabled | Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.

AI Sight

The AI Sight config enables you to define parameters that allow an AI character to "see" things
in your Level. When an Actor enters the Sight Radius, the AI Perception System signals an
update and passes through the Actor that was seen (for example a Player enters the radius and is
perceived by the AI who has Sight Perception).
Property | Description
Implementation | The AI Sense Class to use for this entry (defaults to AISense_Sight).
Sight Radius | The max distance over which this sense can start perceiving.
Lose Sight Radius | The max distance at which a seen target is no longer perceived by the sight sense.
Peripheral Vision Half Angle Degrees | How far to the side the AI can see, in degrees. The value represents the angle measured in relation to the forward vector, not the whole range. You can use SetPeripheralVisionAngle in Blueprint to change the value at runtime.
Detection by Affiliation | Determines if Enemies, Neutrals, or Friendlies can trigger this sense. This property can be used to set up Sight perception for teams. Currently, Affiliation can only be defined in C++. For Blueprints, you can use the Detect Neutrals option to detect all Actors, then use Tags to filter out Actor types.
Auto Success Range from Last Seen Location | When greater than zero, the AI will always be able to see a target that has already been seen as long as it is within the range specified here.
Debug Color | When using the AI Debugging tools, what color to draw the debug lines.
Max Age | Determines the duration in which the stimuli generated by this sense becomes forgotten (0 means never forgotten).
Starts Enabled | Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.

AI Team

This notifies the Perception component owner that someone on the same team is close by (radius
is sent by the gameplay code which sends the event).

Property | Description
Debug Color | When using the AI Debugging tools, what color to draw the debug lines.
Max Age | Determines the duration in which the stimuli generated by this sense becomes forgotten (0 means never forgotten).
Starts Enabled | Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.

AI Touch

The AI Touch config setting gives you the ability to detect when the AI bumps into something or
something bumps into it. For example, in a stealth based game, you may want a Player to sneak
by an enemy AI without touching them. Using this Sense you can determine when the Player
touches the AI and can respond with different logic.

Property | Description
Debug Color | When using the AI Debugging tools, what color to draw the debug lines.
Max Age | Determines the duration in which the stimuli generated by this sense becomes forgotten (0 means never forgotten).
Starts Enabled | Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.

Perception Events

The Events section enables you to define what happens when the AI Perception System receives
an update or when the AI Perception Component is activated or deactivated.
Property | Description
On Perception Updated | This Event will fire when the Perception System receives an update and will return an array of Actors that signaled the update.
On Target Perception Updated | This Event will fire when the Perception System receives an update and will return the Actor that signaled the update. It also returns an AI Stimulus struct that can be broken down to retrieve additional information (see below).
On Component Activated | An Event that is fired when the AI Perception Component is activated.
On Component Deactivated | An Event that is fired when the AI Perception Component is deactivated.

The AI Stimulus struct contains the following properties:

Property | Description
Age | How long since the Stimulus occurred.
Expiration Age | How long before the Stimulus becomes invalid.
Strength | The weight defined in the Stimulus.
Stimulus Location | Where the Stimulus originated from.
Receiver Location | Where the Stimulus was registered by the AI Perception System.
Tag | Any Gameplay Tag associated with the Stimulus.
Successfully Sensed | Whether the Stimulus was sensed by the AI Perception System (returns True or False).

Perception Function Calls

The following functions can be called through Blueprint to get information from or affect the
Perception System.

Function | Description
Get Actors Perception | Retrieves whatever has been sensed about a given Actor and returns a Sensed Actor's Data structure.
Get Currently Perceived Actors | Returns all Actors that are being perceived based on a given Sense. If no Sense is specified, all Actors currently perceived in any way will be returned.
Get Known Perceived Actors | Returns any Actors that have been perceived (and not yet forgotten) based on a given Sense. If no Sense is specified, all Actors that have been perceived will be returned.
Get Perceived Hostile Actors | Returns the list of Hostile Actors (any hostile Actors that had a stimulus sensed which is not expired, or successfully sensed). This method can be overridden in Blueprint to return whatever Actors list the user wants.
Request Stimuli Listener Update | Manually forces the AI Perception System to update properties for the specified target stimuli listener.
Set Sense Enabled | Enables or disables the specified Sense Class. This only works if the given Sense has already been configured for the target component instance.

Stimuli Source

The AI Perception Stimuli Source Component gives the owning Actor a way to automatically
register itself as a stimuli source for the designated Sense(s) within the Perception System. An
example use case would be to have an AI character with an AI Perception Component set up to
perceive stimuli based on Sight. You could then use the Stimuli Source Component in an Actor
(such as an item pickup Actor) and register it as a stimuli for Sight (which would enable the AI
to "see" the Actor in the Level).

To add the AI Perception Stimuli Source Component, click the +Add Component button in your
Blueprint and select AIPerception Stimuli Source.

Once the AI Perception Stimuli Source Component has been added, you can access properties
for it inside the Details panel.

Stimuli Properties

In the Details panel for the AI Perception Stimuli Source Component, the following two options
are available for AI Perception:
Property | Description
Auto Register as Source | Whether to automatically register the stimuli for the specified sense with respect to the owning Actor.
Register as Source for Senses | An array of Senses to register as a source for. Click the + sign to add a Source, then click the drop-down and assign the desired Sense.

You can also assign any custom Senses that have been based on the AISense Class.

Stimuli Function Calls

The following functions can be called through Blueprint for the AI Perception Stimuli Source
Component:

Function | Description
Register for Sense | Registers the owning Actor as a stimuli source for the specified Sense class.
Register with Perception System | Registers the owning Actor as a stimuli source for Senses specified in the Register as Source for Senses property and through the Register for Sense function call. You do not need to call this function if the Auto Register as Source property is enabled.
Unregister from Perception System | Unregisters the owning Actor from being a source of Sense stimuli.
Unregister from Sense | Unregisters the stimuli for the specified sense with respect to the owning Actor.
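
The same registration can be done from C++; a sketch for a hypothetical pickup Actor that should be visible to the AI Sight sense:

```cpp
#include "Perception/AIPerceptionStimuliSourceComponent.h"
#include "Perception/AISense_Sight.h"

APickupActor::APickupActor() // hypothetical Actor class
{
    StimuliSource = CreateDefaultSubobject<UAIPerceptionStimuliSourceComponent>(
        TEXT("StimuliSource"));
}

void APickupActor::BeginPlay()
{
    Super::BeginPlay();
    // Equivalent of the Register for Sense call for the Sight sense.
    StimuliSource->RegisterForSense(UAISense_Sight::StaticClass());
    StimuliSource->RegisterWithPerceptionSystem();
}
```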

AI Perception Debugging

You can debug AI Perception using the AI Debugging tools by pressing the ' (apostrophe) key while your game is running, then pressing numpad key 4 to bring up the Perception information.

The Unreal Engine Navigation System allows artificial intelligence Agents to navigate the Level
using pathfinding.

The system generates a Navigation Mesh from the collision geometry in the Level and divides
the mesh into tiles. These tiles are then divided into polygons to form a graph that is used by
Agents when navigating to their destination. Each polygon is assigned a cost which Agents use
to determine the optimal path with the overall lowest cost.

The Navigation System includes a variety of components and settings that can modify the way
the Navigation Mesh is generated, such as the cost assigned to polygons. This, in turn, affects the
way Agents navigate through your Level. You can also connect areas of the Navigation Mesh
that are not contiguous, such as platforms and bridges.
The Navigation System includes three Generation Modes: Static, Dynamic, and Dynamic
Modifiers Only. These modes control the way the Navigation Mesh is generated in your project
and provide a variety of options to suit your needs.

The system also provides two methods of avoidance for Agents: Reciprocal Velocity Obstacles
(RVO), and the Detour Crowd Manager. These methods allow Agents to navigate around
dynamic obstacles and other Agents during gameplay.

In the following guides you will learn about the different components and settings of the
Navigation System and how you can use them to create interactive artificial intelligence Agents
for your project.
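
As a minimal illustration, once a Navigation Mesh has been generated, an Agent's AIController can request pathfinding with a single call; a sketch:

```cpp
#include "AIController.h"

void SendAgentTo(AAIController* Controller, const FVector& Destination)
{
    if (Controller != nullptr)
    {
        // Pathfinds across the Navigation Mesh along the lowest-cost route,
        // finishing within AcceptanceRadius units of the goal.
        Controller->MoveToLocation(Destination, /*AcceptanceRadius=*/50.0f);
    }
}
```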

Widget Blueprints

How to create a Widget Blueprint and Overview of the Widget Blueprint Interface.

First, create a Widget Blueprint, as shown below. This will enable you to start working with Unreal Motion Graphics (UMG).

To create a Widget Blueprint, click Add in the Content Browser, then select User Interface > Widget Blueprint. You can also right-click in the Content Browser instead of clicking the Add button.

You can rename or use the default name for the Widget Blueprint you created in the Content
Browser.

Double-click the created Widget Blueprint to open it in the Widget Blueprint Editor.

Widget Blueprint Editor

The Designer tab is open by default in the Widget Blueprint Editor. With the help of the available editor tools, you can customize the appearance of the UI. You can also get a visual preview of the in-game screen based on the layout you adjust.

Number | Window | Description
1 | Menu Bar | Contains the common menu options.
2 | Tool Bar | Contains a number of commonly used functions for the Blueprint Editor, such as Compile, Save, Browse, Play, and so on.
3 | Editor Mode | Switches the Blueprint Editor between Designer and Graph modes.
4 | Palette | Contains the list of widgets that you can drag into the Visual Designer window. Displays any class inheriting from UWidget.
5 | Hierarchy | Displays the structure of the User Widget. You can also drag widgets from the Palette panel into this panel.
6 | Visual Designer | The visual representation of the UI layout. You can manipulate the widgets you have dragged into the Visual Designer.
7 | Details | Displays the properties of the selected widget, which you can adjust via this panel.
8 | Animations | The animation track for UMG, which allows you to keyframe animations for your widgets.

By default, the Visual Designer window is at 1:1 scale. You can change the scale by holding Ctrl and using the Mouse Wheel.
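
Once a Widget Blueprint exists, it is typically instantiated and shown at runtime. A hedged C++ sketch (the class and property names here are hypothetical; the TSubclassOf property would be set to your Widget Blueprint):

```cpp
#include "Blueprint/UserWidget.h"

void AMyHUDController::ShowMenu() // hypothetical class and function
{
    if (MenuWidgetClass != nullptr) // TSubclassOf<UUserWidget> property
    {
        UUserWidget* Menu = CreateWidget<UUserWidget>(GetWorld(), MenuWidgetClass);
        if (Menu != nullptr)
        {
            Menu->AddToViewport(); // displays the UMG layout on screen
        }
    }
}
```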

The Graph tab of the Widget Blueprint Editor looks as follows.
