Understanding Game Mechanics
Game mechanics are rules and systems that dictate a game’s function and how players interact
with it. How a game works and how it affects the player are among the most important factors in
the design and creation of a video game.
Players who wonder what game mechanics are have likely experienced them without realizing it, as they are present in every video game we play.
Game design mechanics have changed throughout the decades, from the early fundamentals of the 1950s and 1960s to the more complicated works of today. The full timeline covers these developments in detail.
The history of gaming is extensive; check out who invented gaming and the full history of games
to learn more.
Pinpointing a definition of game mechanics can be tricky because of the many intricacies and factors involved in video games. Mechanics can affect gaming in many different ways, from visuals and controls to audio and story.
Games should allow players to interact with their systems, like moving a character, selecting
answers, or solving puzzles. These actions are usually controlled by buttons or commands.
Mechanics and their application will vary depending on the genre and the intended gameplay. In
the end, a game’s mechanics should seek to enhance the experience.
Quantity
Quantity refers to mechanics that are represented by a numerical value. They serve to inform the player.
Currency (Health, Mana, Gold, Shards, Skill points): Players earn these as a reward and can spend them on items for personal benefit.
Spatial
Spatial refers to video game mechanics that fill or affect physical space and influence how players interact with the game.
Tangible Items (Characters, Contact): These allow the game to respond to the player's actions or desires.
State
State refers to the mechanics of a game that change or apply additional rules. They serve to create an effect or reaction to the player's actions.
Action (Airborne, Swimming, Grounded, Zero Gravity): Environmental influences change how players move or interact within the game world.
Effect (Poison, Slow, Speed boost): Factors that take away resources or add new conditions to empower or weaken the player.
Stage (Dead, Victory, Game Lobby, Loading): Stages that begin, end, and progress a game in response to player input.
Action
Action refers to mechanics that force or influence change. They allow the player to interact with the game.
World (Health regen, Ability recharge, Running, Jumping, Dodging, Teleporting, Climb Ladder): Actions that take effect in direct response to player input to further interact with the world.
When you look through the games available in the current market, you can see common tropes
included in certain games, which is how games are classified into different genres.
First-Person Shooters
First-person shooters require aiming weapons, eliminating foes, and managing ammo.
Aiming and Shooting: Players aim their weapons and shoot at enemies to damage them.
Cover System: Players can hide behind objects to avoid enemy fire and recover health.
Ammo Management: Players must manage a limited supply of ammunition through reloads.
Role-Playing Games (RPGs)
Leveling Up: Characters gain experience points (XP) to improve stats and abilities.
Skill Trees: Players unlock and upgrade abilities through a branching system.
Quest Systems: Players complete the main story or side quests to earn rewards.
Turn-Based Combat: Players and enemies take turns making moves in combat.
Examples include Fallout, The Witcher, Elden Ring, and Cyberpunk 2077.
Platformers
Platformers feature obstacles players must overcome through the movement of a character.
Jumping and Movement: Precision jumps across platforms are necessary to progress.
Checkpoints: Players restart from certain points when they die or fail.
Examples include Super Mario, Sonic, Crash Bandicoot, and Shovel Knight.
Fighting Games
Combo Systems: Precisely executing combo strings to maximize damage.
Special Moves: Powerful, character-specific moves that require precise inputs to execute.
Stamina or Special Meter: A gauge that fills over time to spend on special attacks.
Rounds/Matches: The first player to win a certain number of rounds claims victory.
Examples include Street Fighter, Tekken, Mortal Kombat, and Guilty Gear.
Check out our list of the best fighting games to find out which titles to play in 2024.
You can improve the performance of your game mechanics in four simple ways.
Use one of the best VPNs to ensure safe and secure online play.
Enable Game Mode on your PC to push the performance of your system for higher frame rates.
Use a wired connection instead of Wi-Fi for the most consistent online performance.
Upgrade RAM, SSD, or graphics card for a smoother gaming experience.
We also recommend researching guides online and practicing in any featured training/practice mode to gain a better understanding of how games function.
Gameplay mechanics make games more advanced and exciting to play, but developers need to tread carefully: some mechanics can elevate the experience, while others can hinder it.
Game mechanics are the rules that shape a video game and dictate how players interact with its
systems. These mechanics are implemented in a variety of ways in order to create unique
experiences to stand out from competitors and incentivise players to engage with the game.
A game mechanics definition is simply too broad to narrow down to a single explanation, but this article breaks down the core fundamentals that affect players the most, from how players navigate games to how they progress through gameplay.
Introduction to AI
Creating Artificial Intelligence (AI) for characters or other entities in your projects in Unreal Engine 4 (UE4) is accomplished through multiple systems working together: a Behavior Tree branches between different decisions or actions, the Environment Query System (EQS) runs queries to gather information about the environment, and the AI Perception system retrieves sensory information such as sight, sound, or damage events. All of these systems play a key role in creating believable AI in your projects. Additionally, all of these tools can be debugged with the AI Debugging tools, giving you insight into what the AI is thinking or doing at any given moment.
When crafting AI in UE4 and using each of these systems, a good way to think about building
your AI is that the decision making process is handled by Behavior Trees, stimuli from the
environment (such as sensory information) is sent to Behavior Trees from the AI Perception
system, and queries about the environment itself are handled through EQS.
Behavior Trees assets in Unreal Engine 4 (UE4) can be used to create artificial intelligence (AI)
for non-player characters in your projects. While the Behavior Tree asset is used to execute
branches containing logic, to determine which branches should be executed, the Behavior Tree
relies on another asset called a Blackboard which serves as the "brain" for a Behavior Tree.
The Blackboard contains several user-defined Keys that hold information used by the Behavior Tree to make decisions. For example, you could have a Boolean Key called Is Light On which the Behavior Tree can reference to see if the value has changed. If the value is true, it could execute a branch that causes a roach to flee. If it is false, it could execute a different branch where the roach moves randomly around the environment. Behavior Trees can be as simple as the roach example given, or as complex as simulating another human player in a multiplayer game that finds cover, shoots at players, and looks for item pickups.
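The roach example above can be sketched outside the engine. This is a minimal, illustrative Python sketch of a Blackboard driving a branch decision — UE4's actual implementation is Blueprint/C++, and all names here are invented for illustration:

```python
class Blackboard:
    """Holds user-defined Keys the Behavior Tree reads to make decisions."""
    def __init__(self):
        self._keys = {}

    def set_value(self, key, value):
        self._keys[key] = value

    def get_value(self, key, default=None):
        return self._keys.get(key, default)

def roach_behavior(blackboard):
    """Pick a branch based on the 'IsLightOn' key, as in the roach example."""
    if blackboard.get_value("IsLightOn", False):
        return "flee"    # light is on: execute the flee branch
    return "wander"      # light is off: wander randomly around the environment

bb = Blackboard()
bb.set_value("IsLightOn", True)
print(roach_behavior(bb))  # flee
bb.set_value("IsLightOn", False)
print(roach_behavior(bb))  # wander
```

The key idea is that the tree itself only reads Keys; other systems (such as AI Perception, later in this guide) write them.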
If you are new to Behavior Trees in UE4, it is recommended that you go through the Behavior
Tree Quick Start guide to quickly get an AI character up and running. If you are already familiar
with the concept of Behavior Trees from other applications, you may want to check out the
Essentials section which contains an overview of how Behavior Trees work in UE4, a User
Guide to working with Behavior Trees and Blackboards, as well as reference pages for the
different types of nodes available within Behavior Trees.
By the end of this guide, you will have an understanding of the following systems:
AI Controllers
Blackboards
Behavior Trees
1 - Required Project Setup
In this first step, we set up our project with the assets we'll need for our AI character to get around the environment.
For this guide we are using a new Blueprint Third Person Template project.
Expand the Sources panel, then right-click on the ThirdPersonBP folder and create a New Folder
called AI.
In the ThirdPersonBP > Blueprints folder, drag the ThirdPersonCharacter onto the AI folder and
select Copy Here.
In the AI folder, create a new Blueprint Class based on the AIController class.
Name the AIController Blueprint Enemy_Controller and the ThirdPersonCharacter Blueprint
Enemy_Character.
Open Enemy_Character, then delete all the script from the graph.
Select the Character Movement component then set Max Walk Speed in the Details panel to
120.0.
This reduces the speed at which our AI Character moves around the environment when patrolling and not chasing the Player.
Select Class Defaults from the Toolbar, then in the Details panel, assign the Enemy_Controller
as the AI Controller Class.
We are going to place our AI in the world. If you spawn the AI after the world is loaded, change
the Auto Possess AI setting to Spawned.
From the Content Browser, drag the Enemy_Character into the Level.
From the Place Actors panel, drag a Nav Mesh Bounds Volume into the Level.
With the Nav Mesh Bounds Volume selected, press R and scale the volume to encapsulate the
entire Level.
This will generate a Navigation Mesh that enables our AI character to move around the
environment. You can press the P key to toggle the display of the Nav Mesh in the Viewport
(areas that are green indicate possible navigation locations).
During gameplay, you can use the Show Navigation console command to toggle the display of
the Nav Mesh on/off.
Our project setup is complete. In the next step, we will set up our Blackboard asset.
2 - Blackboard Setup
In this step, we create our Blackboard asset, which is essentially the brain of our AI. Anything
we want our AI to know about will have a Blackboard Key that we can reference. We’ll create
keys for keeping track of the Player, whether or not the AI has line of sight to the Player, and a
location where the AI can move to when it is not chasing the Player.
In the Content Browser, click Add New and under Artificial Intelligence, select Blackboard and
call it BB_Enemy.
Inside the BB_Enemy Blackboard, click the New Key button and select Object.
The Blackboard asset consists of two panels: the Blackboard, which enables you to add and keep
track of your Blackboard Keys (variables to monitor), and Blackboard Details, which enables
you to name and specify the type of Keys.
For the Object key, enter EnemyActor as the Entry Name and Actor as the Base Class.
Add another Key with the Key Type set to Bool called HasLineOfSight.
This will be used to keep track of whether or not the AI has line of sight to the Player.
Add another Key, with the Key Type set to Vector called PatrolLocation.
This will be used to keep track of a location in the Level where the AI can move when it is not
chasing the Player.
Our Blackboard is set up with the things we need to track. In the next step, we will lay out our
Behavior Tree.
3 - Behavior Tree Layout
In this step, we will lay out the flow of our Behavior Tree and the states that we want our AI to enter. Laying out your Behavior Tree with the states you anticipate your AI could be in as a visual flow will give you an idea of what type of logic and rules you will need to create to enter those states.
In the Content Browser, click Add New and under Artificial Intelligence, select Behavior Tree
and call it BT_Enemy.
Naming conventions may vary, but it's generally good practice to add an acronym of the asset
type to the name.
Open the BT_Enemy and assign the BB_Enemy as the Blackboard Asset.
If you do not see the Blackboard Keys we created, clear the Blackboard Asset by clicking the yellow arrow, then re-assign the BB_Enemy to refresh the keys.
The Behavior Tree consists of three panels: the Behavior Tree graph, where you visually lay out the branches and nodes that define your behaviors; the Details panel, where properties of your nodes can be defined; and the Blackboard, which shows your Blackboard Keys and their current values when the game is running and is useful for debugging.
In the graph, left-click and drag off the Root and add a Selector node.
Composites are a form of flow control and determine how the child branches that are connected to them execute.

Selector: Executes branches from left to right and is typically used to select between subtrees. Selectors stop moving between subtrees when they find a subtree they successfully execute. For example, if the AI is successfully chasing the Player, it will stay in that branch until its execution is finished, then go up to the Selector's parent composite to continue the decision flow.

Sequence: Executes branches from left to right and is more commonly used to execute a series of children in order. Unlike Selectors, the Sequence continues to execute its children until it reaches a node that fails. For example, if we had a Sequence to move to the Player, check if they are in range, then rotate and attack, and the "check if they are in range" portion failed, the rotate and attack actions would not be performed.

Simple Parallel: Has two "connections". The first one is the Main Task, and it can only be assigned a Task node (meaning no Composites). The second connection (the Background Branch) is the activity that's supposed to be executed while the Main Task is still running. Depending on the properties, the Simple Parallel may finish as soon as the Main Task finishes, or wait for the Background Branch to finish as well.
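The Selector and Sequence semantics above can be captured in a few lines. The following is an illustrative Python sketch (not the UE4 API): children are callables returning True for success and False for failure.

```python
def selector(children):
    """Succeeds at the first child that succeeds; fails only if all fail."""
    def run():
        return any(child() for child in children)
    return run

def sequence(children):
    """Runs children left to right; fails at the first child that fails."""
    def run():
        return all(child() for child in children)
    return run

succeed = lambda: True
fail = lambda: False

# A Selector tries a "chase" branch first and falls back to "patrol":
root = selector([
    sequence([fail, succeed]),     # chase branch fails at its first step
    sequence([succeed, succeed]),  # patrol branch runs to completion
])
print(root())  # True — the patrol branch succeeded after chase failed
```

Note how the failing Sequence stops at its first failing child, and the Selector then moves right to the next subtree — the same left-to-right, fall-back flow the Behavior Tree graph expresses visually.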
For the Selector node, in the Details panel, change the Node Name to AI Root.
Renaming nodes in the graph is a good way to easily identify, from a high-level, what the node
accomplishes. In this example, we name it AI Root as this is the real "Root" of our Behavior
Tree which will switch between our child branches. The default Root node that is automatically
added when creating a Behavior Tree is used to configure properties of the Behavior Tree as well
as assign the Blackboard asset it's using.
Left-click and drag off AI Root and add a Sequence node named Chase Player.
We use a Sequence node here as we plan to tell the AI to do a sequence of actions: rotate towards
the player, change their movement speed, then move to and chase the Player.
Left-click and drag off the AI Root node and add a Sequence node called Patrol.
For our AI, we will use the Sequence node to find a random location in the map, move to that
location, then wait there for a period of time before repeating the process of finding a new
location to move to.
You may also notice the numbers in the upper-right corner of the nodes:
This indicates the order of operation. Behavior Trees execute from left-to-right and top-down, so
the arrangement of your nodes is important. The most important actions for the AI should usually
be placed to the left, while the less important actions (or fall back behaviors) are placed to the
right. Child branches execute in the same fashion and should any child branch fail, the entire
branch will stop executing and will fail back up the tree. For example, if Chase Player failed, it
would return back up to AI Root before moving on to Patrol.
Drag off AI Root then add a Wait Task to the right of Patrol with Wait Time set to 1.0.
You will notice that this node is purple, indicating that it is a Task node. Task nodes are the actions that you want the Behavior Tree to perform. The Wait Task acts as a catch-all in the event that the Behavior Tree fails both Chase Player and Patrol.
Drag off the Chase Player and add a Rotate to Face BB Entry node.
This particular Task enables you to designate a Blackboard Entry that you want to rotate and
face, in our case the Enemy Actor (Player). Once you add the node, if you look in the Details
panel, the Blackboard Key will automatically be set to EnemyActor because it filters for the
Actor blackboard variable and it is the first one in the list. You can adjust the Precision option if
you want to tune the success condition range as well as change the Node Name.
In addition to using the built-in Tasks, you can create and assign your own custom Tasks that have additional logic you can customize and define. Create a new Task here and rename it BTT_ChasePlayer; it will be used to change the movement speed of the AI so that it runs after the Player. When you create a new Task, a new Blueprint will automatically be created and opened.
It’s a good practice to immediately rename any newly created Tasks, Decorators or Services
when you create them. Proper naming conventions would be to prefix the name of the asset with
the type of asset you create such as: BTT for Behavior Tree Tasks, BTD for Behavior Tree
Decorators, or BTS for Behavior Tree Services.
Inside the BT_Enemy, add the BTT_ChasePlayer Task followed by a Move To.
Our new Task has no logic in it yet, but we will come back and add the logic for changing the
movement speed of our AI character after which, the AI will Move To the EnemyActor (Player).
Create a new Task and rename it BTT_FindRandomPatrol, then connect it to Patrol.
This will instruct the AI to Move To the PatrolLocation which will be set inside the
BTT_FindRandomPatrol Task.
Add a Wait Task following the Move To, with Wait Time set to 4.0 and Random Deviation set to 1.0.
This instructs the AI to wait at PatrolLocation for 3-5 seconds (Random Deviation adds or subtracts up to a second from Wait Time).
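The resulting wait interval can be checked with a one-liner, assuming the deviation is applied uniformly (an assumption for illustration):

```python
import random

def patrol_wait(wait_time=4.0, random_deviation=1.0):
    # Wait Time 4.0 with Random Deviation 1.0 yields a wait of 3-5 seconds.
    return random.uniform(wait_time - random_deviation,
                          wait_time + random_deviation)

print(3.0 <= patrol_wait() <= 5.0)  # True
```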
The framework for our Behavior Tree is complete. In the next step, we will add the logic for
changing the movement speed of the AI, finding a random location to navigate to when the AI is
patrolling, and the logic for determining when the AI should be chasing the player or patrolling.
4 - Task Setup - Chase Player
In this step, we set up our Chase Player Task to change the movement speed when chasing the Player.
You should always select the AI version of Event Receive Execute, Event Receive Abort, and
Event Receive Tick if the Agent is an AI Controller. If both generic and AI event versions are
implemented, only the more suitable one will be called, meaning the AI version is called for AI,
and the generic one otherwise.
Inside the Content Browser, open the Enemy_Character Blueprint and add a Function called
Update Walk Speed.
This function will be called from our Behavior Tree and will be used to change the AI's
movement speed.
Technically, we could access the Character Movement Component off the Cast node in our Chase Player Task and adjust the movement speed from within the Task; however, having the Behavior Tree directly change properties of sub-objects is not a recommended best practice. Instead, we will have the Behavior Tree call a function inside the Character, which will then make the modifications we need.
In the Details panel for the Update Walk Speed function, add a Float input called
NewWalkSpeed.
Drag the CharacterMovement Component off the Components tab, then use Set Max Walk
Speed and connect as shown below.
When we call this function from the Behavior Tree, we can pass through a value to be used as
the new speed.
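The pattern described above — the Task calling a function on the Character rather than reaching into its components — can be sketched in plain Python. All class and function names here are illustrative, not the UE4 API:

```python
class CharacterMovement:
    def __init__(self, max_walk_speed=120.0):
        self.max_walk_speed = max_walk_speed

class EnemyCharacter:
    def __init__(self):
        self.character_movement = CharacterMovement()

    def update_walk_speed(self, new_walk_speed):
        # The character owns changes to its own sub-objects.
        self.character_movement.max_walk_speed = new_walk_speed

class ChasePlayerTask:
    def __init__(self, chase_speed=500.0):
        # Instance-editable: tunable from the Behavior Tree, not hard-coded.
        self.chase_speed = chase_speed

    def execute(self, pawn):
        if not isinstance(pawn, EnemyCharacter):
            return False  # cast failed: finish the Task unsuccessfully
        pawn.update_walk_speed(self.chase_speed)
        return True       # finish the Task successfully

enemy = EnemyCharacter()
task = ChasePlayerTask(chase_speed=500.0)
print(task.execute(enemy), enemy.character_movement.max_walk_speed)  # True 500.0
```

The cast-failure branch mirrors the two Finish Execute nodes added in the next step: success when the pawn really is the enemy character, failure otherwise.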
Back inside the BTT_ChasePlayer Task, from the As Enemy Character node, call Update Walk
Speed set to 500.0 and connect as shown.
Don't see the Update Walk Speed function you created? You may need to Compile the
Enemy_Character Blueprint before trying to add it in the Chase Player Task.
Following Update Walk Speed, add two Finish Execute nodes and connect as shown below.
Here we mark the Task as successfully finishing when we successfully cast to the
Enemy_Character. In the event that the controlled Pawn is not Enemy_Character, we need to
handle this case so we mark the Task as unsuccessful which will abort the Task.
Right-click the New Walk Speed pin, then promote it to a variable and call it ChaseSpeed.
For ChaseSpeed, make sure to enable Instance Editable.
By promoting this to an Instance Editable variable, the value of Max Walk Speed can be set from
outside of this Blueprint and will be available as a property inside our Behavior Tree.
We can now easily change the value of Chase Speed that is being sent to the Enemy_Character
Blueprint enabling us to tune and tweak how fast our AI chases the Player.
Our Chase Player Task is complete. In the next step, we will set up the Find Random Patrol Task logic to get a random location for the AI to move to.
5 - Task Setup - Find Random Patrol
In this step, we set up our Find Random Patrol Task so our AI moves to a random location when
it is not chasing the Player.
Implementing a Blueprint Behavior Tree Task is a clever way to quickly iterate, but, if
performance is a concern, you may decide to move to a native Behavior Tree Task.
Off As Enemy Character, call Update Walk Speed and promote New Walk Speed to a variable
called Patrol Speed with the following settings:
Off Controlled Pawn, Get Actor Location then GetRandomReachablePointInRadius with the
Return Value connected to a Branch.
Here we are finding a random location within 1000 units of the enemy character's current
location. We are also using a Branch node to handle the edge case that a random point to move to
is not found.
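The "find a random reachable point, or stay put" logic can be sketched as follows. This is an illustrative Python stand-in for the navigation query, not the UE4 API; the reachability test and retry count are assumptions:

```python
import math
import random

def random_point_in_radius(origin, radius, is_reachable):
    """Return (found, point): a random point within `radius` of `origin`
    that passes `is_reachable`, or (False, origin) if none is found."""
    for _ in range(10):  # a few attempts, like a nav query retrying
        angle = random.uniform(0.0, 2.0 * math.pi)
        dist = random.uniform(0.0, radius)
        candidate = (origin[0] + dist * math.cos(angle),
                     origin[1] + dist * math.sin(angle))
        if is_reachable(candidate):
            return True, candidate
    return False, origin  # edge case: stay put and try again later

found, patrol_location = random_point_in_radius(
    origin=(0.0, 0.0), radius=1000.0,
    is_reachable=lambda p: True)  # assume the whole area is on the Nav Mesh
print(found)  # True
```

Returning the origin on failure matches the Blueprint's fallback of storing the AI's current location so it stays put before trying a new location.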
Off the Random Location pin, use Set Blackboard Value as Vector with the Key promoted to a
variable called PatrolLocation.
Use another Set Blackboard Value as Vector node with the Value coming from Get Actor
Location.
Continuing from the previous step, connect as shown below with both nodes resulting in Finish
Execute marked Success.
If the enemy finds a random position to move to, it will be stored in the Blackboard as the location to move to. If a location is not found, it will use its current location and stay put before trying a new location. We still need to handle the edge case that the Controlled Pawn is not Enemy_Character.
Off the Cast Failed pin of the Cast node, use Finish Execute with Success disabled.
If the Controlled Pawn is not Enemy_Character, this Task will be marked as unsuccessful and
will be aborted.
Our Find Random Patrol Task is complete. In the next step, we will learn more about Decorators
and how they can be used as conditionals as well as set up our AI Controller.
6 - AI Controller Setup
In this step, we do a little bit of work inside the AI Controller in preparation for the final step,
setting up a Decorator to determine which branch of our Behavior Tree to enter.
In the Content Browser, open the Enemy_Controller Blueprint and add an Event On Possess
node.
Off Event On Possess, add a Run Behavior Tree node with BTAsset set to BT_Enemy.
Run Behavior Tree is a contextual function call that targets AI Controller Class Blueprints and enables you to execute the assigned Behavior Tree asset.
In the Components window, click + Add Component and search for and add an AIPerception
Component.
The AI Perception Component is used to create a stimuli listener within the AI Perception
System and gathers registered stimuli (in our case, we can use Sight) that you can respond to.
This will give us the ability to determine when the AI actually sees the Player and can react
accordingly.
In the Details panel for the AIPerception Component, add an AI Sight config and enable Detect
Neutrals.
The Detection by Affiliation properties enable you to set up team based AI that fight alongside
teammates of the same affiliation and attack members of the opposing affiliation. Actors by
default are not assigned an affiliation and are considered as neutral.
Currently, you cannot assign affiliation through Blueprint, therefore we are enabling the Detect
Neutral flag in order to detect the Player. As an alternative, we are going to use Actor Tagging to
determine which Character is the Player and force AI Character(s) to only chase Actors tagged as
Player.
In the Events section for AIPerception, click the + sign next to On Target Perception Updated.
Off On Target Perception Updated in the graph, add an Actor Has Tag node with Tag set to
Player.
Here we are checking if the Actor was successfully Sensed and if that Actor has the Tag of
Player.
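The tag-filtered perception update described above can be sketched like this. It is an illustrative Python sketch, not the UE4 API, and it simplifies the lost-sight case (the full setup starts a give-up timer rather than clearing the key immediately):

```python
class Actor:
    def __init__(self, tags=()):
        self.tags = set(tags)

    def has_tag(self, tag):
        return tag in self.tags

def on_target_perception_updated(actor, successfully_sensed, blackboard):
    if not actor.has_tag("Player"):
        return  # ignore anything that is not tagged as the Player
    if successfully_sensed:
        blackboard["HasLineOfSight"] = True
        blackboard["EnemyActor"] = actor
    else:
        # Simplified: the guide instead starts a timer before giving up.
        blackboard["HasLineOfSight"] = False

bb = {}
player = Actor(tags=["Player"])
on_target_perception_updated(player, True, bb)
print(bb["HasLineOfSight"])  # True
```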
You can select the Break AIStimulus node and in the Details panel use Hide Unconnected Pins
to hide all pins that are not connected so your graph looks similar to the one above.
Off the False of the Branch, use Set Timer by Event with Time set to 4.0.
Right-click and promote Time to a variable and call it Line Of Sight Timer.
This Variable and the value assigned will determine how long before the AI gives up chasing the
Player at which point, the attached Event will execute.
Right-click on the Return Value of Set Timer by Event and promote it to a Variable called
EnemyTimer.
This stores a reference to the Timer by way of a Handle. This Handle can be called upon through script to invalidate itself and clear any associated Events (preventing the associated Event from being executed). We can use this later, in the event the AI sees the player again before the Line Of Sight Timer runs out, which stops the AI from losing sight of the player and giving up the chase.
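The timer-handle pattern — start a give-up timer on losing sight, clear it via its handle on regaining sight — can be sketched as follows. This is illustrative Python, not the UE4 timer API:

```python
class TimerHandle:
    def __init__(self, event):
        self.event = event
        self.valid = True

    def clear_and_invalidate(self):
        self.valid = False  # the pending event will not be executed

class TimerManager:
    def __init__(self):
        self.pending = []

    def set_timer_by_event(self, time, event):
        handle = TimerHandle(event)
        self.pending.append((time, handle))
        return handle

    def advance(self, seconds):
        # Fire any still-valid timers whose time has elapsed.
        for time, handle in list(self.pending):
            if seconds >= time and handle.valid:
                handle.event()
                self.pending.remove((time, handle))

gave_up = []
timers = TimerManager()
enemy_timer = timers.set_timer_by_event(4.0, lambda: gave_up.append(True))
enemy_timer.clear_and_invalidate()  # the AI saw the player again in time
timers.advance(5.0)
print(gave_up)  # [] — the give-up event never ran
```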
Create a Custom Event and call it StartEnemyTimer and connect it to the Event pin of Set Timer
by Event.
Right-click, then under Variables > AI, add a Get Blackboard node.
Off Blackboard, use Set Value as Bool and Set Value as Object and connect as shown below.
This enables us to update the Blackboard Keys defined with new Values.
Right-click and promote both Key Names to Variables called HasLineOfSight and EnemyActor
respectively.
Compile the Blueprint and set the Default Values for both Key Names to HasLineOfSight and
EnemyActor respectively.
Off the True of the Branch, use Get EnemyTimer then Clear and Invalidate Timer by Handle.
Copy and paste the Blackboard, Set Value as, and Key Name nodes as shown.
On the Set Value as Bool node, enable the Bool Value and drag the Actor pin to the Object
Value as shown.
This sets the Blackboard Key Values for Has Line Of Sight to True and EnemyActor to the
Actor we perceived (which we have set up to only trigger if it is the Player).
Click Compile to compile then close the Blueprint.
7 - Final Setup
In this final section, we adjust a few settings on the Player Character and Enemy Character Blueprints. We also set up our Decorator in our Behavior Tree, which will determine which branch we can enter based on a specified condition.
Inside the Content Browser under Content > ThirdPersonBP > Blueprints, open the
ThirdPersonCharacter Blueprint.
In the Details panel, search for and add a Tag set to Player.
By adding this Tag of Player, the AI can now perceive and react to the Player.
Open up the Enemy_Character Blueprint inside your AI folder.
In the Details panel, search for Rotation and enable Use Controller Rotation Yaw.
This will cause the AI to rotate properly when the Rotate to Face BB Entry is called from our
Behavior Tree.
Don't see the Pawn options? You may need to click the Class Defaults button from the Toolbar
first.
Open up the BT_Enemy and right-click on Chase Player, then under Add Decorator..., select Blackboard.
When you right-click on a node in a Behavior Tree, you can add subnodes that provide
additional functionality:
Decorator: Also known as conditionals. These attach to another node and make decisions on whether or not a branch in the tree, or even a single node, can be executed.

Service: These attach to both Task and Composite nodes, and will execute at their defined frequency as long as their branch is being executed. They are often used to make checks and to update the Blackboard. They take the place of traditional Parallel nodes in other Behavior Tree systems.
We are going to use the Blackboard Decorator to determine the value of a Blackboard Key, which, when valid, will allow this branch to execute.
Select the Blackboard Based Condition that was added and set the following settings in the
Details panel.
Compile and close the Behavior Tree then Play in the Editor.
In addition to Behavior Trees, which can be used to make decisions on which logic to execute, and the Environment Query System (EQS), used to retrieve information about the environment, another tool you can use within the AI framework is the AI Perception System, which provides sensory data for an AI. It gives Pawns a way to receive data from the environment, such as where noises are coming from, whether the AI was damaged by something, or whether the AI sees something. This is accomplished with the AI Perception Component, which acts as a stimuli listener and gathers registered Stimuli Sources.
When a stimuli source is registered, the event On Perception Updated (or On Target Perception Updated for target selection) is called, which you can use to fire off new Blueprint script and/or update variables that are used to validate branches in a Behavior Tree.
AI Perception Component
To add the AI Perception Component, click the +Add Component button in your Blueprint and
select AIPerception.
Once the AI Perception Component has been added, you can access its properties inside the
Details panel.
AI Perception Properties
In addition to the common properties available in the Details panel for the AI Perception Component, you can add the types of Senses to perceive under the AI Perception and Senses Config section. Depending on the type of Sense, different properties are available to adjust how the Sense is perceived.
The Dominant Sense property can be used to assign a Sense that should take precedence over
other senses when determining a sensed Actor's location. This should be set to one of the senses
configured in your Senses Config section or set to None.
AI Damage
If you want your AI to react to damage events such as Event Any Damage, Event Point Damage,
or Event Radial Damage, you can use the AI Damage Sense Config. The Implementation
property (which defaults to the engine class AISense_Damage) can be used to determine how
damage events are handled, however you can create your own damage classes through C++
code.
Implementation: The AI Sense Class to use for this entry (defaults to AISense_Damage).
Debug Color: When using the AI Debugging tools, what color to draw the debug lines.
Max Age: Determines the duration in which the stimuli generated by this sense becomes forgotten (0 means never forgotten).
Starts Enabled: Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.
AI Hearing
The AI Hearing sense can be used to detect sounds generated by a Report Noise Event. For example, a projectile hits something and generates a sound, which can be registered with the AI Hearing sense.
Implementation: The AI Sense Class to use for this entry (defaults to AISense_Hearing).
LoS Hearing Range: This is used to display a different radius in the debugger for Hearing Range.
Detection by Affiliation: Determines if Enemies, Neutrals, or Friendlies can trigger this sense.
Debug Color: When using the AI Debugging tools, what color to draw the debug lines.
AI Prediction
This asks the Perception System to supply the Requestor with the PredictedActor's predicted location in PredictionTime seconds.
Debug Color: When using the AI Debugging tools, what color to draw the debug lines.
Max Age: Determines the duration in which the stimuli generated by this sense becomes forgotten (0 means never forgotten).
Starts Enabled: Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.
AI Sight
The AI Sight config enables you to define parameters that allow an AI character to "see" things in your Level. When an Actor enters the Sight Radius, the AI Perception System signals an update and passes through the Actor that was seen (for example, a Player enters the radius and is perceived by an AI that has Sight Perception).
Implementation: The AI Sense Class to use for this entry (defaults to AISense_Sight).
Sight Radius: The max distance at which this sense can start perceiving.
Peripheral Vision Half Angle Degrees: How far to the side the AI can see, in degrees. The value represents the angle measured in relation to the forward vector, not the whole range. You can use SetPeripheralVisionAngle in Blueprint to change the value at runtime.
Auto Success Range from Last Seen Location: When greater than zero, the AI will always be able to see a target that has already been seen, as long as it is within the range specified here.
Debug Color: When using the AI Debugging tools, what color to draw the debug lines.
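The same parameters can be set from C++ when configuring the sense on an AIController. The sketch below assumes a hypothetical AMyAIController class with a PerceptionComp member declared in its header; the property and function names (UAISenseConfig_Sight, ConfigureSense, SetDominantSense) are the engine's:

```cpp
// Illustrative sketch: configuring the Sight sense in an AIController constructor.
#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISenseConfig_Sight.h"

AMyAIController::AMyAIController()
{
    PerceptionComp = CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("Perception"));

    UAISenseConfig_Sight* SightConfig =
        CreateDefaultSubobject<UAISenseConfig_Sight>(TEXT("SightConfig"));
    SightConfig->SightRadius = 1500.f;                 // start perceiving within 15 m
    SightConfig->LoseSightRadius = 1800.f;             // stop perceiving beyond 18 m
    SightConfig->PeripheralVisionAngleDegrees = 70.f;  // half angle from the forward vector
    SightConfig->DetectionByAffiliation.bDetectEnemies = true;
    SightConfig->DetectionByAffiliation.bDetectNeutrals = true;
    SightConfig->DetectionByAffiliation.bDetectFriendlies = false;

    PerceptionComp->ConfigureSense(*SightConfig);
    PerceptionComp->SetDominantSense(SightConfig->GetSenseImplementation());
}
```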
AI Team
This notifies the Perception component owner that someone on the same team is close by (the radius is supplied by the gameplay code that sends the event).
Debug Color: When using the AI Debugging tools, what color to draw the debug lines.
Max Age: Determines how long stimuli generated by this sense last before being forgotten (0 means never forgotten).
Starts Enabled: Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.
AI Touch
The AI Touch config setting gives you the ability to detect when the AI bumps into something or something bumps into it. For example, in a stealth-based game, you may want a Player to sneak past an enemy AI without touching them. Using this Sense, you can determine when the Player touches the AI and respond with different logic.
Debug Color: When using the AI Debugging tools, what color to draw the debug lines.
Max Age: Determines how long stimuli generated by this sense last before being forgotten (0 means never forgotten).
Starts Enabled: Determines whether the given sense starts in an enabled state or must be manually enabled/disabled.
Perception Events
The Events section enables you to define what happens when the AI Perception System receives
an update or when the AI Perception Component is activated or deactivated.
On Perception Updated: This Event fires when the Perception System receives an update and returns an array of Actors that signaled the update.
On Target Perception Updated: This Event fires when the Perception System receives an update and returns the Actor that signaled the update. It also returns an AI Stimulus struct that can be broken down to retrieve additional information, such as the Stimulus Location (where the Stimulus originated from).
On Component Activated: An Event that fires when the AI Perception Component is activated.
On Component Deactivated: An Event that fires when the AI Perception Component is deactivated.
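These events can also be bound in C++. The sketch below binds the engine's OnTargetPerceptionUpdated delegate, whose handler receives the Actor that triggered the update and an FAIStimulus struct; AMyAIController and its PerceptionComp member are hypothetical names:

```cpp
// Illustrative sketch: handling On Target Perception Updated in C++.
#include "Perception/AIPerceptionComponent.h"

void AMyAIController::BeginPlay()
{
    Super::BeginPlay();
    PerceptionComp->OnTargetPerceptionUpdated.AddDynamic(
        this, &AMyAIController::OnTargetPerceived);
}

void AMyAIController::OnTargetPerceived(AActor* Actor, FAIStimulus Stimulus)
{
    if (Stimulus.WasSuccessfullySensed())
    {
        // Stimulus.StimulusLocation is where the stimulus originated.
        UE_LOG(LogTemp, Log, TEXT("Perceived %s at %s"),
            *Actor->GetName(), *Stimulus.StimulusLocation.ToString());
    }
}
```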
The following functions can be called through Blueprint to get information from or affect the
Perception System.
Get Actors Perception: Retrieves whatever has been sensed about a given Actor and returns a Sensed Actor's Data structure.
Get Currently Perceived Actors: Returns all Actors that are being perceived based on a given Sense. If no Sense is specified, all Actors currently perceived in any way will be returned.
Get Known Perceived Actors: Returns any Actors that have been perceived (and not yet forgotten) based on a given Sense. If no Sense is specified, all Actors that have been perceived will be returned.
Get Perceived Hostile Actors: Returns the list of hostile Actors (any hostile Actors that had a stimulus sensed that is not expired, or was successfully sensed). This method can be overridden in Blueprint to return whatever Actor list the user wants.
Request Stimuli Listener Update: Manually forces the AI Perception System to update properties for the specified target stimuli listener.
Stimuli Source
The AI Perception Stimuli Source Component gives the owning Actor a way to automatically
register itself as a stimuli source for the designated Sense(s) within the Perception System. An
example use case would be to have an AI character with an AI Perception Component set up to
perceive stimuli based on Sight. You could then use the Stimuli Source Component in an Actor (such as an item pickup Actor) and register it as a stimuli source for Sight, which would enable the AI to "see" the Actor in the Level.
To add the AI Perception Stimuli Source Component, click the +Add Component button in your
Blueprint and select AIPerception Stimuli Source.
Once the AI Perception Stimuli Source Component has been added, you can access properties
for it inside the Details panel.
Stimuli Properties
In the Details panel for the AI Perception Stimuli Source Component, the following two options
are available for AI Perception:
Auto Register as Source: Whether to automatically register the stimuli for the specified sense with respect to the owning Actor.
Register as Source for Senses: The Senses for which this Actor is registered as a stimuli source. You can also assign any custom Senses that have been based on the AISense class.
The following functions can be called through Blueprint for the AI Perception Stimuli Source
Component:
Register for Sense: Registers the owning Actor as a stimuli source for the specified Sense class.
Unregister from Sense: Unregisters the stimuli for the specified sense with respect to the owning Actor.
AI Perception Debugging
You can debug AI Perception using the AI Debugging tools by pressing the ' (apostrophe) key while your game is running, then pressing numpad key 4 to bring up the Perception information.
Navigation System
The Unreal Engine Navigation System allows artificial intelligence Agents to navigate the Level
using pathfinding.
The system generates a Navigation Mesh from the collision geometry in the Level and divides
the mesh into tiles. These tiles are then divided into polygons to form a graph that is used by
Agents when navigating to their destination. Each polygon is assigned a cost which Agents use
to determine the optimal path with the overall lowest cost.
The Navigation System includes a variety of components and settings that can modify the way
the Navigation Mesh is generated, such as the cost assigned to polygons. This, in turn, affects the
way Agents navigate through your Level. You can also connect areas of the Navigation Mesh
that are not contiguous, such as platforms and bridges.
The Navigation System includes three Generation Modes: Static, Dynamic, and Dynamic
Modifiers Only. These modes control the way the Navigation Mesh is generated in your project
and provide a variety of options to suit your needs.
The system also provides two methods of avoidance for Agents: Reciprocal Velocity Obstacles
(RVO), and the Detour Crowd Manager. These methods allow Agents to navigate around
dynamic obstacles and other Agents during gameplay.
In the following guides you will learn about the different components and settings of the
Navigation System and how you can use them to create interactive artificial intelligence Agents
for your project.
Widget Blueprints
How to create a Widget Blueprint and Overview of the Widget Blueprint Interface.
First, create a Widget Blueprint, as shown below. This is the starting point for working with Unreal Motion Graphics (UMG).
To create a Widget Blueprint, click Add in the Content Browser, then select User Interface > Widget Blueprint.
You can also right-click in the Content Browser instead of clicking the Add button.
You can rename the Widget Blueprint you created in the Content Browser, or keep the default name.
Double-click the created Widget Blueprint to open it in the Widget Blueprint Editor.
The Designer tab is open by default in the Widget Blueprint Editor. With the available editor tools, you can customize the appearance of the UI and preview how the in-game screen will look based on the layout you adjust.
3. Editor Mode: Switches the Blueprint Editor between Designer and Graph modes.
4. Palette: Contains the list of widgets that you can drag into the Visual Designer window. Displays any class inheriting from UWidget.
5. Hierarchy: Displays the structure of the User Widget. You can also drag widgets from the Palette panel into this panel.
8. Animations: The animation track for UMG, which allows you to keyframe animations for your widgets.
The Visual Designer window is at 1:1 scale by default. You can change the scale by holding Ctrl and using the mouse wheel.