Module 5B
10.7 TASK COMMUNICATION
In a multitasking system, multiple processes run concurrently. Depending on their interaction,
processes are classified into:
1. Co-operating Processes
o Require input from other processes to complete execution.
o Exchange information via shared resources or communication mechanisms.
2. Competing Processes
o Do not share data but compete for system resources (e.g., files, display devices, CPU
time).
Methods of Co-operating Process Communication
1. Cooperation through Sharing
• Processes exchange data using shared memory or files.
• Example: Multiple processes accessing a shared database.
2. Cooperation through Communication
• No direct data sharing, but processes communicate to synchronize their actions.
• Example: Two processes signaling each other to proceed in execution.
Inter-Process Communication (IPC)
IPC is the mechanism that allows processes to communicate and coordinate. Some common IPC
mechanisms include:
• Message Passing (e.g., sockets, pipes, message queues)
• Shared Memory (e.g., memory-mapped files, direct memory access)
• Signals & Semaphores (for synchronization)
• Remote Procedure Calls (RPC) (used for communication between processes on different
machines)
10.7.1 Shared Memory
Shared memory is one of the fastest IPC mechanisms, where multiple processes access a common
memory area to exchange data.
How Shared Memory Works
1. A process writes data into the shared memory region.
2. Other processes read the data from the same memory region.
3. Synchronization mechanisms like semaphores or mutexes ensure consistency and prevent
data corruption.
Real-World Analogy
• A Notice Board:
o A company posts updates (like meeting schedules).
o Employees read the information but cannot modify it (one-way communication).
o Unlike the real-world case, shared memory can support both read and write
operations.
Pipes
A pipe is a shared memory section used for communication between processes. It follows a
client-server architecture, where:
• Pipe Server: Creates the pipe.
• Pipe Client: Connects to the pipe to read/write data.
Pipes act as a conduit for information flow and have two ends:
• Unidirectional Pipe: One process writes, and the other reads.
• Bidirectional Pipe: Both ends can read and write.
Types of Pipes
1. Anonymous Pipes
• Unidirectional communication.
• Used for data transfer between two processes.
• Generally used for parent-child process communication.
• Cannot be used between processes on different machines.
• Example:
o A parent process writes data to a pipe.
o A child process reads the data.
2. Named Pipes
• Can be unidirectional or bidirectional.
• Unlike anonymous pipes, named pipes have a name and can be accessed by multiple
processes.
• Supports communication between processes on the same machine or over a network.
• Any process can act as both client and server, allowing point-to-point communication.
10.7.1.2 Memory Mapped Objects
Memory-Mapped Objects in Inter-Process Communication (IPC)
Memory-mapped objects are a shared memory technique used by certain Real-Time Operating
Systems (RTOS) to allocate a shared block of memory accessible by multiple processes
simultaneously. Synchronization techniques are required to prevent inconsistent results.
How Memory-Mapped Objects Work
1. A mapping object is created.
2. Physical storage for it is reserved and committed.
3. A process maps the entire memory block or a portion of it into its virtual address space.
4. Any read/write operations performed on this virtual address are directed to the underlying
physical memory area.
5. Another process that wants to share data maps the same memory area to its own virtual
memory space.
Windows API Calls for Memory-Mapped Objects
1. Creating a Memory-Mapped Object
• CreateFileMapping():
o Used to create a mapping from a file or system paging memory.
o If mapping from system paging memory, pass INVALID_HANDLE_VALUE (-1).
o Parameters:
▪ flProtect: Access control (PAGE_READONLY, PAGE_READWRITE).
▪ lpName: Name of the mapping (if NULL, creates an unnamed object).
o If an object with the same name exists, it returns the handle of the existing object.
2. Mapping the Object into Process Memory
• MapViewOfFile():
o Maps a view of the memory-mapped object into the calling process’s address space.
o Parameters:
▪ dwDesiredAccess: Read/write permissions (FILE_MAP_READ,
FILE_MAP_WRITE).
▪ dwFileOffsetHigh/Low: Offset from where mapping starts.
▪ dwNumberOfBytesToMap: Number of bytes to map (0 maps the entire
memory).
o Returns starting address of the mapped view (or NULL if it fails).
3. Unmapping the Memory
• UnmapViewOfFile():
o Unmaps a previously mapped view, freeing up virtual address space.
4. Opening an Existing Memory-Mapped Object
• OpenFileMapping():
o Opens an existing memory-mapped object by name.
o Parameters:
▪ dwDesiredAccess: Read/write permissions.
▪ bInheritHandle: Whether the handle is inheritable (TRUE/FALSE).
▪ lpName: Name of the existing memory-mapped object.
The following sample code illustrates the creation and accessing of memory-mapped objects across multiple processes. The first piece of code illustrates the creation of a memory-mapped object with name "memorymappedobject" and prints the address of the memory location where the memory is mapped within the virtual address space of Process 1.
The piece of code given below corresponds to Process 2. It illustrates the accessing of an existing memory-mapped object (the memory-mapped object with name "memorymappedobject" created by Process 1) and prints the address of the memory location where the memory is mapped within the virtual address space of Process 2. To demonstrate the application, the program corresponding to Process 1 should be executed first, followed by the program corresponding to Process 2.
The output obtained when Process 1 is executed first and Process 2 is executed while Process 1 waits for user input from the keyboard is given in Fig. 10.19.
10.7.2 Message Passing
Definition:
• Message passing is a synchronous or asynchronous communication mechanism used for Inter-Process Communication (IPC) and Inter-Task Communication (ITC).
Comparison with Shared Memory:
• Shared Memory allows large data exchange but has synchronization overhead.
• Message Passing is limited in data exchange but faster and free from synchronization issues.
10.7.2.1 Message Queue:
• Communication occurs via a FIFO (First-In-First-Out) queue called a Message Queue.
• Messages are sent using send() and received using receive().
• Implementation depends on the OS kernel.
Windows XP OS Implementation:
• Maintains one system message queue and one message queue per process/thread.
• A thread posts a message to the system queue, which the kernel processes and forwards to
the destination thread.
• The MSG structure contains the recipient thread handle, message parameters, timestamp,
etc.
Types of Message Passing:
• Asynchronous Messaging:
o The sending thread does not wait for a response. It continues execution after
posting the message.
• Synchronous Messaging:
o The sending thread waits (blocks) until the recipient thread processes the message
and responds.
10.7.2.2 Mailbox:
• Definition:
o A Mailbox is an alternative to Message Queues used in Real-Time Operating
Systems (RTOS) for one-way communication.
o It allows a task/thread to send messages to multiple recipient tasks/threads.
• How It Works:
o A thread (Mailbox Server) creates a mailbox to post messages.
o Other threads (Mailbox Clients) subscribe to receive messages.
o When a message is posted, the server notifies the clients.
o Clients read the message from the mailbox when notified.
• Key Differences Between Mailbox and Message Queue:
o Both serve the same function (passing messages between tasks).
o Message Queues can hold multiple messages.
o Mailboxes handle only a single message at a time.
o Mailboxes are useful for communication between an ISR (Interrupt Service Routine)
and a task.
• Implementation:
o Mailbox operations (creation, subscription, message reading/writing) are done using
OS kernel APIs.
o Example: MicroC/OS-II uses mailboxes for inter-task communication.
10.7.2.3 Signaling
• Definition:
o Signaling is a primitive IPC mechanism used for asynchronous notifications
between processes/threads.
o A signal does not carry data and is not queued.
• How It Works:
o One process/thread sends a signal to indicate an event.
o Another process/thread waits for the signal before proceeding.
• Examples of Signaling in RTOS:
o RTX51 Tiny OS
▪ Uses os_send_signal to send a signal.
▪ Uses os_wait to wait for a signal.
o VxWorks RTOS
▪ Signals are handled by a signal handler.
Remote Procedure Call (RPC)
• Definition:
o RPC is an Inter-Process Communication (IPC) mechanism that allows a process to
invoke a procedure of another process.
o The processes can be on the same CPU or different CPUs connected over a network.
o In Object-Oriented Programming (OOP), RPC is also called Remote Method
Invocation (RMI).
o Commonly used in distributed applications like Client-Server models.
• How It Works:
o The server process contains the procedure to be invoked remotely.
o The client process initiates an RPC request to call the procedure.
o Communication follows standard formats to ensure compatibility across platforms.
o Uses Interface Definition Language (IDL), such as Microsoft Interface Definition
Language (MIDL), for defining interfaces.
• Types of RPC Communication:
o Synchronous RPC (Blocking)
▪ The client waits (blocks) until it receives a response from the server.
o Asynchronous RPC (Non-blocking)
▪ The client continues execution while the remote procedure runs in parallel.
▪ The result is returned through callback functions.
• Security in RPC:
o Uses authentication mechanisms to protect against unauthorized access.
o Authentication methods include shared identifiers and cryptographic techniques such as public key cryptography and symmetric ciphers (e.g., DES, 3DES).
Sockets in RPC Communication
• Definition:
o A socket is a logical endpoint for two-way communication between processes over a
network.
o Each socket has a port number for identifying the application.
• Types of Sockets:
1. Stream Sockets (TCP-based)
▪ Connection-oriented, ensuring reliable communication.
2. Datagram Sockets (UDP-based)
▪ Connectionless but faster, though less reliable than TCP.
• Client-Server Communication Using Sockets:
o Client Side: Uses a socket with a port number to send requests.
o Server Side: Listens on a specific port number and processes incoming requests.
o If both client and server run on the same CPU, they can use the same host name and
port number.
o The connection is established using network interfaces like Ethernet or Wi-Fi.
• OS-Dependent Implementations:
o Windows OS provides Winsock (Windows Socket 2) Library for socket programming.
10.8 Task Synchronization
• In a multitasking environment, multiple processes run concurrently and share system
resources.
• Each process operates within its own boundary and communicates using IPC mechanisms
like shared memory and variables.
• Issues arise when multiple processes attempt to access the same resource simultaneously,
leading to unexpected results.
• Synchronization ensures that processes are aware of shared resource access, preventing
conflicts.
• Task synchronization is essential to maintain data integrity and avoid race conditions.
• Without proper synchronization, issues like data inconsistency, deadlocks, and resource
conflicts can occur.
• Various synchronization techniques are used to manage resource access effectively.
10.8.1 Task Communication/Synchronisation Issues
10.8.1.1 Racing
Consider the following scenario, in which two processes (Process A and Process B) each increment a shared variable counter.
• A race condition occurs when multiple processes access and modify shared data
concurrently, leading to unpredictable outcomes.
• In the given example, Process A and Process B both increment a shared variable (counter++),
but due to context switching, one increment is lost.
• The counter++ operation is broken into multiple low-level instructions (mov, add, mov),
making it non-atomic.
• Context switching between these instructions allows another process to modify the shared
variable before the first process completes its operation.
• As a result, Process A increments counter using an outdated value, leading to incorrect
results.
• The solution is to implement mutual exclusion—ensuring that only one process accesses the
shared variable at a time using synchronization mechanisms like locks, semaphores, or
atomic operations.
10.8.1.2 Deadlock
• Deadlock occurs when two or more processes are waiting for each other to release
resources, causing a standstill where none can proceed.
• It is similar to a traffic jam where vehicles block each other at an intersection.
Example:
• Process A holds Resource X but needs Resource Y (held by Process B).
• Process B holds Resource Y but needs Resource X (held by Process A).
• Both processes are stuck, unable to proceed, leading to a deadlock.
Cause: Mutual exclusion (resources are locked by the process using them).
Solution:
• Deadlock prevention: Avoid holding resources while waiting for another.
• Deadlock detection: Identify deadlock and release resources.
• Deadlock recovery: Terminate one of the processes to break the cycle.
The different conditions favouring a deadlock situation are listed below.
Deadlock Conditions (Coffman Conditions)
1. Mutual Exclusion – Only one process can use a resource at a time.
2. Hold and Wait – A process holding a resource waits for additional resources held by other
processes.
3. No Resource Preemption – Resources cannot be forcibly taken from a process; they must be
released voluntarily.
4. Circular Wait – A circular chain of processes exists, where each process is waiting for a
resource held by the next.
Deadlock Handling Techniques
• Ignore Deadlocks – Assume deadlocks won’t happen (used in UNIX systems).
• Prevention – Modify system conditions to avoid deadlocks (e.g., remove circular wait).
• Detection & Recovery – Identify deadlocks and recover (e.g., terminate a process).
• Avoidance – Use resource allocation algorithms to prevent deadlocks.
Deadlock Detection & Recovery
• OS detects deadlocks using a resource graph and applies graph analysis.
• If deadlock occurs, the system may:
1. Terminate a process to break the deadlock.
2. Preempt resources (forcefully take back resources).
• Similar to resolving a traffic jam by backing up cars.
Deadlock Avoidance
• The OS carefully allocates resources to avoid circular waits.
• Works like a traffic light system at junctions to prevent congestion.
Deadlock Prevention
• Modifies system rules to prevent deadlock conditions by:
1. Requesting all resources before execution begins.
2. Allowing resource allocation only if no other resources are held.
3. Ensuring resource preemption—processes release resources if a new request cannot
be fulfilled.
Livelock
• Livelock is a condition where processes continuously change their state in response to each
other but fail to make any real progress.
• Unlike deadlock, where processes remain stuck in a waiting state forever, livelock processes
are actively executing but still unable to complete their tasks.
• Example: Two people trying to pass each other in a narrow corridor, repeatedly moving in
the same direction to give way but still blocking each other.
• Solution: Implement mechanisms like random delays, priority-based decision-making, or
introducing timeouts to break the cycle of ineffective actions.
Starvation
• Starvation occurs when a process is unable to gain access to required resources for an
extended period due to unfair scheduling policies.
• This often happens in systems that prioritize high-priority tasks, causing lower-priority tasks
to be continuously postponed.
• Example: In a CPU scheduling system, if high-priority tasks keep arriving, a low-priority task
may never get CPU time.
• Causes: Priority-based scheduling, resource allocation policies, or deadlock prevention
mechanisms.
• Solution: Use aging techniques, where a process’s priority gradually increases over time,
ensuring it eventually gets the required resources.
Mutual Exclusion through Sleep & Wakeup
1. Need for Sleep & Wakeup Mechanism
• Traditional busy waiting for mutual exclusion wastes CPU time and increases power
consumption.
• Not suitable for battery-powered embedded systems due to high energy usage.
• Alternative: Sleep & Wakeup mechanism, where processes go into a blocked state when
they can't access the critical section.
2. How Sleep & Wakeup Works
• If a process cannot access a critical section, it enters a sleep (blocked) state.
• The process that owns the critical section sends a wakeup signal once it releases the
resource.
• This approach avoids unnecessary CPU usage and improves efficiency.
• Implementation varies depending on the OS kernel (e.g., Windows NT/CE).
3. Semaphore-Based Mutual Exclusion
• Semaphores are system resources that regulate shared resource access.
• A process acquires a semaphore before using a resource and releases it after use.
Types of Semaphores
1. Binary Semaphore (Mutex)
o Allows exclusive access to a shared resource (only one process at a time).
o Example: Display device in an embedded system.
2. Counting Semaphore
o Controls access for multiple processes but with a fixed limit.
o Maintains a count from zero to a maximum value, controlling the number of active
users.
o Example: Hard disk access, where different sectors can be used concurrently.
Real-World Example (Dormitory System)
• A dormitory with 5 beds can accommodate 5 users at a time.
• If a user requests a bed:
o If available, they get access.
o If full, they wait for a slot and are notified when space is available.
• Similar to counting semaphores, where resources are allocated only up to a limit.
4. Advantages of Sleep & Wakeup Mechanism
• Reduces CPU wastage (avoids busy waiting).
• Efficient resource allocation with proper synchronization.
• Enhances system performance in real-time and embedded environments.
Counting Semaphores vs. Binary Semaphores
1. Counting Semaphores
• Similar to Binary Semaphores but more flexible.
• Can be used for:
o Exclusive access (by setting the maximum count to 1).
o Limited access (by setting the count to a fixed number).
• Example: Shared hardware like a printer, where only a limited number of users can access it
simultaneously.
2. Binary Semaphore (Mutex)
• A synchronization object used to ensure exclusive access to a resource.
• Only one process/thread can own the mutex at a time.
• State of a mutex:
o Signaled → Available (not owned).
o Non-signaled → Owned by a process/thread.
3. Real-World Example (Hotel Accommodation System)
• Hotel rooms are shared resources, accessible only to one user at a time.
• A person requests a room from the receptionist:
o If available, they get the key and exclusive access.
o If not available, they book in advance and wait for a notification.
• When a user vacates, the key is returned, and the room is made available for the next user.
• Similar to mutex behavior, where a resource is locked until released.
10.10 How to Choose an RTOS
The selection of a Real-Time Operating System (RTOS) for an embedded system is a crucial
decision that depends on functional and non-functional requirements. Below are the key
factors to consider:
1. Functional Requirements
Processor Support
• Not all RTOSs support every processor architecture.
• Ensure compatibility with the target processor.
Memory Requirements
• RTOS requires ROM (for OS files, usually stored in FLASH) and RAM (for OS services).
• Since embedded systems are memory-constrained, choose an OS with minimal ROM/RAM
requirements.
Real-Time Capabilities
• Not all embedded OSs are real-time.
• Evaluate task scheduling policies and real-time standards compliance of the OS.
Kernel and Interrupt Latency
• RTOS kernels may disable interrupts while executing services, causing latency.
• For high-response embedded systems, latency should be minimal.
Inter-Process Communication & Task Synchronization
• Different OS kernels offer various communication and synchronization mechanisms.
• Some provide solutions for priority inversion during resource sharing.
Modularization Support
• Some RTOSs allow selecting only necessary modules, reducing footprint.
• Example: Windows CE is highly modular.
Networking & Communication Support
• Ensure the OS provides built-in network stacks and driver support for required interfaces.
Development Language Support
• Some RTOSs include Java Virtual Machine (JVM) or .NET Compact Framework (NETCF) for
Java/.NET applications.
• If not included, check third-party availability.
2. Non-Functional Requirements
Custom vs. Off-the-Shelf OS
• Choose between:
1. Custom-built OS (tailored but costly & time-consuming).
2. Off-the-shelf OS (Commercial or Open Source).
• Consider development cost, licensing fees, and time-to-market.
Cost
• Evaluate the total cost, including development, licensing, and maintenance.
Development & Debugging Tools
• Some RTOSs offer limited development/debugging tools, impacting ease of use.
• Ensure adequate tool support for the OS.
Ease of Use
• Some RTOSs have steeper learning curves.
• Consider how easy it is to develop, deploy, and manage applications.
After-Sales Support
• For commercial RTOS, check for:
o Bug fixes & security patches.
o Technical support (email, phone, etc.).
o Production issue resolution.
Chapter 12:
Integration Testing of Embedded Hardware and
Firmware
1. Integration Testing Step
o "Integration testing of the embedded hardware and firmware is the immediate step
following the embedded hardware and firmware development."
2. Embedded Hardware
o "The final embedded hardware constitutes a PCB with all necessary components
affixed to it as per the original schematic diagram."
3. Embedded Firmware
o "Embedded firmware represents the control algorithm and configuration data
necessary to implement the product requirements on the product."
o "Embedded firmware will be in a target processor/controller understandable format
called machine language (sequence of 1s and 0s–Binary)."
4. Hardware Without Firmware
o "The target embedded hardware without embedding the firmware is a dumb device
and cannot function properly."
o "If you power up the hardware without embedding the firmware, the device may
behave in an unpredicted manner."
5. Unit Testing
o "As described in the earlier chapters, both embedded hardware and firmware should
be independently tested (Unit Tested) to ensure their proper functioning."
o "Functioning of individual hardware sections can be verified by writing small utilities
which check the operation of the specified part."
6. Firmware Simulation in IDE
o "As far as the embedded firmware is concerned, its targeted functionalities can
easily be checked by the simulator environment provided by the embedded
firmware development tool’s IDE."
o "By simulating the firmware, the memory contents, register details, status of various
flags and registers can easily be monitored and it gives an approximate picture of
'What happens inside the processor/controller and what are the states of various
peripherals' when the firmware is running on the target hardware."
o "The IDE gives necessary support for simulating the various inputs required from the
external world, like inputting data on ports, generating an interrupt condition, etc."
o "This really helps in debugging the functioning of the firmware without dumping the
firmware in a real target board."
12.1 INTEGRATION OF HARDWARE AND FIRMWARE
Integration of hardware and firmware involves embedding firmware into the target hardware board.
• It adds intelligence to the product by enabling firmware execution on the
processor/controller.
• Firmware can be stored in internal memory if the processor supports it and the size fits.
• If internal memory is insufficient, an external EPROM/FLASH chip is used for storage.
• The choice of firmware storage is determined during hardware design based on complexity.
12.1.1 Out-of-Circuit Programming
• Out-of-circuit programming is performed outside the target board.
• The processor or memory chip is removed and programmed using a dedicated programming
device.
• The programming device generates the necessary signals and supports multiple device
families.
• A ZIF socket holds the chip, and the device is controlled by a PC utility program.
• The programmer connects to the PC via RS-232C, USB, or Parallel Port Interface.
The commands to control the programmer are sent from the utility program to the programmer
through the interface (Fig. 12.2).
The sequence of operations for embedding the firmware with a programmer is listed below.
1. Connect the programming device to the specified port of PC (USB/COM port/parallel port)
2. Power up the device (Most of the programmers incorporate LED to indicate Device power up.
Ensure that the power indication LED is ON)
3. Execute the programming utility on the PC and ensure proper connectivity is established
between PC and programmer. In case of error, turn off device power and try connecting it again
4. Unlock the ZIF socket by turning the lock pin
5. Insert the device to be programmed into the open socket as per the insert diagram shown on
the programmer
6. Lock the ZIF socket
7. Select the device name from the list of supported devices
8. Load the hex file which is to be embedded into the device
9. Program the device by ‘Program’ option of utility program
10. Wait till the completion of programming operation (Till busy LED of programmer is off)
11. Ensure that programming is successful by checking the status LED on the programmer
(Usually ‘Green’ for success and ‘Red’ for error condition) or by noticing the feedback from the
utility program
12. Unlock the ZIF socket and take the device out of programmer
Now the firmware is successfully embedded into the device. Insert the device into the board,
power up the board and test it for the required functionalities.
12.1.2 In System Programming (ISP)
• In-System Programming (ISP) allows firmware to be embedded without removing the chip
from the board.
• It is a flexible and easy method but requires ISP-supported target devices.
• No extra hardware is needed apart from the PC, ISP cable, and ISP utility.
• The target board connects to a PC via Serial Port, Parallel Port, or USB using serial
communication protocols like JTAG or SPI.
• The device must enter ISP mode to receive commands, erase, and reprogram memory
before resuming normal operation.
12.1.2.1 In System Programming with SPI Protocol
Devices with SPI ISP support have a built-in SPI interface, and the on-chip EEPROM or FLASH memory
is programmed through this interface.
Key SPI Pins:
• MOSI – Master Out Slave In (PC sends data)
• MISO – Master In Slave Out (Device acknowledges)
• SCK – Serial Clock (synchronizes data transfer)
• RST – Reset of Target Device
• GND – Ground of Target Device
➢ The PC acts as the master, and the target device acts as the slave in ISP. The MOSI pin
receives program data, while the MISO pin sends acknowledgments.
➢ The SCK pin provides the clock signal for data transfer, and a utility program on the PC
generates these signals.
➢ If the target device operates under 5V logic, it can be directly connected to the parallel port
of the PC, without additional hardware for signal conversion.
➢ Standard SPI-ISP utilities are available online, so no custom programming is needed—just
connect the pins as required.
➢ The target device must be powered up in a predefined sequence to enable ISP mode.
The power up sequence for In System Programming for Atmel’s AT89S series microcontroller family is
listed below.
• Apply supply voltage between VCC and GND pins of target chip.
• Set RST pin to “HIGH” state.
• If a crystal is not connected across pins XTAL1 and XTAL2, apply a 3 MHz to 24 MHz clock to
XTAL1 pin and wait for at least 10 milliseconds
• Enable Serial Programming: Send the Programming Enable serial instruction to MOSI/P1.5.
• Clock Frequency Requirement: The shift clock at SCK/P1.7 must be less than CPU clock at
XTAL1 divided by 40.
• Programming Process:
➢ Code or Data array is programmed one byte at a time.
➢ Address and data must be supplied with the Write instruction.
➢ The memory location is erased before writing new data.
➢ The write cycle is self-timed and takes less than 2.5 ms at 5V.
• Verification: Any memory location can be verified using the Read instruction, which returns
the stored data at MISO/P1.6.
• Final Step: After programming, set RST pin low or turn off and on the chip power supply to
start normal operation.
12.1.3 In Application Programming (IAP)
Purpose: IAP allows firmware running on the target device to modify a selected portion of
the code memory, but it is not used for initial firmware embedding.
Use Cases: It is commonly used for updating calibration data, look-up tables, and other
stored information in code memory.
Boot ROM API:
• The Boot ROM resident API provides functions for programming, erasing, and reading Flash
memory during ISP mode.
• These API instructions can also be used by end-user firmware for IAP operations.
Execution Mechanism:
• Specific registers must be set as required for an operation.
• A call is made to a common entry point to execute the operation.
• After execution, control returns to the user’s firmware, like a subroutine call.
Memory Shadowing:
• The Boot ROM is shadowed with user code memory within its address range.
• A status bit controls whether accesses go to Boot ROM or user code memory.
• Before calling the IAP function, the user must set the status bit to ensure correct memory
access.
12.2 BOARD BRING UP
• Now the firmware is embedded into the target board using one of the programming
techniques described above.
• The first verification is to make sure that the processor is fetching the code and the firmware
execution is happening in the expected manner.
• When the hardware PCB is assembled, most designers, in their eagerness to get the project
rolling, will try to power it on immediately.
• The first check is to look over the board and check for any missing parts, loose or partly
soldered components, or shorts in the path.
• Make sure that various power lines in the board (2.2V, 3.3V, 5.0V etc.) are up, within the spec
limits, and clean.
• Certain controllers/processors with multiple power inputs will have power sequencing
requirements.
• Clocks play a critical role in the boot-up of the system, and all clocks must be up and running
within spec values.
• Monitor the different interconnect buses (like I2C) to ensure they meet electrical
specifications and protocol requirements.
• Bring-up of prototype/evaluation/production versions is one of the most important steps in
embedded product design.
• The bring-up process involves performing a series of validations in a controlled environment
to verify hardware and firmware functionality.