ACADEMIC YEAR:2020-2021 REGULATION:2017
IFET COLLEGE OF ENGINEERING
DEPARTMENT OF ELECTRONICS AND COMMUNICATION
SEM: VII YEAR: 4th
SUBJECT CODE: EC8071 SUBJECT: COGNITIVE RADIO
UNIT-I INTRODUCTION TO SOFTWARE DEFINED RADIO AND
COGNITIVE RADIO
Evolution of Software Defined Radio and Cognitive radio: goals, benefits, definitions,
architectures, relations with other radios, issues, enabling technologies, radio frequency
spectrum and regulations.
COURSE OUTCOME:
Students who successfully complete the course will be able to
CO1: Gain knowledge on the design principles of software defined radio and cognitive radio.
Mapping of Course Outcomes (CO) to Programme Outcomes (PO)
Course Outcome   PO1  PO2  PO3  PO4  PO5  PO6  PO7  PO8  PO9  PO10  PO11  PO12
CO1               2    1    3    -    -    -    -    -    1    -     -     2
Topics Covered:
1. Evolution of Software Defined Radio
2. Goals, Benefits, Definitions of SDR
3. Architecture of SDR
4. Relations with Other Radios
5. Issues in SDR
6. Enabling Technologies of SDR
7. Evolution of Cognitive Radio
8. Goals, Benefits, Definitions of CR
9. Issues in CR
10. Enabling Technologies of CR
11. Radio Frequency Spectrum and Regulations
1.1 Evolution of Software Defined Radio:
Two decades ago most radios had no software at all, and those that had it didn’t do
much with it. In a remarkably visionary article published in 1993, Joseph Mitola III
envisioned a very different kind of radio.
A few years later Mitola’s vision started to become reality. In the mid-1990s military
radio systems were invented in which software controlled most of the signal processing
digitally, enabling one set of hardware to work on many different frequencies and
communication protocols.
1.1.1 First application of SDR:
The first (known) examples of this type of radio were the U.S. military's SPEAKeasy I and SPEAKeasy II radios, which allowed units from different branches of the armed forces to communicate with one another for the first time.
However, the technology was costly and the first design took up racks that had to be
carried around in a large vehicle. SPEAKeasy II was a much more compact radio, the size of
two stacked pizza boxes, and was the first SDR with sufficient DSP resources to handle many
different kinds of waveforms.
SPEAKeasy II subsequently made its way into the U.S. Navy's Digital Modular Radio (DMR), which supports many waveforms and modes and can be remotely controlled over an Ethernet interface.
These SPEAKeasy II and DMR products evolved not only to define these radio
waveform features in software, but also to develop an appropriate software architecture to
enable porting the software to an arbitrary hardware platform, thus achieving independence
of the waveform software specification and design from the underlying hardware.
1.1.2 First commercial application of SDR:
In the late 1990s SDR started to spread from the military domain to the commercial
sector, with the pace of penetration into this market considerably accelerating in the new
millennium.
Cellular networks were considered the most obvious and potentially most lucrative market for SDR to penetrate.
The benefits it could bring to this industry included a general-purpose and therefore
more economic hardware platform, future-proofing and easier bug fixes through software
upgrades, and increased functionality and interoperability through the ability to support
multiple standards.
Companies such as Vanu, AirSpan, and Etherstack currently offer SDR products for
cellular base stations. Vanu Inc., a U.S.-based company, has been focusing on the
commercial development of SDR business since 1998.
It received a lot of attention in 2005 with its Anywave GSM base station, which became the first SDR product to receive approval under the newly established software radio regulation.
The Anywave base station runs on a general-purpose processing platform and provides a software implementation of the BTS (base transceiver station), BSC (base station controller), and TRAU (transcoder and rate adaptation unit) modules of the BSS (base station subsystem).
It supports GSM and can be upgraded to GPRS and EDGE. The product was first deployed in a trial in rural Texas by Mid-Tex Cellular, where the Vanu base station successfully showed how it could concurrently run a time division multiple access (TDMA) network and a GSM network, as well as remotely upgrade and fix bugs on the base station via an Internet link.
1.1.3 Other applications of SDR:
Following this successful trial, other operators such as AT&T and Nextel expressed interest in the Anywave base station. In 2001, 3GNewsroom was reporting SDR base stations as the key solution to the 3G rollout problem: the ability of SDR base stations to reconfigure on the fly and support multiple protocols was thought to be the safest option for rolling out 3G.
In reality, SDR did not play the key role that was anticipated. However, a closer look at operators' infrastructure shows that programmable devices have become a key component of current 3G base stations.
In March 2005 Airspan released the first commercially available SDR-based IEEE 802.16 base station. The AS.MAX base station uses picoArrays and a reference software implementation of the IEEE 802.16d standard. The picoArray is a reconfigurable platform with roughly ten times the processing power of today's DSPs.
The AS.MAX base station promises to be upgradeable to the next-generation mobile 802.16e standard and so has the potential to offer a future-proof route for operators looking to roll out WiMAX services. In addition to the preceding proprietary SDR platforms developed for the military and commercial sectors, there has also been significant progress in SDR development in the open-source research and university communities.
GNU Radio is an open-source architecture designed to run on general-purpose computers. It is essentially a collection of DSP components and supports an RF interface to the Universal Software Radio Peripheral (USRP), an up- and down-converter board with ADC and DAC capabilities that can be coupled to an RF daughterboard. GNU Radio has been extensively used as an entry-level SDR within the research community.
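As an illustration of how an entry-level SDR experiment looks in practice, the following minimal GNU Radio flowgraph (Python, assuming GNU Radio is installed; the file sink merely stands in for a USRP sink when no hardware is attached) generates a complex tone and records the I/Q samples:

    # Minimal GNU Radio flowgraph: generate a complex sinusoid, throttle it,
    # and write the samples to a file (a USRP sink could replace the file
    # sink when real hardware is attached).
    from gnuradio import gr, analog, blocks
    import time

    class ToneFlowgraph(gr.top_block):
        def __init__(self, samp_rate=1e6, tone_freq=100e3):
            gr.top_block.__init__(self, "Tone example")
            # Signal source producing complex baseband samples
            src = analog.sig_source_c(samp_rate, analog.GR_COS_WAVE, tone_freq, 1.0)
            # Throttle keeps a software-only flowgraph from consuming 100% CPU
            throttle = blocks.throttle(gr.sizeof_gr_complex, samp_rate)
            sink = blocks.file_sink(gr.sizeof_gr_complex, "tone.iq")
            self.connect(src, throttle, sink)

    if __name__ == "__main__":
        tb = ToneFlowgraph()
        tb.start()          # run the flowgraph in the background
        time.sleep(1.0)     # capture about one second of samples
        tb.stop()
        tb.wait()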
Some major SDR platforms developed in the university and research communities will be described in detail in this book. As mentioned, due to its high demands on computation and processing, SDR technology has so far worked only in devices with fewer constraints on size and power consumption, such as base stations and moving vehicles. But there is an increasing demand for SDR to enter portable and handheld devices in the future.
1.1.4 Problems in implementing SDR in portable devices:
The main issue with introducing SDR into portable devices has been that it requires
the use of programmable platforms, which are generally power hungry and hence lead to
reduced battery life and large devices.
However, SDR provides the ability to support multiple waveforms on a single device,
and so ultimately could give an end user increased choice of services if incorporated into a
portable device, such as a handset. SDR could also assist seamless roaming at the national
and international levels.
However, as new processing platforms emerge that overcome power and size constraints, it is very likely that SDR will make its way into portable devices. Indeed, some industry insiders predicted that by 2015 there would be a transition from the then-current generation of handsets to SDR handsets.
1.2 Goals of Software Defined Radio:
Modifying radio devices easily and cost-effectively has become business critical. Software defined radio (SDR) technology brings the flexibility, cost efficiency, and power to drive communications forward, with wide-reaching benefits realized by service providers and product developers through to end users.
The goal is a more quickly reconfigurable architecture that can handle several conversations at once, in an open software architecture, with cross-channel connectivity (the radio can "bridge" different radio protocols).
1.3 Benefits of Software Defined Radio:
1) Ease of manufacture:
The time to market of a commercial product is a key consideration in modern engineering design. Implementing and upgrading devices via software reduces design complexity by freeing designers from the tedious work associated with analog hardware design and implementation.
2) Interoperability:
An SDR can seamlessly communicate with multiple radios that support different wireless standards. It can also perform bridging between radios, as a single multi-channel, multi-standard SDR can act as a translator for radios dedicated to particular frequencies.
3) Multi-functionality:
The flexible architecture allows the SDR to support multiple wireless standards. With the rapid growth of wireless standards such as Bluetooth and IEEE 802.11 (WLAN), it is now possible to enhance the services of a radio by leveraging other devices that provide complementary services, such as data/audio/video transfer via Bluetooth or accurate positioning via GPS. Due to this multi-functionality, as different wireless standards continue to evolve, the functionality of an SDR can be enhanced to support them by simply updating or installing new software, without changing the underlying hardware.
4) Opportunistic frequency reuse (cognitive radio):
An SDR can take advantage of underutilized spectrum with the help of cognitive radio, which continuously senses whether spare spectrum is available in the surrounding environment. If the owner or primary user of the spectrum is not using it, an SDR can borrow the spectrum and assign it to a secondary user until the owner requires it again. This technique has the potential to dramatically increase the utilization of the available spectrum.
5) Compactness and power efficiency:
The software radio approach results in a compact and, in some cases, power-efficient design. As the number of functionalities increases, the same piece of hardware is reused to implement multiple interfaces, so fewer distinct hardware components are required and power consumption is lowered.
6) Ease of upgrades:
In the course of deployment, current services may need to be updated or new services may have to be introduced. The flexible SDR architecture allows existing functionality to be improved and new functionality to be added through software alone, instead of replacing the hardware platform or user terminals.
1.4 Definitions of SDR:
Software-defined radio (SDR) is a radio communication system where components
that have been traditionally implemented in hardware (e.g. mixers, filters, amplifiers,
modulators/demodulators, detectors, etc.) are instead implemented by means of software on a
personal computer or embedded system.
A software defined radio is one that can be configured to any radio or frequency
standard through the use of software.
1.5 Architectures of Software Defined Radio:
1.5.1 Ideal SDR Architecture:
The ideal SDR architecture consists of three main units: a reconfigurable digital radio, a software-tunable analog radio with an embedded impedance synthesizer, and a software-tunable antenna system. This structure is illustrated in Figure 1.1.
The main responsibilities of the reconfigurable digital radio are performing digital radio functions such as waveform generation, running optimization algorithms for the software-tunable radio and antenna units, and controlling these units.
The software-tunable analog front end is limited to the components whose functions cannot be performed digitally, such as RF filters, combiners/splitters, the Power Amplifier (PA), Low Noise Amplifiers (LNA), and data converters.
Moreover, this unit has an impedance synthesizer subsystem, which is crucial for optimizing the performance of the software-tunable analog radio. For instance, the impedance synthesizer is used to optimize the performance of the software-tunable antenna system for an arbitrary frequency plan specified by the cognitive engine.
Figure 1.1 Ideal SDR Architecture
The reconfigurable digital radio monitors and controls the software-tunable radio system continuously or periodically, depending on system specifications. A basic relationship between the main units of an SDR is as follows. The cognitive engine sends radio configuration parameters to the reconfigurable digital radio so that it can reconfigure the entire radio accordingly. These parameters can include the waveform type to be generated (e.g. OFDM, CDMA, UWB), the frequency plan (e.g. bandwidth, operating center frequency), and power spectrum specifications. Moreover, the cognitive engine can request that the reconfigurable digital radio measure or calculate parameters from the environment, such as the location of a particular user.
The reconfigurable digital radio configures itself along with the software-tunable radio components and antenna system. In order to optimize the performance of these two units, the reconfigurable digital radio utilizes feedback information from the software-tunable radio, especially from the impedance synthesizer. Based on this information, it adjusts the parameters of the software-tunable radio and antenna units through radio and antenna control signals, respectively. Finally, the reconfigurable digital radio acknowledges to the cognitive engine that the specified configuration has been performed.
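To make this interaction concrete, the short sketch below (plain Python; all class and field names are illustrative assumptions, not an actual SDR API) shows the kind of parameter set a cognitive engine might hand to the reconfigurable digital radio and the acknowledgement returned once the configuration is applied:

    from dataclasses import dataclass

    @dataclass
    class RadioConfig:
        waveform: str          # e.g. "OFDM", "CDMA", "UWB"
        center_freq_hz: float  # operating centre frequency
        bandwidth_hz: float    # occupied bandwidth
        tx_power_dbm: float    # transmit power specification

    class ReconfigurableDigitalRadio:
        """Illustrative stand-in for the digital radio unit."""
        def apply(self, cfg: RadioConfig) -> bool:
            # In a real platform this would regenerate the waveform, retune
            # the software-tunable front end, and adjust the impedance
            # synthesizer based on its feedback.
            self.current = cfg
            return True   # acknowledgement back to the cognitive engine

    # Cognitive-engine side: request a new frequency plan and wait for the ack
    radio = ReconfigurableDigitalRadio()
    ack = radio.apply(RadioConfig("OFDM", 2.45e9, 10e6, 20.0))
    print("configuration acknowledged" if ack else "configuration failed")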
1.5.2 Realistic SDR Architectures
Due to current limitations (size, cost, power, performance, processing time, data converters), ideal SDR architectures are costly to realize. There are various practical SDR platforms available in the literature. It is expected that these platforms will evolve significantly in the future as the limitations on the road to the ideal SDR platform are removed. Hence, as reconfigurable hardware technology advances, software-tunable analog radio functions will increasingly be implemented in reconfigurable digital radio platforms. In the following, some current and practical SDR architectures are described. Figure 1.2 shows an example of a practical SDR architecture for Worldwide Interoperability for Microwave Access (WiMAX) networks.
Figure 1.2 Practical SDR Architecture
The reconfigurable digital radio can be implemented using one of the SDR enabling technologies such as a Digital Signal Processor (DSP) or a Field-Programmable Gate Array (FPGA).
This unit mainly generates and demodulates the OFDM waveform and controls the radio components. The generated OFDM signal is in the form of digital In-phase (I) and Quadrature (Q) samples. Interpolation, digital filtering, Peak-to-Average Power Ratio (PAPR) reduction algorithms, and digital Intermediate Frequency (IF) upconversion are applied to the I/Q samples prior to the Digital-to-Analog Converter (DAC). The DAC then converts the digital OFDM signal into the corresponding analog waveform.
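As a rough numerical illustration of these digital steps (NumPy, with arbitrarily chosen parameter values rather than any standard-specific ones), OFDM I/Q generation, interpolation, and digital IF upconversion before the DAC could be sketched as follows:

    import numpy as np

    fft_len, cp_len, interp = 64, 16, 4
    fs_bb = 1e6                      # baseband sample rate
    fs_if = fs_bb * interp           # sample rate after interpolation (DAC rate)
    f_if  = 250e3                    # digital IF centre frequency

    # 1) Map random bits to QPSK symbols on the subcarriers
    bits = np.random.randint(0, 2, 2 * fft_len)
    syms = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])

    # 2) OFDM modulation: IFFT plus cyclic prefix -> complex I/Q samples
    ofdm = np.fft.ifft(syms)
    ofdm = np.concatenate([ofdm[-cp_len:], ofdm])

    # 3) Interpolation (zero-stuffing followed by a crude averaging low-pass)
    up = np.zeros(len(ofdm) * interp, dtype=complex)
    up[::interp] = ofdm
    up = np.convolve(up, np.ones(interp) / interp, mode="same")

    # 4) Digital IF upconversion: multiply by a complex carrier, take the real part
    n = np.arange(len(up))
    if_signal = np.real(up * np.exp(2j * np.pi * f_if * n / fs_if))
    # 'if_signal' is what would be handed to the DAC
    print(if_signal[:5])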
The IF signal at the output of the DAC is upconverted to the final RF stage using a Software Tunable Up-Converter (ST-UC). An ST-UC generally consists of software-tunable attenuators and clock synthesizers. With this capability, the transmit power level and local oscillator frequency can be adjusted by the reconfigurable digital radio.
Furthermore, IF-to-RF upconversion can be performed in one or multiple stages depending on performance criteria such as the minimum image rejection level. For instance, in a two-stage upconversion the first IF signal is upconverted to a second IF and then to the final RF carrier. The RF signal is then amplified using the PA according to the power spectral specifications that come from the digital radio. A typical PA includes software-tunable attenuators for adaptive transmit power level control.
Adaptive transmit power control is an important task for adjusting link quality. The amplified RF signal is filtered and radiated through the antenna. Depending on the duplexing method, radios can be classified into three major groups: Time
Division Duplexing (TDD), Frequency Division Duplexing (FDD) and Half-Frequency
Division Duplexing (H-FDD) radios.
In the case of TDD radios, a transmit/receive switch is used, controlled by the reconfigurable digital radio. For FDD radios, a duplexer filter is used to support simultaneous transmission and reception in different bands. On the receiver side, the received RF signal is filtered to suppress unwanted out-of-band signals, and the filtered RF signal is then amplified using an LNA.
This unit can have digital attenuators to protect the receiver from high-power signals. Furthermore, it can include a Variable Gain Amplifier (VGA), controlled by the reconfigurable digital radio, for RF automatic gain control. The amplified RF signal is downconverted to a low IF stage using a Software Tunable Down-Converter (ST-DC); this can likewise be performed in one or multiple downconversion stages.
A typical ST-DC consists of a software-tunable internal or external AGC with a peak detector and a software-tunable clock synthesizer. The ST-DC, like the ST-UC, can contain externally selectable IF filters to support different bandwidths.
The ADC then digitizes the analog IF signal and generates the corresponding digital I/Q samples. Decimation and digital filtering are applied to the samples. In the following steps, the digital radio demodulates the received OFDM signal after time and frequency synchronization. The software-tunable radio components in this example are implemented using programmable Application Specific Integrated Circuit (ASIC) technology.
These components are configured by the reconfigurable digital radio through a Serial Peripheral Interface (SPI) and independent pins. Using the SPI and independent configuration signals, the reconfigurable digital radio writes configuration parameters into registers embedded in the software-tunable radio components.
Moreover, software-tunable radio components can be powered down by the digital radio to save power. For instance, in TDD radios the components in the transmit chain can be switched to power-down mode while the radio is receiving, and vice versa. Another example that can serve as a model for a realistic SDR architecture is the SPEAKeasy program developed by the Department of Defense (DoD) in the United States (US).
The goal of the SPEAKeasy program was to produce a portable, but not handheld, software-programmable radio capable of emulating more than 10 existing military radios operating from 2 MHz to 2 GHz.
These radio systems would be implemented in software instead of the traditional hardware and could be stored in the system's memory or downloaded over the air. SPEAKeasy Phase I was a lab-based, rack-mounted proof-of-concept program to demonstrate multi-band and multi-mode operation between voice networks. SPEAKeasy used DSPs to perform operations and generate waveforms in software instead of the traditional method of using hardware.
With the completion of Phase I, the program moved into Phase II. This phase delivered a portable radio and demonstrated that the overall concept of SDR was feasible. Several demonstrations were performed, including the bridging of communications systems, in-field re-programming, and in-field repairs using Commercial Off-The-Shelf (COTS) components.
SPEAKeasy was an important stepping stone towards the largest SDR-based program to date, the Joint Tactical Radio System (JTRS). The JTRS is also a DoD initiative to develop a flexible software-programmable radio technology to meet the diverse communications needs of the military. This includes transmission of real-time voice, data, and video across the US military branches as well as with coalition forces and allies.
The JTRS radio technology spans from 2 MHz to 2 GHz using waveforms that are
defined in software using minimal waveform specific hardware. The development of these
waveforms allows for interoperability among radios, the reuse of common software across
many different radios, and scalability in the number of available channels. The
implementation of these waveforms is based upon a Software Communications Architecture
(SCA).
The SCA governs the structure and architecture of the JTRS, telling designers how elements of hardware and software are to operate in harmony with each other. The development of the SCA is an international effort led by the US with help from Japan, Canada, England, and Sweden. Many different commercial and international bodies have embraced the SCA, which has become the open international commercial standard for SDRs.
Using this architecture with a modular design approach gives the JTRS many benefits compared with the plethora of current systems. Currently, many different radio systems must be carried, maintained, and operated. The JTRS will reduce the total communications equipment weight and footprint by eliminating multiple systems. This will also reduce maintenance costs and complexity, since there is less equipment and there are fewer systems to maintain.
Upgrades to the JTRS, including the introduction of new waveforms, should require minimal support because the system is defined in software, which can be upgraded easily over the air and in the field, reducing downtime and increasing efficiency.
On the commercial side, there are various efforts to build SDR platforms, mainly focused on cellular base stations. Airnet is one of the companies currently using SDR technology to implement its base stations. Airnet's AdaptaCell reduces hardware requirements, with processing units handling a dozen separate channels each, reducing the number of DSPs and air interface boards. It also supports GSM, GPRS, and EDGE with adaptive smart antenna control.
1.6 Relations with other radios like Cognitive Radio:
One of the main characteristics of cognitive radio is the adaptability where the radio
parameters (including frequency, power, modulation, bandwidth) can be changed depending
on the radio environment, user’s situation, network condition, geolocation, and so on. SDR
can provide a very flexible radio functionality by avoiding the use of application specific
fixed analog circuits and components. Therefore, cognitive radio needs to be designed around
SDR. In other words, SDR is the core enabling technology for cognitive radio.
One of the most popular definitions of cognitive radio, in fact, supports the above
argument clearly: “A cognitive radio is an SDR that is aware of its environment, internal
state, and location, and autonomously adjusts its operations to achieve designated objectives.”
Even though many different models are possible, one of the simplest conceptual models describing the relation between cognitive radio and SDR is shown in Figure 1.3. In this simple model, cognitive radio is wrapped around SDR. This model fits well with the aforementioned definition of cognitive radio, where the combination of the cognitive engine, the SDR, and other supporting functionalities (e.g. sensing) results in a cognitive radio.
The cognitive engine is responsible for optimizing or controlling the SDR based on input parameters that are sensed or learned from the radio environment, the user's context, and network conditions.
The cognitive engine is aware of the radio's hardware resources and capabilities as well as the other input parameters mentioned above. Hence, it tries to satisfy the radio link requirements of a higher-layer application or user with the available resources, such as spectrum and power. Compared to a hardware radio, which can perform only a single or very limited set of radio functions, SDR is built around software-based digital signal processing along with software-tunable Radio Frequency (RF) components.
Hence, SDR represents a very flexible and generic radio platform that is capable of operating with many different band and waveform formats. As a result, SDR can support multiple standards (e.g. GSM, EDGE, WCDMA, CDMA2000, Wi-Fi, WiMAX) and multiple access technologies such as Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Orthogonal Frequency Division Multiple Access (OFDMA), and Space Division Multiple Access (SDMA).
Figure 1.3 Relationship between SDR and cognitive radio
The recent boom in the diversity of wireless standards with different options exposes
interoperability and multi-mode support issues. SDR has been considered as an inherent
solution to address such issues.
Although SDR evolved naturally from the need to implement radios that can support multiple modes and standards, the use of SDR in a cognitive radio is not limited to these functionalities. SDR is a promising technology for introducing cognition capabilities into cognitive radios.
For instance, one of the crucial cognition capabilities of cognitive radios is dynamic spectrum management. Spectrum sensing, an optimization mechanism to utilize a specific part of the spectrum, and spectrum shaping are the main steps of a dynamic spectrum management system. For spectrum sensing, devices are required to sense the spectrum; these can be either embedded into the SDR internally or incorporated externally.
For instance, an antenna can be considered an internal sensor, whereas a video camera can be considered an external sensor for an SDR. In other words, the SDR can have a structure like a miniature spectrum analyzer in order to provide spectrum information to the cognitive engine.
Either the existing receiver front end of the SDR or a designated receiver operating in parallel with the SDR receive path can be used to capture the spectrum. The captured spectrum is digitized by an Analog-to-Digital Converter (ADC), and the digital samples are then sent to a digital signal processor for post-processing.
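A common post-processing step on those digitized samples is simple energy detection. The sketch below (NumPy; the threshold rule and signal model are illustrative assumptions, not taken from the text) declares a band occupied when the measured energy exceeds a margin above the noise floor:

    import numpy as np

    def energy_detect(samples, noise_power, threshold_db=3.0):
        """Return True if the band appears occupied (energy above threshold)."""
        energy = np.mean(np.abs(samples) ** 2)            # average received power
        threshold = noise_power * 10 ** (threshold_db / 10)
        return energy > threshold

    rng = np.random.default_rng(0)
    noise_power = 1.0
    noise = (rng.normal(size=1024) + 1j * rng.normal(size=1024)) * np.sqrt(noise_power / 2)
    tone  = 1.5 * np.exp(2j * np.pi * 0.1 * np.arange(1024))   # a primary-user signal

    print(energy_detect(noise, noise_power))          # expected: False (band idle)
    print(energy_detect(noise + tone, noise_power))   # expected: True  (band occupied)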
1.7 Issues of Software Defined Radio:
A. Security Issues
Wireless communication is prone to interference and security threats. In SDR, the security threat is greater as a consequence of its reconfiguration capability for handling different wireless standards. Reconfigurability is provided by adjusting signal parameters (such as frequency, power, and modulation type) through installing or downloading new software instead of removing and replacing hardware components. The successful deployment of SDR technologies will therefore depend on the design and implementation of essential security mechanisms to ensure the robustness of networks and terminals against security threats.
B. Increased complexity and development cost
In SDR, multiple signals are designed to run on a single platform, and the platform can be reconfigured at different times to host different signals according to user needs. For example, a single programmable channel can replace two separate dedicated hardware channels. However, compared to a hardware-intensive radio, SDR increases the complexity of a manufacturer's design and development process. Open-source hardware such as the USRP and software such as GNU Radio Companion (GRC) are commonly used for SDR experiments. Since the cost of a USRP is high, an easily affordable low-cost setup is needed. One low-cost alternative is the Realtek Software Defined Radio (RTL-SDR) used together with an RF mixer; the mixer converts the signal to higher frequencies, bringing it into the tuning range of the RTL-SDR.
C. High speed of ADCs and DACs and their synchronization with DSP
The sampling capabilities of ADCs and DACs are a key challenge for the implementation of SDR systems. The ability to digitize high-frequency signals in real time is fundamental for bridging the analog and digital domains. In the original SDR concept, the RF signal was digitized directly by a high-speed ADC just after the antenna on the receiver side. Such a high-speed ADC is very power intensive. Further, the data generated by the high-speed ADC must be processed by a processor operating at a similarly high speed. The power consumption of processors at GHz clock rates is also enormous, resulting in a system that consumes several tens of watts. This difficulty can be eased by an appropriately designed RF front end that produces the desired band at an IF with a much lower bandwidth than the SDR input bandwidth, so that the requirements on the high-speed ADC are greatly relaxed.
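A back-of-the-envelope calculation (illustrative numbers only) shows why the IF approach relaxes the ADC requirement:

    # Direct RF digitization of a 2.4 GHz signal needs a Nyquist rate above
    # twice the highest frequency, whereas a 20 MHz-wide channel brought down
    # to a 70 MHz IF can be digitized at a far lower rate.
    rf_max_hz = 2.4e9
    if_center = 70e6
    chan_bw   = 20e6

    direct_rf_rate = 2 * rf_max_hz                    # > 4.8 GS/s ADC required
    if_rate        = 2 * (if_center + chan_bw / 2)    # about 160 MS/s ADC
    print(f"direct RF sampling: {direct_rf_rate / 1e9:.1f} GS/s")
    print(f"IF sampling:        {if_rate / 1e6:.0f} MS/s")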
D. Increased power consumption
The software and digital logic implementation imposes a computational burden on the SDR platform that leads to an increase in power consumption. To mitigate the expected increase in complexity, and the resulting decrease in energy efficiency, cooperative wireless networks have been introduced. A cooperative wireless network enables the concept of resource sharing, which is interpreted as collaborative signal processing. This interpretation leads to the concept of a distributed signal processor. Orthogonal Frequency Division Multiplexing (OFDM) and the principle of the Fast Fourier Transform (FFT) are described as an example of collaborative signal processing. Hence, designers must trade off flexibility against energy efficiency.
E. Designing of antennas over a wide range of frequencies
Another challenge lies in designing antennas over a wide range of frequencies, since antennas propagate signals differently at different transmit frequencies. This has led to the development of the Multiple Input-Multiple Output (MIMO) concept and of tunable, reconfigurable antenna implementations within an SDR to maintain uniform and consistent antenna characteristics over a broad range of frequencies or multiband frequencies. In addition, an electronic circuit called an antenna tuner connects the antennas to the rest of the circuit. Antenna tuners are optimized for different antennas and must be matched for optimal power performance; a tuner improves power delivery to the antenna under poor antenna-matching conditions. However, this requirement complicates the radio design and prevents the implementation of systems with many different frequency ranges on the same SDR platform.
1.8 Enabling technologies of Software Defined Radio:
Early wireless systems were implemented using hardwired platforms. The transition from hardwired to reconfigurable digital platforms happened gradually as digital technology evolved. This evolution started with the introduction of Transistor-Transistor Logic (TTL) technology, followed by general-purpose processors (microprocessors) that are programmed through software. The trend continued with the development of a microprocessor specialized in digital signal processing tasks: the DSP.
The need for higher-speed signal processing technology increased due to the limited speed of DSPs and the adoption of computationally complex algorithms in wireless systems. ASIC technology came onto the scene to answer the demand for high-speed signal processing. Then, the inflexibility of ASIC technology motivated the development of flexible and reconfigurable digital hardware technology.
FPGA technology came to the rescue, providing both flexibility and reconfigurability. However, these features come with high power consumption and high cost. Hence, the trend is moving towards the development of reconfigurable, flexible, high-performance, low-power, small, and low-cost digital hardware platforms. Along this line, there are some recent reconfigurable computing technologies such as the picoArray processors from picoChip and the Adaptive Computing Machine (ACM) from QuickSilver Technologies. These technologies can be categorized into two groups:
1. Traditional Computing Technologies:
The technologies in this category are based on the idea of mapping the algorithm to a fixed set of hardware resources and/or requirements. The technologies that can be considered under this category are DSPs, ASICs, and FPGAs.
2. Adaptive Computing Technologies:
The basic idea behind this approach is to map the hardware resources and/or requirements to the algorithm requirements. Both the hardware resources and the algorithm requirements are dynamic. In other words, an optimal amount of hardware resources is utilized to perform the given algorithm. General Purpose Processors (GPPs), the picoArray, and the ACM are three examples in this category.
The following are six main selection criteria that are recommended to be considered when evaluating a reconfigurable digital platform for SDR applications:
1. Reconfigurability: The ability to reconfigure a device to perform an arbitrary task given by the cognitive engine.
2. Integration with Other Layers: The level of complexity required to establish an interface to the other layers.
3. Development Cycle: The time it takes to develop, implement, debug, and verify a digital radio function. Ideally, the cognitive engine needs to be capable of performing these development steps itself. In practice, however, it is recommended to provide bug-free, verified digital radio functions to the cognitive engine. One way of achieving this is to store all the digital radio functions in a memory.
4. Performance: The ability to perform a given arbitrary task within a specified time.
5. Power Consumption: The amount of power that is dissipated for performing a given
task within a specified time.
6. Cost and Size: The cost of the reconfigurable digital radio platform is an important parameter. The physical size of the processor is another important criterion.
1.8.1 Digital Signal Processors:
A DSP is a type of GPP that is specialized in signal processing applications. DSPs and GPPs have common functions; however, the former supports additional specialized features such as Multiply-Accumulate (MAC) units, a barrel shifter, multiple memory blocks with supporting buses, and powerful Data Address Generators (DAGENs). DSPs have a fixed processing architecture and are able to execute different algorithms based on sequential instructions stored in memory.
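The multiply-accumulate (MAC) operation that DSP hardware accelerates is exactly the inner loop of an FIR filter. The purely didactic Python sketch below spells that loop out; a real DSP would execute one such MAC per instruction cycle in dedicated hardware:

    def fir_filter(x, coeffs):
        """Direct-form FIR: each output is a sum of multiply-accumulates."""
        y = []
        for n in range(len(x)):
            acc = 0.0                                  # accumulator register
            for k, c in enumerate(coeffs):
                if n - k >= 0:
                    acc += c * x[n - k]                # one MAC operation
            y.append(acc)
        return y

    # 4-tap moving-average filter applied to a short test sequence
    print(fir_filter([1, 2, 3, 4, 5, 6], [0.25, 0.25, 0.25, 0.25]))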
However, their inherently fixed processing structure places restrictions on the initial design. Most current DSP processors employ a modified Harvard architecture, which is shown in Figure 1.4. In this architecture, data and program memory can be accessed within a single cycle using either two (program and data) or three (program and two data) buses. DSPs can process signals using either fixed-point or floating-point arithmetic. However, most of the computations are performed using the latter format. The main advantage of floating-point over fixed-point arithmetic is the use of numbers with a much larger dynamic range, which is important in many digital signal processing operations. Current DSPs are capable of high clock speeds and parallel processing.
Figure 1.4 Modified Harvard model based DSP architecture
However, both wireless service providers and manufacturers are beginning to acknowledge that current DSPs cannot satisfy the requirements of SDR applications. Although the DSP design flow is well documented in the literature and supported by mature design tools, a single DSP cannot handle very complex algorithms; for instance, multiple DSPs may be required to implement a complex algorithm.
Integrating multiple DSPs, however, is a tedious process, since the design tools for this purpose are less mature and the integration itself is more sophisticated. As a result, DSPs provide high reconfigurability with limited performance and a short development cycle. Moreover, they offer a medium level of integration and consume little power. Additionally, DSPs are low-cost, small devices.
1.8.2. Field-Programmable Gate Arrays
In the early 1960s, discrete logic was used to build systems consisting of many chips connected with wires. Modifying such a system required rebuilding the board, which took a long time and was costly. Chip manufacturers then introduced the Programmable Logic Device (PLD), a single chip composed of an array of unconnected AND-OR gates. PLDs contained an array of fuses that could be blown open or left closed to connect numerous inputs to each AND gate. Since PLDs could handle only up to about 20 logic equations, designing complex systems using multiple PLDs was challenging. To tackle this problem, chip makers introduced Complex PLDs (CPLDs) and FPGAs. A CPLD is composed of a group of PLD blocks whose inputs and outputs are governed by a global interconnection matrix. CPLDs provide two levels of reconfigurability: reconfiguring the PLD blocks and reconfiguring the interconnections between them. The structure of a CPLD is shown in Figure 1.5. The structure of FPGAs is different from that of CPLDs. FPGAs are composed of an array of simple Configurable Logic Blocks (CLBs) and switches that are used to determine the connections between CLBs.
A simplified FPGA structure is shown in Figure 1.5. In order to implement an algorithm in an FPGA, each CLB is configured individually and then the switches are configured to connect or disconnect CLBs. Although there are different methods for connecting and disconnecting CLBs, the most widely used technique is based on RAM/flash switches. In this method, static RAM or flash bits control the pass transistors for each interconnection; for instance, a switch can be closed or opened by loading bit 1 or 0, respectively. Since current FPGAs can contain up to 10 million gates, manual control of the switches is impossible. Therefore, FPGA manufacturers provide development software that takes a logic design as input and outputs a bitstream, which configures the switches.
Figure 1.5 CPLD structure
The main steps of the development software flow for implementing a logic design in an FPGA are as follows. The logic design can be entered in three main ways:
1) Using high-level software such as MATLAB/Simulink.
2) Using a Hardware Description Language (HDL) such as VHDL or Verilog.
3) Using a schematic editor.
In the first approach, the system is designed by connecting functional blocks in a high-level system design tool, and interface software is then used to translate the design into VHDL code. For instance, a complete OFDM transceiver can be designed in MATLAB/Simulink.
In order to download this design into an FPGA, Xilinx System Generator can be used as an interface between MATLAB/Simulink and Xilinx ISE Foundation to perform automatic code generation. In the second method, the design is entered as code using an HDL. Alternatively, the design can be drawn using a schematic editor. Regardless of the design entry method, the subsequent steps are common.
Once the design is entered, logic synthesizer tools transform the HDL code or schematic into a netlist. A netlist describes the various logic gates and the interconnections between them. Implementation tools are then used to map the logic gates and interconnections onto the FPGA. The mapping tool combines the netlist logic gates into groups and generates the Look-Up Tables (LUTs).
This is followed by the place-and-route process, which assigns the collections of logic gates to specific CLBs and determines, using routing matrices, the status of the switches that connect the CLBs. Once the implementation process is completed, the program generates routing matrices that show the final status of the switches.
From the routing matrices, a bitstream is generated in which ones and zeros correspond to closed or opened switches. The bitstream is downloaded into the physical FPGA chip through a parallel cable, and the switches in the FPGA are opened or closed in response to the binary bits in the bitstream.
Finally, the FPGA performs the operations specified by the design entry. The design can be verified in different ways. The classical way is to insert a test signal and observe the output signal using test and measurement equipment. A more modern way is to generate test signals from a high-level program, download the design into the FPGA, read back the results from the FPGA, and plot the results in the high-level software. For instance, MATLAB/Simulink along with Xilinx System Generator and ISE Foundation allow the design to be verified using hardware co-simulation, an example of this second method.
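That modern verification flow amounts to comparing the values read back from the FPGA against a golden software model. A minimal sketch of the comparison is shown below (NumPy; read_back_from_fpga is a placeholder for the tool-chain-specific co-simulation read-back, mimicked here by a quantized reference):

    import numpy as np

    def golden_fft(test_signal):
        """Reference (golden) model computed in high-level software."""
        return np.fft.fft(test_signal)

    def read_back_from_fpga(test_signal):
        # Placeholder: in hardware co-simulation this would be the bitstream's
        # output read back over the cable; here it is mimicked by quantizing
        # the reference result to a finite word length.
        return np.round(np.fft.fft(test_signal) * 2**10) / 2**10

    test = np.exp(2j * np.pi * 0.05 * np.arange(64))   # test tone
    ref  = golden_fft(test)
    hw   = read_back_from_fpga(test)
    print("match:", np.allclose(ref, hw, atol=1e-2))   # expected: True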
Note that the aforementioned steps in the FPGA design flow are performed automatically by software. In conclusion, FPGAs are capable of providing a high degree of reconfigurability and high performance. Furthermore, they provide a high level of integration and a short development cycle. However, they are power-hungry, large, and expensive devices.
1.8.3. General Purpose Processors
One of the reconfigurable digital hardware technologies is the GPP. Von Neumann and Harvard are the two well-known GPP architectures in the literature. The main difference between these two architectures lies in how instructions and data are stored and accessed. In the Von Neumann architecture, instructions and data are stored in the same memory, whereas in the Harvard architecture they are stored in separate memories. Most current GPPs are based on the Von Neumann architecture, although Harvard-based GPPs are also commonly available. A simplified block diagram of the Von Neumann architecture is shown in Figure 1.6. Current GPPs cannot satisfy dynamic reconfiguration requirements. The main drawback of these devices is low data throughput, since all data is transferred via the processor bus. Moreover, they are power-hungry, costly, and large devices relative to the other processors.
However, they have potential advantages from which the cognitive engine benefits. Some of these advantages are:
1. Experimentation: Testing new algorithms or protocols is easy on these devices.
2. Rapid Deployment: Cognitive radio users can easily add new devices or improvements to the existing structure through software.
3. Tight Integration with Other Applications: Since the upper-layer applications and the underlying communications system are implemented in the same device, the upper and radio layers are tightly integrated.
4. Multi-Purpose Devices: Different capabilities, such as fax and VoIP, can be added to cognitive radios through software.
5. Improved Functionality: These devices have the capability of allocating different channels to different communications standards.
Figure 1.6 Simplified block diagram of Von Neumann architecture
An illustration of a GPP-based reconfigurable digital radio platform is provided in Figure 1.7. In this structure, both the upper layers and the reconfigurable digital radio functions are implemented in the GPP, which provides a tight integration between the upper layers and the reconfigurable digital radio layer.
A GPP-based software radio architecture of this kind, the so-called virtual radio, has been proposed. In this software radio structure, the GPP is used as the reconfigurable digital hardware platform and is optimized for software radio applications.
Figure 1.7 Block diagram of a GPP based reconfigurable digital radio platform
Moreover, a design tool and programming environment for this structure, called Spectra, has been developed. Spectra is a programming tool used to support continuous real-time signal processing applications. This system design tool consists of three basic components:
1. A library of signal processing modules (e.g. convolutional encoder, Fast Fourier Transform (FFT)).
2. A set of objects used to interconnect the signal processing modules.
3. A scripting language, such as C++, to define the topology of the system and the interactions between the modules.
As a result, GPPs are capable of providing a high degree of reconfigurability, but with limited performance due to their inherent architecture. Their development cycle is very short and they offer a high level of integration. Moreover, they are power-hungry, small, and expensive devices.
1.8.4. Heterogeneous Systems
It has been observed and reported that all reconfigurable technologies have strengths and weaknesses. Another type of reconfigurable digital hardware platform therefore integrates multiple different technologies to perform a given task. The level of integration is obviously lower in this type of platform, since it introduces an additional, different type of processor. However, there is a performance gain from partitioning the tasks between the processors. One of the well-known platforms taking this approach is the hybrid DSP/FPGA platform. Many of the FPGA's strengths are complementary to the weaknesses of DSPs, while the strengths of DSPs are complementary to the weaknesses of FPGAs. A system
incorporating both processors can leverage the strengths of each. Such platforms are called hybrid DSP/FPGA platforms, and commercial examples such as the SignalWAVE from Lyrtech are available.
The following are some common design considerations for hybrid DSP/FPGA platforms:
• Since task partitioning among the processors can significantly affect the performance of a hybrid DSP/FPGA platform, this process needs to be performed carefully.
• The interaction between the two processors needs to be understood very well; otherwise, performance degradation results.
• DSPs use a "software design flow" whereas FPGAs use a "hardware design flow." Hence, there is little overlap between the skill sets of the relevant design teams.
• Linking the two processors is a complex task. These hybrid platforms are integrated at the board level (not at a higher level).
• High-level abstractions for communication between the DSPs and FPGAs are practically nonexistent.
1.9 Evolution of Cognitive radio:
The main precursors of CR research were the seminal work by Mitola and Maguire in 1999 and early spectrum measurement studies, conducted as early as 1995, to quantify spectrum use in both the licensed and unlicensed bands. In the United States, CR research quickly focused on dynamic spectrum access (DSA) and secondary use of spectrum as the main objectives of the initial research. This focus attracted a number of early research projects (e.g., URA, SPECTRUM, and MILTON). The most notable project in spectrum management and policy research was the XG project funded by DARPA. The main goal of the XG project was to study so-called policy servers and secondary-use technologies, particularly for military purposes. The early success of XG pushed the community to study the possibilities of CR more broadly. Another boost for the research was given by several vocal researchers (such as Lessig, Reed, and Peha), who pointed out possible flaws in the current regulatory domain. In the standardization domain, three major groups have emerged to work on relevant technologies and architectures:
the IEEE 802.22 and SCC41 (formerly P1900) working groups and, more recently, ETSI's Reconfigurable Radio Systems Technical Committee on CRs and SDRs. The SDR Forum, as an industry group, has also studied some CR-related issues. Commercially, the most advanced standardization activity is IEEE 802.22 and the related research that aims to provide dynamic access to vacant TV spectrum. However, IEEE 802.22 requires a rather limited level of cognition. At the time of this writing, CR is being intensively investigated and debated by regulatory bodies as the enabling technology for opportunistic access to the so-called TV white spaces (TVWS): large portions of the VHF/UHF TV bands that become available on a geographical basis after the digital switchover. In the United States, the FCC proposed to allow opportunistic access to TV bands as early as 2004. Prototype CRs operating in this mode were put forward to the FCC by Adaptrum, I2R, Microsoft, Motorola, and Philips in 2008.
After extensive tests, the FCC adopted in November 2008 a Second Report and Order that establishes rules to allow the operation of cognitive devices in TVWS on a secondary basis. Furthermore, in what is potentially a radical shift in policy, in its recently released Digital Dividend Review Statement the U.K. regulator, Ofcom, proposed to "allow licence exempt use of interleaved spectrum for cognitive devices". Ofcom further states that "We see significant scope for cognitive equipments using interleaved spectrum to emerge and to benefit from international economics of scale." More recently, on February 16, 2009, Ofcom published a new consultation providing further details of its proposed cognitive access to TVWS. With both the United States and the United Kingdom adopting the cognitive access model, and the emerging 802.22 standard for cognitive access to TV bands being at its final stage, we can expect that CR may become a mainstream technology worldwide in the near future.
1.10 Goals of Cognitive radio
Cognitive radio is considered a goal towards which a software-defined radio platform should evolve: a fully reconfigurable wireless transceiver that automatically adapts its communication parameters to network and user demands.
Traditional regulatory structures have been built for an analog model and are not optimized for cognitive radio. Regulatory bodies around the world (including the Federal Communications Commission in the United States and Ofcom in the United Kingdom), as well as independent measurement campaigns, have found that most of the radio frequency spectrum is inefficiently utilized. Cellular network bands are overloaded in most parts of the world, while other frequency bands (such as military, amateur radio, and paging frequencies) are insufficiently utilized.
Independent studies performed in some countries confirmed that observation and concluded that spectrum utilization depends on time and place. Moreover, fixed spectrum allocation prevents rarely used frequencies (those assigned to specific services) from being used, even when unlicensed users would not cause noticeable interference to the assigned service. Regulatory bodies around the world have been considering whether to allow unlicensed users in licensed bands if they would not cause any interference to licensed users. These initiatives have focused cognitive-radio research on dynamic spectrum access.
The first cognitive radio wireless regional area network standard, IEEE 802.22, was developed by the IEEE 802 LAN/MAN Standards Committee (LMSC) and published in 2011. This standard uses geolocation and spectrum sensing for spectral awareness. Geolocation is combined with a database of licensed transmitters in the area to identify channels available for use by the cognitive radio network, while spectrum sensing observes the spectrum and identifies occupied channels. IEEE 802.22 was designed to utilize the unused frequencies or fragments of time in a location; this white space consists of unused television channels in the geolocated areas. However, a cognitive radio cannot occupy the same unused space all the time. As spectrum availability changes, the network adapts to prevent interference with licensed transmissions.
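Conceptually, this combines a geolocation database lookup with local sensing. The toy sketch below (Python, with hypothetical channel numbers rather than real database contents) shows how the two information sources can be intersected to pick a usable channel:

    def free_channels(db_allowed, sensed_occupied):
        """Channels permitted by the geolocation database and sensed as idle."""
        return sorted(set(db_allowed) - set(sensed_occupied))

    # Hypothetical data for one location
    db_allowed      = [21, 22, 25, 27, 30]   # from the licensed-transmitter database
    sensed_occupied = [22, 30]               # local spectrum sensing result

    candidates = free_channels(db_allowed, sensed_occupied)
    print("usable TV channels:", candidates)  # -> [21, 25, 27]
    # The network re-runs this check periodically and vacates a channel as soon
    # as database or sensing information shows the primary user has returned.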
1.11 Benefits of Cognitive radio
Cognitive radio is a helpful technology for the rapidly growing number of wireless users. It allows a network of radio stations to be constructed from cognitive radio nodes, which gives a significant boost to the speed and availability of channels for new secondary wireless networks.
There are two possibilities for accessing a vacant channel. In the first, a CR contacts multiple non-CR transmissions, where femtocells perform cognitive inspection and then communicate with non-CR mobiles. In the second, CR networks contact the network and operate through the cognitive radio network. This type of networking has many advantages and benefits, and it can support many useful applications that improve the performance of the entire network as well as its individual parts.
Cognitive radio networks are more feasible than a single cognitive radio. A network improves spectrum sensing capability, and in gaining this characteristic cognitive radio networks offer many more advantages of significant value. The purpose of using cognitive radio is to utilize the spectrum efficiently. Spectrum sensing is the basic function of cognitive radio networks and provides assurance that primary channels will not be disturbed.
Spectrum sensing is the key action by which a cognitive radio network improves spectrum effectiveness. CR spectrum sensing methods should be able to obtain up-to-date information about the transmissions currently being run by primary users and send this information back to the CPU (Central Processing Unit). There, in the main system, a plan of action is suggested and an immediate command is issued to the cognitive radio network for the next step.
Cognitive radio networks also improve coverage by relaying from one point to the next node. This helps to reduce power consumption without sacrificing network performance.
Further benefits include the ability to:
• Overcome radio spectrum scarcity
• Avoid intentional radio jamming scenarios
• Switch to power-saving protocols
• Improve satellite communications
• Improve quality of service (QoS)
1.12 Cognitive radio Definitions:
Cognitive Radio (CR) is an adaptive, intelligent radio and network technology that can automatically detect available channels in the wireless spectrum and change its transmission parameters, enabling more communications to run concurrently and improving radio operating behaviour.
Such a radio automatically detects available channels in the wireless spectrum and then changes its transmission or reception parameters accordingly to allow more concurrent wireless communications in a given spectrum band at one location. This process is a form of dynamic spectrum management.
1.13 Issues of Cognitive radio
• Sensing the current radio frequency spectrum environment: This includes measuring
which frequencies are being used, when they are used, estimating the location of
transmitters and receivers, and determining signal modulation. Results from sensing
the environment can be used to determine radio settings.
• Policy and configuration databases: Policies specifying how the radio can operate
and physical limitations of radio operation can be stored in the radio or made
available over the network. Policies might specify which frequencies can be used in
which locations. Configuration databases would describe the operating characteristics
of the physical radio. These databases would normally be used to constrain the
operation of the radio to stay within regulatory or physical limits.
• Self-configuration: Radios may be assembled from several modules. For example, a
radio frequency front-end, a digital signal processor and a control processor. Each
module should be self-describing and the radio should automatically configure itself
for operation from the available modules. Some might call this “plug-and-play.”
• Mission-oriented configuration: Software defined radios can meet a wide set of
operational requirements. Configuring a SDR to meet a given set of mission
requirements is called mission oriented configuration. Typical mission requirements
might include operation within buildings, substantial capacity, operation over long
distances, and operation while moving at high speed. Mission-oriented configuration
involves selecting a set of radio software modules from a library of modules and
connecting them into an operational radio.
• Adaptive algorithms: During radio operation, the cognitive radio is sensing its
environment, adhering to policy and configuration constraints, and negotiating with
peers to best utilize the radio spectrum and meet user demands.
• Distributed collaboration: Cognitive radios will exchange current information on
their local environment, user demand, and radio performance between themselves on
a regular basis. Radios will use their local information and peer information to
determine their operating settings.
• Security: Radios will join and leave wireless networks. Radio networks require
mechanisms to authenticate, authorize and protect information flows of participants.
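A minimal sketch of the policy-database idea mentioned in the list above, assuming an invented in-memory policy table rather than any real regulatory schema: a transmission request is checked against frequency, region, and power constraints before the radio is allowed to transmit.

POLICY_DB = [
    # (min_mhz, max_mhz, region, max_eirp_dbm) -- invented entries, not real rules
    (470.0, 698.0, "US", 20.0),
    (2400.0, 2483.5, "US", 30.0),
]

def transmission_allowed(freq_mhz, eirp_dbm, region="US"):
    """Permit a transmission only if some policy entry covers the frequency,
    matches the region, and allows the requested power."""
    for f_lo, f_hi, reg, max_power in POLICY_DB:
        if reg == region and f_lo <= freq_mhz <= f_hi and eirp_dbm <= max_power:
            return True
    return False

print(transmission_allowed(482.0, 15.0))   # True under the assumed policy
print(transmission_allowed(482.0, 25.0))   # False: exceeds the assumed power limit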
1.14 Enabling technologies of Cognitive radio
Support for cognitive radio and dynamic spectrum access requires a number of enabling
technologies that are under development by the members of the Wireless Innovation Forum:
Information Process Architecture: Understanding the current state of complex information systems and their associated communications subsystems, determining how to enhance them from a process perspective, and analyzing them for opportunities to interact with other systems is a key problem. An information process architecture
solves this problem by providing a top-down model and a series of tools for depicting
the structure of complex systems to aid in defining, designing and selecting relevant
cognitive radio processes and to facilitate an improved understanding of the structure
and relationships between systems that span user domains.
Modeling Language: Flexible and efficient communication protocols are required
between advanced radio systems and subsystems to support next generation features
such as vertical and horizontal mobility, spectrum awareness, dynamic spectrum
adaption, waveform optimization, feature exchanges, and advanced applications. A
modeling language built on detailed use cases, and defining the signalling plan,
requirements and technical analysis of the information exchanges provides the basis
for developing specifications and standards supporting these capabilities. A modeling language, or meta-language, expressed in a formal, machine-readable declarative language defines the communications infrastructure of the cognitive radio.
Radio Environment Map: Operation of a cognitive engine requires data and meta-
data defining the spectral environment that a terminal is operating in at a given
moment in time. Referred to as the radio environment map, this data can include
information on spectrum economic transactions, dropouts, handovers, available
networks, and services. Data contained within the map is derived, in part, by
capturing and synthesizing measurements from many radios, and may be stored in a
database that can be accessed remotely by the cognitive engine. Requirements for a database structure enabling this access, including standardized database structures, data formats, and functionality, must be defined to support the flexibility necessary to accommodate current and future cognitive radio spectrum applications such as mobility, spectrum economic transactions, dropouts, handovers, available networks, and services (a minimal data-structure sketch follows these items).
Test and Measurement: Cognitive radios pose unique test challenges in quantifying
the performance of critical functions such as spectrum sensing, interference
avoidance, database performance, and adherence to policy. Test methodologies supporting these challenges must be developed that consider a range of hardware platforms, protocols, algorithms, use cases, and spectrum stakeholder requirements.
Test equipment functionality and performance, test interfaces and test modes must
also be taken into account.
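As promised above, here is a minimal data-structure sketch of a radio environment map entry and a query over it, assuming an in-memory store and invented field names rather than any standardized database format.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class RemEntry:
    freq_mhz: float
    location: Tuple[float, float]   # (latitude, longitude), assumed convention
    occupancy: float                # fraction of recent observations flagged occupied
    networks: List[str] = field(default_factory=list)

REM = [
    RemEntry(478.0, (40.71, -74.00), 0.05, ["TV white space"]),
    RemEntry(2412.0, (40.71, -74.00), 0.85, ["Wi-Fi channel 1"]),
]

def idle_channels(rem, max_occupancy=0.1):
    """Channels the cognitive engine may consider, given an occupancy limit."""
    return [entry.freq_mhz for entry in rem if entry.occupancy <= max_occupancy]

print(idle_channels(REM))   # -> [478.0]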
Figure 1.8 Interface between the modeling metalanguage and other systems
1.15 Applications of Cognitive Radio:
• Emergency and public safety communications, where CR networks utilize white space spectrum.
• Dynamic spectrum access (DSA), for which CR networks are a natural enabling platform.
• Military applications such as chemical, biological, radiological and nuclear attack detection and investigation, command and control, battle damage assessment, battlefield surveillance, intelligence assistance, and targeting.
• Medical Body Area Networks for ubiquitous patient monitoring, which immediately notify doctors of vital information such as blood sugar level, blood pressure, blood oxygen and electrocardiogram (ECG) readings; this also reduces the risk of infection and increases patient mobility.
• Wireless sensor networks, where packet relaying can take place using primary and secondary queues to forward packets without delay and with minimum power consumption.
1.16 Radio frequency spectrum and regulations:
The radio frequency spectrum is an abundant natural resource that uniformly covers
the planet and is available for a wide variety of useful purposes. Beyond the historic voice
communications and increasingly dominant multimedia and data networking focus of this
text, this spectrum is regularly used for a diverse array of applications, including radar for
finding large and small objects (from airplanes in the sky, to obstacles in the vicinity of your
automobile, to studs in the walls of your home), excitation for illuminating spaces (sulfur
lamps), monitoring and sensing applications, and even cooking food in the microwave oven
in your home. The radio frequency spectrum is a component of the overall electromagnetic
spectrum that stretches from roughly zero to nearly 3 × 10²⁷ Hz (cycles per second). In this
broad range, the domain from roughly 10 kHz to 300 GHz is usually described as the radio
frequency spectrum (though interestingly this region includes the spectral regions described
as sonic and ultrasonic on the low end as well as the microwave spectral region on the high
end of the range).
1.17 Allocation, Reallocation and Optimisation:
Based on the fundamental economic law of supply and demand, with a finite supply
of spectrum and an ever-increasing demand for that spectrum, the value of the spectrum has
risen and will continue to rise; and that rise will be directly related to the increasing demand.
As such, it is not surprising that all the radio frequency spectrum has been fully allocated for
a considerable period of time (since roughly 1980). Therefore, as new applications have been
created the difficulty in finding a place for these applications in the spectral domain has
greatly increased. For a time, there was enough available spectrum that each application
could be uniquely positioned, if not in an ideal spectral band, at least in a workable band.
Later, regulatory value judgments were required to determine that an existing application
should be retired, replaced, relocated, or have its spectral allocation reduced to make room
for a new high-priority application that could optimally operate in the spectrum used by the
existing application. This is obviously an extremely difficult task. In the United States, with both the NTIA and the FCC providing the allocations, this negotiation-based bureaucratic decision-making task is rendered even more challenging and time-consuming.
This increasing difficulty is a major driver of the two dominant trends that have
transformed the spectrum management landscape since the early 1980s. The first of these has
already been mentioned, namely, the move to government auctions, which use raw dollars (obviously coupled with public comment) as the dominant means of determining the most appropriate allocation of spectrum, or at least the block band assignments. This market-
based approach is currently being extended to allow entities that have obtained spectral
resources to resell or lease these resources to others on a long-term or temporal basis. By
using these methods the market should be able to “move” spectrum ownership to those users
and applications that value it the most rather than requiring the regulator to make a judgment
on the optimal use of the spectrum.
Regulators have also been setting aside increasing amounts of spectrum for
unlicensed access, sometimes also known as license-exempt spectrum or “spectrum
commons.” These bands are not licensed to any entity, but are available to all to use subject
to “rules of entry,” which typically restrict power levels in order to minimize the probability
of interference. Such bands have proven very valuable for uses such as WiFi, Bluetooth,
radio tagging (RFID), and a wide range of consumer products from garage door openers to
baby alarm monitors. The most widely used band for unlicensed activities is at 2.4 GHz, but
large new bands at 5-6 GHz are now available around the world for new applications.
The trend toward market-based spectrum management is also leading to the establishment of
quasi-autonomous entities set up at arm’s length from the government to manage the national
communications resource with such organizations as the Office of Communications in the
United Kingdom (or more commonly Ofcom) serving as a solid example. This organization
was created via the Office of Communications Act of 2002 and subsequently empowered via
the Communications Act of 2003, with the following general duties:
It shall be the principal duty of Ofcom, in carrying out their functions: (a) to further the
interests of citizens in relation to communications matters; and (b) to further the interests of
consumers in relevant markets, where appropriate by promoting competition.
This organization explicitly is responsible for regulating television, radio,
telecommunications, and various wireless services.
The other approach, dynamic spectrum access networks, together with its closely related cousin, cognitive access networks (whose primary embodiment is the cognitive radio), has created a push for regulatory approval enabling temporary (usually license-exempt) sharing
of spectrum along both spatial and, importantly, temporal dimensions between heterogeneous
users. This approach is based on the important observation that most of the spectrum, in most
of the places, most of the time is completely unused. Given the previous quadruple whammy
discussion, this is a rather amazing circumstance, but based on the measurements performed
in various spectrum occupancy studies, less than 20% of the spectral capacity is actually being
used, even in urban environments like Chicago and New York. These two seemingly disparate observations can be reconciled: although the quadruple whammy circumstance of crowded airwaves is a growing fact of life, there are many well-established wireless applications with long-term spectral band assignments that are rarely used or are used only in specific spatial locations. For example, ship-to-ship and ship-to-shore radios are rarely used in places like
Beijing, Denver, Madrid, or Mexico City since they are not located on major waterways.
Similarly, many military radios are used primarily for training on military bases except in
time of war or during war games, and their bands could be used for other purposes outside
these relevant geographies and times. The increasing recognition of this unused spectral
capacity has helped drive the various initiatives directed toward identifying and supporting
more efficient dynamic sharing mechanisms for the world’s scarce spectrum resources.
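As a purely illustrative sketch of how such occupancy figures can be computed (the data below is synthetic, not taken from any published study), a grid of binary sensing decisions can be averaged into per-channel duty cycles and an overall occupancy figure.

import numpy as np

rng = np.random.default_rng(1)
# decisions[t, f] = 1 if channel f was sensed as occupied during time slot t
decisions = (rng.random((1000, 50)) < 0.15).astype(int)   # synthetic 15% duty cycle

occupancy_per_channel = decisions.mean(axis=0)   # duty cycle of each channel
overall_occupancy = decisions.mean()             # fraction of all slots in use

print(f"Overall occupancy: {overall_occupancy:.1%}")
print(f"Busiest channel duty cycle: {occupancy_per_channel.max():.1%}")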
1.18 Regulations:
It can be argued that this trend to enable temporal as well as spatial spectrum sharing
took root in the United States in 1985, when in recognition of the growing spectral scarcity
challenge, the U.S. FCC modified its rules for the industrial, scientific, and medical band to
enable its use for wireless communication (FCC Rules, Part 15.247). This was the first of
three major initiatives to begin to address this critical issue. The other two have been the
allowance of ultra-wideband (UWB) underlays, based on an FCC Report and Order filed
February 14, 2002 (and released April 22, 2002), and even more recently, cognitive radio
overlays supported by another FCC Report and Order released March 11, 2005. Other nations
are following this trend, including the direction outlined in the highly regarded Spectrum
Framework Review produced by Ofcom in June 2005.