Module On Methods in Biology

The document outlines various methods for isolating and purifying nucleic acids (DNA and RNA) and proteins, detailing techniques such as cell lysis, chemical treatments, and centrifugation. It includes specific procedures for genomic DNA, RNA, and plasmid DNA isolation, as well as protein purification methods based on solubility, size, and charge. Additionally, it provides tables of the chemicals used in these processes along with their functions.

Index

1. MOLECULAR BIOLOGY AND RECOMBINANT DNA METHODS
2. HISTOCHEMICAL AND IMMUNOTECHNIQUES
3. BIOPHYSICAL METHODS
4. STATISTICAL METHODS
5. RADIOLABELLING TECHNIQUES
6. MICROSCOPIC TECHNIQUES
7. ELECTROPHYSIOLOGICAL METHODS
8. METHODS IN FIELD BIOLOGY


1. MOLECULAR BIOLOGY AND RECOMBINANT DNA METHODS

ISOLATION and PURIFICATION of DNA, RNA and PROTEINS

1. Introduction:

Every gene manipulation procedure requires genetic material such as DNA or RNA. In nature, nucleic acids occur in association with proteins and within lipoprotein organelles. Isolating any nucleic acid species therefore requires dissociating the nucleoprotein into its nucleic acid and protein moieties and then separating the two. Isolation is followed by quantification, usually by spectrophotometry or with fluorescent dyes, to estimate the amount and purity of the DNA or RNA in a mixture.
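For the spectrophotometric route, the standard conversion factors are about 50 µg/mL per A260 unit for double-stranded DNA and about 40 µg/mL per A260 unit for RNA, with the A260/A280 ratio serving as a purity check (~1.8 for pure DNA, ~2.0 for pure RNA). Below is a minimal Python sketch of these calculations; the function names and example readings are illustrative, not part of any particular instrument's software.

    def nucleic_acid_concentration(a260, dilution_factor=1, kind="dsDNA"):
        """Estimate concentration (ug/mL) from an A260 reading.

        Standard conversion factors: 1 A260 unit is ~50 ug/mL for dsDNA,
        ~40 ug/mL for RNA, ~33 ug/mL for ssDNA.
        """
        factors = {"dsDNA": 50.0, "RNA": 40.0, "ssDNA": 33.0}
        return a260 * factors[kind] * dilution_factor

    def purity_ratio(a260, a280):
        """A260/A280 ratio: ~1.8 suggests pure DNA, ~2.0 pure RNA."""
        return a260 / a280

    # Example readings (illustrative values only)
    print(nucleic_acid_concentration(0.25, dilution_factor=10))  # 125.0 ug/mL dsDNA
    print(round(purity_ratio(0.25, 0.135), 2))                   # 1.85, acceptably pure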

Three main processes are involved in isolating genetic material (DNA) from cells (bacterial, viral, plant, or animal), namely:

a. Rupturing the cell membrane to liberate the cellular components and DNA.
b. Separating the nucleic acids from the other cellular components.
c. Purifying the nucleic acids.

2. Isolation and Purification of Genomic DNA:

Genomic DNA, which is found in the nucleus of all living cells, retains the unaltered double-stranded helical structure. Genomic DNA is isolated differently from animal and plant cells: DNA is more difficult to isolate from plant cells than from animal cells because of the plant cell wall. The amount and purity of the extracted DNA depend on the nature of the cell.

The steps involved in isolating genomic DNA from a bacterium are as follows:

a. Bacterial culture growth and harvest: Bacterial cells are more convenient to culture than most other organisms because they simply require a liquid medium (broth) containing adequate quantities of the nutrients critical for cell growth and division.
i. Bacterial cells are frequently cultured in a complex medium, such as Luria-Bertani (LB), whose exact chemical composition is not defined.
ii. After centrifugation, the cells are collected and resuspended in 1% or less of the original culture volume.
b. Cell wall rupture and cell extract preparation: In addition to the plasma membrane, the bacterial cell is covered by an extra layer, the cell wall, which in E. coli and some other bacteria is multilayered. The following methods can be used to lyse the cell wall and liberate the genetic material, DNA:
i. Physical methods: mechanical force is applied, chosen according to the type of sample from which the DNA is being extracted. For hard tissues, beads are sometimes used to homogenize the tissue so that the cell walls rupture.
ii. Chemical methods: chemicals such as metal-chelating agents (e.g. EDTA), surfactants (e.g. SDS), and enzymes (e.g. lysozyme) are used to rupture the bacterial cell wall.


Table: Chemicals used in the isolation of DNA, along with their functions

Lysozyme: Catalyses the breakdown of the cell wall, i.e. the peptidoglycan layer, by cleaving the β(1-4) linkage.

EDTA (ethylenediaminetetraacetic acid): A chelating agent necessary for destabilizing the integrity of the cell wall; inhibits the cellular enzymes that degrade DNA; removes the Mg2+ ions required for cell integrity and enzyme functionality.

SDS (sodium dodecyl sulphate): Helps in the removal of lipid molecules and the denaturation of membrane proteins.

iii. DNA purification from the cell extract: A cell extract contains considerable amounts of protein and RNA in addition to DNA; it can be purified further by adding a 1:1 mixture of phenol and chloroform to the cell lysate to separate out the proteins.
o The proteins collect as a white mass between the aqueous phase, which contains the DNA and RNA, and the organic layer.
o Treating the lysate with pronase or another protease in addition to phenol/chloroform guarantees that all proteins are removed from the extract.
o RNA may be effectively eliminated with ribonuclease, an enzyme that rapidly breaks RNA into its ribonucleotide subunits.

Table: Chemicals used in the purification of DNA

Phenol: Unfolds and denatures the proteins so that they can be removed from the nucleic acid.

Chloroform: Improves the efficacy of phenol in denaturing proteins; enables clean separation of the organic and aqueous phases, keeping the DNA safely in the aqueous phase.

Isoamyl alcohol: Aids in the reduction of foaming at the interphase; stops the solution from emulsifying.

Ethanol: Helps in the precipitation of DNA.

iv. Concentration of the DNA solution: DNA can be concentrated using ethanol together with salts such as sodium acetate or potassium acetate. These salts provide metal ions such as sodium (Na+) and potassium (K+), which promote the aggregation, and hence the precipitation, of the DNA molecules.


Table: Buffer components along with their functions

Tris: Keeps the pH of the solution constant while simultaneously permeabilizing the cell membrane.

NaCl: Protects the DNA against denaturation.

MgCl2: Prevents the DNA from becoming mixed up with the contents of other cell organelles.

TE buffer: Dissolves the DNA.
Fig. Isolation and Purification of Genomic DNA

3. Isolation and Purification of RNA:

RNA (ribonucleic acid), found in living cells and many viruses, is a polymeric molecule made up of a long single-stranded chain of alternating phosphate and ribose units, with a nitrogenous base attached to each ribose sugar. Homogenization, phase separation, RNA precipitation, washing, and re-dissolving the RNA are the main steps in the RNA preparation process.

The most often used reagent is TRIzol Reagent, which is a ready-to-use reagent for isolating RNA from
cells and tissues. It operates by preserving RNA integrity while disrupting and breaking down cells and
cell components during tissue homogenization.

Table: Chemicals making up the composition of TRIzol Reagent

Guanidinium thiocyanate: A chaotropic agent. Chaotropes interfere with intermolecular interactions such as hydrogen bonding, van der Waals forces, and hydrophobic forces, denaturing proteins and thus disrupting the outer cell surface.

Phenol:chloroform:isoamyl alcohol (25:24:1): Phenol-chloroform drives the separation of the organic and aqueous phases, and isoamyl alcohol aids in the reduction of foaming at the interphase.

Sodium acetate: Provides Na+ ions that neutralize the negative charge on the RNA.

Glycerol: Protects the quality and purity of the nucleic acids from lipid contamination.

Isopropanol: Helps in the precipitation of RNA.

The following is the procedure for isolating and purifying RNA:

a. Organic extraction method: This method involves essentially the same chemical steps as the isolation and purification of DNA.

b. Direct lysis method: The disruption of the sample and stabilisation of nucleic acids is
accomplished using lysis buffer under specific circumstances. Samples can also be purified from
stabilised lysates if required. This approach minimises bias and recovery efficiency effects by
removing the necessity for binding and elution from solid surfaces.

4. Extraction of Plasmid DNA and Its Purification:

Samples such as bacteria or animal/plant tissues are collected and handled with great care. Plasmids from different bacteria differ in physical properties such as size (kbp), geometry, and copy number. Plasmid sizes generally range from about 1 kbp (kilobase pair) up to megaplasmids of many hundreds of kilobase pairs.

Plasmids are maintained at low, medium, or high copy number (copy number refers to the average or expected number of plasmid copies per host cell). A low copy number is associated with a low yield, so it might be necessary to set up more cultures. High-copy-number plasmids are segregated randomly into the daughter cells during cell division, whereas low-copy-number plasmids are partitioned actively so that the copies are divided equally between the daughter cells. A high copy number gives the plasmid greater stability when random partitioning occurs at cell division.


a. Process of plasmid DNA isolation:

i. For proper growth of the bacterial culture, the optimization process starts with preparation of the nutrient medium, i.e. Luria-Bertani (LB) medium (dissolve 10 g tryptone, 5 g yeast extract, and 10 g NaCl in 800 ml distilled water; add 1 N NaOH dropwise to obtain pH 7.0; make the final volume up to 1 litre with distilled water and sterilize by autoclaving).
ii. The cell cultures are incubated at 37 °C with constant shaking (200-250 rpm) overnight (12-16 h).
iii. The bacterium of interest is grown under aseptic conditions, and the process of lysis is then initiated.
iv. Lysis is performed by alkaline lysis (using the detergent sodium dodecyl sulfate (SDS) and the strong base sodium hydroxide), a method used in molecular biology to isolate plasmid DNA or other cell components such as proteins by breaking the cells open.
v. In this step most of the cells are disrupted, and chromosomal as well as plasmid DNA is denatured. The resulting lysate is then cleared by centrifugation, filtration, or magnetic clearing.
vi. The detergent disrupts the phospholipid bilayer of the membrane, and the alkali denatures the proteins involved in maintaining the structure of the cell membrane.
vii. Subsequently, neutralization with potassium acetate allows the covalently closed plasmid DNA to reanneal and stay in solution.
viii. A series of steps involving agitation, precipitation, centrifugation, and removal of the supernatant then clears away the cellular debris, after which a purified plasmid can be obtained.
ix. The purified plasmid DNA can be used immediately for restriction digestion, cloning, PCR, transfection, in vitro translation, blotting, and sequencing.
x. In a column-based protocol, the bacteria are resuspended in a resuspension buffer (50 mM Tris-Cl, 10 mM EDTA, 100 µg/ml RNase A, pH 8.0). After resuspension the culture is treated with 1% SDS (w/v)/alkaline lysis buffer (200 mM NaOH) to liberate the plasmid DNA from the E. coli host cells.
xi. After this, a neutralization buffer (3.0 M potassium acetate, pH 5.0) is used to neutralize the lysate.
xii. After centrifugation, the desired nucleic acids bind to the column, and impurities such as proteins and polysaccharides (in the case of animal tissues) or polysaccharides and pigments (in the case of plant tissues) are removed.
xiii. Washing is done by a two-step mechanism: first, a low concentration of chaotropic salts (salts that denature proteins by disturbing the hydrophobic interactions responsible for stabilizing protein structure) removes residual proteins and pigments, after which an ethanol wash is performed.
xiv. The columns contain a silica resin that binds DNA/RNA selectively.
xv. The DNA is then eluted with the help of a low-ionic-strength solution such as TE buffer or water.
xvi. The binding of DNA to silica is driven by dehydration and hydrogen-bond formation, which compete against the weak electrostatic repulsion.
xvii. A high concentration of salt helps drive the DNA to adsorb onto the silica, and a low concentration releases the DNA.
xviii. The precipitated protein, genomic DNA, and cell debris are removed by centrifugation, and the supernatant obtained is loaded onto the column.
xix. Contaminants such as salts, metabolites, and soluble macromolecular cellular components are removed with an ethanolic wash buffer (1.0 M NaCl, 50 mM MOPS, pH 7.0, 15% (v/v) isopropanol).
xx. Pure plasmid DNA is finally eluted under low-ionic-strength conditions with a slightly alkaline buffer (5 mM Tris/HCl, pH 8.5).

Fig. A schematic representation of the process of Plasmid DNA isolation


5. Protein Isolation and Purification Techniques:

a. Protein purification methods are based on the solubility, size, charge, and binding specificity of proteins. All of these factors depend on the properties of the constituent amino acids.
b. The molecular weight of a protein (MW, expressed in daltons or kilodaltons) is simply the sum of the masses of the individual amino acids composing it.
c. Protein-containing materials are complex, so before purification begins it is necessary to isolate the protein fraction from the bulk of the starting material.
d. Once the material containing the protein is obtained, it is ground to release the intracellular protein.
e. The grinding is done in the presence of a buffer and inhibitors to protect the desired protein from denaturation and degradation.
f. Once the material is suspended, it is centrifuged, e.g. at 500 rpm for 10 min, to separate the pellet and the supernatant (see the rpm-to-RCF sketch after this list).
g. After centrifugation, dialysis is performed to exchange the solvent around the protein.
h. The protein solution is placed inside a semi-permeable membrane, which is suspended in a larger volume of a buffered solution.
i. The membrane is permeable to water and small ions but not to proteins, so the desired protein stays inside the membrane while unwanted materials such as other ions and salts pass out and are removed.
j. The exchange takes place by diffusion and osmosis, and the dialysate (the surrounding buffer containing the removed impurities) is discarded. The process is repeated at least 4-5 times.
k. After dialysis, the components obtained are separated on the basis of

i) size, via gel filtration or size-exclusion chromatography
ii) charge, via ion-exchange chromatography
iii) affinity, via a specific binding interaction
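Centrifugation speeds quoted in rpm only translate into a force on the sample once the rotor radius is known; the comparable quantity is the relative centrifugal force, RCF = 1.118 × 10⁻⁵ × r × rpm², with r the rotor radius in cm. A minimal Python sketch of the conversion follows; the rotor radius used in the example is illustrative.

    def rcf_from_rpm(rpm, rotor_radius_cm):
        """Relative centrifugal force (multiples of g) for a given speed."""
        return 1.118e-5 * rotor_radius_cm * rpm ** 2

    def rpm_from_rcf(rcf, rotor_radius_cm):
        """Inverse relation: speed (rpm) needed to reach a target RCF."""
        return (rcf / (1.118e-5 * rotor_radius_cm)) ** 0.5

    # Example: 500 rpm in a rotor of radius 10 cm (illustrative radius)
    print(round(rcf_from_rpm(500, 10), 1))   # ~28 x g, a very gentle spin
    print(round(rpm_from_rcf(12000, 10)))    # ~10360 rpm for a 12,000 x g spin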

a. Gel Filtration or Size Exclusion:

i. Gel filtration, also called size-exclusion chromatography, is useful for protein and DNA purification, buffer exchange, desalting, and separating the components of a mixture.
ii. The method separates molecules of different molecular sizes under mild conditions: the large molecules travel out of the column with the void volume, while the smaller molecules, salts, etc. are retained longer in the column.
iii. Gel filtration columns are also used to
● remove unincorporated nucleotides during DNA sequencing
● separate free low-molecular-weight labels
● terminate reactions between macromolecules and low-molecular-weight reactants
● remove products, cofactors, or inhibitors from enzymes
● remove unreacted radiolabels such as [α-32P]ATP from nucleic acid labelling reactions


Fig. Representation of Gel Filtration or Size Exclusion


b. Ion Exchange Chromatography:
i. Ion-exchange chromatography separates ionizable molecules on the basis of their total charge.
ii. It is a highly reliable technique because it enables the separation of similar types of molecules that would be difficult to separate by other techniques.
iii. Ion exchangers are of two types - anion and cation exchangers.
● Anion Exchanger
An anion exchanger binds the anionic (negatively charged) proteins in the mixture; before elution begins, all positively charged and uncharged proteins pass straight through the column.
● Cation Exchanger
A cation exchanger binds the cationic (positively charged) proteins in the solution; before elution, all negatively charged and uncharged proteins fall through the column.

Fig. Schematic representation of Ion Exchange chromatography


GEL ELECTROPHORESIS

In an electric field, a molecule with a net charge will move. Electrophoresis, a technique for separating proteins and other macromolecules such as DNA and RNA, is therefore a powerful tool. The velocity of migration v of a protein (or any molecule) in an electric field depends on the electric field strength E, the net charge on the protein z, and the frictional coefficient f. The electric force Ez driving the charged molecule toward the oppositely charged electrode is opposed by the viscous drag fv arising from friction between the moving molecule and the medium. The frictional coefficient f is determined by the mass and shape of the migrating molecule and the viscosity η of the medium:

v = Ez / f

For a sphere of radius r,

f = 6πηr
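These two relations can be put to work numerically. Here is a minimal Python sketch; all the parameter values in the example (particle radius, charge, field strength) are illustrative, not taken from the text.

    import math

    def frictional_coefficient(radius_m, viscosity_pa_s):
        """Stokes frictional coefficient f = 6*pi*eta*r for a sphere."""
        return 6 * math.pi * viscosity_pa_s * radius_m

    def migration_velocity(field_v_per_m, charge_c, f):
        """Electrophoretic velocity v = E*z / f."""
        return field_v_per_m * charge_c / f

    # Illustrative numbers: a ~3 nm protein carrying a net charge of -10e
    # in water (viscosity ~1 mPa.s), in a field of 1000 V/m.
    e = 1.602e-19                                  # elementary charge, C
    f = frictional_coefficient(3e-9, 1.0e-3)
    v = migration_velocity(1000, 10 * e, f)
    print(f"{v:.2e} m/s")                          # ~2.8e-05 m/s, i.e. tens of um/s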

1. Instrumentation of electrophoresis:
a. Electrophoresis requires only two pieces of equipment: a power supply and an electrophoresis unit.
b. Electrophoresis units are available for both vertical and horizontal gel systems. Commercially available vertical slab gel units are commonly used to separate proteins in acrylamide gels.
c. The gel is formed between two glass plates that are clamped together but held apart by plastic spacers.
d. Gels are usually 12 cm × 14 cm in size, with a thickness of 0.5 to 1 mm.
e. To provide loading wells for samples, a plastic comb is placed in the gel solution and removed after polymerization.
f. When the unit is assembled, the buffer in the lower electrophoresis tank surrounds the gel plates and provides some cooling.
g. For flat-bed systems, the gel is poured onto glass or plastic plates resting on a cooling plate (an insulated surface through which cooling water is passed to conduct away the heat generated).
h. A thick wad of wetted filter paper connects such a gel to the electrode buffer, whereas agarose gels for DNA electrophoresis are run submerged in the buffer. The power pack supplies a direct current to the electrodes of the electrophoresis unit.
i. Each electrophoresis run is done in a specific buffer, which is required to keep the ionization state of the molecules being separated constant.
j. Any change in pH would alter the total charge of the molecules being separated, and hence their mobilities (rate of migration in the applied field).


2. Types of support media used in electrophoresis:

Filter paper or cellulose acetate strips were the first supports used in electrophoresis, and they were
wetted with electrophoresis solution. These media are no longer in use. Nowadays, polyacrylamide
gels or agarose gels are utilized.

a. Agarose gel:
i. Agarose is a linear polysaccharide with an average molecular weight of about 12,000 Da, made up of a basic repeat unit containing alternating units of galactose and 3,6-anhydrogalactose.
ii. It's one of the ingredients in agar, which is a polysaccharide mixture made from seaweeds.
It's utilized in concentrations ranging from 1% to 3%.
iii. Dry agarose is suspended in an aqueous buffer, which is then boiled until it creates a clear
solution, which is then poured and allowed to cool to room temperature to form a hard gel.
iv. Intermolecular and intramolecular H-bonding inside and between long agarose chains are
responsible for the gelling capabilities. The starting concentration of agarose controls the
pore size of the gel; a large pore size correlates to a low concentration, and vice versa.
v. Although the alternate sugar residues are devoid of charges, the substitution of carboxyl,
methoxyl, pyruvate, and sulfate groups occurs to variable degrees, which can cause
electroendosmosis during electrophoresis.
vi. Agarose is therefore graded according to its sulfate concentration; the lower the sulfate
concentration, the higher the purity.
vii. These gels are utilized for both protein and nucleic acid electrophoresis.
viii. In comparison to the sizes of proteins, the pore size of a 1 percent agarose gel is enormous.
ix. As a result, it's commonly utilized in procedures like immune-electrophoresis and flat-bed
isoelectric focusing, in which proteins must move freely in the gel matrix according to their
inherent charge.
x. Because the pore sizes are large enough for RNA and DNA molecules to pass through the gel, such large-pore gels are also employed to separate much larger molecules such as RNA and DNA.
xi. A further advantage of agarose is the availability of low-melting-point agarose (62-65 °C).

xii. This gel may be reliquefied by heating to 65°C, allowing DNA samples to be cut out of the
gel, returned to the solution, and recovered, for example.

b. Polyacrylamide gel:
i. Polymerization of acrylamide monomer in the presence of a small amount of N,N′-methylene-bis-acrylamide (bis-acrylamide) results in a cross-linked polyacrylamide gel.
ii. Bis-acrylamide is a cross-linking agent comprising two acrylamide molecules connected by a methylene group.
iii. Acrylamide monomers are polymerized into long chains in a head-to-tail fashion, and the cross-linker introduces a second site for chain extension.
iv. As a result, a cross-linked matrix with a rather well-defined structure emerges.
v. The polymerization of acrylamide is initiated by the addition of ammonium persulphate and the base TEMED (N,N,N′,N′-tetramethylethylenediamine); this is an example of free-radical catalysis.
vi. TEMED promotes decomposition of the persulphate ion to produce a free radical:

S2O8(2-) + e- → SO4(2-) + SO4(-•)

If R• is the free radical and M an acrylamide monomer, chain growth proceeds as:

R• + M → RM•
RM• + M → RMM•
RMM• + M → RMMM• and so on...
vii. Alternative methods for polymerizing acrylamide gels include photopolymerization.
viii. The ammonium persulphate and TEMED are replaced with riboflavin, and after the gel is poured it is placed in front of a bright light for 2-3 hours.
ix. Riboflavin photodecomposition produces a free radical, which initiates polymerization.
x. Acrylamide gels are characterized by the overall amount of acrylamide present, and the pore
size of the gel can be adjusted by adjusting the acrylamide and bis-acrylamide
concentrations.

3. Different Types Of Electrophoresis:


a. Electrophoresis- SDS-PAGE:
i. There are several different types of electrophoresis. SDS-PAGE (Sodium Dodecyl Sulphate-
Polyacrylamide Gel Electrophoresis) is the most widely used method for qualitatively
analyzing protein mixtures, especially useful for monitoring protein purification.
ii. Because the method is based on the separation of proteins by size, it can also be used to
determine the relative molecular mass of proteins.
iii. SDS is an anionic detergent (CH3-(CH2)10-CH2OSO3-Na+). SDS-PAGE samples are boiled for 5 minutes in a sample buffer containing beta-mercaptoethanol and SDS.
iv. Mercaptoethanol breaks down the disulphide bridges that hold the protein's tertiary structure together; SDS binds along the polypeptide, causing it to open up into a rod-shaped structure with a series of negatively charged SDS molecules running along the chain.
v. The negatively charged SDS molecules completely dominate the original native charge on the molecule.
vi. Any rotation that tends to fold up the protein chain results in repulsion between negative charges on different portions of the chain, causing the conformation to revert to the rod shape.
vii. The sample buffer also contains an ionizable tracking dye, commonly bromophenol blue, for
examining the electrophoretic run, as well as sucrose and glycerol for sample solution
density.
viii. When the sample is injected into the loading well, it settles gently through the
electrophoresis buffer to the bottom.
ix. The material to be separated is put into a smaller (about 1cm) stacking gel that is poured on
top of the main separating gel.
x. Before entering the main separating gel, the protein sample is concentrated into a sharp
band on the stacking gel.
xi. The pore size of the stacking gel (4 percent acrylamide) is quite large, allowing the proteins
to move freely and concentrate or stack under the influence of the electric field.


xii. The band-sharpening effect arises because the negatively charged glycinate ions (from the electrophoresis buffer) have a lower electrophoretic mobility than the protein-SDS complexes, which in turn have a lower mobility than the chloride ions (Cl-) of the loading buffer and stacking gel.
xiii. When the current is turned on, all of these ionic species must migrate at the same pace, or the electrical circuit would be broken. Glycinate ions can move at the same speed as Cl- only if they are in a region of higher field strength.
▪ The negatively charged protein-SDS complexes are now moving towards the anode,
and because they have the same charge per unit length, they travel into the separating
gel with the same mobility under the applied electric field.
▪ However, the proteins separate as they move through the separating gel due to the
molecular sieving capabilities of the gel.
xiv. Simply said, the smaller the protein, the easier it is for it to move through the pores of the
gel, but large proteins are gradually slowed down by frictional resistance due to the sieving
effect of the gels.
xv. Because the bromophenol blue dye is a tiny molecule, it is completely unretarded and hence
indicates the electrophoresis front.
xvi. When the dye reaches the bottom of the gel, the current is switched off, and the gel is
removed from between the glass plates and agitated for a few hours in a suitable stain
solution (typically Coomassie brilliant blue), then rinsed in destain solution overnight.
xvii. The destain solution removes unbound background dye from the gel, revealing stained
proteins as blue bands against a clean backdrop.
xviii. A typical gel would take 1 to 1.5 hours to mix and set, 3 hours to run at 30mA, and 2-3 hours
to stain before destaining overnight.
xix. A typical separating gel consists of 15% polyacrylamide.
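Because migration distance in SDS-PAGE is approximately a linear function of the logarithm of molecular mass, the relative molecular mass of an unknown protein can be read off a calibration curve built from marker proteins run on the same gel. Below is a minimal Python sketch of that calibration; the marker masses and migration distances are invented for illustration.

    import math

    def fit_log_mw_calibration(distances_mm, masses_kda):
        """Least-squares fit of log10(MW) against migration distance."""
        n = len(distances_mm)
        ys = [math.log10(m) for m in masses_kda]
        mean_x = sum(distances_mm) / n
        mean_y = sum(ys) / n
        slope = (sum((x - mean_x) * (y - mean_y)
                     for x, y in zip(distances_mm, ys))
                 / sum((x - mean_x) ** 2 for x in distances_mm))
        return slope, mean_y - slope * mean_x

    def estimate_mw(distance_mm, slope, intercept):
        """Molecular mass (kDa) of a band from its migration distance."""
        return 10 ** (slope * distance_mm + intercept)

    # Hypothetical marker ladder: migration distance (mm) vs mass (kDa)
    slope, intercept = fit_log_mw_calibration([12, 20, 30, 41, 52],
                                              [97, 66, 45, 30, 20])
    print(round(estimate_mw(35, slope, intercept), 1))  # mass of a band at 35 mm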

Fig. Pictorial representation of PAGE electrophoresis


b. Iso-electric focusing gel (IEF):

i. Because it is based on the separation of molecules according to their various isoelectric


points, it is suitable for the separation of amphoteric substances such as proteins. It has a
high resolution, allowing it to distinguish proteins with isoelectric points that differ by as
small as 0.01 pH unit.
ii. Horizontal gels on glass plates or plastic sheets are the most frequently utilized method for
IEF.
iii. Separation is accomplished by applying a potential difference across a gel with a pH gradient
created by the addition of ampholytes (complex mixtures of synthetic polyamino-
polycarboxylic acids).
iv. Ampholytes come in a variety of pH ranges (from 3 to 10) and are chosen such that the
material being separated has an isoelectric point within that range.
v. Bio-Lyte and pharmalyte are two commercially available ampholytes.
vi. Traditionally, 1-2mm thick isoelectric focusing gels were employed, but currently, thin layer
IEF gels with a thickness of just 0.15mm are utilized, with a layer of electrical insulating tape
serving as a spacer between the gel plates.
vii. This approach requires the proteins to move freely in response to their charge in the presence of the electric field.
viii. IEF is therefore run in low-percentage gels to avoid any sieving effect within the gel.
ix. For the investigation of high relative molecular mass proteins that may undergo
considerable sieving even in a low percentage acrylamide gel, a 4 percent polyacrylamide
gel is commonly employed, but agarose is also used.
x. Ampholytes, gel material, and riboflavin are mixed together and poured over a glass plate
to make a thin layer of IEF gel (25cmX10cm).
xi. The second glass plate is then put on top of the first to form a gel cassette, and the gel
polymerization is completed by placing the gel in front of strong light for
photopolymerization.
xii. This takes about 2-3 hours.
xiii. The glass plates are separated once the gel has been set to expose the gel adhered to one
of the glass sheets.
xiv. A potential difference is applied via electrode wicks, 3 mm strips of wetted filter paper (soaked in phosphoric acid at the anode and sodium hydroxide at the cathode), placed along the full length of either side of the gel.
xv. The ampholytes produce a pH gradient between the anode and the cathode as a result of
this voltage differential.
xvi. After that, the power is switched off, and the samples are applied by putting tiny squares of
filter paper soaked in the sample on the gel.
xvii. After around 30 minutes, a voltage is given again to allow the sample to electrophorese off
the paper into the gel, at which point the paper squares can be withdrawn from the gel.
xviii. Proteins that are originally at pH areas below their isoelectric point will be +vely charged
and migrate towards the cathode, depending on where on the pH gradient the sample was
loaded.


xix. As they go, the pH of the surrounding environment rises continuously, and the +ve charge
on the protein falls, until the protein reaches a Zwitter ion form with no net charge, at which
point the movement stops.
xx. Proteins that are negatively charged at pH levels above their iso-electric point move towards
the anode until they reach their isoelectric points and become stationary. High voltage (up
to 2500V) is utilized to produce fast separation (2-3 hours).
xxi. As a result, the gel is rinsed with a fixing solution (10 percent v/v trichloroacetic acid), which
precipitates the proteins and enables the removal of considerably smaller ampholytes.
xxii. After that, the gel is dyed with Coomassie brilliant blue before being destained.
xxiii. A combination of proteins with known isoelectric points can be run on the same gel to
determine the pI of a specific protein.
xxiv. Following staining, the distance of each band from the electrode is measured, a graph of distance versus pI is plotted for each marker protein, and the pI of an unknown protein can be calculated from this calibration line (a small sketch of the calibration follows below).
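Here is a minimal Python sketch of that pI calibration, using an invented set of marker proteins; the distances and pI values are illustrative only.

    def fit_pi_calibration(distances_mm, pis):
        """Least-squares line for pI as a function of distance from the electrode."""
        n = len(distances_mm)
        mean_x = sum(distances_mm) / n
        mean_y = sum(pis) / n
        slope = (sum((x - mean_x) * (p - mean_y)
                     for x, p in zip(distances_mm, pis))
                 / sum((x - mean_x) ** 2 for x in distances_mm))
        return slope, mean_y - slope * mean_x

    # Hypothetical pI markers: distance from the anode (mm) vs known pI
    slope, intercept = fit_pi_calibration([10, 35, 60, 85], [3.5, 5.2, 7.0, 8.8])
    unknown_pi = slope * 48 + intercept    # band of the unknown protein at 48 mm
    print(round(unknown_pi, 2))            # estimated pI of the unknown protein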

Figure. IEF working principle


c. Cellulose acetate electrophoresis:

i. One of the more traditional techniques, it has a variety of uses, including the clinical
examination of blood samples. It has an advantage over the paper in that it is a lot more
homogenous medium with consistent pore size and does not absorb proteins as well.
ii. Single samples are typically run on cellulose acetate strips (2.5 X 12 cm), which are far easier
to set up and run, however, numerous samples are commonly run on broader sheets.
iii. The strip is first wetted in electrophoresis buffer (pH 8.6 for serum), and 1 to 2 µL of the sample is loaded.
iv. A filter paper wick connects the strip's end to the electrophoresis buffer tanks, and
electrophoresis is performed at 6-8 V/cm for roughly 3 hours.
v. The strip is stained for protein, destained, and the band seen after electrophoresis.
vi. A normal serum protein separation displays about six main bands. This serum protein profile changes in disease states, and a clinician can obtain information about a patient's illness from the altered pattern.

d. Pulsed-field gel electrophoresis (PFGE):

i. Because PFGE can separate DNA fragments of up to 2 × 10^3 kb, it may be used to separate entire chromosomes by electrophoresis.
ii. The fundamental method is electrophoresis in agarose with two electric fields applied alternately at different angles for set time intervals (e.g. 60 s).
iii. When the first electric field is applied, the coiled molecules stretch in the horizontal plane and begin to travel through the gel.
iv. When this field is interrupted and the second field is applied, the molecule is forced to move in the new direction.
v. There is a size-dependent relaxation behaviour when a long-chain molecule undergoes conformational change in an electric field, so the smaller the molecule, the faster it re-aligns itself with the new field and is able to continue moving through the gel.
vi. The re-alignment of larger molecules takes longer. Smaller molecules pull ahead of bigger
molecules when the field reverses, causing them to segregate according to size.

e. Agarose gel electrophoresis of DNA:

i. Because most DNA fragments would be unable to penetrate a polyacrylamide gel, an


agarose gel with a greater pore size is necessary for the separation of DNA molecules, which
are much bigger than proteins.
ii. Because the charge per unit length (due to the phosphate groups) is the same in any given fragment of DNA, under an applied electric field all DNA samples should migrate towards the anode with the same mobility.


iii. In an agarose gel, separation is accomplished due to the gel matrix's resistance to their
movement.
iv. The biggest molecules will have the greatest trouble passing through the gel pores, whilst
tiny molecules would flow through quite easily.
v. As a result, DNA molecule mobility during gel electrophoresis will be determined by their
size, with the smallest molecule moving the quickest.
vi. 0.3 percent agarose gels are used to separate DNA molecules between 5 and 60 kb, whereas 2 percent gels are utilized for samples between 0.1 and 3 kb.
vii. 0.8 percent gels are routinely used to separate DNA molecules with sizes ranging from 0.5 to 10 kilobases (a small helper sketch follows this list).
viii. DNA gels are run in horizontal, "submarine" (submerged) mode, the gel being completely immersed in buffer.
ix. Agarose, dissolved in gel buffer by boiling, is poured onto a glass or plastic plate surrounded by adhesive tape or a plastic frame to form a gel about 3 mm deep.
x. Placing a plastic well-forming template or comb in the poured gel solution creates loading
wells, which are then removed once the gel has set.
xi. The gel is put in an electrophoresis tank, coated with buffer, and the sample is fed into the
wells directly.
xii. Samples are prepared by dispersing them in a buffer solution containing sucrose, glycerol, or Ficoll, which gives the solution density and allows it to settle to the bottom of the well.
xiii. The loading fluid also contains a dye, such as bromophenol blue, to monitor sample loading and serve as a marker for the electrophoresis front.
xiv. Since the mobility of DNA molecules in the well is significantly higher than in the gel, most molecules accumulate against the gel entrance within a few moments of the current being switched on, producing a tight band at the start of the run; no stacking gel is therefore required for DNA electrophoresis.
xv. The DNA in the gel must be dyed and viewed once the system has been run.
xvi. The fluorescent dye ethidium bromide is the most commonly used reagent.
xvii. The DNA bands glow orange-red (binds with/intercalates between the stacked base pairs of
DNA) after being gently washed in an ethidium bromide solution and examined under UV
light (300nm).
xviii. Prolonged exposure of DNA to UV light can damage it by nicking and base-pair dimerization. Very small amounts of DNA (on the order of 10 ng) can be seen as a 1 cm wide band.
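As a rough guide to the inverse relationship between agarose concentration and the size range separated (items vi-vii above), here is a small Python sketch; the cut-off values simply restate the ranges quoted in the text, and the overlap between ranges is resolved in favour of the routine 0.8% gel.

    def suggest_agarose_percent(fragment_kb):
        """Suggest an agarose concentration (% w/v) for a DNA fragment size,
        restating the ranges quoted in the text."""
        if 0.5 <= fragment_kb <= 10:
            return 0.8   # routine separations: 0.5-10 kb
        if fragment_kb > 10:
            return 0.3   # large fragments: 5-60 kb on low-percentage gels
        if fragment_kb >= 0.1:
            return 2.0   # small fragments: 0.1-3 kb on high-percentage gels
        raise ValueError("below 0.1 kb: consider a polyacrylamide gel instead")

    for size_kb in (0.2, 4, 30):
        print(size_kb, "kb ->", suggest_agarose_percent(size_kb), "% agarose")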


Fig. Pictorial representation of AGE

f. Two-dimensional polyacrylamide gel electrophoresis (2D-PAGE):

i. 2D-PAGE combines IEF, which separates the proteins in a mixture according to charge (pI), with the size-based separation of SDS-PAGE.
ii. When these two techniques are combined, the result is the highest-resolution analytical approach for separating proteins currently available.
iii. The first dimension (IEF) is carried out in polyacrylamide gels cast in narrow tubes (internal diameter 1-2 mm) in the presence of ampholytes, 8 M urea, and a non-ionic detergent.
iv. The gel is then gently extruded from the tube, incubated for 15 minutes in a solution containing SDS, placed on top of the stacking gel of an SDS-PAGE slab gel, and set in place by pouring molten agarose in electrophoresis buffer over it.
v. After the agarose has set, electrophoresis begins and the SDS-bound proteins run into the gel, where they are separated by size.
vi. 2D-PAGE can routinely resolve 1000 to 2000 proteins from a whole-cell or tissue extract.


Fig. Pictorial representation of 2D-Gel Electrophoresis


RECOMBINANT DNA TECHNOLOGY

rDNA: An artificially created DNA molecule that combines DNA sequences not usually found together
in nature. rDNA technology has revolutionized research in the field of medicine and agriculture.

Fig. A typical rDNA technology roadmap


1. Requirements for recombinant DNA technology:

a. Enzymes: restriction endonucleases, ligases, polymerases

b. Vector System: plasmid

c. Gene of Interest: insert

d. Cloning system: PCR, DNA Libraries

e. Experimental Organism: E. coli host

a. Nucleic Acid Manipulating Enzymes

i. Nucleases
o Deoxyribonucleases
o Ribonucleases
ii. Ligases
iii. Polymerases
o Terminal deoxynucleotidyl transferase
iv. Nucleic Acid Modifying Enzymes
o Alkaline phosphatase
o Polynucleotide kinase

Some prominent nucleases used in RDT:

i. DNase I - used for degradation of the DNA template in transcription reactions.
ii. Micrococcal nuclease - a non-specific endonuclease that digests double-stranded, single-stranded, circular, or linear nucleic acids; useful for degrading nucleic acids in protein preparations.
iii. Mung bean nuclease - a single-strand-specific nuclease used for removal of 5′ or 3′ extensions from DNA or RNA termini and for the generation of new restriction sites.
b. Restriction Endonucleases:
Primarily found in bacterial genomes and plasmids, and also in archaea, viruses, and eukaryotes. Certain bacterial strains are a rich source of more than one restriction enzyme, e.g. Neisseria and Helicobacter pylori. More than 3,500 restriction enzymes that recognize 259 different DNA sequences are known (Williams R. J., 2003); the majority of these are Type II.
They are so named because they restrict or prevent viral infection by degrading the invading nucleic acid. Host DNA is protected from restriction endonucleases because it is modified by methylases.
They are classified on the basis of:
a. structure or composition
b. recognition site and cleavage site
c. cofactor and activator requirements
There are four types: Type I, Type II, Type III, and Type IV.

Table: The four types of restriction endonuclease

Type I (the first discovered)
Subunit structure: one enzyme with three subunits, one each for recognition, methylation, and cleavage (HsdS, HsdM, and HsdR; the active enzyme has the structure HsdM2HsdR2HsdS).
Cofactors and activators: Mg2+, AdoMet (SAM), ATP.
Recognition site: interrupted, bipartite, asymmetrical (one part containing 3 nt and the other 4-5 nt, separated by a spacer of about 6-8 nt).
Cleavage site: distant (>1000 bp) from the recognition site and variable. The enzyme interacts with two asymmetrical bipartite recognition sites, translocates the DNA in an ATP-hydrolysis-dependent manner, and cuts the DNA distal to the recognition sites (approximately half-way between the two sites). Example, EcoKI:
AAC(N6)GTGC(N>400)↓
TTG(N6)CACG(N>400)↑

Type II (the most commonly used)
Subunit structure: simple subunit organization, usually homodimeric or homotetrameric enzymes. Two different enzymes are involved, one for recognition and cleavage (a homodimer) and another for methylation (a monomer).
Cofactors and activators: Mg2+ (with one exception).
Recognition site: palindromic, undivided, 4-8 nt in length.
Cleavage site: defined, within the recognition site; cleavage may result in cohesive (3′-overhang or 5′-overhang) or blunt ends. Example, EcoRI:
G↓A A T T C
C T T A A↑G

Type III
Subunit structure: one enzyme with two different subunits, one for recognition and modification (Mod) and the other for cleavage (Res); the active nuclease has Mod2Res2 stoichiometry.
Cofactors and activators: require Mg2+ and ATP; stimulated by AdoMet.
Recognition site: non-palindromic.
Cleavage site: cuts approximately 25 bases downstream of the recognition site. The enzyme interacts with two head-to-head arranged asymmetrical recognition sites, translocates the DNA in an ATP-hydrolysis-dependent manner, and cuts the DNA close to one recognition site. Example, EcoP15I:
CAGCAG(N)25-26↓
GTCGTC(N)25-26↑

Type IV
Subunit structure: heterotrimer (2 R-M subunits, 1 S subunit).
Cofactors and activators: Mg2+, AdoMet, GTP.
Recognition site: interrupted bipartite (two separated, non-palindromic sequences that are inversely oriented).
Cleavage site: cuts both strands on both sides of the recognition site, a defined, symmetric, short distance away, leaving 3′ overhangs. These enzymes generally recognize and cleave methylated DNA; McrBC is a widely used example. Example, BcgI:
↓10(N)CGA(N)6TCG(N)12↓
↑12(N)GCT(N)6ACG(N)10↑
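To illustrate how a Type II palindromic site such as the EcoRI sequence above is located in practice, here is a minimal Python sketch of a recognition-site scanner; the example sequence is invented, and the cut_offset parameter is a hypothetical convenience for modelling where the top strand is nicked.

    def find_sites(sequence, recognition="GAATTC", cut_offset=1):
        """Return (site position, top-strand cut position) pairs for each
        occurrence of a recognition sequence (cut_offset=1 models G^AATTC)."""
        sequence = sequence.upper()
        hits, start = [], 0
        while (i := sequence.find(recognition, start)) != -1:
            hits.append((i, i + cut_offset))
            start = i + 1
        return hits

    # Invented plasmid fragment containing two EcoRI sites
    seq = "ttgcGAATTCaaacgtagcgaGAATTCcgt"
    for site, cut in find_sites(seq):
        print(f"site at {site}, top strand cut after position {cut}")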

Vectors:
In recombinant DNA technology, a vector is a DNA molecule that acts as a transport vehicle, carrying foreign DNA into a host for cloning and expression purposes. An ideal vector has some important features:
a. the ability to replicate in the host
b. for insertional cloning, a unique restriction site
c. genetic markers for selecting host cells that contain the vector
d. a low molecular weight

Two types of vector are generally used: the cloning vector and the expression vector.

a. Cloning vectors

Cloning vectors are used when a large number of copies of the gene of interest is required; they are mostly used to construct gene libraries. Cloning vectors come from many organisms: some, such as YACs (yeast artificial chromosomes) and BACs (bacterial artificial chromosomes), are created synthetically, while many others are taken from bacteria and bacteriophages. In all cases, the vector must be genetically modified in order to accommodate the foreign DNA, by creating an insertion site where the new DNA will fit. Examples: pUC cloning vectors, pBR322 cloning vectors, etc.

b. Expression Vectors

Expression vectors are used when the desired protein product of the gene of interest is needed. To obtain the protein, the system must allow expression of the gene of interest, i.e. both transcription and translation. Apart from the origin of replication, selectable markers, and multiple cloning sites, expression vectors carry special additional sequences:

i. A bacterial promoter placed upstream of the restriction site where the foreign DNA is to be inserted; this allows transcription of the foreign DNA, and the process can be regulated by adding a substance that induces the promoter.
ii. A DNA sequence that, when transcribed into RNA, produces a prokaryotic ribosome-binding site.
iii. Prokaryotic transcription initiation and termination sequences.
iv. Sequences that are able to control transcription initiation, such as regulator genes and operators.

a. Bacterial Vectors:

i. Plasmids are naturally occurring circular DNA molecules with the power of self-replication; they are present in many prokaryotes and also in a few eukaryotes. A widely used example of a plasmid vector is pBR322, which replicates in E. coli. Plasmids range in size from about 1 kb to over 300 kb. Plasmids provide a simple way of cloning small DNA fragments; an ideal plasmid has a small size, a high copy number, and its own origin of replication.

Fig. Plasmid Vector


Fig. Strategy for cloning into a plasmid vector

ii. Lambda phage vectors: these are viral vectors in which the gene of interest is incorporated into the viral genome, exploiting the efficiency of viral infection. (M13-based vectors are a related example of phage-derived vectors.)

Fig. Lambda Phage vector


iii. Cosmids are hybrid vectors of plasmid and lambda phage: they replicate in the cell like a plasmid but can be packaged like phage.

iv. Fosmids: these vectors contain the F-plasmid origin of replication and carry a lambda cos site. They are similar to cosmids but have a low copy number.

v. Phagemids: phagemid vectors are plasmid DNA containing a small segment of the genome of a filamentous phage such as M13, fd, or f1.

Fig. Phagemid vector


vi. PAC and P1 vectors: the P1 bacteriophage has a larger genome than lambda phage, and P1 vectors are designed by incorporating the important replication components of P1 into a plasmid. P1-derived artificial chromosomes are known as PACs.

a. Yeast Vectors:

The yeast S. cerevisiae has a genome of approximately 2 × 10^7 bp comprising 16 linear chromosomes, and some strains possess a natural plasmid.

i. Yeast episomal plasmids (YEp): these vectors are based on an endogenous yeast plasmid, which contains the information for its own replication and segregation.

ii. Yeast integrative plasmids (YIp): these are mainly bacterial plasmids that carry yeast genes. They lack a yeast origin of replication.

iii. Yeast replicative plasmids (YRp): these vectors contain autonomously replicating sequences (ARS) of chromosomal origin.

iv. Yeast centromeric plasmids (YCp): these vectors contain both an ARS and a yeast centromere.

v. Yeast Artificial Chromosome (YAC): a YAC allows cloning within the yeast cell. A YAC is essentially pBR322 into which several yeast elements have been inserted; the essential components of a YAC are a centromere, telomeres, and autonomously replicating sequence (ARS) elements.

b. Plant Vectors:

Plant vectors are based on either plasmids or viral genomes.

i. Plasmid-based vectors

The Ti and Ri plasmids of Agrobacterium species are used as vectors for plants: vectors have been constructed from two bacteria, with the Ti plasmid present in A. tumefaciens and the Ri plasmid present in A. rhizogenes.

Gene — Product — Function
ocs — octopine synthase — opine synthesis
nos — nopaline synthase — opine synthesis
tms1 — tryptophan-2-monooxygenase — auxin synthesis
tms2 — indoleacetamide hydrolase — auxin synthesis
tmr — isopentenyl transferase — cytokinin synthesis
ags — agropine synthase — opine synthesis

Table: T-DNA genes in Ti plasmids


Fig. Ti plasmid vector in plants

c. Viral vectors:

Some plant viruses have also been explored as plant vectors, but a major problem is that most plant viruses have RNA genomes. Two classes of plant virus that do have DNA genomes, the caulimoviruses and the geminiviruses, are also used as vectors.

d. Vectors for Animals:

i. For insects: in Drosophila, the P element (a transposon) is used as a vector. Baculovirus is also used as a vector in several insects.

ii. For mammals: many viruses, such as SV40 and papillomavirus, are used in cloning vectors for mammals; at present retroviruses are also used.

Major Cloning Vectors: A Comparison


DNA Library

a. A genomic library can be considered a collection of recombinant bacteriophages, each containing a portion of the cellular DNA of a foreign organism.
b. It contains the whole genome in the form of fragments. Bacteriophages are often used to clone genomic DNA fragments because:
i. their genomes are comparatively bigger than plasmids;
ii. they can be engineered to remove a large amount of DNA that is not relevant for infection and replication in bacterial host cells;
iii. the portion of missing DNA can be replaced by foreign DNA fragments (18-20 kbp);
iv. infectious phage particles that can infect host bacteria can be prepared by packaging the recombined phage DNA;
v. the phage replicates into numerous new recombinant phages and then lyses the cells to release them.
c. Dividing the mammalian genome (about 2 billion base pairs) by a small insert size (about 1,000 base pairs) gives roughly 2 million clones — the minimum number of phage clones that would have to be screened to find a sequence of interest (see the library-size sketch after this list).
d. To accomplish this, the common strategy is to genetically engineer a cloning vector: determine the minimum properties the vector must have and remove the non-essential DNA sequences.
e. For example, the yeast artificial chromosome (YAC), hosted by (replicated in) yeast cells, can carry very large foreign DNA inserts. This is because a chromosome that will replicate in a yeast cell requires only one centromere and two telomeres.
f. The telomeres and centromere play a vital role during replication: the telomeres prevent the chromosome from shortening during replication of the DNA, and the centromere attaches the chromatids to spindle fibres so that they can separate during anaphase in mitosis (and meiosis).
g. So a centromere and two telomeres, together with a restriction site, enable recombination with inserts as long as 2000 kbp in a YAC. A vector is, however, chosen on the basis of the experimental goal, such as studying gene regulation by revealing known and uncovering new regulatory DNA sequences.
h. Clear information can be obtained about which other genes are nearby and about the location of genes on chromosomes.
i. Genomic DNA sequences from one species can serve as probes for similar sequences in other species. Apart from this, comparative sequence analysis informs us about gene evolution and the evolution of species.
j. Let's look at cloning a genomic library in phage. As you will see, the principles are similar to cloning a foreign DNA into a plasmid, or in fact any other vector, but the numbers and details used here exemplify cloning in phage.
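The clone-number estimate in item c can be made precise with the Clarke-Carbon formula, N = ln(1 − P) / ln(1 − f), where P is the desired probability that any given sequence is represented and f is the insert size as a fraction of the genome. A minimal Python sketch follows; the genome and insert sizes are illustrative.

    import math

    def clones_required(genome_bp, insert_bp, probability=0.99):
        """Clarke-Carbon library size: N = ln(1 - P) / ln(1 - f),
        where f is the insert length as a fraction of the genome."""
        f = insert_bp / genome_bp
        return math.ceil(math.log(1 - probability) / math.log(1 - f))

    # A ~2 billion bp genome with 20 kb phage inserts, 99% probability
    print(clones_required(2e9, 20_000))   # ~460,000 clones
    # The same genome with 1 kb inserts needs far more clones
    print(clones_required(2e9, 1_000))    # ~9.2 million clones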


a. Preparation of a Genomic Library:

i. To start, DNA of high molecular weight (i.e. long molecules) is isolated, purified, and then digested with a restriction enzyme.
ii. The digestion is partial, so as to generate overlapping DNA fragments of random length.
iii. The digest is then electrophoresed on an agarose gel so that the DNA (stained with ethidium bromide, a fluorescent dye that binds to DNA) appears as a bright smear on the gel.
iv. All of the fragments are recombined with suitable vector DNA.
v. To reduce the number of clones that must be screened for a sequence, a Southern blot can first be run to determine the size of the genomic DNA fragments most likely to contain the desired gene.

Fig. Representation of southern blotting technique to identify a genomic sequence that is to be cloned

b. Recombining Size-Restricted Genomic DNA with Phage DNA:

i. After elution of restriction-digested DNA fragments of the right size range from the gel, the DNA is mixed with compatibly digested phage DNA at suitable concentrations.

ii. This favours the formation of H-bonds between the complementary ends of the phage DNA and the genomic fragments.

iii. DNA ligase then covalently links the recombined DNA molecules.


Fig. Representation of preparation of genomic DNA library

c. Creating Infectious Viral Particles with Recombinant Phage DNA:

i. The recombined phage DNA is packaged with purified viral coat proteins to make infectious phage particles.
ii. The phage particles are then added to a culture tube full of host bacteria (typically E. coli).
iii. Infection follows, and the recombinant DNA enters the cells, where it replicates.
iv. After replication, new phage are produced, which eventually lyse the host cell.

d. Screening a Genomic Library; Titering Recombinant Phage Clones:

i. To determine the number of virus particles, the phage lysate is titered on a bacterial lawn (many bacteria grown together on an agar plate rather than as separate colonies).
ii. The lysate is serially diluted in 10-fold steps and spread over the bacterial lawn; each plaque then corresponds to a single infectious phage (see the titer sketch below).
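A minimal Python sketch of the titer arithmetic; the plaque count, dilution, and plated volume in the example are illustrative.

    def titer_pfu_per_ml(plaque_count, dilution_factor, volume_plated_ml):
        """Phage titer in plaque-forming units per mL:
        counted plaques / (dilution factor x volume plated)."""
        return plaque_count / (dilution_factor * volume_plated_ml)

    # Example: 74 plaques from 0.1 mL of a 1e-6 dilution of the lysate
    print(f"{titer_pfu_per_ml(74, 1e-6, 0.1):.2e} pfu/mL")   # 7.40e+08 pfu/mL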

Fig. Preparation of infectious DNA by packaging with viral protein coating


e. Probing the Genomic Library:

i. After the serial dilution, the gene of interest is sought among the size-selected fragments.

ii. These are the fragments that were inserted into the phage vectors in the first place, so they represent only a partial genomic library.
iii. To screen a large number of plaques, replica filters of the plaques are made, analogous to the replica plating of bacterial colonies.
iv. On the lawn, some plaques will contain phage DNA carrying the sequence of interest and some will not.
v. The plaque DNA transferred to the filter is hybridized directly to a probe of known sequence.
vi. Probes for screening a genomic library were usually an already isolated and sequenced complementary DNA (cDNA) clone.
vii. This was obtained either from the same species as the genomic library or from a cDNA library of a related species.
viii. The screening process begins by soaking the filters in a solution of the radioactively labelled probe.
ix. X-ray film is then placed over the filter, exposed, and developed. Black spots form where the film lay over a plaque containing genomic DNA complementary to the radioactive probe.
x. cDNA itself is made by synthesizing DNA from an RNA template via reverse transcription.
xi. cDNA then serves as a template in various downstream applications for RNA studies, such as gene expression analysis, and is thus considered a starting point in molecular biology.

Fig. Process of cloning large genomic DNA fragments by packaging with viral protein coating


f. Utility of Genomic Library:

i. Determination of the complete genome sequence of an organism.


ii. To study and generate transgenic animals with the help of genetic engineering as it will be
the major source of genomic sequence information.
iii. To analyse the function of regulatory sequences in vitro.
iv. To study ongoing genetic mutations.
v. Genome mapping, sequencing, and the assembly of clone contigs (a contig is a set of gel readings that are related to one another by overlapping of their sequences).

g. Advantages of genomic libraries:

i. It helps in the identification of a specific clone that encodes a particular gene of interest.

ii. It is useful for working with relatively small genomes, such as those of prokaryotic organisms.

iii. In the case of eukaryotic organisms the genome sequence of a particular gene, including its
regulatory sequences and its pattern of introns and exons can be easily studied.

h. Disadvantages of genomic libraries:

i. A very long genome containing a great deal of DNA that does not code for proteins, as well as non-coding DNA such as repetitive DNA and regulatory regions, is difficult to analyse.

ii. If the screening method requires expression of the gene, a genomic library will not be a useful tool.

cDNA libraries:

cDNA libraries are representative of the mRNA population and therefore reflect mRNA levels. cDNA is prepared by reverse transcription of cellular mRNA.
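Reverse transcription copies each mRNA into a complementary DNA strand. As a minimal in-silico sketch of what the first-strand cDNA sequence looks like for a given mRNA (the example sequence is invented):

    COMPLEMENT = {"A": "T", "U": "A", "G": "C", "C": "G"}

    def first_strand_cdna(mrna):
        """Sequence (5'->3') of the first-strand cDNA that reverse
        transcriptase would synthesize from an mRNA template."""
        # cDNA is complementary and antiparallel to the mRNA
        return "".join(COMPLEMENT[base] for base in reversed(mrna.upper()))

    # Invented short mRNA (5'->3')
    print(first_strand_cdna("AUGGCCAUUGUAAUGGGCCGC"))  # GCGGCCCATTACAATGGCCAT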

Advantages:

a. cDNA is devoid of introns so the size of cDNA is smaller than genomic DNA.
b. Bacteria do not have splicing machinery, so cDNA clones can be easily expressed.
c. Represent the expression profile of mRNAs in a particular tissue from which the RNA was
extracted.
d. Represent the diversity of splice isoforms present in specific tissues.
e. cDNA libraries are often constructed in lambda ZAP vectors, which:
f. have a high capacity for cloning (up to 10 kb);
g. possess a polylinker region for cloning;
h. have T3 and T7 RNA polymerase promoter sites flanking the polylinker.


GENE KNOCKOUT TECHNOLOGY

1. Gene Knockout:

i. The gene knockout approach has long been one of the most reliable and well-established methods for examining the function of a gene or a group of genes.
ii. A gene is a functional portion of DNA that encodes a protein; inactivating a gene, whether by removing DNA sequence, modifying DNA sequence, or introducing a mutation, prevents the gene from performing its usual role, resulting in loss of function.
iii. The modified gene is subsequently introduced into the germline cells of a model organism and allowed to develop; it can be injected into the developing embryo using artificial vectors.
iv. An observable phenotypic variation or a change in the biochemical phenotype can be seen if the gene knockout succeeded properly.

Process of gene knockout:


a. Selecting a gene for knockout:

The initial stage in any genetic engineering project is to choose a target; in this case, the target
is a gene that we want to research or whose function we want to comprehend. After that, some
dry lab work is done, such as examining the structure, length, and other parameters linked to
our gene of interest. The best-suited plasmid for the experiment is chosen based on this.
However, the gene of interest must first be discovered and mapped to a chromosome.

b. Construction of vector:
i. A vector is a vehicle for transferring our gene of interest or any other DNA sequence to our
target cells; in most cases, a plasmid is used.
ii. A plasmid is a piece of extrachromosomal bacterial DNA that is widely employed in genetic
engineering; a typical cloning plasmid carries a replication origin, a selectable marker and a
polylinker.

Fig. A general structure of a plasmid used in genetic engineering experiments.


BAC- bacterial artificial chromosome vector is used for gene knockout experiments.
i. Assume we've introduced a frameshift mutation into our DNA sequence that prevents the
creation of proteins.
ii. We changed the gene by removing part of the ORFs (open reading frames) and inserting a
new gene into the BAC. Now, to be on the safe side, we've added a marker DNA sequence
to our data, which is usually an antibiotic resistance gene.
iii. Other critical DNA sequences, including the replication origin, promoter sequence, and
recognition sequences, are also put into the plasmid with it. Our plasmid is now ready to be
transformed.
iv. "A marker gene is merely introduced to make the insert observable so that the results may
be reported; it acts as a reporter."
v. The NeoR (neomycin resistance) gene is a common reporter or marker sequence in gene
deletion research; mouse cells that lack it die in the presence of neomycin-type selection
drugs (the NeoR gene is generally not present in mice), so only transformed cells survive.
vi. The gene is inserted between the left and right arms of the plasmid or targeting vector, and
the NeoR gene is carried into the target nucleic acid after the left and right arms recombine
with the gene of interest. It is sometimes combined with a negative selection marker gene
or a negative reporter gene.

c. Insertion into ES cells:

i. Our plasmid is now introduced into ES cells via artificial methods such as electroporation,
sonication or microinjection. Embryonic stem cells divide rapidly and can differentiate into a
variety of cell types; ES cells have the ability to contribute to mature mouse tissues.
ii. Electroporation, in which a gene is inserted into a cell under the influence of an electrical
pulse, is one of the most effective techniques employed in gene knockout. Our plasmid has
now entered our target cells, the ES cells.
iii. Recombination will occur if it detects the target gene, and a mutation will be put into the
target gene. We also employed one marker gene sequence, which is introduced into the
genome of transformed cells alongside the mutant gene sequence.
iv. Our changed cells are now cultured on a neomycin-containing medium, allowing the NeoR
gene-containing cells to thrive. We can also tell the difference between cells that have the
NeoR gene and cells that don't.

d. Confirming the insert:


i. It's likely that some cells will be converted while others will not when we cultivate our cells
in vitro. The insert can now be validated using the polymerase chain reaction. DNA is taken
from cultivated cells for this purpose.
ii. To achieve amplification, we use a set of primers that are particular to our marker DNA
sequence. Cells are transformed if the amplicons are visible; else, our experiment fails. If
the ES cells are successfully changed, we may now refer to them as genetically modified
cells.


e. Injecting into the embryo:

Select altered cells and place them in our model organism's developing embryo. Allow it to
develop properly.
f. Breeding:
i. Our model organism's embryo now has two types of cell populations, one wild type and one
changed (transformed) cell population; this animal is referred to as chimaera.
ii. The chimeric animal is then bred with a normal animal; germline transmission yields
heterozygous offspring, and interbreeding these heterozygotes produces animals with a
homozygous normal genotype, a homozygous knockout genotype, or a heterozygous genotype.

Fig. Strategy overview

How to confirm gene knockout?


i. One of the most critical and important aspects of the investigation is the validation of gene
knockout.
ii. Though many other ways are utilised to accomplish so, a polymerase chain reaction is one
of the most common currently. For most investigations, the polymerase chain reaction is
one of the most extensively used and most reliable procedures. Two sets of primers are
used here to confirm or validate gene knockout.
iii. The flanking areas of the target sequence are used to design one set of primers, and the
sequence of the marker gene, which is the antibiotic resistance gene, is used to design the
other set of primers.


iv. One of the experiment's most crucial points follows from the design: the antibiotic
resistance gene is transferred to the target genome only if the target gene has recombined.
v. As a result, the experiment is judged successful if a DNA band is obtained in the PCR
reaction using primers complementary to the marker gene. The flanking primers specific to
the gene of interest, on the other hand, yield no amplification and no DNA band once the
target gene has been deleted.
vi. As a control reaction, wild-type DNA is used to obtain a DNA band with primer set 1.
Following amplification, the results are confirmed using agarose gel electrophoresis; the
PCR products may be visualised on a 2 per cent agarose gel. A toy sketch of this two-primer
logic is given below.
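The decision logic of this validation can be mimicked in a short script. Below is a toy sketch, not a model of real PCR chemistry: the sequences and primer names are entirely hypothetical, and a primer pair is counted as productive if both binding sites occur on the template in the correct orientation.

```python
# Toy sketch of the two-primer-set logic used to validate a gene knockout.
# All sequences and primer names below are hypothetical.

def revcomp(seq):
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def amplifies(template, fwd, rev):
    """True if the forward primer site and the reverse primer's binding site
    both occur on the template in a PCR-productive arrangement."""
    f = template.find(fwd)
    r = template.find(revcomp(rev))
    return f != -1 and r != -1 and r >= f + len(fwd)

flank_l, flank_r = "CCGG", "TTAA"
orf    = "ATGGCTTTACGTACGTGGAAGATTGA"        # target gene ORF
marker = "AAACCCGGGACGTACGTCCCTTTAAA"        # NeoR-style marker cassette

wild_type = flank_l + orf + flank_r
knockout  = flank_l + marker + flank_r       # ORF replaced by the marker

gene_primers   = ("ATGGCTTT", "TCAATCTTCC")  # bind inside the target gene
marker_primers = ("AAACCCGGG", "TTTAAAGGG")  # bind inside the marker

for name, dna in (("wild type", wild_type), ("knockout", knockout)):
    print(name, "| gene band:", amplifies(dna, *gene_primers),
          "| marker band:", amplifies(dna, *marker_primers))
```

Running the sketch prints a gene band only for the wild-type template and a marker band only for the knockout, which is exactly the banding pattern described above.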

Why mice?

Mice in which a gene of interest has been inactivated in order to study the function of that gene are
called knockout mice. Mice are used in genetic engineering and knockout studies because of the
similarity between human and mouse genes: roughly 99% of mouse genes have human counterparts,
so instead of using human embryos directly for the experiment, using mice is a sensible choice.
Owing to the ethical issues associated with human embryo studies, scientists use mice for gene
knockout and gene knock-in studies.

1. Methods for gene knockout:

For constructing an artificial gene knockout organism several methods are used to silence or remove
the gene of interest.
a. Gene silencing
b. Conditional knockout
c. Homologous recombination
d. Gene editing
e. Knockout by mutation

a. Gene silencing:

i. RNA interference, often known as RNAi, is a cell's inherent mechanism for post-transcriptional
gene silencing or suppression. For mRNA-specific gene suppression, small
double-stranded RNAs such as siRNA or shRNA are utilised.
ii. siRNA or shRNA can be delivered into the cell by artificial techniques such as liposomes;
the duplex is recognised by the cell's machinery and processed in the RISC (RNA-induced
silencing complex). The Argonaute ("Ago") endonuclease aids in the separation of the guide
strand and passenger strand; the guide strand is retained and directs the complex to the
target mRNA, preventing protein synthesis.
iii. The guide strand base-pairs with exactly complementary sequences on the mRNA, often
within the 3' untranslated region, and blocks gene expression (see the sketch below).
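Because target recognition is just antiparallel base-pairing, the search for a guide strand's binding site can be sketched directly. The sequences below are hypothetical and a perfectly complementary match is assumed:

```python
# Minimal sketch: locate the mRNA stretch an siRNA guide strand would pair with.
# Sequences are hypothetical; perfect complementarity is assumed.
PAIR = {"A": "U", "U": "A", "G": "C", "C": "G"}

def guide_target_site(mrna, guide):
    """Start index of the guide strand's target site on the mRNA, or -1."""
    # The guide pairs antiparallel to the mRNA, so its target site is the
    # reverse complement of the guide sequence.
    site = "".join(PAIR[b] for b in reversed(guide))
    return mrna.find(site)

mrna  = "AUGGCUUCAGGAACCUUGACGUAA"   # hypothetical target transcript
guide = "GUCAAGGUUCCUGAAGC"          # hypothetical guide strand (5'->3')
print(guide_target_site(mrna, guide))   # 3
```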


b. Conditional knockout:
i. The conditional knockout approach inactivates a gene in a specific tissue, for a specific
function, at a given time. Unlike knockout by homologous recombination, it is performed
in the adult animal rather than at the embryonic stage.
ii. The conditional gene knockout method can be used to study cancer and other deadly
diseases in mammalian model organisms. Conditional knockouts commonly use the Cre-
LoxP approach, in which the site-specific recombinase "Cre" recombines short target
sequences called LoxP sites.
iii. The conditional gene knockout method is frequently used to examine the effect of various
diseases on model organisms. For example, the knockout mice model is used to study the
effect of BRCA1 gene mutation.

c. Homologous recombination:
i. Homologous recombination is a well-known and commonly utilised method for exploring
gene knockout in genetic engineering. Homologous recombination is the exchange of
nucleic acid between identical or homologous sequences.
ii. Scientists employ this principle to replace the target gene: DNA sequences identical to the
target (up to about 2 kb) are placed in the vector together with the antibiotic resistance
gene, and the construct is introduced into cells by artificial methods including
electroporation, microinjection or sonication.
iii. Once inside the cell, the vector recombines with the target DNA sequence, incorporating
the DNA of interest together with the antibiotic resistance gene into the target genome.


d. Site-specific nucleases

Fig. Frameshift mutation resulting from a single base pair deletion, causing altered amino acid sequence and
premature stop codon.
i. There are now three methods for introducing a double-stranded break that involve
precisely targeting a DNA sequence.
ii. When this happens, the cell's repair mechanisms will attempt to repair the double
stranded break, most commonly by using non-homologous end joining (NHEJ), which
involves ligating the two cut ends together directly.
iii. This can be done incorrectly, resulting in base pair insertions or deletions, resulting in
frameshift mutations.
iv. These mutations can make the gene in which they occur nonfunctional, resulting in a gene
knockout. Because this method is more efficient than homologous recombination, it can
be utilised to construct biallelic knockouts more easily.

e. Zinc-fingers
i. DNA-binding domains in zinc-finger nucleases allow them to accurately target a DNA
sequence. Each zinc finger recognises a specific three-base-pair triplet, so fingers may be
assembled in a modular fashion to bind a chosen sequence.
ii. These binding domains work in tandem with a nuclease domain derived from a restriction
endonuclease, which can trigger a DNA double-strand break (DSB). Repair processes may
then introduce mutations that render the gene inoperable.

f. TALENS
i. A DNA binding domain and a nuclease that can cleave DNA are also found in transcription
activator-like effector nucleases.
ii. The amino acid repeats in the DNA binding region each identify a single base pair of the
desired targeted DNA sequence.


iii. When this cleavage occurs in the coding region of a gene and NHEJ-mediated repair
introduces insertions and deletions, a frameshift mutation occurs, affecting the gene's
function.

g. CRISPR/Cas9

i. CRISPR/Cas9 (clustered regularly interspaced short palindromic repeats) is a genome-


editing technology that uses a guide RNA complexed with the Cas9 protein.
ii. Instead of the time-consuming assembly of constructs required by zinc-fingers or TALENs,
the guide RNA can be tailored to match a specific DNA sequence through simple
complementary base pairing. A double-stranded break in the DNA will be caused by the
paired Cas9.
iii. Attempts to repair these double-stranded breaks, similar to zinc-fingers and TALENs,
frequently result in frameshift mutations, resulting in a non-functional gene.

h. Gene editing

i. Gene editing is a relatively new method in modern genetics, in which engineered nucleases
are used to delete or replace nucleic acid, often in combination with homologous
recombination. Gene-editing nucleases such as ZFNs, TALENs and CRISPR-Cas9 are
employed in genetic engineering.
ii. The earlier methods, ZFNs and TALENs, are laborious and comparatively inefficient,
whereas CRISPR-Cas9 is more effective and efficient. CRISPR derives from clustered
regularly interspaced short palindromic repeats found naturally in bacteria, where the Cas9
protein-nuclease cuts exogenous nucleic acid to protect the bacterium.
iii. Scientists can cut and insert fresh DNA at the place they want to research using this
process in gene therapy. All three approaches involve site-specific nuclease action to
create a double-stranded cut in DNA, which is then repaired by the cell's own DNA repair
machinery using non-homologous end-joining. This mechanism can cause a frameshift or
deletion mutation in a DNA sequence, resulting in a non-functional protein product.

1. Gene knockout Vs gene knockdown:


i. Gene knockout, as stated throughout this article, is the process of making a gene non-
functional, whereas gene knockdown is the process of reducing a gene's expression.
ii. In simple terms, the gene knockout approach makes a gene completely inert, whereas the
gene knockdown method keeps the gene active but reduces its expression to determine
its activity in a certain cell type.
iii. Introducing a mutation- gene knockout- is one of the most effective ways to inactivate a
gene. Scientists can use siRNA or shRNA gene knockdown to lower gene expression
utilising RNA interference.


2. In vitro mutagenesis:

i. Another application of cloned DNA is in vitro mutagenesis, which involves creating a


mutation in a piece of cloned DNA. The DNA is then transplanted into a cell or organism,
and the repercussions of the mutation are studied.
ii. Mutations are beneficial to geneticists because they allow them to investigate the
components of any biological process. Traditional mutational analysis, on the other hand,
relied on the occurrence of random spontaneous mutations, which was a hit-or-miss
method with no way of knowing what type or position of changes would be achieved. In
vitro mutagenesis can be used to change the kind and location of certain mutations inside
a gene.
iii. A cloned gene is treated in vitro (in a test tube) to get the desired mutation, and then this
piece is reintroduced into a living cell, where it replaces the resident gene.
Oligonucleotide-directed mutagenesis is one way of in vitro mutagenesis. A mutation is
targeted at a specific place in a sequenced gene. Chemically, an oligonucleotide is a short
stretch of synthesised DNA with the required sequence.
iv. For example, the oligonucleotide could have adenine instead of guanine in one precise
place. Despite the one base pair mismatch, this oligonucleotide will hybridise to the
complementary strand of the cloned gene.
v. Various enzymes are then used so that the annealed oligonucleotide primes synthesis of a
complete complementary strand on the vector. When the vector is put into a bacterial cell
and replicated, the mutant strand serves as a template for a complementary mutant strand,
resulting in a fully mutant molecule. This fully mutant cloned molecule is then reintroduced
into the donor organism, where the mutant DNA replaces the native gene (a sketch of
designing such a mutagenic oligonucleotide follows this list).
vi. Another kind of in vitro mutagenesis is gene disruption or gene knockout. Here the local
functioning gene is replaced by a fully non-functional clone. The advantage of this
technique over random mutagenesis is that certain genes can be knocked out at will,
leaving all other genes undisturbed by the mutagenic procedure.
vii. A functioning gene is replaced by an inactivated gene generated utilising recombinant
DNA technology in gene knockout. The mutant phenotype (observable traits) that results
when a gene is "knocked out" typically discloses the gene's biological function.
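The design of a mutagenic oligonucleotide, as used in oligonucleotide-directed mutagenesis above, can be illustrated with a short sketch. The gene sequence, position and flank length below are hypothetical; a real design would also consider melting temperature and mismatch placement:

```python
# Sketch of designing a mutagenic oligonucleotide: the oligo matches the cloned
# strand around the target position except for the single substituted base, so
# it still hybridises to the complementary strand despite the one-base mismatch.
# The gene sequence and coordinates are hypothetical.

def mutagenic_oligo(template, pos, new_base, flank=10):
    """Oligo identical to the template window except one designed substitution."""
    region = list(template[pos - flank: pos + flank + 1])
    region[flank] = new_base               # e.g. replace G with A at the site
    return "".join(region)

gene = "ATGGCTACCGGTTCAGCTGAAGTTCGTACGGATTAG"   # hypothetical cloned gene
print(gene[8:29])                               # original window
print(mutagenic_oligo(gene, pos=18, new_base="A"))  # same window, G -> A
```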


Fig. Schematic representation of gene knockout technology


PROTEIN SEQUENCING

Step 1. Breaking the Disulphide bonds:

a. Disulphide linkages interfere with the sequencing procedure because they prevent the
release of cleaved amino acids from the peptide chain. Polypeptide chains linked by
disulfide bonds can be separated by two common methods; the first is oxidation of the
disulfide bonds with performic acid.

b. The second method involves reduction by β-mercaptoethanol or dithiothreitol
(Cleland's reagent) to form cysteine residues. This reaction is followed by modification of
the reactive -SH groups to prevent re-formation of the disulfide bond; alkylation
(carboxymethylation) with iodoacetate serves this purpose.

c. If protein is made up of two or more polypeptide chains and held together by noncovalent
bonds then denaturing agents, such as urea or guanidine hydrochloride, are used to
dissociate the chains from one another.

Fig. Breaking the disulfide bonds

Step 2. Cleavage of polypeptide chain:

Sequencing techniques currently in use cannot determine the amino acid sequence of
polypeptides with more than about 50 amino acid residues. Thus it is necessary to cleave large
polypeptides into smaller peptides and to determine the amino acid sequences of the
peptides individually. This cleavage can be done by using below mentioned chemicals or
enzymes.


Fig. Chemical and enzymatic cleavage

Step 3. Sequencing the peptides:

Once the peptide fragments are generated, we can start the sequencing of each polypeptide chain. It
has the following steps:

a. Identifying the N-terminal residue:

The N-terminal amino acid analysis is a three-step process.

i. Derivatization of the terminal amino acid – a chemical reaction is performed to label the
terminal amino group with a compound such as Sanger's reagent (1-fluoro-2,4-
dinitrobenzene, FDNB) or dansyl chloride. In most cases these reagents also label the free
amino groups present on basic amino acids such as lysine. Dinitrofluorobenzene reacts with
the free amine group to form a dinitrophenyl-amino acid complex.

ii. Hydrolysis of the protein – acid hydrolysis breaks the peptide bonds, releasing the
dinitrophenyl-amino acid derivative into solution.

iii. Separation and analysis of the derivatized amino acid – the derivative is separated by HPLC
or TLC and identified by comparison with amino acid standards.

b. Edman degradation sequencing method:

Edman Degradation sequencing method has the following steps:

i. Similar to Sanger's reagent, phenylisothiocyanate reacts with the terminal amino group to
form a phenylthiocarbamoyl derivative.

ii. Under acidic conditions, the terminal amino acid is cleaved from the main chain as a cyclic
thiazolinone derivative.

iii. The thiazolinone derivative is extracted into an organic solvent, where, in the presence of
acid, it rearranges to the phenylthiohydantoin-amino acid (PTH-amino acid) derivative.

iv. The PTH-amino acid can be identified by HPLC or TLC by comparison with amino acid
standards.

v. Steps 1–4 are then repeated with the next amino acid residue in the peptide chain (a toy
simulation of the cycle follows the figure below).

Fig. Edman degradation for sequencing
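As a toy illustration of why Edman chemistry yields a sequence, the cycle can be simulated as repeatedly removing and recording the N-terminal residue. The peptide below is hypothetical, and the roughly 50-cycle practical limit mentioned earlier is modelled as a simple cap:

```python
# Toy simulation of repeated Edman cycles: each cycle derivatizes, cleaves and
# identifies the current N-terminal residue, then repeats on the shorter chain.
# The peptide is hypothetical; efficiency limits real runs to ~50 residues.

def edman_sequence(peptide, max_cycles=50):
    called = []
    for _ in range(min(len(peptide), max_cycles)):
        pth_residue, peptide = peptide[0], peptide[1:]   # PTH-amino acid released
        called.append(pth_residue)                       # identified by HPLC/TLC
    return "".join(called)

print(edman_sequence("MKTAYIAKQR"))   # MKTAYIAKQR, one residue per cycle
```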

Step 4. Ordering the peptide fragments: The use of different protein cleavage reagents produces
overlapping amino acid stretches, and these overlaps can be used to piece together the whole
sequence (a greedy overlap-assembly sketch is given below).
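The logic of ordering fragments by overlap can be captured in a small greedy routine. This is a simplification, assuming the overlaps are unambiguous; the fragments mimic two different digests of one hypothetical peptide:

```python
# Greedy sketch of ordering peptide fragments by their overlaps (Step 4).
# Fragments below mimic two digests of the same hypothetical peptide.

def merge_by_overlap(a, b, min_olap=3):
    """Merge a+b if the end of a overlaps the start of b; else return None."""
    for k in range(min(len(a), len(b)), min_olap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    return None

def assemble(fragments):
    frags = list(fragments)
    merged = True
    while len(frags) > 1 and merged:
        merged = False
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j and merge_by_overlap(a, b):
                    frags[i] = merge_by_overlap(a, b)
                    frags.pop(j)
                    merged = True
                    break
            if merged:
                break
    return frags[0]

# Overlapping stretches from, say, a trypsin and a CNBr digest (hypothetical):
print(assemble(["GASMDELK", "DELKWYV", "WYVTR"]))   # GASMDELKWYVTR
```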

Step 5. Locating disulfide bonds: Protein cleavage by trypsin is performed both with and without
breaking the disulphide linkages, and amino acid sequence analysis of the fragments reveals the site
of a disulphide bond: two peptide fragments joined by one disulphide appear as a single large
fragment in the unreduced digest and separate into two fragments upon reduction.

Mass Spectrometry Method:

In the recent past, mass spectrometry in conjunction with proteomics databases has also become a
popular tool for characterising each peptide fragment and deducing its amino acid sequence.


DNA SEQUENCING

DNA sequencing consists of biochemical techniques for determining the nucleotide order in a DNA
molecule. Two different methodologies for DNA sequencing include:

i. The chain termination method (Frederick Sanger, 1977)

ii. The chemical degradation method (Allan Maxam and Walter Gilbert, 1977)

To begin with, both the methods were equally popular but the chain termination procedure has gained
precedence in recent years for genome sequencing. This is partly because the chemicals used in the
chemical degradation method are toxic and therefore hazardous to the health of the researchers
doing the sequencing experiments, but majorly because of the ease of automating the chain
termination method.

1. Maxam-Gilbert Sequencing (Chemical Degradation Method):

The Maxam-Gilbert method is based on separate chemical reactions for purines and pyrimidines. It is
a two-step process involving:

a. Chemical modification of specific bases by two reagents, dimethyl sulfate (DMS) and
hydrazine, which selectively attack purines and pyrimidines, respectively.
b. Sugar-phosphate backbone cleavage at the base-modification sites by piperidine.

Purines react with DMS and pyrimidines with hydrazine, breaking the glycosidic bond between the
ribose sugar and the base and displacing the base. Piperidine then catalyses phosphodiester bond
cleavage where the base has been displaced. The chemical treatments used for base modification
include:

a. Only G – Methylation of G (N7) with DMS at pH 8.

b. A+G -- Treatment with piperidine formate at pH 2; weakens the glycosidic bond of purines.

c. C+T -- Hydrazine treatment; splits the rings of pyrimidines.

d. Only C – Hydrazine treatment in the presence of 1.5M NaCl; wherein, only cytosine reacts with
hydrazine.


a. Strategy:

i. Target DNA is radiolabelled with phosphorus, achieved by using the polynucleotide kinase
enzyme that catalyses the transfer of 32P from γ32P- ATP to 5'-OH end of each dsDNA strand.

ii. The radioactive DNA is then denatured with NaOH and the two single strands are separated
by electrophoresis.

iii. The purified singular strands are then analysed separately; subjected to four chemical
cleavage reactions (G, G+A, C+T, C).

iv. The resulting products of each of the reactions are separated by denaturing PAGE.

v. Autoradiography of the gel.

vi. Base-calling (the sequence is read from the bottom of the gel to the top; a small base-calling sketch follows this list).

vii. Replication of gel run.

viii. Sequence confirmation.
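Base-calling from the autoradiogram amounts to asking, for each fragment length, which of the four lanes shows a band. A minimal sketch of that lookup logic (band positions are hypothetical):

```python
# Sketch of base-calling a Maxam-Gilbert autoradiogram from the four cleavage
# lanes (G, G+A, C+T, C). Band positions (fragment lengths) are hypothetical.

def call_bases(lanes, length):
    seq = []
    for pos in range(1, length + 1):
        if pos in lanes["G"]:
            seq.append("G")
        elif pos in lanes["G+A"]:
            seq.append("A")        # purine band absent from the G-only lane
        elif pos in lanes["C"]:
            seq.append("C")
        elif pos in lanes["C+T"]:
            seq.append("T")        # pyrimidine band absent from the C-only lane
    return "".join(seq)

lanes = {"G": {3, 6}, "G+A": {1, 3, 6, 8},
         "C": {2, 5}, "C+T": {2, 4, 5, 7}}
print(call_bases(lanes, 8))        # ACGTCGTA, read bottom (short) to top (long)
```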

b. Points to Remember:

i. The number of bases cleaved in each reaction does not depend upon the length of the DNA
fragment but only on the concentration of reagents, the reaction temperature and the
incubation time.

ii. It is important to choose optimal reaction conditions to ensure higher efficacy.

iii. Good gel resolution is also very important.

iv. In order to read more bases, multiple loadings are done at different times.

v. Sequencing gels used in the Maxam-Gilbert/Sanger methods are run in the presence of
denaturing agents like urea and formamide, which prevent the formation of secondary DNA
structure due to intrachain base pairing, which would otherwise affect the electrophoretic
mobility of the ssDNA molecules (due to conformation change).


Fig. Maxam-Gilbert Sequencing workflow


2. Sanger Sequencing/Chain Termination Using Dideoxy Nucleotides:

The chain termination method relies on the use of dideoxyribonucleoside triphosphates (ddNTPs),
which are simply derivatives of the normal deoxyribonucleotides lacking the 3'-hydroxyl group (3'-H
in place of 3'-OH). The four ddNTPs (ddATP, ddCTP, ddGTP and ddTTP) function as terminators: they
compete with the normal dNTPs in the reaction and, once incorporated, provide no 3'-OH for the next
phosphodiester bond. Hence, whenever a radio-labelled ddNTP is incorporated into the chain, DNA
synthesis terminates. The methodology is based on the underlying principle that single-stranded DNA
molecules that differ in length by just a single nucleotide can be separated from one another by
polyacrylamide gel electrophoresis.

Methodology:

i. The DNA template is denatured to single strands.

ii. DNA primer (with 3’ end near sequence of interest) is annealed to the template DNA and
extended with DNA polymerase.

iii. Four reactions are set up, each containing:

● DNA template – eg. a plasmid

● Primer

● DNA polymerase (the DNA Pol I Klenow fragment is used as it has 5'→3' synthesis and
3'→5' exonuclease activity but lacks the 5'→3' exonuclease activity)

● dNTPs (dATP, dTTP, dCTP, and dGTP)

iv. Next, a different radio-labeled dideoxynucleotide (ddATP, ddTTP, ddCTP, or ddGTP) is


added to each of the four reaction tubes at 1/100th the concentration of normal dNTPs.
v. Each of the four reaction mixtures produces a population of DNA molecules with DNA
chains terminating at each “ddNTP terminator” base.
vi. Extension products in each of the four reaction mixtures also end with a different radio-
labelled ddNTP (depending on the base).
vii. Next, each reaction mixture is electrophoresed in a separate lane (4 lanes) at the high
voltage on a polyacrylamide gel.
viii. The pattern of bands in each of the four lanes is visualized on an X-ray film.
ix. The location of “bands” in each of the four lanes indicate the size of the fragment
terminating with a respective radio-labelled ddNTP.
x. The DNA sequence is hence deduced from the pattern of bands in the four lanes (the
sequence is read from bottom to top; a small sketch of this read-out follows below).
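The read-out step can be expressed compactly: pool the band positions from the four lanes and sort by fragment length. A minimal sketch with hypothetical fragment lengths:

```python
# Sketch of deducing a sequence from the four Sanger lanes: each lane holds the
# lengths of fragments ending in that ddNTP; reading bands from shortest to
# longest (bottom of the gel to top) gives the sequence 5'->3'.
# Fragment lengths below are hypothetical.

def read_sanger_lanes(lanes):
    bands = [(length, base) for base, lengths in lanes.items()
             for length in lengths]
    return "".join(base for _, base in sorted(bands))

lanes = {"A": [3, 7], "T": [2, 5], "G": [1, 8], "C": [4, 6]}
print(read_sanger_lanes(lanes))   # GTACTCAG
```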


Fig. Sanger Sequencing workflow


3. Automated Sequencing:
Manual sequencing has essentially died out owing to several limitations: the need for four lanes per
template, the use of radioactive gels, and its labour-intensiveness; on average, a technician can obtain
four sets of four lanes from one gel per day, with perhaps only 300 base pairs of data from each
template. This approach is not well suited to automation; it is far more desirable to acquire sequence
data in real time by detecting the DNA bands within the gel during the electrophoretic separation.
Fluorescence detection is a highly effective alternative to autoradiography, with fluorescent ddNTPs
forming the basis of automated sequencing.
Automated sequencing is advantageous:
a. Yields good quality data.
b. Output is machine-readable and thus eliminates chances of errors due to sequence reading.
c. With this method, all four terminators can be placed in a single tube, and only one reaction is
necessary.
d. Fluorescence detection is beneficial because it eliminates the use of radioactive reagents and can
be readily automated.


4. Points to Remember:
a. Sanger sequencing is simple, accurate and reproducible.
b. Faithful synthesis of a complementary copy of DNA. The complementary sequence is read
to deduce the nucleotide order of the original DNA.
c. Needs a single-stranded template; generated by either chemical or heat-denaturation.
d. Klenow discriminates against ddNTPs, which leads to variation in the number of fragments
for each base, often resulting in uneven band intensities or peak heights; a high
concentration of ddNTPs is therefore required. Alternatively, T7 DNA polymerase can be
more useful, as it lacks the 5'→3' exonuclease as well as the 3'→5' exonuclease activity of
Klenow (the latter being undesirable as it leads to hydrolysis of primers).
e. Polymerase variants carrying point mutations or small deletions can also be used to reduce
unfavourable interactions with specific ddNTPs.
f. Currently, almost all DNA sequencing is performed using the chain termination method
developed by Frederick Sanger. The complete genome of the ΦX174 DNA virus (the first
complete sequence determination, by Sanger et al.) and other viruses; many bacterial and
archaeal genomes; Saccharomyces cerevisiae (the first unicellular eukaryotic genome
sequence); the nematode Caenorhabditis elegans (the first complete sequence of a
multicellular organism); the human genome (Human Genome Project); and the plant
Arabidopsis thaliana were all accomplished with first-generation Sanger sequencing.


GENOME SEQUENCING

First generation sequencing technologies comprise the sequencing-by-cleavage strategy developed by


Maxam and Gilbert and the sequencing-by-synthesis approach pioneered by Sanger, which has
remained the premier DNA sequencing technique since its introduction in 1977. The genome
sequence of humans, as well as several others model organisms, have been sequenced using the chain
termination method. Since the completion of the first human genome sequence, a need was felt for
even greater sequencing throughput, faster and more economical sequencing technologies. This
demand acted as an incentive to develop new, the next-generation sequencing (NGS) strategies.

NGS has the ability to process millions of sequence reads in parallel (millions or billions of DNA
fragments from a single sample sequenced in unison) at a time, entire genome in less than a day at a
fraction of the cost of first-generation technologies. The main advantages of NGS over classical Sanger
are :

a. Speed

b. Accuracy

c. Sample size

d. Cost

e. Eliminates the need for cloning

f. Scalable; large volumes of data generation


Table. NEXT-GENERATION SEQUENCING PLATFORMS vs SANGER SEQUENCING:

Single-molecule real-time sequencing (PacBio)
- Read length: 2,900 bp average
- Accuracy: 87% (read length mode), 99% (accuracy mode)
- Reads per run: 35-75 thousand
- Time per run: 30 minutes to 2 hours
- Cost per 1 million bases: $2
- Advantages: longest read length; fast; detects 4mC, 5mC, 6mA
- Disadvantages: low yield at high accuracy; equipment can be very expensive

Ion semiconductor (Ion Torrent)
- Read length: 200 bp
- Accuracy: 98%
- Reads per run: up to 5 million
- Time per run: 2 hours
- Cost per 1 million bases: $1
- Advantages: less expensive equipment; fast
- Disadvantages: homopolymer errors

Pyrosequencing (Roche/454)
- Read length: 700 bp
- Accuracy: 99.9%
- Reads per run: 1 million
- Time per run: 24 hours
- Cost per 1 million bases: $10
- Advantages: long read size; fast
- Disadvantages: runs are expensive; homopolymer errors

Sequencing by synthesis (Illumina)
- Read length: 50 to 250 bp
- Accuracy: 98%
- Reads per run: up to 3 billion
- Time per run: 1 to 10 days, depending upon sequencer and specified read length
- Cost per 1 million bases: $0.05 to $0.15
- Advantages: potential for high sequence yield, depending upon sequencer model
- Disadvantages: equipment can be very expensive

Sequencing by ligation (SOLiD sequencing)
- Read length: 50+35 or 50+50 bp
- Accuracy: 99.9%
- Reads per run: 1.2 to 1.4 billion
- Time per run: 1 to 2 weeks
- Cost per 1 million bases: $0.13
- Advantages: low cost per base
- Disadvantages: slower than other methods

Chain termination (Sanger sequencing)
- Read length: 400 to 900 bp
- Accuracy: 99.9%
- Reads per run: N/A
- Time per run: 20 minutes to 3 hours
- Cost per 1 million bases: $2,400
- Advantages: long individual reads; useful for many applications
- Disadvantages: more expensive and impractical for larger sequencing projects

1. ROCHE/454 SEQUENCING PLATFORMING:

This technique is based on a light-producing (chemiluminescent) enzymatic reaction for determining
the nucleotide order of the target DNA strand. It incorporates emulsion PCR for cluster (clonal)
amplification and the pyrosequencing strategy developed by Mostafa Ronaghi and Pal Nyren in 1996.
In this sequencing-by-synthesis approach, the addition of a specific base out of the four
deoxynucleotides to the end of the growing strand is detectable by the accompanying release of a
flash of light.

The four enzymes included in the pyrosequencing system are the Klenow fragment of DNA polymerase
I, ATP sulfurylase, luciferase and apyrase. Apyrase is included for degradation of unincorporated
nucleotides and excess ATP between base additions. The reaction mixture also contains the enzyme
substrates adenosine phosphosulfate (APS) and luciferin, and the sequencing template with an
annealed primer to be used as starting material for the DNA polymerase. The first reaction, DNA
polymerization, occurs if the added nucleotide forms a base pair with the sequencing template and is
thereby incorporated into the growing DNA strand. The inorganic pyrophosphate released by the DNA
polymerase serves as a substrate for ATP sulfurylase, which produces ATP. Finally, the ATP is used by
luciferase to generate light and the light signal is detected. Hence, light is produced only if the correct
nucleotide is added to the reaction mixture. Apyrase removes unincorporated nucleotides and ATP
between the additions of different bases.
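Because each flow of a single nucleotide species produces light proportional to the number of bases incorporated, a pyrosequencing read can be reconstructed from the flowgram. A minimal decoding sketch (the flow order and intensities are hypothetical):

```python
# Sketch of decoding a pyrosequencing flowgram: nucleotides are flowed in a
# fixed order, and the light intensity at each flow is roughly proportional to
# the number of bases incorporated (homopolymers give proportionally more
# light). The flow order and intensities below are hypothetical.

FLOW_ORDER = "TACG"

def decode_flowgram(intensities):
    seq = []
    for i, signal in enumerate(intensities):
        n = round(signal)                  # ~1.0 units of light per base
        seq.append(FLOW_ORDER[i % 4] * n)
    return "".join(seq)

# Flows:              T    A    C    G    T    A    C    G
print(decode_flowgram([1.1, 0.0, 2.0, 0.9, 0.1, 1.0, 0.0, 3.1]))  # TCCGAGGG
```

Mis-called homopolymer lengths in real data arise exactly here: at a run of identical bases, the signal scales with the run length but so does the noise, which is the homopolymer error noted in the comparison table above.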


SEQUENCING WORKFLOW:

a. Library preparation of single-stranded DNA, 200-500 bp long and ligate adapters.

b. Perform emulsion PCR, amplifying a single DNA template molecule in each microreactor (bead).

c. Sequence all clonally amplified sample fragments in parallel using pyrosequencing technology.

d. Analyse sequence results-

i. Align overlapping sequence of individual reads to define contigs (Shotgun)

ii. Order and orient contigs, create scaffolds (Paired-End)

iii. Identify variants (Amplicon)

iv. Determine gene expression patterns (Transcriptome)


2. ILLUMINA:

Illumina next-generation sequencing utilizes a fundamentally different approach from the classic
Sanger chain-termination method, it is based on sequencing by synthesis (SBS) technology — tracking
the addition of labelled nucleotides as the DNA chain is copied – in a massively parallel fashion.
Illumina sequencing systems can deliver data output ranging from 300 kilobases up to 1 tera base in a
single run, depending on instrument type and configuration. In this sequencing technique, random
fragments of the genome to be sequenced are immobilised in a flow cell and then amplified in situ,
resulting in localised clonal clusters of around 1,000 copies each.


SEQUENCING PIPELINE:

a. Genomic DNA Library preparation:

b. Cluster generation:

i. Single molecules hybridize to the lawn of primers and bound molecules are then extended
by polymerases.

ii. The double-stranded molecule is denatured and the original template is washed away.

iii. Newly synthesized strand then covalently attached to the flow cell surface. Single
molecules bind to the flow cell in a random pattern.

iv. Bridge Amplification-

● the single-strand flips over to hybridize to adjacent primers to form a bridge and
hybridized primer is extended by polymerases forming a double-stranded bridge.

● The double-stranded bridge is denatured; as a result, two copies of covalently bound
single-stranded templates are generated.

● Single strands flip over to hybridise to adjacent primers to form bridges, and each
hybridised primer is extended by a polymerase.

● The bridge amplification cycle is repeated until multiple bridges are formed.


v. Linearisation –

● dsDNA bridges are denatured.

● Reverse strands cleaved and washed away leaving a cluster with forwarding strands
only.

vi. Blocking, wherein, free 3’ ends are blocked to prevent unwanted DNA priming.

vii. Primer hybridisation, wherein the sequencing primer is hybridised to an adapter sequence.

c. Sequencing and Analysis (generate base calls):

i. Illumina performs sequencing by the Cyclic Reversible Termination (CRT) technique, which
utilizes reversible terminators in a cyclic fashion comprising nucleotide incorporation,
fluorescence imaging and cleavage.

ii. All 4 labelled nucleotides in 1 reaction-


● C and A bases are excited by a red laser.

● T and G bases are excited by the green laser.

3. ION TORRENT (SEMICONDUCTOR SEQUENCING/ pH based Sequencing) :

Ion torrent sequencing does not make use of optical signals. Instead, they exploit the fact that the
incorporation of a dNTP into a growing DNA strand involves the formation of a covalent bond and the
release of pyrophosphate and a positively charged hydrogen ion. The release of hydrogen ions results
in changes in the pH of the solution, which is detected by an ion sensor. This technology differs from
other sequencing technologies in that the system does not require nucleotide labelling and no optical
detection is involved.

SEQUENCING WORKFLOW:

a. The first step includes library construction which involves DNA fragmentation and adaptor
ligation.
b. These fragments are clonally amplified on the small beads by emulsion PCR.
c. Enriched beads are primed for sequencing by annealing a sequencing primer and are deposited
into the wells of an Ion Chip, a specialised silicon chip designed to detect pH changes within
individual wells of the sequencer as the reaction progresses stepwise.
d. Each microwell contains many copies of the single-stranded template DNA molecule to be
sequenced together with DNA polymerase, and is sequentially flooded with a single species of
unmodified dNTP.
e. If the introduced dNTP is complementary to the leading template nucleotide, it is incorporated
into the growing complementary strand. The hydrogen that is released during the reaction
changes the pH of the solution.


f. Beneath each microwell, an ion-sensitive field-effect transistor (ISFET) detects the pH change,
and the potential change is recorded as a direct measurement of incorporation; the well is then
washed before the next cycle, when a different dNTP species is introduced.
g. If the introduced dNTP is not complementary there is no incorporation and no biological
reaction.
h. If homopolymer repeats are present in the template sequence, multiple dNTP molecules will be
incorporated in a single cycle. This leads to a corresponding number of released hydrogen ions
and a proportionally higher electric signal.

4. SOLiD SEQUENCING PLATFORM (Sequencing by Oligonucleotide Ligation & Detection):

This sequencing technology is based on sequential ligation with dye-labelled oligonucleotides, which
enables parallel sequencing of clonally amplified DNA fragments. Mate-paired analysis and two-base
encoding facilitate the study of genomes. The method basically exploits the mismatch sensitivity of
DNA ligase to determine the underlying sequence of nucleotides in the target DNA.

SEQUENCING PIPELINE:

a. Library preparation – two types of library either fragment or mate-paired.

b. Clonal amplification of DNA templates by emulsion PCR. As in the Roche 454 technology, the
DNA template fragments are clonally amplified on beads; here, however, the beads are
deposited on the solid phase of a flow cell so that a greater density is achieved.

c. Bead deposition, 3’ end modified beads are deposited onto a glass slide (segmented into 1, 4 or
8 chambers; flow cell).

d. Then sequencing by ligation, followed by data analysis. A mixture of different fluorescently
labelled dinucleotide probes is pumped into the flow cell. As the correct dinucleotide probe
incorporates on the template DNA, it is ligated onto the pre-built primer on the solid phase.
After wash-out of the unincorporated probes, fluorescence is captured and recorded. Each
fluorescence wavelength corresponds to a particular dinucleotide combination. Then the
fluorescent dye is removed and washed away, and the next sequencing cycle starts.


5. SMRT/PACBIO (3rd gen NGS):

While second-generation sequencing (SGS) technologies offered improvements over Sanger
sequencing, their limitations, in particular their short read lengths, make them less suitable for
problems such as sequence assembly, determination of complex genomic regions, gene isoform
detection, and methylation detection. Single-molecule real-time (SMRT) sequencing, developed by
Pacific Biosciences (PacBio), offers an alternative approach to overcome these limitations.

SEQUENCING WORKFLOW:

PacBio sequencing captures sequence information during the replication process of the target DNA
molecule.

a. The template, called a SMRTbell, is a closed, single-stranded circular DNA that is created by
ligating hairpin adaptors to both ends of a target dsDNA molecule.

b. When a sample of SMRTbell is loaded to a chip called an SMRT cell, a SMRTbell diffuses into a
sequencing unit called a zero-mode waveguide (ZMW), which provides the smallest available
volume for light detection.


c. In each ZMW, a single polymerase is immobilized at the bottom, which binds to either hairpin
adaptor of the SMRTbell and starts the replication.

d. Four fluorescently labelled nucleotides, which generate distinct emission spectra, are added
to the SMRT cell. As a base is held by the polymerase, a light pulse is produced that identifies
the base.

e. The replication processes in all ZMWs of an SMRT cell are recorded by a series of light pulses,
and the pulses corresponding to each ZMW can be interpreted to be a sequence of bases (called
a continuous long read, CLR). Since the SMRTbell forms a closed circle, after the polymerase
replicates one strand of the target dsDNA, it can continue incorporating bases of the adapter
and then the other strand.

PacBio sequencing enables much longer read lengths and faster runs than SGS methods but is
hindered by lower throughput, higher error rate, and higher cost per base.


6. APPLICATIONS OF NGS:

a. Whole-genome sequencing

i. De novo sequencing
ii. Re-sequencing

● SNP Genotyping

● Study Molecular evolution of a species or population

● Track pathogen outbreaks

b. Methylome Analysis

i. DNA Methylation

▪ Via Bisulphite Sequencing

▪ Methyl-CpG-binding domain (MBD) enrichment analysis for methylation-rich regions


ii. Histone methylation sites (in combination with ChIP-Seq)
c. Chromatin Immunoprecipitation ChIP-Seq
▪ High-resolution analysis of histone modifications

▪ Identification of Transcription Factors(s) binding sites

▪ Determination of the position of nucleosomes

▪ Epigenetic modifications at the transcription start site (TSS)


d. RNA Seq
▪ Gene expression studies
▪ Germline vs expressed alleles
▪ Transcriptome alignment
▪ Single nucleotide variation (SNV)
▪ Post-transcriptional SNVs
▪ Determination of fusion gene products

GENE EXPRESSION ANALYSIS

The synthesis of (particular) gene products is controlled by mechanisms collectively termed as Gene
Regulation. The regulation of gene expression occurs in all forms of life from prokaryotes, viruses to
eukaryotes. The expression of a particular gene(s) is turned on when the products of these genes are
needed for growth or in response to changes in the organism’s environment. Subsequently, the
expression is turned off when the gene products are no longer needed.

Gene expression can be regulated at several different levels: transcription, mRNA processing, mRNA
turnover, translation and enzyme function. Regulatory fine-tuning at the transcriptional and
translational levels constitutes the two most important modes of control of gene expression.

1. ANALYSIS AT mRNA LEVEL:

Prominent gene expression analysis methods at the transcript level include:-

a. Northern Blot

b. Microarray

c. RT-PCR

d. RNA Seq

a. Northern Blot

The term "blotting" refers to the transfer of biological samples from a gel to a membrane and their
subsequent detection on the surface of the membrane. Subtypes of blotting, namely, Northern,
Western & Southern depend upon the target molecule that is being sought; and Northern blotting
is simply a variant of the traditional Southern blotting; the target nucleic acid here being RNA instead
of DNA. The technique measures the amount and size of RNAs transcribed from genes and estimates
their abundance. In this technique, an RNA extract is electrophoresed in an agarose gel, using a
denaturing buffer to ensure that the RNAs do not form inter/intra molecular base pairs. After
electrophoresis, the gel is blotted onto nitrocellulose, nylon or a reactive DBM
(diazobenzyloxymethyl) paper, and hybridized with a labelled probe.


b. Microarray Technology (DNA Chip/Gene Chip/Bioarray/Gene array)

A DNA microarray is a collection of thousands of identified genes affixed/immobilised on a solid


support, usually glass, silicon chips or nylon membrane for the purpose of expression profiling and
monitoring expression levels for thousands of genes simultaneously. The most well-known use of
DNA microarrays is for profiling mRNA levels.

An orderly arrangement of samples is basically what constitutes an array (either Macro-


/Microarray). Usually, a single DNA microarray chip may contain thousands of spots each
representing a single gene and collectively the entire genome of an organism. The first complete
eukaryotic genome (Saccharomyces cerevisiae) on a microarray was published in 1997 (Science).

In a DNA microarray, a large number of DNA probes, each one with a different sequence, are
immobilized at a defined position on a solid surface (such as glass, silicon chips or nylon membrane).
The probe can be spotted on a glass microscope slide or a piece of nylon membrane (low-density
arrays) or on the surface of a wafer of silicon (high-density array).

Microarrays are used to detect RNAs that may or may not be translated into active proteins. Since
there can be tens of thousands of distinct reporters on an array, a single microarray experiment can
be regarded as equivalent to that many genetic tests in parallel. Arrays (in particular, microarrays)
have therefore dramatically accelerated investigations in both researches as well as disease
diagnosis.

A microarray detects the presence and abundance of labelled nucleic acids in a biological sample,
which will hybridize to the DNA on the array on the basis of the standard Watson-Crick base pairing,
and can be detected via the label.

In a typical gene expression profiling experiment, the following four sequential steps are followed
to measure the gene expression in a sample:

i. Sample preparation and labelling


ii. Hybridization
iii. Washing
iv. Image acquisition

To determine which genes are turned on and which are turned off in a given cell, a researcher first
collects the mRNA molecules present in that cell. Then labelling of each mRNA molecule is done by
attaching a fluorescent dye. Next, the labelled mRNA is placed onto a DNA microarray slide. The
mRNA that was present in the cell will then hybridize/ bind to its complementary DNA on the
microarray, leaving its fluorescent tag. Thereafter, the chip is scanned to measure the fluorescent
areas on the microarray and the image acquired. Detection depends on the type signal generated
by the binding of a reporter probe (i.e., fluorescent, chemiluminescent, colorimetric or radioisotope)
to the target DNA sequence. The microarray is scanned or imaged to obtain the complete
hybridization pattern.


Now, if a particular gene is very active, it will produce many molecules of mRNA, which hybridize to
the DNA on the microarray and will resultantly generate a very bright fluorescent area. Genes that
are comparatively less active produce fewer mRNAs and result in dimmer fluorescent spots. If there
is no fluorescence, it implies none of the messenger molecules has hybridized to the DNA, thereby
indicating that the gene probed is inactive.
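Downstream of image acquisition, the per-spot intensities are typically normalised and compared between conditions as log2 ratios. A simplified sketch of that comparison, assuming total-intensity normalisation and hypothetical values:

```python
# Sketch of two-channel microarray analysis: normalise each channel to its
# total signal, then report a log2 ratio per gene. Intensities hypothetical.
import math

def log2_ratios(test, reference):
    t_total, r_total = sum(test.values()), sum(reference.values())
    return {gene: math.log2((test[gene] / t_total) / (reference[gene] / r_total))
            for gene in test}

test      = {"geneA": 5400, "geneB": 300, "geneC": 1500}    # e.g. treated cells
reference = {"geneA": 1200, "geneB": 1100, "geneC": 1400}   # e.g. control cells

for gene, lr in log2_ratios(test, reference).items():
    print(f"{gene}: log2 ratio = {lr:+.2f}")   # >0 up-regulated, <0 down
```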

APPLICATIONS:

i. Investigating cellular states and processes. Patterns of expression that change with the
cellular state or growth conditions can give clues to the mechanisms of processes such as
sporulation, or the change from aerobic to anaerobic metabolism.

ii. Diagnosis of disease. Testing for the presence of mutation can confirm the diagnosis of
suspected genetic disease, including detection of a late-onset condition such as Huntington
disease.

iii. Drug selection. Allows detection of genetic factors that govern responses to drugs, that in
some patients render treatment ineffective and in other cause unusual serious adverse
reactions.

iv. Specialized diagnosis of disease. Different types of leukaemia can be identified from
different patterns of gene expression.

v. Pathogen resistance. Comparisons of genotypes or expression patterns between bacterial
strains susceptible and resistant to an antibiotic point to the proteins involved in the
mechanism of resistance.

vi. Investigating cellular states in responses to pathogen infection and environmental change.

vii. Investigating cellular states during the cell cycle.

viii. DNA microarrays have also been used to detect DNA-protein interactions (e.g. transcription
factor binding).

The lack of standardization in arrays presents an interoperability problem in bioinformatics,


which hinders the exchange of array data. The "Minimum Information About a Microarray
Experiment" (MIAME) checklist is an effort to make microarray data computationally more
suitable. MIAME defines the level of detail that should exist in a microarray result. The analysis
of DNA microarrays poses a large number of statistical problems like the normalization of the
data. This arises because of the enormous number of genes present on a single chip; for
instance, even if each gene is extremely unlikely to randomly yield a result of interest, the
combination of all the genes is likely to show at least one or a few occurrences of this result
which are false positives.


c. RNA Sequencing

RNA-Seq is a new-age transcriptome-profiling technique based on deep sequencing (NGS)
technologies that overcomes the limitations of microarray technology. RNA-Seq provides a far more
precise measurement of the levels of transcripts and their isoforms than other methods. In contrast
to microarray methods, sequence-based approaches directly determine the cDNA sequence.

RNA-Seq also fares well compared to the other group of gene expression methodologies; the
Tag-based methods like serial analysis of gene expression (SAGE) , cap analysis of gene
expression (CAGE) and massively parallel signature sequencing (MPSS) . Although these tag-
based sequencing approaches are high throughput and can provide precise, digitized gene
expression levels; however, most of them are based on expensive Sanger sequencing
technology, and a significant portion of the short tags cannot be uniquely mapped to the
reference genome. Further, only a segment of the transcript is analysed and isoforms are
generally indistinguishable from each other.
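Once reads are mapped, expression is commonly summarised per gene with length- and depth-normalised units such as TPM (transcripts per million). A minimal sketch of the TPM calculation with hypothetical counts:

```python
# Sketch of quantifying RNA-Seq expression as TPM (transcripts per million):
# counts are first length-normalised, then scaled so the sample sums to 1e6.
# Read counts and transcript lengths below are hypothetical.

def tpm(counts, lengths_kb):
    rpk = {g: counts[g] / lengths_kb[g] for g in counts}   # reads per kilobase
    scale = sum(rpk.values()) / 1e6
    return {g: round(v / scale, 1) for g, v in rpk.items()}

counts     = {"geneA": 900, "geneB": 450, "geneC": 4500}   # mapped reads
lengths_kb = {"geneA": 1.5, "geneB": 0.9, "geneC": 4.5}    # transcript lengths

print(tpm(counts, lengths_kb))   # values sum to ~1,000,000 per sample
```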

d. RT-PCR

Reverse transcription PCR is a variation of the traditional polymerase chain reaction used to amplify,
isolate and identify a known transcript sequence from a cellular or tissue RNA sample. It is basically
the conventional PCR preceded by an additional reaction step involving the use of the reverse
transcriptase enzyme for the conversion of RNA sample molecules to cDNA.

It is primarily used to measure the amount of a specific RNA. This is achieved by monitoring the
amplification reaction using fluorescence, a technique called real-time PCR or quantitative PCR
(qPCR). Combined RT-PCR and qPCR are routinely used for the analysis of gene expression and
quantification of viral RNA in biomedical research and clinical diagnostics. Nowadays, single-enzyme,
one-step RT-PCR protocols are in use that provide greater ease and convenience. Reverse
transcription and PCR amplification are made possible in a single step by using a recombinant form
of Tth polymerase from the thermophilic bacterium Thermus thermophilus, which possesses both the
thermostable DNA-dependent DNA polymerase activity of a Taq polymerase and intrinsic reverse
transcriptase activity.
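For the quantitative read-out, relative expression is most often computed with the 2^-ΔΔCt (Livak) method, using a housekeeping gene as internal reference. A minimal sketch with hypothetical Ct values:

```python
# Sketch of relative quantification from RT-qPCR data using the standard
# 2^-ddCt (Livak) method, with a housekeeping gene as internal reference.
# Ct values below are hypothetical.

def fold_change(ct_target_treated, ct_ref_treated,
                ct_target_control, ct_ref_control):
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalise to reference
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Target gene vs GAPDH, treated vs control sample:
print(fold_change(22.1, 18.0, 25.3, 18.2))   # 8.0 -> ~8-fold up-regulation
```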

2. ANALYSIS AT PROTEIN LEVEL:

a. Western Blot

The technique combines the separation power of SDS PAGE together with high recognition
specificity of antibodies (hence, the name Immunoblotting). The specificity of the antibody-antigen
interaction enables a target protein to be identified in the midst of a complex protein mixture and
subsequently provide qualitative and semi-quantitative data about the protein of interest. An
antibody against the target protein could be purified from serum of animals (mice, rabbits, goats)
immunized with this protein. Alternatively, if the protein contains a commonly used tag or epitope,
an antibody against the tag/epitope could be purchased from a commercial source (like the anti-6
His antibody).

There are six steps involved in western blot:

i. Sample preparation

Proteins can be extracted from different samples, such as tissues or cells. Since tissue samples
display a higher degree of structure, the tissues are first broken down by mechanical means, using a
homogeniser or sonication; the work is carried out at cold temperatures, with protease and
phosphatase inhibitors added, to prevent digestion of the sample. After protein extraction, it is
important to determine the protein concentration, which permits equal masses of protein to be
loaded into each well; a spectrophotometer is often used for this purpose.

ii. Gel electrophoresis

The gels most used are polyacrylamide gels (PAG) with buffers containing sodium dodecyl sulfate
(SDS). A western blot uses two layers of polyacrylamide gel: a stacking gel, which concentrates all
proteins into one band, and a separating gel, which separates proteins according to their molecular
weight. Smaller proteins migrate faster in SDS-PAGE when a voltage is applied. PAGE can separate
proteins ranging from 5 to 2,000 kDa according to the uniform pore size, which is controlled by the
concentration of PAG. Typically, separating gels are made at 5%, 8%, 10%, 12% or 15%. When
choosing the percentage of the separating gel, the size of the target protein is considered: the smaller
the expected weight of the protein, the higher the percentage of gel that should be used.
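Because log10(molecular weight) is roughly linear in relative migration (Rf) over a gel's useful range, an unknown protein's size can be estimated from a standard curve fitted to the marker lane. A sketch with hypothetical ladder data:

```python
# Sketch of estimating a protein's molecular weight from SDS-PAGE migration:
# fit log10(MW) = a*Rf + b to marker proteins, then invert for the unknown.
# The ladder values and the unknown Rf below are hypothetical.
import math

def fit_standard_curve(markers):
    """Least-squares fit of log10(kDa) vs Rf from (Rf, kDa) marker pairs."""
    xs = [rf for rf, _ in markers]
    ys = [math.log10(kda) for _, kda in markers]
    n = len(markers)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    a = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
    return a, y_mean - a * x_mean

markers = [(0.20, 150), (0.40, 75), (0.60, 37), (0.80, 20)]  # ladder (Rf, kDa)
a, b = fit_standard_curve(markers)
print(f"~{10 ** (a * 0.50 + b):.0f} kDa")   # unknown band at Rf = 0.50 -> ~54 kDa
```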

iii. Protein transfer

After separating proteins by gel electrophoresis, proteins are moved from within the gel onto a solid
support membrane to make the proteins accessible to antibody detection. The main method for
transferring proteins is Electroblotting, which uses an electric field oriented perpendicular to the
surface of the gel, to pull proteins out of the gel and move them into the membrane. The result is a
membrane with a copy of the protein pattern that was originally in the polyacrylamide gel. The
membrane is placed between the gel surface and the filter.


The transfer sandwich is created as follows:

a fibre pad (sponge),

filter papers,

the gel,

a membrane,

filter papers,

a fibre pad (sponge).

The efficiency of transfer depends on,

i) Gel composition,

ii) Complete contact of the gel with the membrane,

iii) The position of the electrodes,

iv) Transfer time, size and composition of proteins,

v) Field strength, and

vi) The presence of detergents and alcohol in the buffer


The transfer is done in either Semi-Dry Or Wet Conditions,

i) Wet transfer (or tank transfer) offers high transfer efficiency, flexibility in buffer system and
method choices but at a cost of time and effort. Wet conditions are usually more reliable as
it is less likely to dry out the gel.

ii) Semi-dry blotting, on the other hand, is more convenient and time-saving, with the flexibility
to use multiple types of buffer systems. However, semi-dry blotting can have a lower transfer
efficiency for large molecular weight proteins (>300 kDa).

iii) Dry transfer offers both high-quality transfers with speed as well as convenience because
buffers are not required but are challenged by limited flexibility.

iv. Blocking

It is an important step in the western blot to prevent antibodies from binding to the membrane non-
specifically. Typical blockers are Bovine Serum Albumin and non-fat dry milk. When the membrane is
placed in the dilute solution of proteins, the proteins attach to all places in the membrane where the
target proteins have not attached. In this way, the “noise” in the final product of the western blot can
be reduced and give clearer results.

v. Antibody incubation

After blocking, the primary antibody binds to the target protein wherein the primary antibody is
incubated with the membrane. The choice of a primary antibody depends on the antigen to be
detected. Washing the membrane with the antibody-buffer solution is helpful for minimizing
background and removing unbound antibodies (minimise non-specific binding). After rinsing, the
membrane is exposed to the specific enzyme-conjugated secondary antibody. Based on the species of
the primary antibody, we can choose the appropriate secondary antibody.

In general, the primary antibody that recognizes the target protein in a western blot is not directly
detectable. Therefore, tagged secondary antibodies are used as the means of ultimately detecting the
target antigen (indirect detection). The choice of secondary antibody depends on either the species
of animal in which the primary antibody was raised (the host species) or any tag linked to the primary
antibody (e.g., biotin, histidine (His), hemagglutinin (HA), etc.)

An anti-mouse secondary antibody will bind to just about any mouse-sourced primary antibody. The
use of secondary antibodies is hence rather economical, as it allows a single source of mass-produced
antibody to be shared across applications and provides far more consistent results. The secondary
antibody is usually linked to biotin or to a reporter enzyme such as alkaline phosphatase or
horseradish peroxidase; several secondary antibodies will bind to one primary antibody and enhance
the signal.

vi. Protein detection and visualization

A horseradish peroxidase-linked secondary antibody is used in conjunction with a chemiluminescent
agent, and the reaction product produces luminescence in proportion to the amount of protein. A
sensitive sheet of photographic film is placed against the membrane, and exposure to the light from
the reaction creates an image of the antibodies bound to the blot.

There are several detection systems available for protein visualization, such as colourimetric detection, chemiluminescent detection, radioactive detection, and fluorescent detection. The enhanced chemiluminescence (ECL) system is widespread, owing to its high sensitivity, ease of use and safety compared to radioactive detection.

Signal normalisation

To account for possible errors in sample preparation and loading, samples are normalized to remove inter-sample and inter-gel variation. Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) and β-actin are commonly used examples of "housekeeping" proteins (HKP). They act as internal loading controls, assuming their expression remains stable under the experimental conditions used.

Membranes may also be stained to visualize total protein by several methods. Coomassie staining is reliable and is the most common approach for unbiased total protein assessment.
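
A minimal Python sketch of the housekeeping-protein normalization described above is given below; the band intensities are hypothetical values standing in for densitometry readings from image-analysis software.

```python
# Minimal sketch of housekeeping-protein (HKP) normalization for western
# blot densitometry. The band intensities are hypothetical values that
# would normally come from image-analysis software.

target_intensity = {"control": 1450.0, "treated": 2900.0}  # protein of interest
gapdh_intensity = {"control": 5000.0, "treated": 4800.0}   # loading control (GAPDH)

# Normalize each lane's target signal to its loading control,
# then express the treated sample relative to the control lane.
normalized = {lane: target_intensity[lane] / gapdh_intensity[lane]
              for lane in target_intensity}
fold_change = normalized["treated"] / normalized["control"]

print(f"Normalized signals: {normalized}")
print(f"Fold change (treated vs control): {fold_change:.2f}")
```

Dividing each lane's target signal by its loading-control signal corrects for loading differences before fold changes are compared between lanes.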

ADVANTAGES:

i. The western blotting method entails various advantages as compared to other immunosorbent assays (ISAs), like ELISA. Western blotting (immunoblot) expands on the idea of ELISA by allowing separation of the protein mix by size, charge, and/or conformation. The described method of stripping allows for the detection of several targets, contrary to ELISA where only one protein can be detected.

ii. As the gel electrophoresis of proteins separates the proteins into bands, one can determine
the size of the target protein/polypeptide.

iii. One can (semi-)quantify the protein of interest by running an internal quantity standard in parallel with the samples in the gel. Similarly, the protein content of the samples can be compared.

iv. Due to the high affinities of antibodies toward their epitopes, it is a very sensitive method
and even picogram quantities of a target protein can be detected.

v. Whereas silver staining detects about 10 ng of protein and stains all proteins in a given sample, western blotting can detect as little as 0.1 ng of a specific protein, coupled with high selectivity. The two primary advantages of western blotting are hence sensitivity and specificity.

A disadvantage of western blotting is that it is time-consuming (compared to ELISA). Although straightforward and simple, its time-consuming multistep protocol makes it prone to error.


APPLICATIONS:

i. Clinical Implications

The utility of this technique is not just limited to research initiatives; it has wide implications in clinical fields as well. It is used as a confirmatory test for HIV diagnosis, and also in the demonstration of specific antibodies in the serum for diagnosis of important diseases like neurocysticercosis and tubercular meningitis. Stem cell signalling and differentiation, as well as drug response in tumour cells, have been studied using the advanced single-cell western blotting technique.

ii. Detection of Protein Phosphorylation States

Many proteins are post-translationally modified by kinases, which add phosphate groups to
their substrates. Phosphorylated proteins become heavier, due to the added weight of the
phosphate group, and thus migrate more slowly than their un-phosphorylated forms in a gel.
This is also applicable for detecting how a certain treatment changes the post-translational
modification of a protein (i.e. phosphorylation, ubiquitination, etc.).

iii. Changes in Protein Levels Across Time Points

Western blotting can track temporal changes in protein levels. For example, if each sample is a protein mixture of cells that are in different phases of the cell cycle, then western blotting will reveal how much of a protein is present or absent during each phase.


iv. Detection of Truncated Protein Isoforms

Many proteins must be cleaved to become activated, or have naturally occurring truncated isoforms.
Each isoform may have a different level of activity, a different target protein or represent a different
cellular state. Western blotting is suitable for detecting the ratio of truncated to normal isoforms of
a protein.

v. Detection of Tagged Proteins

Specific proteins are engineered to contain short sequences of amino acids that serve as a tag, such
as the HA-tag, the Myc-tag. These tags serve as a foreign protein epitope that does not naturally
occur in the biological system being studied. Thus, the tag enables easy detection of the protein
compared to all other naturally occurring proteins. An antibody directed to the tag will identify the
presence and amount of tagged protein in the western blot.

Several more sensitive alternative methods, such as single-cell-resolution western blot, far-western blotting, diffusion blotting, automated microfluidic western blotting, and western blotting using capillary electrophoresis and microchip electrophoresis, have been developed and are being refined to overcome problems associated with the traditional multi-step western blotting.


ISOLATION, SEPARATION AND ANALYSIS OF CARBOHYDRATE AND LIPID MOLECULES

1. Carbohydrates: Isolation, Separation and Analysis:


a. Isolation and separation of carbohydrates

The length of time it takes to prepare a sample for carbohydrate analysis is determined by the
type of food being tested. Fruit juices, syrups, and honey are examples of aqueous solutions that
require minimum preparation prior to analysis. Many other foods, however, such as nuts, cereals, fruit, breads, and vegetables, contain carbohydrates that are physically or chemically associated with other components. Before analysing these foods, it is frequently required to separate the
carbohydrate from the remainder of the food. Although the exact method of carbohydrate
isolation varies depending on the carbohydrate type, food matrix type, and analysis aim, there
are key methods that are common to various isolation approaches. Foods are typically dried
under vacuum (to avoid thermal deterioration), crushed to a fine powder (to aid solvent
extraction), and then defatted using solvent extraction.
Boiling a defatted sample with an 80 percent alcohol solution is one of the most frequent procedures for extracting low molecular weight carbohydrates from foods. In alcoholic solutions,
monosaccharides and oligosaccharides are soluble, but proteins, polysaccharides, and dietary
fibre are insoluble. By filtering the boiled solution and collecting the filtrate (the part that passes through the filter) and the retentate (the part retained by the filter), the soluble components can be separated from the insoluble components. The concentrations of these two
components can then be determined by drying and weighing them. In addition to
monosaccharides and oligosaccharides, the alcoholic extract may contain a variety of additional
tiny molecules that could interfere with the subsequent analysis, such as amino acids, organic
acids, pigments, vitamins, minerals, and so on. Prior to doing a carbohydrate analysis, it is
normally essential to eliminate these components. This is generally accomplished by adding
clarifying agents to the solution or passing it through one or more ion-exchange resins.
i. Clarifying agents: Many food water extracts contain colourful or turbid components that can
interfere with spectroscopic analysis and endpoint findings. As a result, prior to analysis,
solutions are frequently clarified. Heavy metal salts (such as lead acetate) are the most often
used clarifying agents because they create insoluble complexes with interfering compounds
that may be removed by filtration or centrifugation. However, it is critical that the clarifying
agent does not precipitate any carbs from solution, since this would result in a carbohydrate
content underestimation.
ii. Ion-exchange: Because many monosaccharides and oligosaccharides are polar, non-charged
molecules, they can be separated from charged molecules using ion-exchange columns. Most
charged pollutants can be removed using a mix of positively and negatively charged columns.
By passing a solution through a column having a non-polar stationary phase, non-polar molecules can be eliminated. Prior to the examination, proteins, amino acids, organic acids,
minerals, and hydrophobic substances can be separated from carbohydrates.

Prior to analysis, the alcohol in the solutions can be removed by evaporation under a vacuum,
leaving an aqueous sugar solution.
b. Analysis of Carbohydrates


i. Chromatographic and Electrophoretic methods

The most powerful analytical techniques for determining the type and content of
monosaccharides and oligosaccharides in foods are chromatographic procedures. To separate
and identify carbohydrates, thin layer chromatography (TLC), gas chromatography (GC), and
high-performance liquid chromatography (HPLC) are often utilized. By passing the solution to be tested through a column, carbohydrates are separated based on their differential
adsorption characteristics. Depending on the type of column employed, carbohydrates can be
segregated based on their partition coefficients, polarities, or diameters. Because it is capable
of quick, specific, sensitive, and exact measurements, HPLC is now the most important
chromatographic technology for assessing carbohydrates. Furthermore, GC demands that the
samples be volatile, which normally necessitates derivatization, whereas HPLC allows samples
to be examined directly. HPLC and GC are frequently combined with NMR or mass
spectrometry in order to determine the chemical structure of the molecules that make up the
peaks.

Carbohydrates can also be separated by electrophoresis after being derivatized (i.e., by reaction with borates) to make them electrically charged. A gel is loaded with a solution of derivatized carbohydrates, and a voltage is then placed across it. The carbohydrates are segregated by size: the smaller a carbohydrate molecule is, the faster it moves in the electric field.

ii. Chemical methods

Many chemical methods for determining monosaccharides and oligosaccharides rely on the
fact that many of these molecules are reducing agents that can react with other components
to produce precipitates or coloured complexes that can be measured. Carbohydrate
concentrations can be measured gravimetrically, spectrophotometrically, or by titration. If
non-reducing carbs are first hydrolyzed to make them reduce, the same procedures can be
used to determine them. An analysis for reducing sugars before and after hydrolysis can be
used to identify the content of both non-reducing and reducing sugars. Carbohydrates can be
quantified using a variety of chemical methods. The majority of them fall into one of three
categories: titration, gravimetric, or colorimetric. Below is an example of each of these
different categories.

iii. Titration Methods

A titration method for estimating the concentration of reducing sugars in a sample is the Lane-Eynon method. The carbohydrate solution to be tested is added, using a burette, to a flask containing a specified amount of boiling copper sulphate solution and a methylene blue indicator. The reducing sugars in the carbohydrate solution react with the copper sulphate in the flask. Once all of the copper sulphate in the solution has reacted, any further addition of reducing sugars causes the indicator to turn from blue to colourless. The amount of sugar solution needed to reach this endpoint is recorded. Because the reaction is not stoichiometric, a calibration curve must be created by performing the experiment with a series of standard solutions of known carbohydrate concentration.

The disadvantages of this method are that (i) the results depend on the precise reaction times, temperatures, and reagent concentrations used, so these parameters must be carefully controlled; (ii) it cannot distinguish between different types of reducing sugars; (iii) it cannot directly determine the concentration of non-reducing sugars; and (iv) it is susceptible to interference from other types of molecules that act as reducing agents.

iv. Gravimetric Methods

A gravimetric method for estimating the concentration of reducing sugars in a sample is the
Munson and Walker method. Under carefully regulated conditions, carbohydrates are
oxidised in the presence of heat and an excess of copper sulphate and alkaline tartrate,
resulting in the creation of a copper oxide precipitate.

The amount of precipitate generated is proportional to the concentration of reducing sugars in the starting sample. The amount of precipitate present can be measured gravimetrically (by filtration, drying, and weighing) or titrimetrically (by redissolving the precipitate and titrating with a suitable indicator). Although this method has the same drawbacks as the Lane-Eynon method, it is more accurate and reproducible.

v. Colorimetric Methods

A colorimetric method for detecting the concentration of total sugars in a sample is the
Anthrone method. Under acidic conditions, sugars react with the anthrone reagent to produce
a blue-green colour. Sulfuric acid and the anthrone reagent are added to the sample, which is
then heated until the reaction is complete. The solution is then allowed to cool before being
tested for absorbance at 620 nm. The absorbance and the amount of sugar in the initial
sample have a linear relationship. Because of the presence of the strongly oxidising sulfuric
acid, this approach can determine both reducing and non-reducing carbohydrates. It is non-
stoichiometric, like the other methods, thus a calibration curve must be created using a set of
known carbohydrate content standards.
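
As a sketch of how such a calibration curve is used, the following Python snippet fits a straight line to hypothetical glucose standards read at 620 nm and then estimates the concentration of an unknown sample; all numbers are illustrative, not assay constants.

```python
import numpy as np

# Hypothetical glucose standards (in µg/mL) and their absorbances at
# 620 nm after the anthrone reaction; real values depend on the assay.
conc_standards = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
absorbance = np.array([0.02, 0.15, 0.29, 0.44, 0.57, 0.71])

# Fit the linear calibration curve A = m*c + b
m, b = np.polyfit(conc_standards, absorbance, 1)

# Estimate the total sugar content of an unknown sample from its absorbance
a_unknown = 0.38
c_unknown = (a_unknown - b) / m
print(f"Estimated total sugar: {c_unknown:.1f} µg/mL")
```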

The Phenol-Sulfuric Acid method is an example of a colorimetric method that is commonly used to assess the total carbohydrate concentration in foods. In a test tube, phenol and
sulfuric acid are added to a clear aqueous solution of the carbohydrates to be evaluated. As a
result of the interaction between the carbohydrates and the phenol, the solution turns a
yellow-orange tint. The carbohydrate concentration in the sample is proportional to the
absorbance at 420 nm. All non-reducing sugars are converted to reducing sugars by the
sulfuric acid, hence this approach determines the total sugars present. Because this method
is non-stoichiometric, a calibration curve must be created using a set of known carbohydrate
concentration standards.


vi. Enzymatic Methods

Enzyme-based analytical methods rely on their capacity to catalyse certain reactions. These
methods are good for determining carbs in foods because they are quick, highly specific, and
sensitive to low amounts. Furthermore, there is frequently little sample preparation required.
Solid foods must first be dissolved in water before being tested, but liquid foods can be tested directly. There are a variety of enzyme test kits available for commercial purchase that can be used to conduct carbohydrate analysis. The manufacturers of these kits provide step-by-step instructions for performing the analysis. The two most common approaches for determining carbohydrate concentration are: (i) allowing the reaction to go to completion and measuring the product concentration, which is proportional to the initial substrate concentration; and (ii) measuring the initial rate of the enzyme-catalysed reaction, which is proportional to the substrate concentration. A minimal initial-rate calculation is sketched below.
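
The initial-rate approach can be illustrated with a short Python sketch: the slope of the early, linear part of the progress curve is estimated and converted to a substrate concentration with a calibration factor. All values, including the factor k, are hypothetical.

```python
import numpy as np

# Hypothetical early time course of an enzyme-coupled assay: product
# absorbance is read every 30 s. In the linear range, the initial slope
# (rate) is proportional to the substrate (carbohydrate) concentration.
time_s = np.array([0.0, 30.0, 60.0, 90.0, 120.0])
absorbance = np.array([0.010, 0.065, 0.118, 0.173, 0.226])

initial_rate = np.polyfit(time_s, absorbance, 1)[0]  # ΔA per second

# Assumed calibration factor from standards: ΔA/s per (µg/mL) of substrate
k = 9.0e-5
substrate_conc = initial_rate / k
print(f"Initial rate: {initial_rate:.2e} A/s -> ~{substrate_conc:.0f} µg/mL")
```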

vii. Physical Methods

The carbohydrate content of meals has been determined using a variety of physical methods.
These methods rely on a change in a food's physicochemical characteristics when the
carbohydrate concentration changes. Polarimetry, refractive index, infrared, and density are
some of the most commonly utilised approaches.

● Polarimetry: Plane polarized light can be rotated by molecules with an asymmetric carbon atom. The rotation angle of plane polarised light passing through a solution is measured by a polarimeter. A monochromatic light source, a polarizer, a known-length sample cell, and an angle-of-rotation analyser make up a polarimeter.

The equation α = [α]lc connects the degree of rotation to the concentration of optically active molecules in solution, where α is the observed angle of rotation, [α] is the specific rotation (a constant for each type of molecule), l is the path length, and c is the concentration.

Because the temperature and the wavelength of the light used affect the overall angle of rotation, these parameters are generally fixed at 20 °C and 589.3 nm (the D-line for sodium). If the kind of carbohydrate present is known, a calibration curve of α versus concentration is built using a series of solutions of known concentration, or the value of [α] is taken from the literature. The carbohydrate concentration in an unknown sample can then be calculated by measuring the angle of rotation and comparing it to the calibration curve.
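
A minimal worked example, assuming sucrose with its literature specific rotation of about +66.5° (20 °C, 589.3 nm) and a hypothetical measured rotation and path length:

```python
# Worked example of α = [α]·l·c rearranged for concentration. The specific
# rotation is the literature value for sucrose (20 °C, 589.3 nm); the
# observed rotation and path length are hypothetical.

specific_rotation = 66.5   # [α] for sucrose, degrees per (g/mL) per dm
path_length_dm = 2.0       # polarimeter tube length, in dm
observed_rotation = 13.3   # measured angle of rotation, degrees

concentration = observed_rotation / (specific_rotation * path_length_dm)
print(f"Sucrose concentration: {concentration:.3f} g/mL")
```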

● Refractive Index: The refractive index (n) of a material is equal to the velocity of light in a vacuum divided by the velocity of light in the material (n = c/cm, where cm is the velocity in the material). A material's refractive index can be estimated from the angle of incidence (i) and the angle of refraction (r) at a boundary between it and another material of known refractive index. In practice, the refractive index of carbohydrate solutions is frequently measured at a quartz interface. Because a carbohydrate solution's refractive index increases with concentration, it can be used to estimate how much carbohydrate is present. Because temperature and wavelength affect the RI, measurements are usually done at a fixed temperature (20 °C) and wavelength (589.3 nm). This procedure is quick and simple to carry out, and it only requires simple hand-held instruments. This method is extensively used in industry to measure sugar concentrations in syrups, honey, molasses, tomato products, and jams.
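
A small sketch, assuming a measurement of the incidence and refraction angles at the interface and a short, approximate refractive-index table for sucrose solutions at 20 °C used purely for illustration:

```python
import math

# Sketch: refractive index from the incidence/refraction angles, then sugar
# content by interpolating a small, approximate table for sucrose at 20 °C.
angle_incidence = math.radians(45.0)
angle_refraction = math.radians(31.3)
n_sample = math.sin(angle_incidence) / math.sin(angle_refraction)

# Calibration pairs: (refractive index, % sucrose w/w), illustrative values
table = [(1.3330, 0.0), (1.3478, 10.0), (1.3639, 20.0), (1.3812, 30.0)]
for (n1, c1), (n2, c2) in zip(table, table[1:]):
    if n1 <= n_sample <= n2:
        percent = c1 + (n_sample - n1) * (c2 - c1) / (n2 - n1)
        print(f"n = {n_sample:.4f} -> ~{percent:.1f} % sucrose")
        break
```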

2. Lipids: Isolation, Separation and Analysis:

For biophysical research, pure lipid formulations in the range of 1–100 mg are frequently required.
Analytical and biological approaches, on the other hand, require micrograms. In terms of lipid
quantities, X-ray and NMR structure studies are very challenging.

As a result, rare lipid sources such as cell cultures, which are suitable for biochemical research, are
unable to be used. For biophysical studies of biological lipids, the capacity to purify copious and suitably pure milligram samples is a must. Advances in X-ray sample preparation, such as "the rock and roll approach", and increased ESR equipment sensitivity from improved resonators are, however, reducing the quantity of lipid required for structural approaches.

Chemical synthesis, which can be ramped up into the hundreds of milligrams range, challenges the
extraction of lipids from biological sources for levels required for biophysical research. Synthetic lipids,
on the other hand, are a single molecular species, but biological lipids are made up of a range of
molecular species with distinct fatty acyl chains. In contrast to the broader transition encountered by
polydisperse lipid classes acquired from biological sources, a single molecule structure exhibits evident
phase cooperativity at the thermotropic and lyotropic transition lines of the phase diagram.

For synthesised lipids, the interpretation of biophysical findings takes advantage of distinctive values
found in the small periods where the molecular regime changes. Synthetic lipid studies, on the other
hand, are unreliable for predicting the behaviour of biological materials. Confirmatory research is
required to determine the applicability of synthetic lipids to increasingly complicated mixtures. The
selection of appropriate sources is the initial stage in the production of biological lipids. In an ideal
world, the source would be plentiful, commercially available, and suitably enriched in the target lipid.

For the most abundant neutral lipids derived from oil, milk, or storage tissues, the requirements are
met. Alternatively, polar lipids can be made as a by-product of "degumming" these crude oils in order
to recover neutral lipids as a repurified grade on a large scale. Soybean, canola, and sunflower
processing are examples of trait-enhanced oilseeds with altered molecular species profiles.

Preparation from animal tissues with a specific lipid metabolism could also be a viable option. The high
quantities of polyunsaturated omega-3 lipids, lipophilic vitamins A and D, and antioxidants seen in fatty
fish from cold marine settings, for example, have been examined. In historical and medical traditions,
empiric knowledge of the action of numerous crude lipid formulations is commonly used to identify a
source of certain lipids. Culinary practises and recipes, cosmetics, lubricants, and even partially
specified reagents in the laboratory can all be traced back to interesting physicochemical activities with
a trained eye.


The usage of lipids taken from biological sources indicates that they have exceptional biophysical
properties. A list of sources has been compiled, and it will provide a historical perspective on lipid
preparative biochemistry work during the last two centuries. A few examples of such attempts include
egg yolk containing lecithin used as an emulsifier and spermaceti used in cosmetics.

Certain procedures for the industrial preparation of specific lipids have also been developed. The soy lecithin fraction, for example, is made by degumming crude soy oil and contains approximately 20% phosphatidylcholine, 20% phosphatidylethanolamine, and 10% phosphatidylinositol. The "gum" of soybean oil is made up of phospholipids and mucilaginous substances. Soy lecithin production is estimated to be between 200,000 and 300,000 t, with industrial uses centred on its physical properties: as a food additive in margarine for antispatter; in chocolates, caramels, and coatings to control viscosity, crystallisation, and sticking; and in instant foods like cocoa powders, coffee creamer, and instant breakfast for wetting, dispersing, and emulsifying.


2. HISTOCHEMICAL and IMMUNOTECHNIQUES

HYBRIDOMA TECHNOLOGY
It is the method of forming hybrid cell lines by fusing a specific antibody-producing B-cell with a
myeloma cell.
Procedure:
a. Mouse is immunized with the antigen of interest.
b. Antigen-specific B-cells proliferate and begin producing antibodies in mice.
c. Spleen rich in B-cells removed from the mouse.
d. B-cells are fused with myeloma cells.
Selection Media:
Hybridomas are selected by the use of a selective medium in which unfused myeloma cells die, but hybridomas survive. A widely used selective system is based on aminopterin (a synthetic derivative of pterin).
a. This folate analogue acts as the competitive inhibitor for the enzyme dihydrofolate
reductase which catalyzes the reduction of dihydrofolate to tetrahydrofolate.
b. Normal animal cells synthesize purine nucleotides and thymidylate for DNA synthesis by
a de-novo pathway requiring tetrahydrofolate.
c. The addition of aminopterin inhibits the de-novo nucleotide synthesis pathway.
d. Normal cells survive as they are able to use the salvage pathway for nucleic acid synthesis.
e. But if the cells are unable to produce the enzyme HGPRT (Hypoxanthine-Guanine
Phospho Ribosyl Transferase); they are unable to utilize the salvage pathway.
f. Thus, in this procedure myeloma cells are engineered to be deficient in HGPRT.
g. After the fusion of lymphocytes with HGPRT-negative myeloma cells, HAT (Hypoxanthine-Aminopterin-Thymidine) medium is added, which kills the unfused myeloma cells.
h. Finally, hybridomas are selected.


Fig. Production route of hybridoma technology

Nomenclature for monoclonal antibody:


a. Mouse: -momab
b. Chimeric: -ximab
c. Humanized: -zumab
d. Human: -umab

a. Murine
i) Derived from mouse
ii) Patients treated with murine monoclonal antibodies develop a human antimouse
antibody (HAMA) response
Eg: Afelimomab

b. Chimeric
i) Antigen binding parts (variable region) of a mouse with effector parts (constant region) of
human
Eg: Rituximab

c. Humanized
i) Human antibody with complementary determining regions (CDR) or hypervariable region
from mouse
Eg: Daclizumab

d. Human
i) Genes from the variable Fab portion of human antibodies are inserted into the genome of a bacteriophage and replicated
ii) Mixed with antigen and the complementary antibody-producing phages selected
Eg: Adalimumab

Applications:
Some popular engineered monoclonal antibodies include:-

a. Immunotoxins
Protein-based drugs that contain 2 domains:
i) Binds to specific target cells
ii) Kills the cells following internalization

Toxins used to prepare immunotoxins include ricin, Shigella toxin and diphtheria toxin

b. Heteroconjugates/Bispecific antibodies


Hybrids of 2 different antibody molecules. One half of the antibody has specificity for the tumour and the other half has specificity for an immune cell such as an NK cell, activated macrophage or cytotoxic T-lymphocyte (CTL)

c. Nanobodies
A peptide chain of about 110 amino acids, comprising one variable domain (VH) of a heavy-chain antibody or of a common IgG. Nanobodies completely lack light chains. This compact structure makes them highly resistant to pH and heat.

ELISA

The enzyme-linked immunosorbent assay (ELISA) is a very sensitive method for detecting and
quantifying a wide range of substances, including antibodies, antigens, proteins, glycoproteins, and
hormones.

Antibodies and antigens are combined to generate a quantifiable outcome in the detection of these
products. An antibody is a kind of protein generated by the immune system of a person. Antigen-
binding regions are found in this protein type. An antigen is a protein that can come from a variety of
places and, when attached to an antibody, triggers a chain of actions in the immune system.

In ELISA testing, this interaction is used to detect particular protein antibodies and antigens using only
a tiny quantity of a test sample.

The ELISA test was created by modifying the radioimmunoassay (RIA). Instead of radioactive iodine-125, an enzyme is conjugated to the antigen or antibody as the label.

Common steps in ELISA:

a. Coating either with antigen or antibody


b. Washing the plates to remove any possible unbound antigen, antibody or BSA. Generally,
each addition process is followed by a wash step.
c. Blocking can be done with any one from bovine serum albumin, non-fat dry milk, casein,
and gelatin in Phosphate buffered saline (PBS).
d. Detection of signal

Four main types of ELISA:

1. Direct ELISA (Plate coated with antigen):

Step 1. Add antigen to the plate and incubate at 37°C for 1 hour or at 4°C overnight.

Step 2. Then use BSA to block any unbound sites on the ELISA plate. It reduces false-positive
findings by preventing non-specific antibodies from adhering to the plate.


Step 3. The plate is rewashed, and a primary detection antibody that has been enzyme-conjugated
is added and incubated for 1 hour.

Step 4. The plate is rewashed to eliminate any unattached antibodies, and then a
substrate/chromophore is added to the plate, like alkaline phosphatase or Horseradish Peroxidase,
which causes a colour change.

The primary detection antibody in a direct ELISA binds directly to the protein of interest.

The hydrolysis of phosphate groups from the substrate by alkaline phosphatase or the oxidation of
substrates by Horseradish Peroxidase causes the sample to change colour.

Advantages-

a. It is faster than indirect ELISA because it has fewer stages.


b. No secondary antibody cross-reactivity.
Disadvantages- Has a poor sensitivity when compared to other forms of ELISA.

2. Indirect ELISA (Plate coated with antigen):

Indirect ELISA necessitates the use of two antibodies: a primary detection antibody that binds to the
protein of interest and a secondary enzyme-linked antibody that works in tandem with the main
antibody.

Step 1. Add antigen to the plate and incubate at 37°C for 1 hour or at 4°C overnight.

Step 2. Then use BSA to block any unbound sites on the ELISA plate.

Step 3. Add primary antibody.

Step 4. After adding the primary antibody, incubate it with the enzyme-conjugated secondary
antibody.

Step 5. Similar to Step 4 of direct ELISA

Advantages-

a. When compared to the direct ELISA, the indirect ELISA has a better sensitivity.
b. Because there are so many different primary antibodies that may be utilized, it's also less
costly and more versatile.
Disadvantage- Cross-reactivity between secondary detection antibodies is a possibility.

3. Sandwich ELISA (Plate coated with Antibody):

Because the antigens are sandwiched between two layers of antibodies, it's called a "sandwich" (one layer is called the capture antibody and the other the detection antibody).


Step 1. Add capture antibody on the plates and incubate.

Step 2. Block the unbound sites on the plates.

Step 3. After that, the antigen of interest is introduced to the plates in order for it to attach to the
capture antibody.

Step 4. After rewashing the plate, the primary detection antibody is added and the plate is
incubated.

Step 5. The secondary enzyme-conjugated antibody is then added, and the mixture is incubated.

Step 6. To produce a colour change, the plate is rewashed and the substrate is applied.

Advantage- Among all ELISA variants, it has the highest sensitivity.

Disadvantage- Costly and time-consuming.

4. Competitive ELISA:

The competitive/inhibition ELISA is primarily used to detect interference in an anticipated signal output in order to determine the quantity of an antigen or antibody in a sample.

This ELISA uses two particular antibodies, one that is enzyme-conjugated and the other that is present
in the test serum (if the serum is positive). When the two antibodies are mixed in the wells, they will
compete for antigen binding.

Analysing result of competitive ELISA- The presence of a colour change indicates that the test is
negative since the antigens were bound by the enzyme-conjugated antibody (not the antibodies of
the test serum). The absence of colour implies that the test was positive and that antibodies were
present in the test serum.

Advantage- It requires minimal sample purification and can detect a wide spectrum of antigens in a
single sample.

Disadvantage- Low specificity and cannot be used in dilute samples.
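
Quantitative readouts of any of these ELISA formats are usually interpreted against a standard curve. The sketch below fits a four-parameter logistic (4PL) curve to hypothetical standards with SciPy and inverts it for an unknown well; the concentrations, optical densities, and starting guesses are all illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of ELISA quantification with a four-parameter logistic (4PL)
# standard curve. Standard concentrations and OD450 values are hypothetical.
def four_pl(x, a, b, c, d):
    # a = minimum asymptote, b = slope, c = inflection point, d = maximum
    return d + (a - d) / (1.0 + (x / c) ** b)

conc = np.array([1.0, 5.0, 25.0, 125.0, 625.0, 3125.0])  # pg/mL standards
od450 = np.array([0.08, 0.21, 0.62, 1.35, 1.95, 2.20])

params, _ = curve_fit(four_pl, conc, od450, p0=[0.05, 1.0, 100.0, 2.3],
                      maxfev=10000)
a, b, c, d = params

# Invert the 4PL to estimate an unknown well's concentration from its OD
od_unknown = 1.00
conc_unknown = c * ((a - d) / (od_unknown - d) - 1.0) ** (1.0 / b)
print(f"Estimated concentration: {conc_unknown:.1f} pg/mL")
```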


Fig. Diagrammatic representation of ELISA variants

ELISA Applications:

a. Antibodies in the Blood: Detecting and Measuring the presence of autoantibodies,


Antibodies against infectious disease (antibacterial, antiviral, antifungal), Hepatitis A, B, C,
HIV, etc.
b. Tumour Marker Levels Detection and Estimation- Prostate-specific antigen (PSA),
Carcinoembryonic Antigen (CEA) detection and estimation.
c. Detection and estimation of hormone levels like human chorionic
gonadotropin, Luteinizing hormone, Follicular stimulating hormone, Prolactin,
Testosterone


d. Detection of past exposure to SARS-CoV-2, HIV, Hepatitis etc.


e. Screening for viral contaminants in donated blood.
f. Detection of drugs like Amphetamine, Methamphetamine, Cocaine etc.
g. Blood typing and pregnancy detection.

RADIOIMMUNOASSAY (RIA)

The radioimmunoassay (RIA) is a very sensitive in vitro technique used to measure the concentration of antigens (e.g., hormone levels in the blood) through the use of antibodies directed
against these antigens. Radioimmunoassay (RIA) is based on the principle of all immunoassays which
is the recognition of an antigen present in a sample by antibodies directed against this antigen. The
principle of radioimmunoassay is very similar to that of competitive ELISA and allows quantification of
small molecules, peptides and proteins in biological samples.

1. Principles and Technique of RIA:

RIA is performed by using antibody-antigen binding and radioactive antigen. The basic principle of RIA
is a competitive binding reaction, where the analyte (for example, antigen) competes with radio-
labelled antigen for binding to the fixed antibody or the binding sites of the receptor. The binding of
the unlabelled antigen to the fixed and limited amount of antibody causes displacement of radio-
labelled antigen and results in decreasing the radioactivity of the antigen-antibody complex.

2. Workflow:
a. A radio-labelled antigen (e.g. insulin labelled with I125) is made to compete with an unlabelled antigen for a limited number of binding sites of a specific antibody raised against insulin.
b. The antigen binds to the antibody. Owing to inadequate binding sites, some of the antigens will
be free and will include radio-labelled antigens also.
c. After equilibrium, the antigen-antibody complex is precipitated by using suitable reagents.
d. The supernatant is separated from the precipitate by centrifugation.
e. Both the precipitate (the bound antigen, B-form) and the supernatant (the free antigen, F) will
have radioactivity since they have I125 – insulin.


f. The extent of the radioactivity of the two forms is measured in gamma-ray well type scintillation
counters.
g. The magnitude of the radioactivity of the free form may be related to the concentration of the
un-labelled antigen.
h. Alternatively, the radioactivity of the bound form or the ratio of B/F is also related to the concentration of the un-labelled antigen.
i. Different concentrations of the un-labelled insulin standard are used separately with the same
concentration of the labelled insulin.
j. The assay is very sensitive since the labels used for RIA have high specific activity.
k. Normally, an antibody is raised for any antigen to be estimated. The technique is said to be radio
immuno-assay since it couples radioactivity and immune function (antigen binding to antibody).
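
A minimal sketch of reading out such an assay is shown below: hypothetical gamma counts for the bound (B) and free (F) fractions of the standards are converted to B/F ratios, and an unknown sample's concentration is interpolated from the resulting standard curve.

```python
import numpy as np

# Sketch of reading an RIA standard curve. The gamma counts for the bound
# (precipitate) and free (supernatant) fractions are hypothetical.
standards = np.array([0.0, 10.0, 20.0, 50.0, 100.0, 200.0])  # µU/mL insulin
bound_counts = np.array([9000.0, 7400.0, 6100.0, 4200.0, 2900.0, 1800.0])
free_counts = np.array([1000.0, 2600.0, 3900.0, 5800.0, 7100.0, 8200.0])

# B/F falls as more unlabelled antigen displaces the labelled antigen
bf_ratio = bound_counts / free_counts

# Interpolate an unknown sample's concentration from its measured B/F.
# np.interp needs ascending x values, so both arrays are reversed.
bf_unknown = 1.10
conc = np.interp(bf_unknown, bf_ratio[::-1], standards[::-1])
print(f"Estimated insulin: {conc:.1f} µU/mL")
```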

3. Types of RIA:
There are two different methods of RIA that are commonly employed for drug detection in biological
matrices, double-antibody RIA and coated-tube RIA.

a. Double-antibody RIA- In double-antibody RIA, a second antibody is added to facilitate precipitation of the bound primary antibody. Once the primary/secondary antibody-antigen complex precipitates, the unbound labelled drug can be easily removed.

b. Coated-tube RIA- In coated-tube RIA, the primary antibody is coated on the inside of each tube.
The unbound labeled drug can be easily removed by pouring off the supernatant.


RIA Applications:

a. It has a significant role in the diagnosis of diseases.


b. Radioimmunoassay is employed for the estimation of vitamins like B12 and folic acid; hormones like insulin, thyroxine (T4), triiodothyronine (T3), cortisol, testosterone, dihydrotestosterone and estrogens; trophic hormones like ACTH, FSH and LH; drugs like digoxin and digitoxin; and antigens like the Australia antigen.
c. RIA can help to identify the basic biochemical lesion in endocrinology, i.e., whether the increased level of a hormone is due to overproduction of the hormone itself or of its trophic hormone.
d. This technique offers safety to the patient in the use of drugs if there is only a narrow margin
between the therapeutic and toxic dosage.
e. This technique is also useful in diagnosing insulinomas, sex hormone sensitive tumors, etc. and
this facilitates proper treatment of the diseases.


FLUORESCENCE IN SITU HYBRIDIZATION (FISH)

It is a molecular cytogenetic technique that uses specific fluorescent probes which can bind to only
particular parts of a nucleic acid sequence.

1. Types of probes for FISH:

a. Locus specific probes-


It binds to a particular region of a chromosome. This type of probe is useful when scientists have isolated a small portion of a gene and want to determine on which chromosome that gene is located, or how many copies of a gene exist within a particular genome.

b. Alphoid or centromeric repeat probes-


They are generated from repetitive sequences which are found in the middle of each
chromosome. These probes can also be used in combination with locus specific probes to
determine whether an individual is missing the genetic material from a particular chromosome.

c. Whole chromosome probes-


They are actually collections of smaller probes, each of which binds to a different sequence
along the length of a given chromosome.

Fig. Types of FISH probes

2. The procedure of FISH:

a. The basic elements of FISH are a DNA probe and a target sequence
b. Before hybridization, the DNA probe is labelled indirectly with a hapten (left panel) or
directly labelled via the incorporation of a fluorophore (right panel)


c. The labelled probe and the target DNA are denatured


d. Combining the denatured probe and the target allows the annealing of complementary
DNA sequences
e. If the probe has been labelled indirectly, an extra step is required for visualization of the
non-fluorescent hapten that uses an enzymatic or immunological detection system.
Finally, the signals are evaluated by fluorescence microscopy

Fig. Procedure of FISH

3. Interpretation of FISH: Each fluorescently labelled probe that hybridizes to a cell nucleus in the
tissue of interest will appear as a distinct fluorescent dot;
a. Diploid nuclei will have two dots
b. If there is a duplication in the region of interest, it will result in more than two dots
c. If there is a loss in the region of interest, one or zero dots will result
d. If a small deletion is present in the region complementary to the probe, the probe will not
hybridize
e. If duplication is present, more probes will be able to hybridise
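
The dot-counting logic above can be expressed as a small sketch; the per-nucleus signal counts are hypothetical values that would normally come from image analysis.

```python
from collections import Counter

# Sketch of FISH signal interpretation: classify nuclei by the number of
# fluorescent dots counted for a locus-specific probe. The per-nucleus
# dot counts are hypothetical (normally produced by image analysis).
dot_counts = [2, 2, 3, 2, 1, 2, 4, 2, 2, 3]

def classify(n_dots: int) -> str:
    if n_dots == 2:
        return "normal diploid"
    if n_dots > 2:
        return "gain/duplication"
    return "loss/deletion"  # 0 or 1 dot

summary = Counter(classify(n) for n in dot_counts)
print(summary)  # e.g. Counter({'normal diploid': 6, 'gain/duplication': 3, ...})
```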

4. FISH Variants:

a. Single-molecule RNA FISH-


It is a method of detecting and quantifying mRNA and other long RNA molecules in a thin layer
of the tissue sample.


b. Fibre FISH-
It is an alternative technique to interphase or metaphase preparations where interphase
chromosomes are attached to a slide in such a way that they are stretched out in a straight line,
rather than being tightly coiled. This is done by applying mechanical shear along the length of
the slide, to cells that have been fixed to the slide and then lysed or to a solution of purified
DNA.

c. Q-FISH-
It combines FISH with PNAs and computer software to quantify fluorescence intensity. This
technique is mostly used for research in telomere length.

d. Flow-FISH-
It uses flow cytometry to perform FISH automatically using per-cell fluorescence measurements.

e. MAR-FISH-
It refers to the technique of combining radio-labelled substrates with conventional FISH to
detect phylogenetic groups and metabolic activities simultaneously.

f. MA-FISH-
This Microfluidics-assisted FISH (MA-FISH) uses a microfluidic flow to increase DNA hybridization
efficiency, decreasing expensive FISH probe consumption and reducing the hybridization time.
It is used to detect the HER2 gene in breast cancer.

g. Hybrid Fusion-FISH-
It utilizes a primary additive excitation/emission combination of fluorophores to generate
additional spectra through a labelling process known as dynamic optical transmission (DOT).
Hybrid Fusion FISH enables highly multiplexed FISH applications that are targeted within clinical
oncology panels.

Advantages:
a. It is a rapid technique and a large number of cells can be scored in a very short period of time
b. The efficiency of hybridization and detection is high
c. Sensitivity and specificity are high
d. Cytogenetic data can be obtained from non-dividing or terminally differentiated cells
e. Cytogenetic data can be obtained from poor samples that contain too few cells for routine cytogenetic analysis

Disadvantages of FISH:
a. Only one or a few abnormalities can be assessed simultaneously
b. Cytogenetic data can be obtained only for target chromosomes thus FISH is not a good
screening tool for cytogenetically heterogeneous disease


Applications of FISH:
a. Different diseases like Prader-Willi syndrome, Angelman syndrome, Cri-du-chat, Down
syndrome etc can be detected using FISH
b. Can be used to detect and localize specific target sequences
c. Used to detect chromosomal abnormalities in interphase nuclei

GENOMIC IN-SITU HYBRIDIZATION (GISH)

It is a cytogenetic technique that uses labelled total genomic DNA as a probe to distinguish whole genomes within cells. It is an advancement of the FISH technique.

1. Steps involved in GISH:

a. Probe DNA
b. Isolation and shearing of probe DNA
c. Isolation and sizing the competitor DNA (Competitor DNA – DNA from a test organism that is
denatured and then used in vitro hybridization experiments in which it competes with DNA
(homologous) from a reference organism; used to determine the relationship of the test
organism to the reference organism)
d. Nick translation labelling of probe DNA
e. Purification of labelled DNA probe
f. Chromosome preparation
g. In situ hybridization
h. Detection of hybridization
i. Microphotography


Fig. GISH Overview

Denaturation of the chromosome DNA:

The denatured probe and blocking DNA hybridize in situ to the target sequences of the chromosomes. In indirect labelling, the probe is detected in the chromosomal DNA of one parent, while the chromosomal DNA of the second parent associates with the unlabelled blocking DNA. Hybridization signals associated with the probe (green) are visualized in a fluorescence microscope; unlabelled chromosomes are visualized with a counter-stain (blue).

Advantages of GISH:

a. It is a quick, accurate, sensitive, informative and comparative approach


b. It can differentiate chromosomes from different genomes

Applications of GISH:

a. Used to identify chromosomal rearrangements in cancer patients


b. Detects the specific nucleotide sequence within cell and tissues
c. It is possible to detect single-copy sequences on chromosomes with probes
d. It is useful for investigating the origins of wild and cultivated polyploid plant species


CHROMATIN IMMUNOPRECIPITATION (ChIP) ASSAY


Chromatin immunoprecipitation (ChIP) assays are performed to identify regions of the genome with
which DNA-binding proteins, such as transcription factors, cofactors, enzymes (like polymerases) and
histones, associate.

1. Principle/Strategy: In ChIP assays, proteins bound to DNA are transiently crosslinked to it, the cells are lysed, and the DNA is sheared. The target proteins are immunoprecipitated along with the crosslinked nucleotide sequences, and the DNA is then released and identified by PCR, sequenced, or applied to microarrays for further analyses.

2. Workflow:

a. Cells grown under the desired experimental condition are fixed with formaldehyde, which forms heat-reversible DNA-protein crosslinks. Formaldehyde serves to fix the protein-DNA interactions occurring in the cell.
b. After crosslinking, cells are lysed and the chromatin is released and fragmented using either sonication (crosslinking/X-ChIP protocols) or enzymatic digestion (MNase in native ChIP). These fragments are purified and then used to perform ChIP.
c. ChIP is performed by incubation of the fragmented chromatin with an antibody directed to a protein of interest.
d. The antibody recognizes its target protein and precipitates the protein-DNA complex from solution. In this way, only DNA fragments crosslinked to the protein of interest are enriched, while DNA-protein complexes that are not recognized by the antibody are washed away.
e. The purified DNA is analysed by dot blot or Southern blot using a radiolabelled probe derived from the cloned DNA fragment of interest.

3. Utility:

a. Protein-DNA complexes are crosslinked, immunoprecipitated, purified, and amplified for gene- and promoter-specific analysis of known targets.
b. Analyse epigenetic modifications and genomic DNA sequences bound to specific
regulatory proteins under a particular set of conditions.
c. Useful for kinetic analysis of events occurring on chromosomal sequences in vivo.
d. Additionally, if the immunoprecipitated DNA is hybridized to microarrays that contain the
entire genome displayed as a series of discrete DNA fragments, the precise genomic
location of each precipitated DNA fragment can be determined. In this way, all the sites
occupied by the gene regulatory protein in the original cells can be mapped on a genome-
wide basis.
e. Map the localization of post-translationally modified histones, histone variants,
transcription factors, or chromatin-modifying enzymes on the genome or on a given locus.


Fig. ChIP Assay Workflow

4. ChIP Variants:

a. Native ChIP:
A technique specific to histone modifications in which native (non-crosslinked) chromatin is used as the starting material. Micrococcal nuclease (MNase) first cuts the chromatin linker DNA, which yields fragments of intact nucleosomes ranging from 200 bp to 1000 bp. The DNA fractions of interest are then pulled down by specific antibody-antigen reactions. This process therefore enriches the DNA fragments bound by the target proteins. Finally, DNA can be purified from the complex and analysed with PCR or qPCR protocols (a percent-input calculation is sketched after these variants).

b. Cross-linking ChIP:
Determines what specific proteins are associated with different regions in the genome. The
target protein is cross-linked together with DNA, and then the DNA is broken into smaller
fragments with sonication.

c. RIP:
RNA immunoprecipitation (RIP) is similar to ChIP, except that RNA-binding proteins are
immunoprecipitated instead of DNA-binding proteins. Immunoprecipitated RNAs can then be
identified by RT-PCR and cDNA sequencing. It is beneficial for studying in vivo interactions
occurring between proteins and RNA forming RNP complexes. Can be coupled with existing
microarray techniques to unravel in vivo RNA-protein interactions en masse.
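
Where the precipitated DNA is quantified by qPCR (as in the native ChIP protocol above), enrichment is often reported as "percent input". A minimal sketch of that calculation is shown below; the Ct values and the 1% input fraction are hypothetical.

```python
import math

# Sketch of the common "percent input" calculation for ChIP-qPCR.
# The Ct values and the input fraction (here 1%) are hypothetical.
input_fraction = 0.01
ct_input = 24.0   # Ct of the input (non-immunoprecipitated) sample
ct_ip = 26.5      # Ct of the immunoprecipitated sample

# Adjust the input Ct so it represents 100% of the starting chromatin
ct_input_adjusted = ct_input - math.log2(1.0 / input_fraction)

# Fraction of input chromatin recovered by the immunoprecipitation
percent_input = 100.0 * 2.0 ** (ct_input_adjusted - ct_ip)
print(f"Percent input: {percent_input:.3f}%")
```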

ChIP Advantage: This technique has the potential of trapping weak interactions that would not
normally survive extraction conditions.

Limitations:

a. Results obtained are highly dependent on the quality of the immunoprecipitating


antibody used.
b. Also depends on the abundance of targets.
c. Incomplete reversal of cross-links or crosslinking of antigen epitope makes it
inaccessible to the immunoprecipitating antibody.


Fig. RIP Workflow


FLOW CYTOMETRY
1. Flow cytometry is a laser-based technique that counts, sorts and differentiates specific cells
in a heterogeneous fluid mixture.
2. A mixture of cells suspended in a liquid stream is passed through a laser light beam.
3. An interaction with light is measured by an electronic detection apparatus as light scatter and
fluorescence intensity.
4. The fluorochrome is stoichiometrically bound with the cellular component; the fluorescence
intensity will ideally represent the amount of a particular cell component.
5. Flow cytometry is one of the powerful tools as it allows multiparametric analysis of the
physical and chemical characteristics of up to thousands of particles per second.
6. It rapidly performs the quantitative method for analysis and purification of cells in suspension.
7. With the help of flow cytometry, the phenotype and function and even sort live cells can be
determined.
8. In flow cytometry, a fluorescence-activated cell sorting (FACS) mechanism can be performed, which adds a further degree of functionality.
9. It uses specific antibodies labelled with fluorescent conjugates, which allow simultaneous collection of data and sorting of biological samples from a heterogeneous mixture.
10. In flow cytometry, the user defines the parameters by which the cells should be differentiated. For sorting, an electric charge is applied to each cell-containing droplet, which is then deflected by charged plates.
11. Nowadays the terms flow cytometry and FACS are often used interchangeably to describe this laser-based biophysical technique.

1. Principle of flow cytometry:

a. Flow cytometry is used when a detailed analysis is to be performed on a large number of different cell types in a population.
b. The cells are differentiated on the basis of size and structure.
c. A target (antigen) specific fluorescently-tagged antibodies are used to identify and
segregate various sub-populations.
d. A cell is allowed to pass through a narrow channel in such a way that each cell is
illuminated by a laser one at a time.
e. After which a series of sensors detects the refracted or emitted light. The data collected
is then integrated and compiled to generate information about the sample.

2. Instrumentation of Flow Cytometer:

a. The flow cytometer instrument is divided into three core systems: fluidics, optics, and
electronics.
b. The segment where the sample fluid is injected is called the fluidics system.
c. The flow cell is optimized with sheath fluid that carries and aligns the cells or particles so that they pass through a narrow channel and into the laser beam. This process, called hydrodynamic focusing, allows one cell at a time to be analysed by laser interrogation.


d. The optics system is comprised of various filters, light detectors, and the light source,
which is usually a laser line producing a single wavelength of light at a specific frequency.
e. This is where the particles are passed through at least one laser beam. Lasers ranging from ultraviolet to far-red, with a variable range of power levels, are used.
f. At the interrogation point, the laser beam excites any compatible fluorescent probes conjugated to antibodies, causing the probes to emit light (or fluoresce) at specified wavelengths; the electronics system converts these light signals into electronic signals for analysis.

3. Light signals:

a. Forward Scatter Light Signals (FSLS)-

i. Light refracted by a cell in the forward direction, continuing along the direction of the laser, is collected by the forward scatter channel (FSC) and is commonly used to determine particle size.
ii. Bigger particles emit more forward-scattered light than smaller particles, so larger cells have a stronger forward scatter signal than smaller ones.

b. Side Scatter Light Signals (SSLS)-

i. Refracted light that travels in a different direction from its original path is considered a side-scattered light signal.
ii. This provides information on the granularity and complexity of the cells.
iii. Cells that have low granularity and complexity will produce less side-scattered light, while highly granular cells with a high degree of internal complexity (such as neutrophils) will show higher side scatter signals.
iv. Side scatter is affected by the shape and size of cells and is more sensitive to membranes, cytoplasm, nucleus etc.

Both FSLS and SSLS depend on multiple factors, such as sample preparation and the fluorescent labelling technique used.

4. Separation based on fluorescence emission:


a. Apart from forward and side scatter, cells can also be separated based on the light emitted
by fluorescent molecules.
b. Cells may naturally possess fluorescing materials or may be labelled with fluorescence-tagged antibodies.
c. For such cells, fluorochrome is used to stain a protein of interest so that incident laser
light of the appropriate wavelength allows the cells containing this protein to be detected.
d. Once the cell passes through the laser beam, a pulse of photons is emitted.


e. The pulse is detected by the photomultiplier tube and converted to a voltage pulse, which
is further interpreted by the flow cytometer.
f. The higher the intensity of fluorescence, the higher the voltage pulse.

5. Data interpretation:
a. Once the cell has passed through the laser light, different types of emission are detected: forward scatter, side scatter, and specific wavelengths of fluorescence emission. The data for each of these are detected separately and plotted independently via histograms and/or dot-plots.

b. A histogram displays a single parameter, where intensity is plotted on one axis and the number of events on the other.

c. Dot-plots can compare more than one parameter simultaneously, where each event is
displayed as a single point and the intensity values of two or three channels are
represented on the various axes.
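
A simple sketch of such gating on synthetic scatter data is shown below; the intensities and gate boundaries are arbitrary and only illustrate how events are selected before plotting histograms or dot-plots.

```python
import numpy as np

# Sketch of a rectangular FSC/SSC gate on synthetic flow-cytometry events.
# Each event is one cell; the intensities and gate limits are arbitrary.
rng = np.random.default_rng(0)
fsc = rng.normal(50_000, 8_000, size=10_000)  # forward scatter intensities
ssc = rng.normal(20_000, 6_000, size=10_000)  # side scatter intensities

# Keep events inside the chosen scatter window (e.g. a lymphocyte-like gate)
gate = (fsc > 40_000) & (fsc < 65_000) & (ssc < 25_000)
print(f"{gate.mean() * 100:.1f}% of events fall inside the gate")

# The gated events would then be plotted, e.g. a one-parameter histogram:
counts, edges = np.histogram(fsc[gate], bins=50)
```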

6. Sample preparation and Staining:

a. Single-cell suspensions are used for this analysis to avoid clogging up the system.
b. Cells are isolated and centrifuged at an appropriate speed.
c. Adherent cultured cells or cells present in solid organs are digested by chemical treatment using specific enzymes or dissociated mechanically prior to flow analysis.
d. To avoid unwanted clogging, which may destroy the functionality of the machine, mechanical filtration is done.
e. The cells are then incubated in test tubes with fluorescently conjugated antibodies and
analysed through the flow cytometer machine.

Staining

a. Direct Staining: In direct immunofluorescence staining, cells are incubated with an antibody directly conjugated to a fluorophore such as PerCP. Only one antibody incubation step is done, which eliminates the possibility of non-specific binding from a secondary antibody.
b. Indirect Staining: In this method an unconjugated primary antibody is detected by a fluorophore-conjugated secondary antibody; alternatively, an avidin-biotin system is used, in which a biotin-conjugated antibody is detected with fluorophore-labelled avidin.
c. Intracellular Staining: The intracellular staining procedure allows direct measurement of antigens (cytokines or transcription factors) present inside the cell cytoplasm or nucleus.
d. Cell Stimulation, Fixing, and Permeabilization: After the staining procedure, optimization is done, which includes titering of antibodies and optimized fixation and permeabilization of the cells.
7. Application and uses:

a. This mechanism is used to determine biological activity inside cells, such as the generation
of reactive oxygen species, mitochondrial membrane changes during apoptosis,
phagocytosis rates in labelled bacteria, native calcium content, and changing metal
content in response to drugs, etc.

b. This method is used to assess cell viability after the addition of pathogenic organisms or
drugs.

c. Any breach in cell membrane integrity can be determined using dyes that can enter the
punctured cell membrane.

d. Fluorescent probes such as bis-oxonol bind to proteins present on the cell membrane and help in determining the stage of necrosis in the cell. Alterations in cell shape, loss of structures, cell detachment, condensation of the cytoplasm, cell shrinkage, phagocytosis of cellular residues and changes in the nuclear envelope can also be determined.

e. Biochemical modifications such as proteolysis, DNA denaturation, cell dehydration, protein cross-linking, and a rise in free calcium ions can also be detected.

f. The variation in DNA content can also be assessed. The fluorescent dyes bind to DNA or
monoclonal antibodies, which can allow the detection of antigen expression.

g. Cellular pigments such as chlorophyll, DNA copy number variation, intracellular antigens,
enzymatic activity, oxidative bursts, glutathione, and cell adherence can be determined
using this technique.

8. Advantages of Flow cytometer:

a. High-speed analyses can be done easily depending on the flow rate


b. Measures a large number of cells at a time using multiple parameters
c. Can be used to determine micro populations
d. Easy to handle and portable equipment.

9. Disadvantages of Flow cytometer:

a. It is a very expensive and sophisticated instrumentation


b. Regular management and maintenance by a highly trained specialist and ongoing servicing by service engineers are required


c. A small blockage at fluidics may cause damage to the entire system.


d. Prior to use, warm-up, laser calibration and cleaning are mandatory.
e. While processing tissues into suspensions, the phenotype may get altered.
f. Very little information is obtained regarding intracellular distribution.

Fig. Schematic representation of Flow cytometry

IMMUNOFLUORESCENCE MICROSCOPY

Immunofluorescence (IF) microscopy is a widely used technique based on the use of fluorophores to visualize the location of bound antibodies. It is an example of immunostaining and is a form of immunohistochemistry. Key considerations in this method include the nature of the antigen, the specificity and sensitivity of
the primary antibody, properties of the fluorescent label, permeabilization and fixation technique of
the sample, and fluorescence imaging of the cell.

Although each protocol depends on the cell type, the antibody, and the antigen, there are steps common to nearly all applications.

This technique is used on tissue sections, cultured cells, or individual cells using antibodies to analyze
the distribution of proteins, glycoproteins, and other antigen targets, including small biological and
non-biological molecules.


1. Principle of immunofluorescence:
a. Immunofluorescence is an assay used on biological samples to detect antigens in cellular contexts using antibodies. It is based on the specific binding of antibodies to their antigens in the biological sample (tissues and cells).
b. It is used to analyze the distribution of proteins, glycans, and small biological and non-
biological molecules.
c. The immunofluorescence assay is two types
i. Direct immunofluorescence assay: in this technique, only a labelled primary antibody is incubated; no secondary antibody is used.
ii. Indirect immunofluorescence assay: in this technique, tissue or cell fixation, serum blocking, primary antibody incubation, labelled secondary antibody incubation, staining, result judgement and imaging are performed.
2. Instrumentation of immunofluorescence:

a. Light source: Xenon arc lamp or mercury-vapour lamp is common

b. A set of optical filters: Optical filters include:


i. An excitation filter selects the wavelengths to excite a particular dye within the
specimen.
ii. A dichroic beam splitter/dichroic mirror reflects light in the excitation band and transmits light in the emission band.
iii. An emission filter controls the wavelengths of interest emitted by the fluorophores to
pass through.

c. Darkfield condenser: It provides a black background against which the fluorescent objects
glow.

3. Staining:

a. ATTO-TEC Fluorescent Dye Conjugates: Fluorescent markers from ATTO-TEC including


ATTO 425, ATTO 488, ATTO 532, ATTO 550, ATTO 594, ATTO 647N and ATTO 655 are used.
b. Fluorescein Dye Conjugates: The most commonly used fluorescent dye.
c. Rhodamine (TRITC) Dye Conjugates: Fluorescent chemical compounds (fluorone dyes).
d. Phycoerythrin (RPE) Dye Conjugates: A deep, bright phycobiliprotein complex (Mw 250 kDa) isolated from red algae, which possesses extremely bright red-orange fluorescence and shows a high quantum yield.
e. Allophycocyanin (APC) Dye Conjugates: Allophycocyanin (APC) is a large protein (mw
~105kDa) from the light-harvesting phycobiliprotein family found in Cyanobacteria and
red algae.
f. AMCA Conjugate: 7-amino-3-((((succinimidyl)oxy)carbonyl)methyl)-4-methylcoumarin-6-sulfonic acid is a multi-labelling fluorochrome.


4. Application and uses:

a. Fluorescence microscopy is widely used in diagnostic microbiology and in microbial


ecology.
b. Fluorescent antibody-stained specimens are used to detect parasites such as Toxoplasma sp.
c. It is used to detect acid-fast bacilli (AFB) in sputum or CSF when stained with auramine
fluorescent dye.
d. It is used for the detection of Trichomonas vaginalis, intracellular gonococci, and other parasites when stained with acridine orange.
e. It is used in the immunodiagnosis of infectious diseases, using both direct and indirect antibody techniques.

5. Advantages of Immunofluorescence:

a. One of the most popular methods for studying the dynamic behaviour exhibited in live-
cell imaging.
b. Through immunofluorescence, individual proteins can be detected with a high degree of specificity against a background of non-fluorescing material.
c. Highly sensitive to detect as few as 50 molecules per cubic micrometre.
d. Can detect different molecules stained with different colours.
e. Can track multiple types of molecules simultaneously.
f. Provide advantage over other optical imaging techniques, for both in vitro and in vivo
imaging.

6. Disadvantages:

a. Fluorophores used here can lose their ability to fluoresce as they undergo
photobleaching.
b. Photobleaching causes chemical damage to fluorophores from the electrons excited
during fluorescence.
c. Cells are highly susceptible to phototoxicity, particularly with short-wavelength light.
d. Fluorescent molecules may give rise to reactive chemical species when under illumination
which enhances the phototoxic effect.
e. It only allows observation of the specific structures which have been labelled for
fluorescence.


Fig. Schematic representation of immunofluorescence


3. BIOPHYSICAL METHODS

NUCLEAR MAGNETIC RESONANCE (NMR)

NMR is a spectroscopic technique that exploits the magnetic properties of nuclei. It detects the change
in nuclear spin energy in the presence of an external magnetic field as a result of absorption of
electromagnetic radiation in the radio-frequency region.

Its prominence as a biophysical technique lies in its ability to reveal the atomic structure of
macromolecules in solution, provided that highly concentrated solutions (1 mM, or 15 mg ml-1 for a
15-kD protein) can be obtained. NMR depends on the fact that certain atomic nuclei are intrinsically
magnetic; only a limited number of isotopes relevant to biochemistry display this unique property,
known as spin (1H, 13C, 14N, 31P etc.).

1. Principle:

Protons, electrons and neutrons possess a property called spin. Spin is expressed in multiples of
1/2 and can be + or –. Some atomic nuclei also have spin. If a particular nucleus is composed of p
protons and n neutrons, its total mass is p + n, its total charge is +p and its total spin will be a vector
combination of p + n spins each of magnitude 1/2. If the number of both the protons and neutrons
in a nucleus is even, then there is no overall spin.

All nuclei with an even mass number (total number of protons and neutrons in the nucleus) and an
even atomic number (number of protons in the nucleus) have thus a nuclear spin of zero. Any
atomic nucleus that possesses either odd mass number, odd atomic number or both will possess a
spin value.

The spinning of a proton generates a magnetic moment. This moment can take either of two
orientations or spin states (called α and β), upon application of an external magnetic field.

The energy difference between these states is proportional to the strength of the imposed
magnetic field. The α state has slightly lower energy and hence is slightly more populated because
it is aligned with the field. A spinning proton in an α state can be raised to an excited state (β state)
by applying a pulse of electromagnetic radiation of radio wave frequency (RF, pulse), provided the


frequency corresponds to the energy difference between the α and the β states. Radiowaves flick the nucleus from the lower energy state to the higher state; the spin orientation changes from α to β, i.e., resonance is obtained. The nucleus then tends to return to the lower, more stable energy state; in doing so, energy is dissipated again, and this is what is detected by the spectrometer.

A resonance spectrum for a molecule can be obtained by varying the magnetic field at a constant
frequency of electromagnetic radiation or by keeping the magnetic field constant and varying
electromagnetic radiation.

Fig. The two allowed spin states for a 1H nucleus


Now, in an applied magnetic field, not all atoms (e.g., the hydrogens and carbons in an ethanol or propanol molecule) resonate at exactly the same frequency. This variability exists because the hydrogens and carbons in a molecule are surrounded by electrons and exist in minutely different electronic environments from one another. The flow of electrons around a magnetic nucleus generates a small local magnetic field that opposes the applied field (diamagnetic anisotropy). Each nucleus is surrounded by electrons, and in a magnetic field these will set up a tiny electric current. This current sets up its own magnetic field, which opposes the magnetic field that we apply. The electrons are said to shield the nucleus from the external magnetic field. If the electron distribution varies from, say, one 13C atom to another in an ethanol molecule, so does the local magnetic field, and so does the resonating frequency of the 13C nuclei.
Thus, the changes in the distribution of electrons around a nucleus affect:
i. The local magnetic field that the nucleus experiences.
ii. The frequency at which the nucleus resonates.
iii. The chemistry of the molecule at that atom.
This variation in frequency is known as the chemical shift and it is denoted by δ. The chemical shift
of a nucleus depends on many factors, but the surrounding electron density is often the dominant
one. A high electron density causes a large shielding effect.

2. Working:
Let us consider ethanol; the carbon attached to the -OH group will have relatively fewer electrons
around it compared to the other carbon (oxygen atom is more electronegative and draws electrons
towards it, away from the carbon atom). The external magnetic field that this carbon nucleus feels
will, therefore, be slightly greater than that felt by the other carbon with more electrons. Since this
carbon is less shielded (deshielded) from the applied external magnetic field, it feels a stronger
magnetic field and there will be a greater energy difference between the two energy states of the
nucleus. The greater the energy difference, the higher is the resonance frequency. So, for ethanol,
the carbon with the OH group attached is expected to resonate at a higher frequency than the other carbon; the same is revealed by the 13C NMR spectrum.


These different frequencies, termed chemical shifts, are expressed in fractional units δ (parts per million, or ppm) relative to the shifts of a reference compound, such as tetramethylsilane (TMS), that is added with the sample. TMS is simply a silane (SiH4) derivative with each of the hydrogen atoms replaced by methyl groups to give Si(CH3)4. Because of molecular symmetry, all 12 protons of TMS absorb at the same frequency and all 4 carbons absorb at the same frequency. The frequency of absorption for a nucleus of interest is measured relative to the frequency of absorption of a TMS standard. For instance, the chemical shift of the 1H nuclei in the 1H NMR spectrum or the 13C nuclei in the 13C NMR spectrum of TMS appears at δ = 0 ppm. Typically, δ increases from 0 on the right-hand side of the spectrum to 10 ppm on the left-hand side of a 1H NMR spectrum, or from 0 on the right-hand side to 200 ppm on the left-hand side of a 13C NMR spectrum. Frequencies of absorption are recorded on the δ-scale relative to those of a standard molecule because this makes the position of absorption independent of the spectrometer used to record the spectrum, i.e., independent of the strength of the magnetic field of the spectrometer.

Generally, a -CH3 proton typically exhibits a chemical shift (δ) of 1 ppm, compared with a chemical
shift of 7 ppm for an aromatic proton. The chemical shifts of most protons in protein molecules fall
between 0 and 9 ppm.
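To make the ppm scale concrete, the following minimal Python sketch (with assumed numbers, not taken from the text) converts an observed resonance offset into a chemical shift; on a 400 MHz spectrometer, a proton resonating 2800 Hz downfield of TMS has δ = 7 ppm, the aromatic region mentioned above.

# Chemical shift in ppm: offset of the observed resonance from the
# reference (TMS), divided by the spectrometer frequency.
def chemical_shift_ppm(nu_obs_hz, nu_tms_hz, spectrometer_hz):
    """Return delta (ppm) for a resonance observed at nu_obs_hz."""
    return (nu_obs_hz - nu_tms_hz) * 1e6 / spectrometer_hz

# Assumed example: a proton 2800 Hz downfield of TMS on a
# 400 MHz instrument gives delta = 7 ppm (aromatic region).
print(chemical_shift_ppm(2800.0, 0.0, 400e6))  # -> 7.0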

Fig. Expression of parameters associated with NMR Spectra

Beyond standard one-dimensional NMR, with which most protons in a variety of protein samples can be resolved, considerably more information can be extracted by examining how the spins on different protons affect their neighbours. This is made possible by inducing a transient magnetization in the sample through the application of a radio-frequency pulse, which alters the spin on one nucleus and hence enables examination of the effect on the spin of a neighbouring nucleus. This yields a two-dimensional spectrum obtained by Nuclear Overhauser Enhancement Spectroscopy (NOESY), which graphically displays pairs of protons that are in close

proximity, even if they are not close together in the primary structure. The basis for 2-D NMR is the
Nuclear Over Hauser effect (NOE), an interaction between nuclei that is proportional to the inverse
sixth power of the distance between them. Magnetization is transferred from an excited nucleus to
an unexcited one if they are less than about 5 Å apart. To sum up, the NOE effect thus provides a
means of detecting the location of atoms relative to one another in the three-dimensional structure
of the protein.
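Because the NOE falls off as the inverse sixth power of distance, an unknown proton-proton distance can be calibrated against a reference pair of known separation. The following minimal Python sketch uses illustrative intensity values that are assumptions, not data from the text:

# NOE intensity scales as r**-6, so for a calibrated reference pair:
#   r_unknown = r_ref * (NOE_ref / NOE_unknown) ** (1/6)
def noe_distance(r_ref_angstrom, noe_ref, noe_unknown):
    return r_ref_angstrom * (noe_ref / noe_unknown) ** (1.0 / 6.0)

# Illustrative: reference pair at 2.5 A with NOE intensity 1.0; a
# cross-peak half as intense corresponds to a distance of ~2.81 A.
print(round(noe_distance(2.5, 1.0, 0.5), 2))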

3. Applications:

a. Study of molecular structure.

i. Interactions between proteins and lipid bilayers in membranes. The structure of certain
membrane proteins has been related to their predicted biological function, examples of
such proteins include gramicidin A, bacteriorhodopsin and rhodopsin, phage coat proteins
and alamethicin.

ii. Structural studies of nucleic acids, DNA and RNA. Investigations of interactions between
various drugs and DNA and between binding proteins and DNA.

iii. Peptide and protein structural studies, e.g.lac repressor, antiviral proteins, etc.

b. Deduce changes to a particular chemical group under different conditions, such as the
conformational change of a protein from a disordered structure to an α helix in response to a change
in pH.

i. Protein folding studies e.g. ribonuclease A, cytochrome c, bamase, α-lactalbumin,


lysozyme, ubiquitin and Bovine pancreatic trypsin inhibitor (BPTI).

ii. NMR advantage: it is useful for studying molecular behaviour in solution and produces more useful information than the constrained structures available from X-ray crystallographic studies.

c. Investigate certain types of kinetic changes: Study of enzyme kinetics both in vivo and in vitro. The
groups of enzymes studied include chymotrypsin, trypsin, papain, pepsin, thermolysin; adenylate,
creatinine and pyruvate kinases; alkaline phosphatase, ATPase and ribonuclease. Other examples
are glycogen phosphorylase, dihydrofolate reductase and triosephosphate isomerase.

d. Drug metabolism studies: Molecular modelling (in combination with X-ray diffraction data) to
elucidate drug action.

e. Isotope 31P has been used extensively in studies of phosphate metabolism. The relative and
changing concentrations of AMP, ADP and ATP can be measured and hence their metabolism is
studied in living cells and tissues.

f. Magnetic Resonance Imaging (MRI), used for the diagnosis and evaluation of diseases, is based on NMR spectroscopy principles.


UV-Visible SPECTROPHOTOMETRY (UV/Vis ABSORPTION SPECTROSCOPY)

UV/Vis absorption spectroscopy is based on the transitions of electrons from one molecular orbital to
another due to the absorption of electromagnetic radiation of UV and Visible region. These regions of
the electromagnetic spectrum and their associated techniques are the most widely used in
bioanalyses.

As a molecule absorbs energy, an electron is promoted from an occupied orbital to an unoccupied


orbital of greater potential energy. Generally, the transition of electrons occurs from the highest
occupied molecular orbital to the lowest unoccupied molecular orbital.

The orbitals are classified in the following manner:

a. Molecular orbitals with the lowest energy are the σ-orbitals.

b. The π-orbitals lie at higher energy levels.

c. Non-bonding orbitals lie at even higher energies and contain a lone pair of electrons, and they
are stable, filled orbitals.

d. Anti-bonding orbitals are normally empty and have higher energy than bonding or non-bonding
orbitals.

1 Principle:

When electromagnetic radiation (light) passes through a compound, energy from the radiation is
used to promote an electron from a bonding or non-bonding orbital into one of the empty anti-
bonding orbitals. The energy gaps between these levels determine the wavelength of the
electromagnetic radiation absorbed, and these gaps are compound-specific. The larger the gap
between the energy levels, the greater the energy required to propel the electron to the higher
energy level; resulting in light of higher frequency, and therefore shorter wavelength, being
absorbed.


Absorption of electromagnetic radiation in the UV/Vis region (200 to 700 nm) can facilitate only a
limited number of the possible electron jumps. These jumps are
a. from π bonding orbitals to π anti-bonding orbitals (π to π*) and
b. from non-bonding orbitals, n, to π anti-bonding orbitals (n to π*).
This implies that in order to absorb electromagnetic radiation in the UV/Vis region, the molecule must contain either π bonds or atoms with non-bonding orbitals. Both n to π* and π to π* transitions require the presence of an unsaturated functional group to provide the p-orbitals. Molecules containing such functional groups and capable of absorbing UV/Vis radiation are called chromophores.
a. Molecules that show increasing degrees of conjugation require less energy for excitation, and as a result the chromophore(s) absorb radiation of longer wavelengths (a phenomenon termed a bathochromic shift). Molecules that contain conjugated systems, i.e., alternating single and
double bonds, will have their electrons delocalized due to overlap of the p-orbitals in the double
bonds. As the amount of delocalization in the molecule increases, the energy gap between the
π bonding orbitals and π anti-bonding orbitals gets smaller; and therefore, the light of lower
energy, and longer wavelength, is absorbed.
b. Conversely, a decrease in conjugation (e.g. protonation of aromatic ring nitrogen) causes a
hypsochromic shift to a lower wavelength.
c. Changes in peak maxima (increase or decrease in absorbance) may also occur. A hyperchromic
shift describes an increase and a hypochromic shift is a decrease in absorption maximum.

2. Instrumentation:
The material used in the optical parts of the instrument depends on the wavelength used.
a. In the ultraviolet region, prisms, gratings, reflectors and cuvettes made of silica are used; above 350 nm, borosilicate glass may be used.
b. In the visible region where the analyte (test sample) may not absorb but can be readily modified
chemically to produce a coloured product, coloured filters are used which absorb all but a
certain limited range of wavelengths. This limited range is known as the bandwidth of the filter.
The methods that use filter selectors and depend on the production of a coloured compound
are the basis of Colorimetry.
In general, colorimetric methods give moderate accuracy, since even the best filters (interference
types) do not possess particularly narrow bandwidths. The rationale here is to use two optically
matched cuvettes, one containing a blank in which all the materials are mixed except the sample
under test, an equivalent volume of solvent is added to this mixture, and the other containing the
coloured material to be measured. The analytical procedure also requires the zero to be reset
between each measurement as colorimeters, and some filters, are influenced by temperature
changes.
c. If the wavelengths are selected using prisms or gratings, then the technique is called
Spectrophotometry.


In both colorimetry and spectrophotometry, the usual protocol is to prepare a set of standards and
produce a concentration versus absorbance curve, which is linear because it is a Beer-Lambert plot.
Absorbances of unknowns are then measured and the concentration interpolated from the linear
region of the plot. Approximate concentration can be obtained from the following relationship:

Concentration of test = (test absorbance / standard absorbance) × concentration of standard

Such an assumption for individual experiments is valid, provided that the Beer-Lambert relationship
has been established for that particular reaction previously.
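As a minimal sketch of this single-standard estimate (the numerical values are illustrative assumptions, not from the text), and valid only on the linear region of the Beer-Lambert plot:

# Single-standard estimate on the linear Beer-Lambert region:
#   C_test = (A_test / A_standard) * C_standard
def concentration_from_standard(a_test, a_standard, c_standard):
    return a_test / a_standard * c_standard

# Illustrative: a 0.50 mg/ml standard reads A = 0.40;
# a test sample reading A = 0.30 is then 0.375 mg/ml.
print(concentration_from_standard(0.30, 0.40, 0.50))  # -> 0.375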

The prime advantage of the spectrophotometer is the facility to scan the wavelength range over both
ultraviolet and visible and obtain Absorption Spectra; defined as plots of absorbance versus
wavelength. Such a plot shows the extent of absorbance (absorption peaks) at various wavelengths
for a standard, like reduced cytochrome c. Absorption spectra in the ultraviolet (200 to 400) and visible
(400 to 700) nm ranges arise owing to the kinds of electron transitions(jumps) listed previously. The
wavelengths of light absorbed are determined by the actual electronic transitions occurring and hence
specific absorption peaks could be related to known molecular substructures. Essentially, a
chromophore is a specific part of the molecule that independently gives rise to distinct parts of an
absorption spectrum.

Fig. UV/Vis Spectroscopy Instrument Design


3. Applications:

a. Qualitative analysis may be performed in the ultraviolet/visible region:

i. to identify certain classes of compound both in the pure state and in biological mixtures,
e.g., proteins, nucleic acids, cytochromes and chlorophylls.

ii. to indicate chemical structures and intermediates occurring in a system (for more precise analysis, couple with infrared methods).

b. Quantitative analysis may be performed on the basis of the fact that certain chromophores
absorb at specific wavelengths:

i. Examples include the aromatic amino acids in proteins and the heterocyclic bases in nucleic
acids.

ii. Proteins are generally measured at 280 nm, nucleic acids at 260 nm and carbohydrates at ~270 nm. Corrections are usually necessary to account for interfering substances, e.g. an A280/A260 ratio for proteins in the presence of nucleic acid (the ratio of absorbances of the test sample at the two wavelengths).

iii. Algebraic techniques like the R. A. Morton and D. W. Stubbs correction are used to estimate the amount of vitamin A in saponified oils.

c. Amounts of substances with overlapping spectra, such as chlorophylls a and b, can be estimated if their extinction coefficients are known at two different wavelengths (for n components, absorbance data are required at n wavelengths), as the sketch below illustrates.
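For n = 2, absorbances at two wavelengths give two simultaneous Beer-Lambert equations that can be solved as a linear system. The sketch below uses hypothetical extinction coefficients purely for illustration; real chlorophyll a/b coefficients must be taken from the literature.

import numpy as np

# A(lambda) = sum_i eps_i(lambda) * c_i * l  ->  solve (E * l) @ c = A
# Rows = wavelengths, columns = components (e.g. chlorophyll a, b).
E = np.array([[80.0, 20.0],    # hypothetical extinction coefficients
              [15.0, 55.0]])   # (per mM per cm) at two wavelengths
A = np.array([0.90, 0.60])     # measured absorbances (assumed values)
l = 1.0                        # path length in cm

c = np.linalg.solve(E * l, A)  # concentrations of the two components, in mM
print(c)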

In certain assays, like those with moderate concentrations of biological macromolecules, for instance,
large DNA fragments measured at 260 nm, the phenomenon of Rayleigh light scattering may occur
which introduces an interference leading to error. This could be rectified by measuring the scattering
in a region of the spectrum where DNA does not absorb, for example at 330 to 430 nm.

CIRCULAR DICHROISM (CD)

Biophysical methods are usually employed for measurement of optical rotation, and Circular Dichroism (CD) spectroscopy is a prime example of such a biophysical method, used for studying optically active compounds, particularly the conformation of proteins and nucleic acids.

1. Working Principle:

The physical basis of CD is the utilization of circularly polarized light; a CD instrument records the
unequal absorption of left-handed and right-handed circularly polarized light.


Fig: A CD instrument

Polarisation of light:

a. Unpolarized light- The directions of oscillations randomly change in the same plane with time.

b. Plane polarized light- The magnetic and electric fields components oscillate in a definite plane
being perpendicular to each other.

c. Circularly polarized light- The electric field vector changes direction in a rotary motion, tracing out a circle of constant amplitude over time.

Origin of circularly polarized light:

a. It is obtained by superimposing two plane-polarized light of the same wavelength and amplitude
which are polarized in two perpendicular planes, but there is a phase difference of 90° between
them.

b. The wavelength range of 190nm-250nm (far- UV region) is used.


Fig: Linearly and circularly polarized light

2. Mode of working:

a. The right circularly (R) and left circularly polarized (L) light is incident on a molecule/sample.

b. Circularly polarized light when passed through a dichroic sample will be elliptically polarized.

c. This becomes possible since the circularly polarized components of the original linear polarized
light will now not be of equal magnitudes due to differential absorbance (i.e., circular dichroism).

d. As the molecule is chiral, a different deviation (in degrees) occurs for each component, giving two different colour illuminations; hence the name circular dichroism.

e. It is measured ellipticity (millidegrees or degrees) or the difference in absorption of left and right
circularly polarized light.

[Left circularly polarized light- Anti-clockwise rotation of electric vector

Right circularly polarized light- Clockwise rotation of electric vector]

f. Thus, an elliptically polarized light is obtained by superimposing two plane-polarized light


vibrating at the right angle to each other (phase difference of 90° between them) having the
same wavelength but unequal amplitude.


Fig: Mode of working of Circular Dichroism

So, CD = AL-AR

= ΔA = (εL-εR) cl

= Δεcl

[Δε - Molar circular dichroism]

Δε is typically <10 M-1cm-1

So, the CD signal is a very small difference between two large signals.

The ellipticity is proportional to the difference in absorbance of 2 components (left and right circularly
polarized light). So, CD is equivalent to ellipticity.

θ = 2.303 × (AL – AR) × 180/(4π)

θ = 32.98 ΔA ≈ 33 ΔA

Unit of θ = Degree cm2 dmol-1

Hence, CD measures the ellipticity of the transmitted light (i.e., the light that is not absorbed)

Ellipticity value can be both positive and negative [AL > AR: Positive and AL < AR: Negative]


Typical initial concentrations:

a. Protein concentration: Around 0.5mg/ml (adjustments made so as to produce the best data)

b. Cell path length: If problems arise from high absorbance, cells with a shorter path length (0.1 mm), a correspondingly increased protein concentration and a longer scanning time can be utilized.

c. Buffer concentration: It should be as low as possible around 5mM or even lower, while
maintaining protein stability. Generally, 10mM phosphate buffer is used in CD spectra, although
low concentrations of Tris, perchlorate or borate are also acceptable.

Sample preparation and measurement:

a. Additives, buffers and stabilizing compounds: Compounds which absorb in the region of interest should be avoided.

b. Solvent selectivity: A large number of organic solvents like THF, CHCl3 and CH2Cl2 cannot be used.

c. Protein solution: The protein solution should only contain those chemicals necessary to
maintain protein stability/solubility and at the lowest concentration possible. The protein
should be pure devoid of any sort of contamination.

d. Lamp selectivity: In place of traditional Xe-arc lamps, high pressure short Xe lamps are used for
performing low UV-CD spectroscopy.

e. Contaminants: Any particulate matter (scattering particles) that adds a significant noise to the
CD spectra should be avoided. Solutions must be filtered to improve the signal-to-noise ratio.

Some standard CD patterns for secondary structure

a. For α-helical proteins:

Negative peak at 222nm and 208nm

Positive peak at 193nm

b. For β-sheet proteins:

Positive peak at 195nm

Negative peak at 218nm

c. For random-coil:

Positive peak at 215nm

Negative peak at 195nm


Fig: Standard CD graphs of secondary structures

Enantiomers and Cd-spectra:

Two enantiomers (mirror image of each other) are in equal amount in a sample; then the resultant
CD-signal will be zero. Both the signals will cancel out each other in this racemic mixture.

Units of CD-data:

CD data is represented either in ellipticity (θ) or differential absorbance (ΔA)

Mean Residue Weight (MRW) = M / (N − 1)

M - Molecular mass of the polypeptide

N - No. of amino acid residues

Mean Residue Ellipticity (MRE):

[θ]MRE = (MRW × θλ) / (10 × l × c)

θλ = Observed ellipticity (in degrees)


l = Path length (in cm)

c = concentration (in g/ml)
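A minimal sketch applying the MRE formula above; the protein parameters (mass, residue count, concentration, path length, observed ellipticity) are assumed values for illustration only:

# Mean residue ellipticity (deg cm^2 dmol^-1) from raw CD data.
def mre(theta_deg, mw_da, n_residues, path_cm, conc_g_per_ml):
    mrw = mw_da / (n_residues - 1)          # mean residue weight
    return mrw * theta_deg / (10.0 * path_cm * conc_g_per_ml)

# Assumed example: a 13,700 Da protein of 124 residues at 0.0005 g/ml
# in a 0.1 cm cell, with observed ellipticity -0.02 degrees at 222 nm.
print(round(mre(-0.02, 13700, 124, 0.1, 0.0005), 1))  # ~ -4455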

3. Advantages of Circular Dichroism:

a. Relatively low concentrations/amount of sample is required for analysing

b. Microsecond time resolution

c. The timescale is much shorter, thus allowing the study of dynamic systems and kinetics

4. Limitations:

a. Certain buffer compounds in the sample strongly absorb in the Far-UV region and cause
interference.

b. Carbohydrates cannot be easily studied through CD.

c. Oxygen must be completely absent from the system to perform the experiment below 200 nm.

d. It does not provide atomic-level structural analysis.

e. It can be used only for qualitative analysis of data.

f. Not able to provide a detailed residue-specific information as in NMR and X-Ray Crystallography

5. Applications of CD-spectra:

a. Determine the protein’s secondary structure (at far-UV region: 180-240nm) and the protein’s
tertiary structure (at near-UV region: 280-380nm)

b. It is the best method for monitoring structural alterations due to pH, temperature and ionic
strength

c. Structural, kinetic and thermodynamic information about macromolecules can be derived from
CD spectra.

d. It can be used to estimate α-helix, β- sheet and random coil configuration.

e. It is used to determine the conformational changes due to protein-protein interactions, protein-


DNA and protein-ligand interactions.

f. It is used to measure the folding and unfolding state of proteins due to temperature changes.


ELECTRON SPIN RESONANCE SPECTROSCOPY (ESR)

It is a branch of absorption spectroscopy in which radiation having a frequency in the microwave region is absorbed by a paramagnetic substance to induce transitions between the magnetic energy levels of electrons with unpaired spins.

Magnetic energy splitting is achieved by applying a static magnetic field. ESR operates at microwave frequencies of 10^4-10^6 MHz.

Fig. An ESR machine in lab

1. Principle Of ESR:

a. ESR spectroscopy is based upon the absorption of microwave radiation by an unpaired electron
when it is exposed to a strong magnetic field.

b. The electronic energy levels of the atom or molecules will split into different levels. Such
excitation is called magnetic resonance absorption.

c. With an ESR instrument, a static magnetic field and microwaves are used to observe the behaviour of unpaired electrons in the material being studied.

d. In principle, ESR finds paramagnetic centres (e.g., radicals) that may or may not be radiation-
induced.

e. A strong external magnetic field generates a difference between the energy levels of the electron spins, ms = +½ and ms = –½, which results in resonance absorption of the applied microwave energy (see figure below).


Fig. A strong external magnetic field generates a difference between the energy levels of the
electron spins, ms = +½ and ms = –½.

Working Principle of ESR:

i. The gap between the ms = +½ and ms = –½ energy states is widened until it matches the energy of the microwaves; this is done by increasing the external magnetic field.

ii. At this point, the unpaired electrons can move between their two spin states.

iii. Absorption lines are detected when the separation level of energy is equal to the energy of
the incident light.

iv. It is this absorption that is monitored and converted into a spectrum. (As shown in the
diagram below)


Fig. Working Principle of ESR
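For a free electron, the resonance condition is hν = g·μB·B. The sketch below uses standard physical constants and an assumed X-band frequency of 9.5 GHz (an illustrative choice, not a value from the text):

# Resonance condition: h * nu = g * mu_B * B  ->  B = h * nu / (g * mu_B)
H = 6.62607e-34      # Planck constant (J s)
MU_B = 9.27401e-24   # Bohr magneton (J per tesla)

def esr_field(freq_hz, g=2.0023):
    """Magnetic field (tesla) at which resonance absorption occurs."""
    return H * freq_hz / (g * MU_B)

# An X-band microwave source (~9.5 GHz) resonates near 0.34 T.
print(round(esr_field(9.5e9), 3))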

2. Applications Of ESR Spectroscopy:

a. Study of Free Radicals-

i. With the help of ESR spectroscopy, free radicals can be studied even at very low concentrations.

ii. The structure of organic and inorganic free radicals can be identified.

iii. We can also investigate molecules in the triplet state.

iv. The spin-label gives information about the polarity of its environment.

b. Structural Determination-

i. In certain cases, ESR provides information about the shape of the radicals.

ii. The behaviour of unpaired electrons can be studied under the actual conditions of the sample.

iii. ESR is used to observe and measure the absorption of microwave energy by unpaired electrons in a magnetic field as they undergo transitions between electron energy levels.

c. ESR is characteristic of the following entities:

i. Atoms having an odd number of electrons.

ii. Ions having partly filled inner electron shells.

iii. Free radicals having unpaired electrons, etc.

Advantages:

i. With the help of ESR Spectroscopy, several types of irradiated food can be identified.

ii. It can detect paramagnetic ions and free radicals in a variety of materials.


X-RAY CRYSTALLOGRAPHY

X-ray crystallography is a method of determining the arrangement of atoms within a crystal. It is based on the concept of X-ray diffraction: when an X-ray beam strikes a crystal, the beam is diffracted, and from the angles and intensities of these diffracted beams a three-dimensional picture of the electron density within the crystal is obtained. From this electron density, the mean positions of the atoms in the crystal can be determined.

1. Need Of X-Rays:

X-rays, like light, are a form of electromagnetic radiation but have a much smaller wavelength. The
wavelengths of X-rays (typically around 0.1 nm) are of the same order of magnitude as the
distance between atoms or ions in a molecule or crystal. Therefore, the use of x-rays provides the
best resolution as the wavelength of x-rays is about the same length as that of a covalent bond.
When X- rays interact with a single particle, it scatters the incident beam uniformly in all
directions. However, when X-rays interact with a solid material, the scattered beams may add up
and reinforce each other to yield diffraction.

2. Crystal:

This technique requires that all molecules be precisely oriented (regularity of the material is
responsible for the diffraction of the beams), so obtaining good quality crystals of the material of
interest is a prerequisite. A crystal is built up of billions of small identical units called unit cells. The
unit cell is the smallest and simplest volume element and is representative of the whole crystal.

3. X-Ray Source:

X-rays for analysis are sourced by rotating anode generators (in-house) or synchrotron facilities.
In rotating anode generators, a rotating metal target is bombarded with high-energy (10–100keV)
electrons that knock out core electrons. Common targets are copper, molybdenum and
chromium, which have strong distinct X-ray emission at 1.54 Å, 0.71 Å and 2.29 Å, respectively.
In synchrotrons, electrons are accelerated in a ring, thus producing a continuous spectrum of X-
rays.

4. Experiment Strategy:

a. Crystal formation: A laboratory-grown crystal is mounted on a goniometer and exposed to X-rays.

b. A beam of x-rays of wavelength 1.54 Å is produced by accelerating electrons against a copper


target.

c. A narrow beam of x-rays strikes the crystal. A part of the beam goes straight through the
crystal and the rest is scattered in various directions.

d. Finally, these scattered (or diffracted), x-rays are detected by an x-ray film, the blackening of
the emulsion being proportional to the intensity of the scattered x-ray beam, or by a solid-
state electronic detector.


5. Principle of X-ray diffraction:

The phenomenon of X-ray diffraction can be explained by drawing an analogy with the wave nature
of light. Whenever wave phenomena occur in nature, the interaction between waves can occur.
The superimposition of waves gives rise to the phenomenon of interference. Depending on the
displacement (phase difference) between two waves, their amplitudes either reinforce or cancel
each other out. If waves from two sources are in the same phase with one another, their total
amplitude is additive (constructive interference); and if they are out of phase, their amplitude is
reduced (destructive interference). Interference patterns result, with dark regions where light
waves are out of phase and bright regions where they are in phase. The interference gives rise to
dark and bright rings, lines or spots, depending on the geometry of the object causing the
diffraction. From the constructive interferences, we can determine dimensions in solid materials.

The same approach can be applied to calculate the distance between atoms in crystals. Instead of
visible light, which has a longer wavelength to interact with atoms, we use a beam of X-rays. When
a narrow beam of X-rays is directed at a crystalline solid, most of the
X-rays will pass straight through it. A small fraction, however, gets scattered by the atoms in the
crystal. The scattered waves reinforce one another at the film or detector if they are in phase there,
and cancel if out of phase. The way in which the scattered waves recombine is strictly governed by
the atomic arrangement.

6. Workflow:

a. The crystal is positioned in a precise orientation with respect to the x-ray beam and the film
and rotated.

b. The crystal is rotated so that the beam can strike the crystal from many directions. This
rotational motion results in an x-ray photograph consisting of a regular array of spots called
reflections.

c. The intensity of each spot is measured.

d. The next step is image reconstruction of the sample, say Hb protein sample from the observed
intensities.

e. The image is formed by applying a mathematical relation called a Fourier synthesis. For each
spot, this operation yields a wave of electron density whose amplitude is proportional to the
square root of the observed intensity of the spot. Each wave also has a phase that is, the timing
of its crests and troughs relative to those of other waves. The phase of each wave determines
whether the wave reinforces or cancels the waves contributed by the other spots. These phases
can be deduced from the diffraction patterns produced by electron-dense heavy-atom
reference markers such as uranium or mercury at specific sites in the protein.


f. This is followed by the calculation of an electron-density map, which gives the density of
electrons at a large number of regularly spaced points in the crystal. Interpret the electron-
density map to obtain the image of the crystal lattice. The fidelity of the image depends on the
resolution of the Fourier synthesis. The ultimate resolution of an x-ray analysis is determined
by the quality of the crystal; making perfect crystals is more art than science. For proteins, this
limiting resolution is about 2 Å.

Fig. A typical single crystal X-Ray Crystallography Setup

Top: Diagrammatic representation

Bottom: Instrumental setup


BRAGG’S LAW:

The interactions of x-rays with crystalline solids is expressed by Bragg’s Law which describes the
relationship between the angle at which a beam of x-rays of a particular wavelength diffracts from a
crystalline surface. For constructive interference, Bragg’s equation is applied as follows,

nλ = 2d sinθ

Here, n is an integer, λ is the wavelength of the X-rays, d is the spacing between the planes (inter-planar distance), and θ is the angle between the incident ray and the scattering planes.

Let us consider a plane lattice crystal with interplanar distance d. Suppose a beam of X-rays of wavelength λ is incident on the crystal at an angle θ; the beam will be diffracted by all possible atomic
planes. The path difference between any two diffracted waves is equal to the integral multiple of the
wavelength. In the figure below, ray P gets diffracted from the surface, while ray Q has to undergo
some path difference. The extra distance travelled by the ray Q will be (BC + CD).

From the diagram, either BC or CD is equal to d sin θ. So the path difference will be-

d sin θ + d sinθ = nλ

2d sinθ = nλ

Here, n is the order = 1, 2, 3, …. This is Bragg's law.
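As a quick numerical application of Bragg's law (the diffraction angle here is an assumed example; 1.54 Å is the Cu Kα wavelength mentioned earlier), the interplanar spacing d can be recovered from a measured angle:

import math

# Bragg's law: n * lam = 2 * d * sin(theta)  ->  d = n * lam / (2 sin theta)
def d_spacing(wavelength_angstrom, theta_deg, n=1):
    return n * wavelength_angstrom / (2.0 * math.sin(math.radians(theta_deg)))

# Cu K-alpha X-rays (1.54 A) diffracting at an assumed theta of 22.5
# degrees (first order) correspond to d ~ 2.01 A.
print(round(d_spacing(1.54, 22.5), 2))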


7. Applications:

a. Protein crystallization-

Crystals of proteins are obtained by slow, controlled precipitation from an aqueous solution
under non-denaturing conditions. Ionic compounds (salts) precipitate proteins by a process
called salting out. Organic solvents also cause precipitation, but they often interact with hydrophobic portions of proteins and thereby denature them. The water-soluble polymer polyethylene glycol (PEG) is widely used since it is a powerful precipitant and a weak denaturant.

Slow precipitation can be achieved by adding precipitant to an aqueous solution of protein until the precipitant concentration is just below that required to precipitate the protein. Then water is allowed to evaporate slowly, which gently raises the concentration of both protein and precipitant until precipitation occurs. Whether the protein forms crystals or remains amorphous depends on the protein concentration, temperature, pH and ionic strength.

X-ray Crystallography technique can reveal the precise three-dimensional positions of most
atoms in a protein molecule. Protein crystals display their biological activity, implicating that the
proteins have crystallized in their biologically active configuration. For instance, enzyme crystals
may display catalytic activity if the crystals are saturated with relevant substrates.

b. Fibre Diffraction-

Many important biological substances do not form crystals, like most membrane proteins and
fibrous materials like collagen, DNA and muscle fibres. Like crystals, fibres are composed of
molecules in an ordered form, albeit the order in a fibre is one-dimensional rather than three-dimensional as in a crystal. When irradiated by an X-ray beam perpendicular to the fibre axis, fibres produce distinctive patterns that reveal their dimensions at the molecular level.

Historically, fibre diffraction was of central significance in enabling the determination of the
three-dimensional structure of DNA by Crick, Franklin, Watson and Wilkins.

Two classes of fibre diffraction patterns can be distinguished; crystalline fibres (e.g. A form of
DNA), and non-crystalline fibres (e.g. B form of DNA). In non-crystalline forms, the molecules
are arranged parallel to each other but in a random orientation around the common axis. The
diffraction intensity can be calculated via Fourier–Bessel transformation replacing the Fourier
transformation used in single-crystal diffraction.

c. Powder Diffraction -

Powder diffraction is a rapid method to analyse multi-component mixtures without the need
for extensive sample preparation. Instead of using single crystals, the solid material is analysed
in the form of a powder where, ideally, all possible crystalline orientations are equally
represented.


Points to Remember:

i. The amplitude of the wave scattered by an atom is proportional to its number of electrons,
the electron density. Since hydrogen atoms have very little electron density, they are not
usually determined experimentally by this technique.

ii. The detection of light beams is restricted to recording the intensity of the beam only. Other
properties, such as polarization, cannot be determined solely with this technique. The phase
of the light waves is even systematically lost in the measurement; a phenomenon termed
as phase problem.

SURFACE PLASMON RESONANCE (SPR)

Surface plasmon resonance (SPR) is a powerful, label-free spectroscopic technique in which noncovalent molecular interactions are monitored in a real-time and non-invasive fashion. Being label-free, it does not need any tag, such as a dye or other specific reagent, to produce a visible signal.

1. Working Principle:

a. Kretschmann Configuration-

This configuration is mostly used in SPR applications. A thin metal film, usually of silver or gold, is placed at the interface of two dielectric media, and plane-polarized light is made to strike the plate.

b. Evanescent waves and Total internal reflection-

When light travels from a medium of higher refractive index to one of lower refractive index, total internal reflection takes place in the medium of higher refractive index when the incident angle θ is greater than the critical angle θc,

where sin θc = n2/n1.

Formation of evanescent waves occurs in the medium of lower refractive index under the condition of total internal reflection. The amplitude of these waves decays exponentially with distance from the interface between the media of higher and lower refractive index. The magnitude of the parallel wave vector of the evanescent wave, kev, is expressed as:

kev = (2π/λ) n1 sin θ


where λ is the wavelength of the incident light, n1 is the refractive index of the medium with the higher refractive index and θ is the angle of incidence.

c. Surface Plasmon-

A surface plasmon is a quantum of plasma, found as a surface electromagnetic wave whose propagation is confined to the metal-dielectric interface. The wave vector of the surface plasmon, ksp, depends on both the medium and the metal film:

ksp = (2π/λ) √( ng² n2² / (ng² + n2²) )

where n2 stands for the refractive index of the medium of lower refractive index and ng is the refractive index of the metal film.

Surface Plasmon Resonance:

The excited state of the surface plasmon generated by the evanescent wave is known as surface plasmon resonance. In this phenomenon, the intensity of the reflected light decreases sharply. Decay of the excited surface plasmon converts the energy back to photons. Resonance occurs when kev = ksp, i.e.:

n1 sin θSPR = √( ng² n2² / (ng² + n2²) )

The angle required for resonance, θSPR, is thus related to n2 when n1 and ng are fixed (see the sketch below).
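A minimal Python sketch of the geometry above, with assumed refractive indices (n1 = 1.52 for a glass prism, n2 = 1.33 for water) and an assumed 633 nm source; these numbers are illustrations, not values from the text:

import math

def critical_angle_deg(n1, n2):
    # Total internal reflection for incidence beyond theta_c = asin(n2 / n1)
    return math.degrees(math.asin(n2 / n1))

def k_evanescent(wavelength_nm, n1, theta_deg):
    # Parallel wave vector of the evanescent wave: (2*pi/lambda) * n1 * sin(theta)
    return 2 * math.pi / wavelength_nm * n1 * math.sin(math.radians(theta_deg))

print(round(critical_angle_deg(1.52, 1.33), 1))   # ~61.0 degrees
print(k_evanescent(633.0, 1.52, 70.0))            # in nm^-1, at 70 degrees incidence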

2. Mechanism:

a. In the phenomenon of surface plasmon resonance, a metal film is hit by polarized light at the interface of media with different refractive indexes.

b. Upon hitting the film, the collective oscillation of free electrons, known as a surface plasmon, is excited, and the resulting signal is detected and collected.

c. The detection is done via Kretschmann configuration, in this type of arrangement light is focused
on a metal film through a glass prism and after that subsequent reflection is detected, the
arrangement shown in the figure below.


Fig. The figure shows incident light upon metal film through a glass prism and the reflected beam is collected
and analysed.

The plasmon is set into resonance with the light at a certain incident angle, known as the resonance angle, and this results in absorption of light at that angle. The resonance creates a dark line in the reflected beam, and these dark lines contain information. The angle of resonance can be obtained by observing the dip in SPR reflection intensity. If a shift occurs in the reflectivity curve, it indicates that a molecular binding event took place.

Fig. The excitation of surface plasmons results in a dark line in the reflected beam, and the angular position
of the dark line shifts as a molecule binding event takes place.


Fig. SPR Scanning Angle Response. SPR causes an intensity dip in the reflected light at the sensor surface. A
shift in the curve represents molecular binding.

3. Application:

The main application of SPR is to study noncovalent interactions between molecules, such as protein-DNA, cell-protein, DNA-DNA, protein-protein, protein-carbohydrate, and other macromolecule-small molecule interactions.

SPR technology has successfully proven its application in drug discovery, ligand fishing and other studies related to clinical immunology. The technique is very specific, so the interaction to be studied needs to be well defined; parameters like the kinetics, affinity and concentration of the molecules also play a significant role in the results.

4. Advantage:

Surface plasmon resonance is an easily operated technique in the analytical laboratory. Many types of noncovalent interactions between macromolecules and small molecules can be detected by this technique. The best part of this technique is that it does not require labelling, so it is easy to handle. In future, it will be applied to identify more biochemical activities and will be of immense use in drug discovery.


MASS SPECTROSCOPY

Mass spectroscopy is an analytical technique in which ionization and mass analysis of compounds are employed to determine the formula, mass, and structure of a compound. A component of the mass spectrometer known as the mass analyser takes the ionized masses and separates them based on their charge-to-mass ratios; the output then reaches the detector, where the ions are detected and later converted to a digital output.
The following types of mass analysers are generally found and can be used for ion separation in mass spectroscopy:
a. MALDI-TOF Mass Spectrometry
b. Quadrupole Mass Analyzer
c. Time of Flight Mass Analyzer
d. Magnetic Sector Mass Analyzer
e. Electrostatic Sector Mass Analyzer
f. Quadrupole Ion Trap Mass Analyzers
g. Ion Cyclotron Resonance

a. Matrix-assisted laser desorption ionization-time of flight mass spectrometry (MALDI-TOF MS)-

This is a commonly used mass spectrometry technique. It is a soft ionization technique that creates ions with minimal fragmentation; laser energy is used for this. In MALDI-TOF, the analyte is protonated, and the protonated ions are accelerated by an electric field so that all ions of the same charge have the same kinetic energy. The velocity then depends on the mass-to-charge (m/z) ratio, and the time taken for the ion to reach the detector is measured. MALDI-TOF analyses a wide variety of biomolecules, including peptides, carbohydrates, and other macromolecules.

b. Quadrupole Mass Analyzer-

In this type of mass analyser, all the charged molecules are accelerated by a DC bias and move away from the centreline at a rate proportional to their charge-to-mass ratio. If their course goes off too far, they will hit the metal rods or the sides of the container and be absorbed. The DC bias thus plays a role analogous to the magnetic field B of a magnetic sector instrument, and it can be tuned so that ions of a specific charge-to-mass ratio reach the detector.
In addition, two sinusoidal electric fields at 90° orientation and with a 90° phase shift produce a field that oscillates in a circle over time. As a result, the charged particles fly down toward the detector travelling in a spiral, the diameter of the spiral being determined by the charge-to-mass ratio of the molecule and the frequency and strength of the electric field.


Fig. A quadrupole Mass analyser

c. Time of Flight (TOF) Mass Analyser-

In this type of analyser, ions are separated in time without using an electric or magnetic field during the flight. TOF is thus like gas chromatography, but no stationary/mobile phase is used; separation of the ions is based on their kinetic energy and velocity.
In this process, ions having the same charge have equal kinetic energies, so that the kinetic energy of the ion in the flight tube is equal to the kinetic energy of the ion as it leaves the ion source:

KE = zeV = ½mv²   (Equation 1)

And the flight time, or the time it takes for the ion to travel the length of the flight tube, is:

t = L/v   (Equation 2)

where L denotes the length of the tube and v denotes the velocity of the ion.
Substituting Equation 1 for the kinetic energy in Equation 2 gives the time of flight:

t = L √( m / (2zeV) )

During the analysis, the length of the tube L and the ion source voltage V are held constant, so the time of flight is directly proportional to the square root of the mass-to-charge ratio, as the sketch below illustrates.
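Using t = L √(m/(2zeV)), the Python sketch below computes flight times for two singly charged ions; the flight-tube length and source voltage are assumed for illustration only:

import math

E_CHARGE = 1.602e-19   # elementary charge (C)
AMU = 1.6605e-27       # atomic mass unit (kg)

def flight_time(mass_da, charge, tube_m, accel_v):
    # t = L * sqrt(m / (2 z e V)); heavier ions arrive later.
    m_kg = mass_da * AMU
    return tube_m * math.sqrt(m_kg / (2 * charge * E_CHARGE * accel_v))

# Assumed instrument: 1.0 m flight tube, 20 kV source voltage.
for mass in (1000, 4000):   # singly charged ions (Da)
    print(mass, round(flight_time(mass, 1, 1.0, 20e3) * 1e6, 1), "microseconds")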


Fig. A TOF System

d. Magnetic Sector Mass Analyzer-

Like the TOF analyser, the magnetic sector analyser accelerates ions through a flight tube and separates them by their charge-to-mass ratio; the difference is that magnetic sector analysers use a magnetic field for the separation. When moving charges enter a magnetic field, they are deflected into a circular motion of a unique radius, in a direction perpendicular to the applied magnetic field. In the magnetic field, the ions experience two equal forces: one force due to the magnetic field and the other the centripetal force:

zevB = mv²/r

which can be rearranged to

r = mv/(zeB)

After substituting the kinetic energy relation (zeV = ½mv²), this gives

m/z = eB²r²/(2V)

Fig. Magnetic Sector separator


Each m/z value will thus have a unique path radius, which can be determined only if the magnetic field magnitude B and the acceleration voltage V are both held constant. When similar ions pass through the magnetic field, they are all deflected to the same degree and follow the same trajectory path. Ions not selected by the applied V and B values will collide with the flight tube wall or will not pass through the slit to the detector. This particular type of analyser performs direction focusing: it focuses angular dispersions (see the sketch below).
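Combining the force balance with the kinetic energy relation gives r = √(2mV/(ze))/B; the Python sketch below computes the path radius for an assumed field strength, voltage and m/z (all illustrative values):

import math

E_CHARGE = 1.602e-19   # elementary charge (C)
AMU = 1.6605e-27       # atomic mass unit (kg)

def path_radius(mass_da, charge, accel_v, b_tesla):
    # From zevB = mv^2/r with zeV = (1/2)mv^2:  r = sqrt(2 m V / (z e)) / B
    m_kg = mass_da * AMU
    return math.sqrt(2 * m_kg * accel_v / (charge * E_CHARGE)) / b_tesla

# Assumed values: m/z 500 ion, 8 kV acceleration, 1.2 T field -> ~0.24 m.
print(round(path_radius(500, 1, 8000, 1.2), 3), "m")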

e. Electrostatic Sector Mass analyser-

This analyser is similar to the time-of-flight analyser, except that separation of the ions occurs using an electric field. The analyser has two curved plates with equal and opposite potential. When ions travel through the electric field, they are deflected; the force on the ion due to the electric field is equal to the centripetal force on the ion. As a result, ions having the same kinetic energy are focused and ions having different kinetic energies are dispersed.

The main function of electrostatic sector analysers is as energy focusers, in which the ion beam is focused with respect to energy. Used individually, the instrument is single-focusing. If the electrostatic and magnetic sectors are used together, the instrument is called double-focusing, because both the energies and the angular dispersions are then focused.

f. Quadrupole Ion Trap Mass Analyser-

This analyser works on the same principles as the quadrupole analyser, using an electric field to separate the ions by their mass-to-charge ratios. It is made up of a ring electrode held at a particular voltage and end-cap electrodes that are grounded. Ions enter the area between the electrodes through one of the end caps. Usually, a quadrupole ion trap runs a mass-selective ejection, where it selectively ejects the trapped ions in order of increasing mass by gradually incrementing the applied radio-frequency voltage.

g. Ion Cyclotron Resonance (ICR)-

In Ion Cyclotron Resonance (ICR) technology, an ion trap uses a magnetic field to trap ions in an orbit inside it. In this type of analyser, no spatial separation occurs; all the ions of a specific range are trapped inside, and an external electric field is applied to generate a signal. When a moving charge enters a magnetic field, it experiences a centripetal force that keeps the ion in a circular orbit.


4. STATISTICAL METHODS

1. Measures of central tendency:

A measure of central tendency is a single summary value that attempts to describe a whole set of data by identifying the middle or centre of its distribution. Generally, a measure of central tendency is calculated via:
a. Mean
b. Median and
c. Mode.

a) Mean: Mean is also called the average. It is the sum of the values of all observations in the data divided by the total number of observations. This is also known as the arithmetic average.

Mean can be used to determine continuous or discrete numeric data.


For example:

11, 12, 13, 14, 15, 16, 17, 18, 19, 20

Mean = (11+12+13+14+15+16+17+18+19+20)/10 = 155/10 = 15.5

b) Median: The median is the middle value in a dataset when the values are arranged in either ascending or descending order. It is usually used when the distribution is not uniform.

11, 12, 13, 14, 15, 16, 18, 19, 20 → 15 is the median (the middle of the 9 sorted values)

11, 12, 13, 14, 15, 16, 17, 18, 19, 20 → median = (15 + 16)/2 = 15.5 (the mean of the two middle values of the 10 sorted values)

11, 11, 13, 13, 15, 16, 16, 18, 19 → 15 is the median
c) Mode: The most commonly occurring value in a distribution is called the mode.
Consider this dataset:
14, 14, 14, 15, 16, 17, 17, 18, 18, 20, 20

number      frequency of the number
14          3
15          1
16          1
17          2
18          2
20          2

Hence the mode = 14, the value with the highest frequency. These three measures can be checked in Python, as the sketch below shows.
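A minimal sketch using Python's statistics module, reusing the datasets above:

import statistics

data = [14, 14, 14, 15, 16, 17, 17, 18, 18, 20, 20]

print(statistics.mean(range(11, 21)))                           # 15.5, as above
print(statistics.median([11, 13, 14, 12, 15, 16, 18, 19, 20]))  # 15 (sorts internally)
print(statistics.mode(data))                                    # 14, the most frequent value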


2. Standard deviation and standard error:

In fields such as medicine, biology, engineering and psychology, the standard deviation (SD) and standard error (SE) are used to represent the statistical characteristics of sample data.

Let us consider the following set of data:

(n = 4) 2, 4, 6, 7

Mean for the above set is

(2 + 4 + 6 + 7)/4 = 19/4 = 4.75

The sum of squared differences from the mean is

= (4.75 − 2)² + (4.75 − 4)² + (4.75 − 6)² + (4.75 − 7)²
= (2.75)² + (0.75)² + (−1.25)² + (−2.25)²
= 7.5625 + 0.5625 + 1.5625 + 5.0625
= 14.75

Variance = squared differences from mean / number of data points

= 14.75/4
= 3.6875

Standard deviation (σ) = √3.6875

= 1.92

Standard error (S.E.) = σ/√n

= 1.92/√4
= 1.92/2
= 0.96

Hence, the standard deviation (σ) and standard error (S.E.) for the set 2, 4, 6, 7 are approximately 1.92 and 0.96 respectively. The sketch below repeats the calculation in Python.
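The same worked example in Python (population standard deviation, dividing by n as above):

import math

data = [2, 4, 6, 7]
n = len(data)
mean = sum(data) / n                                  # 4.75

variance = sum((x - mean) ** 2 for x in data) / n     # 14.75 / 4 = 3.6875
sd = math.sqrt(variance)                              # ~1.92
se = sd / math.sqrt(n)                                # ~0.96

print(round(sd, 2), round(se, 2))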

3. Probability Distribution:

The term probability is used to measure the uncertainty of an event. For instance, when a die is thrown, any of the numbers 1, 2, 3, 4, 5 or 6 may come up, so the probability is equal for each of these numbers. Similarly, when a coin is tossed, the probabilities of head and tail are equal. We cannot say for sure which outcome will occur until the die or the tossed coin lands on the ground.


Different types of the Probability distribution are as mentioned below.

a. Binomial probability distribution


b. Normal probability distribution
c. Poisson probability distribution

a. Binomial probability Distribution:

As the name suggests, "bi" means two, i.e. there are only two possible outcomes, e.g. yes or no, head or tail, pass or fail. Say we are tossing a coin: the probability of getting a head is 1/2 and of a tail is also 1/2 (as there are only two possible events when tossing a coin). If we want the probability of getting exactly 6 heads in 10 tosses, we can calculate it using

B(x; n, P) = nCx × P^x × (1 − P)^(n − x)

The number of trials (n) is 10.

The probability of success ("tossing a head") is 0.5 (so 1 − P = 0.5), and x = 6:

P(x = 6) = 10C6 × 0.5^6 × 0.5^4
= 210 × 0.015625 × 0.0625
= 0.205078125
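As a quick check, the binomial formula can be evaluated with Python's standard library. This is a minimal sketch, not part of the module.

from math import comb

def binomial_pmf(x, n, p):
    """P(X = x) = nCx * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

print(binomial_pmf(6, 10, 0.5))  # 0.205078125, exactly 6 heads in 10 tosses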

b. Normal probability Distribution:

A normal distribution is represented in the form of a bell curve. It describes many naturally occurring variables. The bell curve is symmetrical: when the whole data set is distributed, half of the data falls to the left of the mean and half falls to the right. This form of distribution is used to model the heights of people, measurement errors, blood pressure, points on a test, IQ scores, salaries, etc.
The bell curve has equal mean, mode and median values for the data set, and the total area under the curve is 1. Hence, exactly half of the values fall to the left of the centre and half to the right.

Fig. A bell curve or a normal probability distribution pattern.
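Areas under the bell curve can be computed with Python's standard library. This is a minimal sketch with illustrative parameters (a mean IQ of 100 and SD of 15 are assumptions, not module data).

from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)
print(iq.cdf(100))                # 0.5 -- half the area lies left of the mean
print(iq.cdf(115) - iq.cdf(85))   # ~0.683 -- the area within one SD of the mean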


c. Poisson Probability of distribution:

A Poisson probability distribution is used to predict the probability of a given number of events occurring in a fixed interval of time or space. It tells us how likely it is that the event takes place a certain number of times within that fixed interval.

Fig. A representation of Poisson Probability distribution
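The Poisson probability mass function, P(k events) = e^(−λ) λ^k / k!, can be evaluated directly. This is a minimal Python sketch with an assumed rate of 3 events per interval (an illustration, not module data).

from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability of exactly k events when the average rate is lam per interval."""
    return exp(-lam) * lam ** k / factorial(k)

print(poisson_pmf(2, 3))  # probability of exactly 2 events, ~0.224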

4. Sampling distribution:

A sampling distribution is the distribution of a statistic (such as the mean) obtained from repeated samples drawn from a population.

For example: when you roll a single die, the chance of getting any number (1, 2, 3, 4, 5 or 6) is the same (1/6). The mean of a single roll is (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5. A pictorial representation of this event looks more or less like a uniform distribution. But when the number of dice averaged per roll is increased from 1 to 10, the distribution of the sample mean shows an almost normal pattern, because the sample size has increased.


Fig. Sampling distribution
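The dice example can be simulated directly. This is a minimal Python sketch (not from the module): as the number of dice averaged per sample grows from 1 to 10, the sample means cluster more tightly around 3.5 and their distribution approaches a normal shape.

import random

def sample_means(dice_per_sample, samples=10_000):
    """Simulate many samples, each the mean of several dice rolls."""
    return [sum(random.randint(1, 6) for _ in range(dice_per_sample)) / dice_per_sample
            for _ in range(samples)]

for k in (1, 10):
    means = sample_means(k)
    print(k, round(sum(means) / len(means), 2))  # both centre near 3.5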

5. Hypothesis Testing: Parametric And Non-Parametric Statistics:

a. From time to time, researchers propose explanations on the basis of limited evidence, which require further investigation to establish their truth.
b. Scientists generally base such hypotheses on previous observations, which by themselves do not justify the event or experiment taking place.
c. So, from time to time, analysis of a previously proposed hypothesis is necessary to check whether the current evidence is in its favour.
d. Hypothesis testing is done using two classes of methods:

i. Parametric statistics
ii. Non-parametric statistics

i. Parametric statistics: Under these tests the number of parameters is fixed. Performing a parametric test requires prior knowledge of the population distribution. If such prior knowledge is not available, an assumption of an approximately normal distribution pattern is made, which can help in justifying the original statement of the theory.


ii. Non-parametric statistics: Non-parametric tests make no assumptions about the parameters of the population from which the sample is drawn. Under these conditions the set of parameters is not fixed. For this reason, non-parametric tests are also referred to as distribution-free tests.

Difference between Parametric and Non-Parametric statistics:

Characters | Parametric statistics | Non-parametric statistics
1. Assumptions | Assumption-based | Assumption-free
2. Probabilistic distribution | Dependent | Independent
3. Central tendency | Mean value | Median value
4. Information about data for current examination | Previous knowledge required | Not required
5. Fixed parameters | Required | Not required
6. Distribution | Dependent | Independent
7. Variable and attribute dependency | Variables | Both variables and attributes
8. Correlation | Pearson's coefficient of correlation | Spearman's coefficient of correlation

Regression and Correlation in Biostatistics


1. Correlation:

a. Correlation measures the strength of the relationship between two variables that are linearly related to each other.

b. In other words, correlation measures whether the two variables change together at a constant rate.

c. It is a common tool used to describe simple relationships without any claim about their cause and effect.

d. It is unconcerned with other variables.

e. Correlation is measured with the help of a correlation coefficient (r) that ranges from −1 to +1. Its statistical significance is indicated by the p-value. Hence, correlations are typically reported as r and p.
f. If r = 0, there is no linear relationship between the variables.


g. If r is positive, there is a positive correlation: the values of both variables increase at a similar pace.
h. If r is negative, there is a negative correlation: the values of one variable increase as the values of the other variable decrease.
i. The p-value indicates how likely it is that the population correlation coefficient differs from zero, based on the observations made from the sample.

Fig. A representation of positive, negative and no correlation relationship

To understand correlation, let's take the example of a research organization that is studying the effect of fertilizer use on crop yield.

Fertilizer crop yield

2 4

1 3

3 4

2 3

4 6

5 5

3 5

Calculate the correlation coefficient of the following data.


Solution: let's tabulate the data:

Fertilizer (x)   Crop yield (y)   xy   x²   y²
2                4                8    4    16
1                3                3    1    9
3                4                12   9    16
2                3                6    4    9
4                6                24   16   36
5                5                25   25   25
3                5                15   9    25
Ʃx = 20          Ʃy = 30          Ʃxy = 93   Ʃx² = 68   Ʃy² = 136

r = [n(Ʃxy) − (Ʃx)(Ʃy)] / √{[n(Ʃx²) − (Ʃx)²][n(Ʃy²) − (Ʃy)²]}

r = [7(93) − (20)(30)] / √{[7(68) − (20)²][7(136) − (30)²]}

r = (651 − 600) / √[(476 − 400)(952 − 900)]

r = 51 / √([76][52])

r = 51 / √3952

r = 51 / 62.86

r = 0.811

Hence, the correlation coefficient of the above-mentioned data is 0.811 (a positive correlation).
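The same result can be reproduced with a few lines of Python; this is a minimal sketch of the formula above applied to the fertilizer/crop-yield data.

from math import sqrt

x = [2, 1, 3, 2, 4, 5, 3]   # fertilizer
y = [4, 3, 4, 3, 6, 5, 5]   # crop yield
n = len(x)

sx, sy = sum(x), sum(y)                  # 20, 30
sxy = sum(a * b for a, b in zip(x, y))   # 93
sx2 = sum(a * a for a in x)              # 68
sy2 = sum(b * b for b in y)              # 136

r = (n * sxy - sx * sy) / sqrt((n * sx2 - sx ** 2) * (n * sy2 - sy ** 2))
print(round(r, 3))  # 0.811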

2. Regression:

a. Correlation describes the strength of an association between two variables and is completely symmetrical: the correlation between A and B is the same as the correlation between B and A.


b. However, if the two variables are related such that a change in one (the independent variable) produces a change in the other (the dependent variable), this asymmetry matters.
c. If y represents the dependent variable and x the independent variable, this relationship is described as the regression of y on x. It can be represented by a simple equation called the regression equation.
d. The regression equation gives the change in y for any given change in x. It can be used to construct a regression line on a scatter diagram, and in the simplest case this is assumed to be a straight line.
e. Whether the regression is positive or negative depends on the direction of the line's slope. When the two sets of observations increase or decrease together (positive), the line slopes upwards from left to right. On the other hand, when one set decreases as the other increases, the line slopes downwards from left to right.

To understand regression, let's take the same example of the research organization studying the effect of fertilizer use on crop yield.

Fertilizer (x) crop yield (y)


2 4
1 3
3 4
2 3
4 6
5 5
3 5
Calculate the regression of the following data.

Solution: let's tabulate the data (n = 7):

Fertilizer (x)   Crop yield (y)   xy   x²   y²
2                4                8    4    16
1                3                3    1    9
3                4                12   9    16
2                3                6    4    9
4                6                24   16   36
5                5                25   25   25
3                5                15   9    25
Ʃx = 20          Ʃy = 30          Ʃxy = 93   Ʃx² = 68   Ʃy² = 136

Regression coefficient of x to y is calculated as follows:


x̄ = 20/7 = 2.86

bxy = [n(Ʃxy) − (Ʃx)(Ʃy)] / [n(Ʃy²) − (Ʃy)²]

bxy = [7(93) − (20)(30)] / [7(136) − (30)²]

bxy = (651 − 600) / (952 − 900)

bxy = 51/52

bxy = 0.98

Fig. A representation of Regression
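Both regression coefficients for the same data can be checked with a minimal Python sketch (not from the module); bxy uses Ʃy² in the denominator and byx uses Ʃx².

x = [2, 1, 3, 2, 4, 5, 3]   # fertilizer
y = [4, 3, 4, 3, 6, 5, 5]   # crop yield
n = len(x)

num = n * sum(a * b for a, b in zip(x, y)) - sum(x) * sum(y)   # 51
b_xy = num / (n * sum(b * b for b in y) - sum(y) ** 2)         # 51/52 ~ 0.98
b_yx = num / (n * sum(a * a for a in x) - sum(x) ** 2)         # 51/76 ~ 0.67
print(round(b_xy, 2), round(b_yx, 2))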

Comparison between Correlation and Regression analysis

Comparative values | Correlation | Regression
Relationship analysis | Between any two variables | Dependency of the dependent variable on the independent variable
Motto | Association (present or absent) between two variables | Determining the functional relationship between two variables
Objective | Identification of a numerical value that expresses the relationship between two variables | Estimation of the random variable on the basis of another variable


Nature of variables | Dependent and independent not specified | Dependent and independent specified
Variability concern | Interrelated (two variables) | Allows understanding of the impact of a unit change in the known variable on the estimated variable
Nature of coefficient | Symmetrical or mutual | NA
Value range | Positive, negative or zero | Can be less than or greater than 1
Association | Linear | Linear
Relationship | Confined to linear relationships | Can be both linear and non-linear
Application | Limited application | Wider application

Student ‘T’ distribution

a. Used for small samples (n<30)


b. Developed by W.S. Gossett

t = (Difference between sample means) / (Standard error of the difference between the means)

1. Criteria for t-test:


a. Random samples are drawn from normal population
b. Samples are less than 30
c. In the case of two samples; some adjustments in degrees of freedom for ‘t’ are made
d. For testing the equality of two populations means, the population variances are regarded
as equal

2. Properties:
a. Shape of ‘t’ distribution curve varies with degrees of freedom (Degree of freedom is
defined as the size of the sample minus one).
b. The graph is similar to that of normal distribution. t-distribution has a greater spread than
a normal distribution.
c. The larger the no. of degrees of freedom the more closely t-distribution resembles
standard normal distribution
d. t-distribution is asymptotic to X-axis i.e. extends to infinity on either side

3. Applications:
a. To test the significance of a single mean when the population variance is unknown
b. To test the significance of the difference between the means of two samples (independent samples or paired observations)
c. To test the significance of an observed sample correlation coefficient


There are three types of t-test:

a. One-sample t-test-
To compare the mean of a sample with a population mean.
Eg: Does the mean Hb level of ten patients significantly differ from the mean value of the
general population?

b. Independent (Unpaired) –
To compare the mean of one sample with the mean of another independent sample.
This test is applied to independent observations, made on individuals of two different
groups or samples drawn from two populations.
Eg: Difference between mean Hb levels of males and females

c. Paired t-test-
To compare the values of one sample but on 2 occasions (i.e. before and after)
Eg: Difference in mean Hb level before and after 3 months nutritional intervention
To compare the effect of 2 drugs given in the same person
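All three variants are available in SciPy; the following is a minimal sketch, assuming SciPy is installed, with illustrative Hb values that are not from the module.

from scipy import stats

hb = [11.2, 12.1, 13.0, 11.8, 12.5, 12.9, 11.5, 12.2, 12.8, 12.0]

# One-sample: does the sample mean differ from a population mean of 12.5?
print(stats.ttest_1samp(hb, popmean=12.5))

# Independent (unpaired): male vs female Hb levels.
males = [13.1, 14.0, 13.5, 13.8, 14.2]
females = [12.0, 12.4, 11.9, 12.6, 12.2]
print(stats.ttest_ind(males, females))

# Paired: the same subjects before and after an intervention.
before = [11.0, 11.5, 12.0, 11.2, 11.8]
after = [12.1, 12.0, 12.6, 11.9, 12.4]
print(stats.ttest_rel(before, after))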

ANOVA (Analysis of Variance)

a. It is a collection of statistical methods used to compare means among more than two groups; it was developed by Ronald Fisher in the 1920s.
b. When we have only 2 samples we can use a t-test to compare the means of the samples, but it becomes unreliable in the case of more than 2 samples.

1. Assumptions of ANOVA-
a. Samples are independently drawn
b. Each group sample is drawn from a normally distributed population
c. All populations have a common variance
d. The dependent variable should be continuous
e. Independent variable should consist of two or more categorical, independent groups
f. Sample sizes for the groups are equal and greater than 10
g. The effects of various components are additive

ANOVA determines whether any of the means are statistically different from each other. Specifically, it tests the null hypothesis that all group means are equal.

ANOVA measures the two sources of variation in the data and compares their relative sizes.
i. Variation between groups- For each data value look at the difference between
its group mean and overall mean


ii. Variation within groups- For each data value; we look at the difference
between that value and the mean of its group

ANOVA's F-statistic is the ratio of these two:

F = (Between-group variance) / (Within-group variance)
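A one-way ANOVA can be run with SciPy; this is a minimal sketch with illustrative group data (not from the module). The returned F value is exactly the between/within variance ratio described above.

from scipy.stats import f_oneway

group_a = [4.2, 4.8, 5.1, 4.5]
group_b = [5.9, 6.3, 6.1, 5.7]
group_c = [4.9, 5.2, 5.0, 5.4]

f_stat, p_value = f_oneway(group_a, group_b, group_c)
print(round(f_stat, 2), round(p_value, 4))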

2. Types of ANOVA:

a. One-way ANOVA:
Single independent variable is involved.
Eg: The effect of pesticide (independent variable) on the oxygen consumption
(dependent variable) in an insect.

b. Two-way ANOVA:
Two independent variables are involved.
Eg: The effect of different levels of the combination of a pesticide (independent
variable) and an insect hormone (independent variable) on the oxygen consumption of an
insect.

Two-way ANOVA assumption:


i. The dependent variable should be continuous
ii. Two independent variables should each consist of two or more
categorical, independent groups
iii. All samples are drawn independently of each other
iv. All populations have a common variance
v. Sample sizes for the groups are equal and greater than 10.
vi. Each group sample is drawn from a normally distributed population
Eg: A researcher was interested in whether SBP was influenced by age group and socio-
economic status (SES). So, the dependent variable was SBP. Two independent variables were
age group and SES.

In ANOVA, it is a general rule to conduct a post-hoc analysis, e.g. Tukey's HSD, Scheffe, Duncan or LSD, to know which pairs differ significantly.

Difference between t-test and ANOVA:


Characters | t-test | ANOVA
1. Dependent variable | Interval or ratio variable | Interval or ratio variable
2. Independent variable | Binary variable with only two groups | Categorical variable
3. Null hypothesis | H0: μ1 = μ2; HA: μ1 ≠ μ2 | H0: μ1 = μ2 = μ3 = ...; HA: the means are not all equal


CHI-SQUARE (χ2) TEST

Chi-square test is a non-parametric test of significance that is applied to qualitative data. Other tests of significance such as the t-test and Z-test are parametric in nature and can only be used for quantitative data like height, length, weight, percent composition etc.
But in biological research we also encounter qualitative data like health, drug response, adaptability, intelligence etc., wherein it is not possible to make any dependable assumption about the distribution from which the samples have been drawn. In such cases, the chi-square test comes to the rescue; for instance, in genetic studies for testing the significance of the overall deviation between the observed and expected frequencies.
The test was introduced in its modern form by Karl Pearson in the year 1900.

Determination of χ2:

The chi-square statistic is the sum, over all classes, of the squared deviation between the Observed and Expected frequencies divided by the Expected frequency:

χ² = Σ (O − E)² / E

Where, O = Observed frequency in a class,
E = Expected frequency in that class, and
Σ = Summation.

Ideally, the value of χ² will be zero if O = E in each class, but an error factor always exists. The observed results are judged against the degrees of freedom (df) and the critical level of probability (5% or 1%). The critical value of χ² can be found from the chi-square table for the given degrees of freedom, and this tabulated value is compared with the value calculated from the data. If the tabular value is lower than the calculated value, the results are significant.

The value of χ2 thus depends on a number of classes, i.e., on the number of degrees of freedom and
the critical level of probability. A table of the association attributes is what constitutes a contingency
table.

The following steps are required to calculate the value of χ2:

a. A contingency table is made and the Observed frequencies (O) in each class of one event are noted row-wise, i.e. horizontally, and the numbers in each group of the other event column-wise, i.e. vertically.
b. The Expected frequencies (E) are calculated.
c. The difference between the observed and expected frequency in each cell (O - E) is assessed. If
the deviation is large, the square deviation (O - E)2 must also be large.


d. The χ2 value is calculated by applying the above formula; the value of χ2 will range from zero to
infinity.
e. The calculated value of χ2 is then compared with the table value of the given degrees of freedom
at either 5% or I % level of significance.
f. If the calculated value of χ2 is less than the tabulated value at a particular level of significance, the difference between the Observed and the Expected frequencies is not significant and could have arisen due to fluctuations of sampling. On the other hand, when the calculated value is more than the tabulated value, the difference between the Observed and the Expected values is significant.

The number of degrees of freedom in a χ2 test is equal to the number of classes minus one. In a
contingency table, the degree of freedom is calculated in the following manner:
df = (r - 1) (c - 1)
Where r = Number of rows in a table
c = number of columns in a table.

For e.g., in a 3 x 3 contingency table, the degree of freedom is (3-1) (3-1) = 4. Likewise in a 3 x 4
contingency table, the degree of freedom is (3-1) (4-1) = 6.

The three basic prerequisites of χ2 test are:


(i) Data should be qualitative
(ii) Random sampling
(iii) Preferably, the observed frequency should not be less than five.

Utility Of Chi-square Test:


a. As an alternative test to find the significance of the difference between two or more than two samples (binomial and/or multinomial); the chi-square test is a useful test that can be applied to find significance in such data.
b. Compare the values of two binomial samples even if they are small. For e.g - while finding the
significance of the difference in the rate of fermentation in 5 control and 5 Alcohol
dehydrogenase absent samples of the same species, the significance of the difference in yield
of flowers in control and pesticide sprayed farmland etc.
c. Compare the frequencies of two multinomial samples. Through Chi-square we can measure the
probability of association between two discrete attributes. The Two events could be for e.g. iron
intake and Hb%, T4 injection and oxygen consumption, nutrition and intelligence, weight and
obesity etc. There are two possibilities, either the two discrete groups will influence each other
or they do not. The table is prepared by enumeration of qualitative data, since we want to know
the association between two sets of events, the table is also called the association table.
d. As a test of goodness of fit, χ2test can also be applied. The goodness of fit shows the closeness
of Observed frequency with those of the Expected, hence it helps to assess whether something
(Physical/Chemical parameter) did or did not have an impactful effect.


Solved Examples:
Example. In a monohybrid cross between tall (TT) and dwarf (tt) plants, 1574 tall and 554 dwarf offspring were obtained. Suggest if a ratio of 3 : 1 is suitable or not.

Calculation. Total number = 1574 tall + 554 dwarf

= 2128.

Expected ratio (3 : 1) = 1596 : 532

Observed ratio = 1574 : 554.

Putting the values in the formula:

χ² = (1574 − 1596)²/1596 + (554 − 532)²/532
= 484/1596 + 484/532
= 0.303 + 0.909 = 1.212.

Here, d.f. = 2 − 1 = 1

Significance. At the 5% level, at 1 degree of freedom, the table value of χ² = 3.84 and the calculated value of χ² is 1.212. Since the calculated value is less than the table value, the deviation is not significant: the observed frequencies are in agreement with the theoretical ratio of 3 : 1.
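The goodness-of-fit calculation above can be reproduced with a minimal Python sketch (not part of the module).

observed = [1574, 554]
expected = [1596, 532]

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))  # ~1.21, below the table value of 3.84 at 1 d.f.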

Example. The RBC count in lac/mm3 and Hb% in gm/100 ml of 500 persons of a locality were recorded
as follows. Find if there is any significant relation between RBC count and Hb%. Find it by χ2 method.

Hb%
RBCs count     Above normal   Below normal   Total
Above normal   85             75             160
Below normal   165            175            340
Total          250            250            500


Calculation.
Table 1 (Expected frequencies, E = row total × column total ÷ grand total, shown in brackets):

Hb%

RBCs count     Above normal        Below normal        Total
Above normal   O = 85 (E = 80)     O = 75 (E = 80)     160
Below normal   O = 165 (E = 170)   O = 175 (E = 170)   340
Total          250                 250                 500

Based on the above data the following table is prepared:

Table 2:

O     E     O − E            (O − E)²   (O − E)²/E
85    80    85 − 80 = 5      25         0.3125
165   170   165 − 170 = −5   25         0.147
75    80    75 − 80 = −5     25         0.3125
175   170   175 − 170 = 5    25         0.147

χ² = 0.3125 + 0.147 + 0.3125 + 0.147 = 0.92

Here d.f. = (2 − 1) × (2 − 1) = 1.

Significance. At the 5% level, on 1 d.f., the table value of χ² = 3.84. The calculated value is much less, i.e., 0.92. It thus shows that Hb% and RBC count are independent of each other.
[Note: In reality, Hb% and RBC count are not independent of each other. This may be due to hypothetical data or sampling error.]
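The same contingency analysis can be run with SciPy; this is a minimal sketch, assuming SciPy is installed. chi2_contingency returns the statistic, the p-value, the degrees of freedom and the table of expected frequencies.

from scipy.stats import chi2_contingency

table = [[85, 75],
         [165, 175]]

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(chi2, 2), dof)  # ~0.92 on 1 d.f., well below 3.84
print(expected)             # [[80, 80], [170, 170]]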


Mann-Whitney U-test:

When conclusions are to be drawn from data that do not satisfy the assumptions of parametric tests, the Mann-Whitney U test is used. This rank-based test determines the differences in medians (the middle number in a series of data arranged in either ascending or descending order) between two groups.

For example, consider a company that keeps a record of employees' pay and wants to know whether payment, measured on an ordinal scale, differs based on gender, i.e. male or female.

In an alternative situation, payment measured as the dependent variable (numerical/continuous (skewed) or ordinal) may differ based on educational level. Here the dependent variable is salary and the independent (nominal or binary) variable is the type of educational institute, i.e. college or university.

Mann Whitney U test is based on four assumptions.

1. Assumption 1: The dependent variable is measured at the ordinal or continuous level.
For eg., ordinal variables include a 7-point scale from strongly agree to strongly disagree, or a 5-point scale such as "not very much" to "yes, a lot".
Continuous variables include revision time (in hours), intelligence (IQ score), exam performance (measured from 0 to 100), weight (measured in kg/gm etc.).

2. Assumption 2: The independent variable consists of two categorical, independent groups, such as gender: male or female; employment status: employed or unemployed; smoker: yes or no.

3. Assumption 3: Independence of observations, which means that there is no relationship between observations within or between groups, i.e. there are different participants in each group, with no participant being in more than one group.

4. Assumption 4: The two distributions, i.e. the distributions of scores for both groups of the independent variable, are not normally distributed.

To understand this more easily, let us take a look at the image below:

Fig. Graphical representation of variables not normally distributed


a. In the above-mentioned diagrams, the right image represents distributions of scores for 'males' and 'females' that have the same shape.
b. In the left image, the distributions of scores for 'males' and 'females' are identical: they sit 'on top of each other' in the diagram.
c. The blue-coloured male distribution lies underneath the red-coloured female distribution.
d. However, in the right image, even though both distributions have the same shape, they have different locations.
e. That is, the distribution of one of the groups of the independent variable has higher or lower values compared to the second distribution.
f. Here, females have higher values compared to males.
g. It is not necessary for the two distributions to be identical during analysis of the data; they may differ in location while having the same appearance (shape).
h. If they do have the same shape, the analysis (performed, for example, in SPSS Statistics) uses the Mann-Whitney U test to compare the medians of the dependent variable for the two groups, i.e. males and females.
i. But if the distributions have different shapes, the Mann-Whitney U test is used to compare mean ranks.
j. Hence, prior to using the Mann-Whitney U test, the distributions are first inspected (e.g. in SPSS Statistics) to determine whether they have the same or different shapes; this adds a step to the protocol.
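The test itself is available in SciPy; this is a minimal sketch, assuming SciPy is installed, with illustrative salary values that are not from the module.

from scipy.stats import mannwhitneyu

male_salary = [30, 32, 35, 28, 40, 31]
female_salary = [38, 42, 36, 45, 39, 41]

u_stat, p_value = mannwhitneyu(male_salary, female_salary, alternative="two-sided")
print(u_stat, round(p_value, 4))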

Kendall's test:

a. Like the Mann-Whitney U test, Kendall's tau-b (τb) correlation coefficient (Kendall's tau-b, for short) is non-parametric. It measures the association between two variables measured on an ordinal scale.
b. It is a non-parametric alternative to Pearson's product-moment correlation, used when the data fail one or more of that test's assumptions.
c. It is also considered an alternative to the Spearman rank-order correlation coefficient.
d. To understand it more easily, let us consider the association between customer satisfaction and delivery time. The delivery time is categorized into four parts: next day, 2 working days, 3-5 working days, and more than 5 working days. Customer satisfaction was measured as the level of agreement with the statement "I am satisfied with the time it took for my parcel to be delivered." The level of agreement had five categories: strongly agree, agree, neither agree nor disagree, disagree and strongly disagree. To analyse this, the following assumptions need to be fulfilled.

e. The variables are measured on an ordinal or continuous scale.

f. Ordinal scales measure non-numeric concepts like satisfaction, happiness or discomfort; for example: very satisfied, somewhat satisfied, neutral, somewhat unsatisfied, very unsatisfied.

g. On the other hand, continuous scales are essentially interval variables, such as temperature (e.g. 30 degrees), or ratio variables, such as weight and height.
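Kendall's tau-b is available in SciPy; this is a minimal sketch, assuming SciPy is installed and using illustrative ordinal codes (1 = fastest delivery ... 4 = slowest; 1 = strongly agree ... 5 = strongly disagree) that are not from the module.

from scipy.stats import kendalltau

delivery_time = [1, 2, 2, 3, 3, 4, 4, 1, 2, 4]
satisfaction = [1, 2, 1, 3, 4, 5, 4, 2, 2, 5]

tau, p_value = kendalltau(delivery_time, satisfaction)  # tau-b by default
print(round(tau, 2), round(p_value, 4))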


5. RADIOLABELLING TECHNIQUES

When a molecule is tagged with a radioisotope and then traced through metabolic pathways, cells, tissues, another life form, or an organic framework, the process is known as radiolabelling. Through radiolabelling, one can track the passage of molecules through a specific path. The molecule (termed the reactant) is 'marked' by replacing explicit atoms with their isotope. Atoms with the same atomic number but different mass numbers are called isotopes. Some isotopes have unstable nuclei which emit radiation following a nuclear reaction; these isotopes are called radioisotopes.

1. Detection and measurement of different types of radioisotopes normally used in biology:

a. Isotopic labelling of Nucleic acid:

i. Nucleic acids incorporate nucleotides containing radioisotopes such as 32P, 33P, 35S or 3H. Via these radioisotopes, a solid specimen or nucleotides in solution can be analysed.

ii. Another modification, phosphorothioate, is also used to detect DNA and RNA. In a normal phosphate group, four oxygen atoms surround the central phosphorus; in a phosphorothioate, one of the oxygens is replaced by sulphur.

iii. When 35S is introduced, the DNA or RNA contains radioactive sulphur atoms in the groups that link the nucleotides together, allowing detection.

b. Non-isotopic labelling of Nucleic acid:

i. The most used labels for the generation of non-radioactive DNA or RNA hybridization probes are fluorophores and haptens, the latter including Biotin and Digoxigenin.
ii. Fluorescent probes are detected directly after incorporation by fluorescence spectroscopy.
iii. Nowadays, a plethora of fluorescent dyes is available with optical properties that cover the whole UV-Vis spectrum and fit common commercial light sources and filter systems.
iv. Such systems use dyes like rhodamine, fluorescein, cyanine, ATTO or Eterneon dyes, which offer high efficiency, photostability, hydrophilicity, and suitable size/bulkiness.
v. Fluorophores, Biotin and Digoxigenin are also used as indirect labels, as their visualization then requires a secondary reporter molecule. The detection of Biotin, also known as vitamin H, relies on its high affinity for streptavidin.
vi. Biotinylated hybridization probes are easily detected using Streptavidin carrying either a reporter enzyme or a fluorescent dye.
vii. Digoxigenin and, in some cases, endogenous biotin are also considered efficient agents for detection.


Fig. A representation of direct and indirect detection of nucleic acid using hybridization probes

2. Incorporation of radioisotopes in biological tissues and cells:

a. The incorporation of labels such as fluorophores and haptens into DNA and RNA hybridization probes is achieved by enzymatic labelling techniques, which use modified nucleotides as substitutes for their natural counterparts.

b. There are two mechanisms of incorporation: firstly, a one-step procedure (the labelled nucleotides are incorporated for immediate detection); secondly, a two-step procedure (incorporation of a reactive group followed by coupling with the desired label).

c. To ensure optimal substrate properties of such modified nucleotides, their core structure is
derived from natural analogs with the desired moiety attached via a linker to one of the
available modifiable nucleotide positions (base, ribose, and phosphate).

d. Both, the position of linker attachment and the type of linker, are critical factors affecting
substrate properties and resulting labelling efficiency.


Fig. A representation of one-step and two-step probe labelling in nucleic acid.

3. Molecular imaging of radioactive material:

a. Bioluminescence imaging systems-

i. It is a sensitive, cost-effective and easy-to-use method.

ii. It offers a superior signal-to-noise ratio, high sensitivity and short acquisition times.

iii. It is an excellent non-invasive tool to better understand the mechanisms of disease biology.

iv. It can accelerate in vivo imaging research studies and the drug discovery and development process.


Fig. A Bioluminescence image

b. Fluorescence imaging-

i. It is a powerful tool that allows visualization and quantification of biological targets, pathways, and processes in animal models.

ii. Fluorescent materials such as proteins, dyes, or nanoparticles, which emit photons when excited at a specific wavelength, are used.

iii. Monitoring of cellular or genetic activity, tracking of gene expression and disease progression, and evaluation of the effect of new drugs are some of the applications of this imaging technique.

Fig. A Fluorescence image


4. Safety guidelines:

a. For any work with an open radioactive source, one should wear disposable gloves (latex or
nitrile gloves are generally suitable) a full-sleeved and full-length lab coat along close-toed
shoes.

b. For the ultimate protection of eyes safety glasses are to be used.

c. Petroleum-Based Hand Creams and other cosmetics items should be avoided as they may
increase glove permeability.

d. Food and beverages should not be taken inside the lab.

e. Any sort of food, beverages, or medicines should not be kept in refrigerators, freezers or cold
rooms where radioactive materials are used or stored.

f. Lock the radiolabelled agents' in-stock materials and sealed sources in a secured container or
a secured storage area when not in use.

g. Cover the work surface with protective and absorbent bench paper to trap droplets of
contamination.

h. Make sure the area of operation is free of contamination before starting the experiment.

i. Mouth Pipetting should be avoided.

j. It is difficult to keep a microcentrifuge used for radioactive material work free of


contamination.

k. Contaminated microcentrifuges must be cleaned up after use to prevent contamination from


spreading to other tubes and to your gloves.

l. Wipe down the exterior of the tubes before placing them in the microfuge.

m. Use tubes with locking caps or with screwcaps (the type with O-rings).
n. The radiolabelled related activity should be performed in designated radioactive materials
(RAM) fume hood.


6. MICROSCOPIC TECHNIQUES

Microscopy is a technique used for the visualization of minuscule objects to the naked (unaided) eye
and the instrument designed for this purpose is known as a Microscope. There exist two
fundamentally different types of microscopes—the light microscope and the electron microscope.

LIGHT MICROSCOPY:

The Light microscope uses visible light as the source of illumination and creates an enlarged image of
the specimen based on the principles of lightwave transmission, absorption, diffraction, and
refraction. Modern light microscopes combine the power of three lenses to achieve magnification and
resolution; termed Compound Microscope.

1. Working of a Compound Light Microscope:

The substage condenser lens gathers the diffuse rays from the light source and illuminates the
specimen with a small cone of bright light. The light rays focused on the specimen by the
condenser lens are then collected by the microscope's objective lens. At this standpoint, we need
to consider two sets of light rays that enter the objective lens; those that the specimen has altered
and those that it hasn’t. The latter group consists of light from the condenser that, passes directly
into the objective lens, forming the background light of the visual field. The former group of light
rays comes from the different parts of the specimen and forms the image of the specimen. These
light rays are brought to focus by the objective lens to form a real, magnified image of the object
within the column of the microscope. The image formed by the objective lens is used as an object
by a second lens system, the ocular lens (the eyepiece), to form an enlarged, virtual image. A third
lens system located in the front part of the eye uses the virtual image produced by the ocular lens
as an object to produce a real image on the retina.

2. Principles of Light Microscopy:

a. Resolving Power-

Resolving power is the capacity of a microscope to distinguish images of two objects Iying very
close together and is inversely related to the limit of resolution (D)

b. Limit of Resolution-

Limit of resolution is defined as the minimum distance at which two objects appear as two
distinct objects or entities. It is calculated by Abbe’s equation:

D = 0.61 λ / NA

Here, 0.61 is the constant representing the minimum detectable difference in contrast.

λ is the wavelength of illumination.

Numerical aperture or NA is the light-gathering capacity of the objective. NA depends on the


refractive index of the lens and sine of the semi-angle of the aperture:

Numerical aperture (NA) = n sin α

Where, n is the refractive index of air or liquid between the specimen and the lens,

sin α is the sine of the semi-angle of the aperture, and

α is the half-angle of the cone of light entering the objective lens from the specimen.

Sin α cannot exceed the value of 1, and the numerical aperture is limited by the refractive index of the medium between the specimen and the lens; it is below about 1.0 for a dry 40X objective and about 1.4 for the oil immersion lens (100X). The limit of resolution for the microscope can therefore be calculated as follows:

i. From the above equation, it can be inferred that the limit of resolution is directly
dependent upon the wavelength of the light used.
ii. To increase the numerical aperture, some microscope lenses are designed to be used with a layer of immersion oil between the lens and the specimen. Immersion oil has a higher refractive index than air and, therefore, allows the lens to receive more of the light transmitted through the specimen. Since the refractive index of immersion oil is about 1.5, the maximum numerical aperture for an oil immersion lens is about 1.5 × 0.94 ≈ 1.4.
iii. The limit of resolution of a microscope using visible light as a source of illumination is
about 300 nm in air and 200 nm with an oil immersion lens (greater resolving power).
For the unaided human eye, it is 100 μm.
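Abbe's equation can be applied directly; this is a minimal Python sketch (not from the module) comparing a dry lens with an oil-immersion lens, assuming green light of 550 nm.

def resolution_limit_nm(wavelength_nm, na):
    """Abbe's equation: D = 0.61 * wavelength / NA."""
    return 0.61 * wavelength_nm / na

print(round(resolution_limit_nm(550, 0.95)))  # ~353 nm for a dry lens in air
print(round(resolution_limit_nm(550, 1.4)))   # ~240 nm with oil immersion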

3. Preparation of Specimens for Bright-Field Light Microscopy:

Specimens to be observed with the light microscope are broadly divided into two categories:
whole mounts and sections.

a. A whole mount is an intact object, either living or dead, and can consist of an entire
microscopic organism such as a protozoan or a small part of a larger organism, e.g. human
cheek cell.
b. Tissues of plants and animals often have a high opacity which makes their examination
difficult under a microscope unless prepared as a very thin slice or a section. To prepare a


section, the cells are first killed by immersing the tissue in a chemical solution, called a
fixative. A good fixative rapidly penetrates the cell membrane and immobilizes all of its
macromolecular material so that the structure of the cell is maintained as close as possible
to that of the living state. The most common fixatives for the light microscope include
formaldehyde, alcohol, or acetic acid. After fixation, the tissue is dehydrated by transfer
through a series of alcohols and then usually embedded in paraffin (wax), which provides
mechanical support during sectioning. Paraffin is used as an embedding medium as it is
readily dissolved by organic solvents.
c. The specimen is then stained for better contrast and visualisation of the transparent
biological sample.

Utility:

i. The most commonly used form of microscopy for the study of cells.
ii. Measurement of various cell and/or cellular entities is termed Micrometry. The
measurements are expressed in micrometres (μm) and made with the help of an
ocular micrometre disc and a stage or object micrometre.

4. Types Of Light Microscope:

a. Phase-Contrast Microscopy:

When light passes through a living cell, the phase of the light wave changes according to the cell’s
refractive index, i.e., light passing through a relatively thick/dense part of the cell, such as the
nucleus, is retarded and its phase, shifted relative to light that has passed through an adjacent
thinner region of the cytoplasm. The phase-contrast microscope exploits the interference effects
produced when these two sets of waves recombine, thereby creating an image of the cell’s
structure. The specimen appears at different degrees of brightness and contrast. It is hence used
for the inspection of live and unstained cells, which are, in general, transparent to light. This
microscopy was developed by Prof. Fritz Zernike of the Netherlands.

The phase contrast is obtained with the help of the phase plate and the annular diaphragm by
separating the central (direct rays) from the diffracted rays.

Applications-

i. Phase-contrast microscopy is used to study living cells without any prior treatment like
fixation and staining; the cells in their normal condition appear and show their structures as
they exist in the physiological condition prevalent in an organism.
ii. The actual process of mitosis and meiosis can be studied in real-time. With such a study, the
duration of cell division and its component parts can be accurately determined.
iii. The behaviour of living protozoans towards various physical and chemical factors can be
studied directly.


iv. The effects of chemicals, exogenous application of any hormones etc. on the living cells
especially cells in vitro can be studied for their behaviour, mortality and other changes.
v. The cell cycle time can be determined using time-lapse cinematography. This determination
is most precise in comparison with the other indirect methods which are used for the
determination of the cell cycle.

vi. The processes of phagocytosis and pinocytosis can be observed in living cells. Other
processes related to the permeability of the plasma membrane can also be observed and
experimented with.

b. DIC Microscope:
The phase-contrast microscope has an optical handicap that results in loss of resolution, and
the image suffers from interfering halos and shading produced where sharp changes in
refractive index occur. As mentioned earlier, the phase-contrast microscope is a type of
interference microscope.
Other types of interference microscopes minimize these optical artefacts by achieving a
complete separation of direct and diffracted beams using complex light paths and prisms. A
rather improved type of phase-contrast is termed Differential Interference Contrast (DIC)
Microscope, or sometimes Nomarski interference after its developer delivers an image that has
an apparent three-dimensional quality. The contrast in DIC microscopy depends on the rate of
change of refractive index across a specimen. Consequently, the edges of structures, where the
refractive index varies markedly over a relatively small distance are seen with especially good
contrast.

Fig. An overview of the optical systems used in various light microscopes


c. Fluorescence Microscopy:

The fluorescence microscope is akin to an ordinary light microscope except that the
illuminating light is passed through two sets of filters - one to filter the light before it reaches
the specimen (primary excitation filter) and the other to filter the light emitted from the
specimen (secondary barrier or emission filter). The excitation filter passes only the wavelength
that excites the particular fluorescent dye, while the barrier filter blocks out this light and
passes only those wavelengths emitted when the dye fluoresces. Only fluorescent light emitted
by the fluorescently stained specimen is used to form an image. The wavelength that excites
the specimen and induces the fluorescence is not allowed to pass the filters placed between
the objective lens and the eye.

The microscope allows viewers to observe the location of certain compounds (called
fluorochromes or fluorophores). Fluorochromes absorb invisible, ultraviolet radiation and
release a portion of the energy in the longer, visible wavelengths, a phenomenon called
fluorescence. The light source (mercury arc lamp or xenon arc lamp) in a fluorescence
microscope produces a beam of ultraviolet light that travels through the filter system, which
blocks all wavelengths except one which is capable of exciting the fluorochrome. The beam of
monochromatic light is focused on the specimen containing the fluorochrome, which becomes
excited and emits light of a visible wavelength that is focused by the objective lens into an
image that can be seen by the viewer. Since the light source produces only ultraviolet (black)
light, objects stained with a fluorochrome appear brightly coloured against a black background,
providing sufficiently high contrast.

Utility-

There are numerous ways that fluorescent compounds can be used in cell and molecular
biology for the study of materials of interest.

i. One such widely popular application is wherein, a fluorochrome (such as rhodamine or


fluorescein) is covalently linked (conjugated) to an antibody to produce a fluorescent
antibody that can be used to determine the location of a specific protein within the cell,
the technique of Immunofluorescence.
ii. Fluorescently labelled proteins are used to study dynamic processes as they occur in a
living cell. For instance, a specific fluorochrome can be linked to a cellular protein, such
as actin or tubulin and the labelled protein injected into a living cell. A non-invasive
approach that has been widely employed utilizes green fluorescent protein or GFP from
the jellyfish Aequorea victoria.
iii. Fluorochromes can also be used to locate DNA/ RNA molecules that contain specific
nucleotide sequences e.g., study the sizes of molecules that can pass between cells as
indicators of transmembrane potentials or as probes to determine the free Ca2+
concentration in the cytosol.


d. Advent Of Super-Resolution Fluorescence Microscopes:

STORM (stochastic optical reconstruction microscopy) allows researchers to localize a single


fluorescent molecule within a resolution of less than 20 nm. In this technique, the specimen is
illuminated with light of different wavelengths, which has the effect of switching the
fluorescence activity of the labelled molecules on and off. During each cycle of illumination,
most of the labelled molecules remain dark, but a small fraction of them is randomly activated.
As a result, these individual, activated fluorescent molecules can be imaged, and their position
determined with very high accuracy. This process is then repeated for several imaging cycles,
resulting in the construction of a super-resolution image of the field of view.

e. Laser Scanning Confocal Microscopy:

A confocal microscope produces an image of a thin plane situated within a much thicker
specimen. In this type of microscope, the specimen is illuminated by a finely focused laser beam
that rapidly scans across the specimen at a single depth, thus illuminating only a thin plane (or
“optical section”) within the specimen.

Confocal microscopes are typically used with fluorescence optics. As stated earlier, short-
wavelength incident light is absorbed by the fluorochromes in a specimen and reemitted at a
longer wavelength. Light emitted from the specimen is brought to focus on a site within the
microscope that contains a pinhole aperture. Thus, the aperture and the illuminated plane in
the specimen are confocal (have the same focus). Light rays emitted from the illuminated plane
of the specimen can pass through the aperture, whereas any light rays that might arise from
above or below this plane are prevented from participating in image formation. As a result, any
out-of-focus points in the specimen is/are eliminated and become invisible.


A confocal microscope creates sharp images of a specimen that would otherwise appear
blurred when viewed with a conventional microscope. The image produced has less haze and
better contrast than that of a conventional microscope and represents a thin cross-section of
the specimen. Thus, allows better observation of finer details and builds three-dimensional
(3D) reconstructions of a volume of the specimen made possible through assembling a series
of thin slices taken along the vertical axis.

Fig. A typical confocal microscope setup

ELECTRON MICROSCOPY

Electron Microscope uses electrons as a source of illumination for observation. While the basic
principles of light and electron microscopes are the same, both differ in their construction, working
and procedures.

a. The light microscope uses visible light while the electron microscope uses electrons.
b. The position of the source of illumination is opposite. It is situated at the bottom of the
light microscope while at the top in the electron microscope.
c. The lens system for magnification consists of glass lenses in the light microscope and
electromagnetic coils in the electron microscope. The coils use electricity to function as
lenses.


d. The lenses are condenser, objective, and ocular in the light microscope. The electron microscope has a condenser, an objective and a projector coil.
e. While the image can be seen with the eye or recorded on photographic film in the light microscope, it is observed on a fluorescent screen or recorded on photographic film in an electron microscope.

Electron microscopes are of two basic types: Transmission Electron Microscope And Scanning Electron
Microscope. The most commonly used type of electron microscope is called the transmission electron
microscope (TEM), as it forms an image from electrons that are transmitted through the specimen
being examined. Scanning electron microscope (SEM) is fundamentally different from TEM since it
produces images from electrons deflected from a specimen's outer surface (rather than electrons
transmitted through the specimen).

1. Working Of An EM (TEM):

Electron microscopes consist largely of a tall, hollow cylindrical column through which the electron
beam passes and an operating system containing a panel of dials that electronically control the
operation in the column. The top of the column contains the cathode, a tungsten wire filament


that is heated to provide a source of electrons. Electrons are drawn from the hot filament and
accelerated as a fine beam by the high voltage applied between the cathode and anode. Air is
pumped out of the column prior to operation so as to prevent premature scattering of the
electrons by collision with air molecules; a vacuum is produced through which the electrons
travel.

A beam of negatively charged electrons is then focused by electromagnetic lenses, located in the
wall of the column. The strength of the magnets is controlled by the current provided to them,
which is determined by the operating system dials.

The condenser lenses of an electron microscope are placed between the electron source and the
specimen, and they focus the electron beam on the specimen. The specimen is supported on a
small, thin metal grid (3 mm diameter) that is inserted with tweezers into a grid holder, which in
turn is inserted into the column of the microscope.

Electrons that have passed through the specimen are brought to focus on a phosphorescent
screen situated at the bottom of the column. Electrons striking the screen excite a coating of
fluorescent crystals, which emit their own visible light that is perceived by the human eye as an
image of the specimen.

2. Principle:

Image formation in the electron microscope depends on the differential scattering of electrons by
parts of the specimen.

Image formation occurs when the energy of the electrons is transformed into visible light through the excitation of the chemical coating of the screen. Electrons that reach the fluorescent screen produce the bright spots, while the areas where electrons do not reach the screen form the dark spots. The areas which scatter electrons away are termed electron-dense. Electron scattering/dispersion is due to the atomic nuclei, which consist of protons and neutrons; the higher the atomic number, the greater the dispersion. Since biological materials generally have a low atomic number, the dispersion of electrons is poor, and very poor dispersion translates to very poor contrast in the image.

In order to increase contrast (by inc. scattering), tissues are fixed and stained with solutions of
heavy metals. These metals penetrate into the structure of the cells and become selectively
complex with different parts of the organelles. Those parts of a cell that bind the greatest number
of metal atoms allow passage of the least number of electrons. The fewer electrons that are
focused on the screen at a given spot, the darker that spot. Photographs of the image are made
by lifting the viewing screen out of the way and allowing the electrons to strike a photographic
plate in position beneath the screen.


3. Magnification & Resolution:

By altering the current applied to the various lenses of the microscope, magnifications can be
varied (~ 1000 to 250,000 times).

Standard TEMs operate within a voltage range of 10,000 to 100,000 V. At 60,000 V, the wavelength
of an electron is approximately 0.05 Å. If this wavelength and the numerical aperture were
substituted into Abbe’s equation the limit of resolution would be about 0.03 Å. In practice, the
resolution attainable with a standard transmission electron microscope is less than its theoretical
limit. This is due to the spherical aberration of electron-focusing lenses, which requires that the numerical aperture of the lens be made very small (generally between 0.01 and 0.001). The practical limit of resolution of standard TEMs is thus in the range of 3 to 5 Å. The actual limit when observing cellular structure is typically in the range of 10 to 15 Å.

4. Specimen Preparation:

a. Fixatives kill the cells while preserving their structural appearance. The most widely employed fixatives are glutaraldehyde for proteins and osmium tetroxide for fixing lipoidal cellular membranes. Glutaraldehyde contains two aldehyde groups separated by three methylene bridges. These two aldehyde groups and the flexible methylene bridge greatly increase the cross-linking potential of glutaraldehyde over formaldehyde (used in light microscopy). It penetrates rapidly and stabilizes proteins by forming cross-links with their amine groups.
b. Once the tissue has been fixed, the water is removed by dehydration in alcohol. The demands
of electron microscopy require the sections to be extremely thin (electrons cannot pass
efficiently through relatively thicker sections).
c. Tissues to be sectioned are usually embedded in epoxy resins. Sections are cut by an
extremely sharp cutting edge made of cut glass or a finely polished diamond face, an
ultramicrotome knife. The sections coming off the knife-edge are floated in a water trough.
The sections are then picked up with the metal specimen grid and dried onto its surface.
d. The tissue is stained by floating the grids on drops of heavy metal solutions, like uranyl
acetate and lead citrate in positive staining and phosphotungstic acid for negative staining.
These heavy metal atoms bind to biological macromolecules and provide the atomic density
required to scatter the electron beam.

5. Special Applications Of TEM:

a. Shadow Casting:

It is a special technique for increasing the magnitude of contrast and involves keeping the
metal grid containing specimens in an evacuated vacuum chamber.

The chamber contains a filament composed of heavy metal (usually platinum) together with
carbon. The filament is heated to high temperatures, causing it to evaporate and deposit a


metallic coat over accessible surfaces within the chamber. As a result, the metal is deposited
on those surfaces facing the filament, while the opposite surfaces of the particles and the
grid space in their shadow remain uncoated. When the grid is viewed in the electron
microscope, the areas in shadow appear bright on the viewing screen, whereas the metal-
coated regions appear dark. This relationship is reversed on the photographic plate, which is
a negative of the image. The technique thus provides excellent contrast and produces a
three-dimensional effect which is absent in the normal two-dimensional images produced by
a standard TEM.


b. Freeze Fracturing:

The freeze-fracture technique is used to visualize the interior features of cell membranes. There are four essential steps in making a freeze-fracture replica: rapid freezing of the specimen, fracturing the specimen, making the replica of the frozen fractured surface by vacuum-deposition of platinum and carbon, and cleaning the replica to remove all the biological material.

i. Cells are frozen and immobilized at the temperature of liquid nitrogen (–196°C) In the
presence of a cryoprotectant like glycerol (antifreeze) to prevent distortion from ice
crystal formation.
ii. The frozen specimen is usually fractured with a liquid-nitrogen-cooled microtome blade,
which splits the bilayer into a monolayer and exposes the interior of the lipid bilayer and
its embedded proteins.
iii. The fracture plane is not observed directly; rather a replica or cast is made of the fractured
surface. Making the replica involves two steps -- shadowing and backing. An oblique,
unidirectional shadowing is carried out by evaporating a fine layer of heavy metal (such
as platinum) onto the specimen. A carbon layer is then deposited on the top of the metal
layer to impart sufficient strength to the replica (backing). The topographical features of
the frozen, fractured surface are thus converted into variations in the thickness of the
deposited platinum layer of the replica.
iv. After the replica has been made, the sample is brought to atmospheric pressure and
allowed to warm to room temperature. The biological material is removed from the
replica using sodium hypochlorite solution, chromic acid or other cleaning agents. After
washing, pieces of replica are mounted on grids for examination in the TEM.

In addition, an optional Freeze etching step, involving vacuum sublimation of ice, may be
carried out after fracturing. ‘Etching’ is defined as the removal of ice from the surface of the
fractured specimen by vacuum sublimation (freeze-drying), before making the replica.

Fig. Freeze-fracture replicas of different specimens

Fig. Freeze-fracture process

SCANNING ELECTRON MICROSCOPY (SEM):

a. SEM is primarily used for studying surface structures.


b. Specimens to be examined in the SEM are fixed, passed through a series of alcohols, and then
dried by a process of critical-point drying. Once the specimen is dried, it is coated with a thin
layer of metal (gold, palladium), which makes it suitable as a target for an electron beam. In
the TEM, the electron beam is focused by the condenser lenses to simultaneously illuminate
the entire viewing field. In the SEM, electrons are accelerated as a fine beam that scans the
specimen. In the TEM, electrons pass through the specimen to form the image. In the SEM, the
image is formed by electrons that are reflected back from the specimen (back-scattered) or by
secondary electrons given off by the specimen after being struck by the primary electron beam.
These electrons strike a detector that is located near the surface of the specimen.

c. Image formation in the SEM is indirect. In addition to the beam that scans the surface of the
specimen, another electron beam synchronously scans the face of a cathode-ray tube,
producing an image similar to that seen on a television screen.
d. The electrons that bounce off the specimen and reach the detector control the strength of the
beam in the cathode-ray tube. The more electrons collected from the specimen at a given spot,
the stronger the signal to the tube and the greater the intensity of the beam on the screen at
the corresponding spot. The result is an image on the screen that reflects the surface topology
of the specimen because it is this topology (the crevices, hills, and pits) that determines the
number of electrons collected from the various parts of the surface.
e. Although resolution and magnification are not the forte of SEM, its ability to generate images with
a three-dimensional quality and remarkable depth of field (~500 times that of the light microscope) is
what makes SEM valuable in the research world.

7. ELECTROPHYSIOLOGICAL METHODS

PATCH-CLAMP TECHNIQUE
The study of ion movements and ion channels is undertaken by measuring changes in electrical
current. Measurement of electric current through a single channel molecule is possible via the patch-
clamp technique. First employed by Sakmann and Neher in 1976 for their study of neurons, the
introduction of the patch-clamp technique to the scientific world has revolutionized the study of cell
physiology by permitting high-resolution recording of the ionic currents (usually in pico Ampere)
flowing through a cell’s plasma membrane. A glass pipette with a narrow aperture(opening) is used to
establish tight contact with a small area, a patch of the cell membrane.

1. Principle:
The rationale behind this technique is quite simple: upon application of a small amount of
suction to the back of the pipette, the contact between pipette and membrane becomes so tight
that ions can no longer leak between the two. Thus, all the ions that flow when a single channel
opens must flow into the pipette, and the resultant electric current is measured with a
highly sensitive amplifier connected to the patch pipette.
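
As a rough numerical illustration of the picoampere scale involved (a sketch based on Ohm's law, not taken from the text; the parameter values are assumed but typical), the current through a single open channel can be estimated as i = g x (Vm - Erev), where g is the single-channel conductance:

def single_channel_current_pA(g_pS, vm_mV, erev_mV):
    # Ohm's law for one channel: i = g * (Vm - Erev).
    # Units: pS * mV = 1e-12 S * 1e-3 V = 1e-15 A = 1e-3 pA.
    return g_pS * (vm_mV - erev_mV) * 1e-3

# A hypothetical ~20 pS channel clamped at -40 mV with a -90 mV reversal potential:
print(single_channel_current_pA(20, -40, -90))   # -> 1.0 pA, the scale cited above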

Basic Strategy:

Cell Culture
|
Cell Harvesting
|
Cell Application
|
Manipulating the environment
|
Read and interpret the signal recorded

Fig. a. Patch Clamp experimental setup b. Various methods of patch clamp recording

2. The patch-clamp technique has two functional modes- Voltage clamp and Current Clamp.

a. VOLTAGE CLAMP

Most patch-clamp experiments are operated in the voltage-clamp mode. Voltage clamp enables
the researcher to lock ('clamp') the cell membrane potential (voltage) at a chosen value, which
makes it possible to measure the voltage-specific activity of ion channels; voltage-gated ion
channels are primarily studied in this mode. The voltage-clamp mode is used to control the
voltage of the membrane. It incorporates a patch-clamp amplifier that maintains (clamps) a
specified membrane voltage through a feedback circuit while simultaneously measuring the
current across the membrane. During a voltage-clamp experiment, the electronic feedback system
of the amplifier measures the membrane voltage and compares it to a pre-set voltage defined by
the experimenter. When a current is activated, the voltage of the membrane changes. To
compensate for this change and bring the voltage back to the pre-set value, a current of equal
magnitude but opposite direction is injected through the pipette (a minimal numerical sketch of
this loop follows below).

b. CURRENT CLAMP

The current-clamp mode is used to study how a cell responds when an electrical current enters
it. A current-clamp experiment records the membrane potential by injecting current pulses into
the cell and measuring the change in membrane potential in response. The current-clamp
mode allows monitoring of different cellular activities, such as action potentials, excitatory and
inhibitory postsynaptic potentials, and changes in membrane potential due to activation of
electrogenic membrane transporters.

A patch-clamp recording of current reveals transitions between two conductance states of a
single ion channel:

● Channel closed (TOP)
● Channel open (BOTTOM)

3. Types Of Patch-Clamp Recording Techniques:

a. Cell-Attached Patch Clamp:

It is the classical patch-clamp arrangement geometry. To form the cell-attached mode, a
pipette tip is placed on the surface of the cell, forming a low-resistance contact (seal) with its
membrane. Slight suction applied to the upper end of the pipette results in the formation of a
tight seal; such a seal has resistance in the range of giga-ohms and is hence called a ‘giga-seal’.
In this mode, recordings are made from the membrane area under the pipette, while the
interior of the cell remains intact. A metal electrode placed inside the glass pipette, containing
a salt solution resembling the membranal fluid, facilitates transduction of the ionic current into
the electrical current while a second electrode placed in the bath solution serves as ground. A
bath electrode is used to set the zero (ground) level. Electrodes made of Pt and Ag/AgCl are
generally used for their properties of low junction potentials and weak polarization. The tight
seal between the pipette and cell membrane isolates the membrane patch electrically, i.e. all
ions fluxing the membrane patch flow into the pipette and get recorded by an electrode
connected to a highly sensitive electronic amplifier.

Fig. Typical equipment used during classical (cell-attached) patch-clamp recording experiment.

b. Whole-Cell Patch-Clamp:

In Whole-cell mode, by application of a brief but strong suction, the plasma membrane is
ruptured so that the patch pipette can gain access to the cytoplasm, thus allowing
measurements of electrical potentials and currents from the entire cell and is, therefore, called
the whole-cell patch-clamp recording.
The whole-cell arrangement also allows diffusional exchange between the pipette and the
cytoplasm thereby producing a convenient way to inject substances into the interior of a
‘patched’ cell. As the internal volume of the recording pipette is much larger than that of the
cell, the pipette solution completely substitutes the intracellular solution. Depending on the
size and geometry of the cell and the patch pipette, the diffusion process can range from
several seconds to minutes.

In whole-cell experiments, the researcher can choose between two configurations. In the
current-clamp mode, one electrode is used to inject a known current, while the other electrode
is used to sense the potential inside the cell. In voltage-clamp mode, the potential inside the
cell is compared to a known voltage supplied by the performer and an amplifier is used to
supply whatever current is required to keep the cell potential at the specified voltage.

Fig. Whole-cell Patch Clamp

c. Perforated Patch Clamp:


The perforated-patch-clamp recording was designed to overcome the dialysis of cytoplasmic
constituents that occurs with conventional whole-cell recording. In perforated-patch clamp
recordings, antibiotics like nystatin, amphotericin and gramicidin, are used as perforants. These
perforants are included in the pipette solution and they form channels in the membrane
attached to the patch pipette. The pores allow certain monovalent ions to permeate, enabling
electrical access to the cell interior while efficiently blocking the dialysis of larger
molecules and other ions.

d. Inside-Out Patch Clamp:


After the formation of a tight seal between the membrane and the glass (patch) pipette, a small
piece of the membrane can be pulled away from the cell without disrupting the seal; this yields
a small patch of membrane whose intracellular surface is exposed. This arrangement is called
the inside-out patch-clamp recording. Such a configuration is used to investigate single-channel
activity and offers the advantage of modifying the medium exposed to the intracellular surface.

e. Outside-Out Patch Clamp:


When the pipette is retracted while it is in the whole-cell configuration, a membrane patch is
produced that has its extracellular surface exposed. This arrangement is called the outside-out
patch-clamp recording and is beneficial for studying how channel activity is influenced by
extracellular chemical signals, like neurotransmitters.

4. Applications:
a. Revealing the molecular details of channel function
b. Identification of calcium, potassium and sodium channels
c. Study about the neuromuscular junction
d. Study about cardiomyocytes
e. Patch pipettes can also be used to study signalling mechanisms at a cellular level, through
voltage-clamp analysis.

ELECTROENCEPHALOGRAPHY (EEG)
The cerebral cortex, the outer brain layer, is believed to be largely responsible for the brain's
cognitive abilities, i.e., perception, memory, thinking, emotions, actions, and behaviours. Cortical
processes involve electrical signalling between neurons that changes many times within the 10-
millisecond (0.01 s) range. Electroencephalography is, to date, the most widely available technology
with sufficient temporal resolution to monitor these quick dynamic changes. The electrophysiological
technique measures electrical activity of the brain as a function of voltage potentials between
different regions on the scalp. The neuronal activity is recorded with scalp electrodes placed
equidistantly on the head. The electrical activity recorded is a summation of the activity of a large
number of neurons (the cortical pyramidal neurons) in the cerebral cortex just beneath the skull.
Hans Berger, a German psychiatrist, is generally credited with the first application of EEG to
human subjects in 1924.

The EEG graph provides useful information on both the normal activity of the brain in different states
(e.g., sleep and wakefulness) and abnormal activities resulting from a variety of brain abnormalities.
It is used primarily in the diagnosis of epilepsy (helping identify the seizure locus) and is also
useful in ventilated, unconscious patients to detect seizures, since in such patients external
evidence of seizure activity may be absent. Additionally, EEG can be employed to confirm brain
death, since it can show whether the electrical activity of the brain has ceased.

1. Principle:

Oscillations of scalp voltage divulge important clues concerning brain functioning. For
instance, states of deep sleep, coma, or anaesthesia are majorly associated with very slow
EEG oscillations and larger amplitudes. A normal EEG recording is characterized by well-
defined rhythms that have specific frequencies. A typical EEG recording graph displays
voltages on the vertical domain (y-axis) and time on the horizontal (x-axis) domain, providing
a nearly accurate, real-time display of ongoing cerebral activity.
This technique uses the principle of differential amplification, in other words, recording
voltage differences between different points using a pair of electrodes that compares one
active exploring electrode site with another neighbouring or distant reference electrode for
taking measurements. Only through measuring differences in electrical potential are
discernible EEG waveforms generated. By convention, when the active exploring electrode
(termed G1, for Grid 1) is more negative than the reference electrode (G2), the EEG potential
is directed above the horizontal meridian (i.e., an upward wave), whereas if the opposite is
true, the EEG potential vector is directed below the horizontal meridian (downward
wave/potential).
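
A toy numerical illustration of this differential principle (synthetic signals, with values assumed for illustration only): subtracting the reference electrode's signal from the active electrode's removes interference common to both sites, leaving the cerebral rhythm.

import numpy as np

t = np.linspace(0, 1, 1000)                   # 1 s of samples
noise = 20 * np.sin(2 * np.pi * 50 * t)       # interference common to both sites (microvolts)
g1 = 30 * np.sin(2 * np.pi * 10 * t) + noise  # active electrode: 10 Hz rhythm + noise
g2 = noise                                    # reference electrode: noise only

eeg = g1 - g2   # the differential amplifier outputs G1 - G2
# The shared interference cancels; only the 10 Hz cerebral rhythm remains:
print(round(float(np.ptp(eeg)), 1))  # ~60.0 microvolts peak-to-peak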

2. Advantage:

EEG spatial resolution is poor, compared to modern brain functional imaging methods such as
PET and MRI. But these latter methods have very poor temporal resolutions (on the timescale
of seconds vs millisecond in EEG) and thus do not offer as detailed information about the rapid
neural dynamics as an EEG. The related technology, Magnetoencephalography (MEG),
consists of recordings of the magnetic field generated by brain current sources. Since
magnetic fields are less degraded by the head's biological filters than electrical activity, MEG
dipoles may produce more accurate locations for cerebral seizures than EEG. Hence, EEG can
be supplemented with a MEG recording for a more detailed and correct analysis.

Fig. Site of generation of electrical signals and recordings in an EEG diagnostic test.

3. Limitation:

The main challenge to an accurate EEG reading is biological artifacts. It is often possible that
the cerebral activity may be overwhelmed by other electrical activity generated by the body
or in the external environment (electrical artifacts). To be seen on the scalp surface, the
minute cerebrally generated EEG voltages have to pass through multiple biological filters that
both reduce signal amplitude and spread the EEG activity out more widely than its original
source. Cerebral voltages must navigate the brain, cerebrospinal fluid (CSF), meninges, the
skull, and skin (biological filters) prior to reaching the recording site where they can be
detected. Moreover, other biologically generated electrical activity (by scalp muscles, the
eyes, the tongue, and even heart) creates massive voltage potentials that frequently
overwhelm and diminish the already low-in-magnitude cerebral activity. Temporary
detachments of the recording electrodes (termed “electrode pop” artifact) can further erode
the EEG, or even imitate brain rhythms and seizures. Evidently, the biological and electrical
artifacts can interfere with an interpreter's ability to accurately identify and make distinction
between normal rhythms and pathological patterns.
Yet, EEG continues to be of relevance and hold its ground as a diagnostic tool amidst the wave
of newer technologies, for the reason it being a cost-effective, non-invasive approach to image
brain function and that too with an unparalleled millisecond functional accuracy.

Top: Characteristic EEG wave patterns


Bottom: An epileptic vs normal EEG

4. Clinical Applications:

The following clinical scenarios can benefit from EEG recording in various formats:

a. To tell the difference between epileptic seizures and other types of spells like psychogenic
non-epileptic seizures, syncope (fainting), sub-cortical movement disorders, and migraine
variants.
b. Seizures must be classified in order to be treated.
c. To identify the part of the brain that causes a seizure so that seizure surgery can be
planned.
d. Nonconvulsive seizures/nonconvulsive status epilepticus should be monitored.
e. To tell the difference between "organic" encephalopathy or delirium and fundamental
psychiatric symptoms like catatonia.
f. Anesthesia depth is being monitored.
g. In the case of carotid endarterectomy, as an indirect indicator of cerebral perfusion
h. To be used as a back-up test for brain death
i. In some cases, to prognosticate in individuals with coma the use of the quantitative EEG
in the treatment of basic mental, behavioral, and learning disorders remains debatable.

ELECTROCARDIOGRAM (ECG)

a. An ECG records the heart's activity by measuring signals from electrodes placed on the torso, arms
and legs.
b. The potential changes produced can be recorded by placing a pair of electrodes over the
myocardium itself or at suitable points on the body surface.
c. The points commonly selected are the right arm, the left arm and the left leg. When these points
are connected to each other they form an equilateral triangle (Einthoven's triangle), and the heart is
said to lie at the centre of the triangle.
d. The electrical potentials generated by the heart spread towards these points by the volume-conduction
principle (the recording of electrical potentials at a distance from the source that generated them;
in other words, the recording electrodes are not in direct contact with the nerve or muscle, since a
medium of some sort separates the two).
e. The potential changes can be recorded by placing a pair of electrodes and connecting them to a
galvanometer or a cathode-ray oscilloscope.
f. The summated potential recorded this way is known as the ECG recording.
g. The recordings are made on standard, calibrated paper.
h. Along the horizontal axis is the time scale; the smallest division is equal to 0.04 seconds.
i. Along the vertical axis, the amplitude (voltage) is recorded in millivolts (mV); one small division
is equal to 0.1 mV.
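
As a worked illustration of this calibration (a sketch added here, not part of the original text), the heart rate can be read off the paper from the R-R interval expressed in small divisions:

def heart_rate_bpm(rr_small_divisions):
    # Each small division on the horizontal axis is 0.04 s,
    # so heart rate = 60 s / (R-R interval in seconds).
    return 60.0 / (rr_small_divisions * 0.04)

print(heart_rate_bpm(20))  # an R-R interval of 20 divisions (0.8 s) -> 75.0 bpm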

Normal Recording:

a. The recording shows three positive deflections and two negative deflections.
b. P, R and T are the positive deflections, and Q and S are the negative deflections.
c. PR and ST are segments
d. Rarely following a T wave, a U wave may be seen.

Causes For Various Waves:

Depolarization occurs in the four chambers of the heart: both atria first, and then both ventricles.

a. The sinoatrial (SA) node on the wall of the right atrium initiates depolarization in the right
and left atria, causing contraction, which is represented by the P wave on the electrocardiogram.
b. The SA node sends the depolarization wave to the atrioventricular (AV) node, which, after a
delay of about 100 ms to let the atria finish contracting, causes contraction in both
ventricles, seen in the QRS wave. At the same time, the atria repolarize and relax.
c. The ventricles repolarize and relax at the T wave. This process continues regularly,
unless there is a problem in the heart.

a. P-WAVES

i. The P wave is due to atrial depolarization.
ii. The duration of the wave is 0.08 seconds.
iii. The amplitude is 0.1 to 0.3 mV.
iv. Clinical importance: an increase in amplitude is suggestive of atrial hypertrophy (increase
in the volume of an organ).

b. QRS COMPLEX

i. Represents the depolarization of the interventricular septum and the ventricles.
ii. The normal duration should not exceed 0.1 second.
iii. The normal height of the R wave is 1.2 to 1.3 mV.

c. T WAVE

i. It is due to ventricular repolarization.
ii. The amplitude of the wave is 0.1 to 0.3 mV.

d. P-Q/R Interval

i. Denotes the time taken for the impulse to spread from the SA node to the Purkinje fibers
(located in the inner ventricular walls of the heart, just beneath the endocardium, in a space
called the subendocardium. The Purkinje fibers are specialized conducting fibers composed of
electrically excitable cells that are larger than cardiomyocytes, with fewer myofibrils and many
mitochondria, and which conduct cardiac action potentials more quickly and efficiently than any
other cells in the heart. They allow the heart's conduction system to create synchronized
contractions of its ventricles and are, therefore, essential for maintaining a consistent heart
rhythm).
ii. The normal duration is 0.12 to 0.16 seconds.
iii. Measured from the beginning of the P wave to the beginning of the Q/R wave.

e. Q-T Interval

i. It is the time interval between the beginning of the Q wave and the end of the T wave.
ii. It includes depolarization and repolarization of the ventricular muscles.
iii. The normal duration is 0.37 to 0.43 seconds.

Fig. A typical ECG Graph

COMPUTED TOMOGRAPHY (CT)

CT (computed tomography) is a medical imaging technology that uses tomography. Digital geometry
processing is used to build a three-dimensional representation of the inside of an object from a large
number of two-dimensional X-ray images taken around a single axis of rotation. The word 'tomography'
is derived from the Greek tomos (slice) and graphein (to write). Computed tomography was originally
known as the "EMI scan", since it was developed at an EMI research facility, a company primarily known
today for its music and recording business. Later names for it included body-section röntgenography
and computerised axial tomography (CAT or CT scan).

CT generates a vast amount of data that can be manipulated using windowing to depict different
structures based on their ability to block the X-ray/Röntgen beam. Modern scanners enable this
massive amount of data to be reformatted in many planes or even as volumetric (3D) representations
of structures, whereas formerly images were only generated in the axial or transverse plane
(orthogonal to the long axis of the body).

Application in diagnostics: Since its development in the 1970s, CT has been a key method in medical
imaging, supplementing X-rays and medical ultrasonography. Although it is still somewhat expensive,
it is the gold standard in the diagnosis of a wide range of disease types.

It's just recently been used for disease prevention and screening, such as CT colonography for people
who are at high risk of colon cancer. Despite the fact that a number of medical institutions offer full-
body scans to the general public, the practise is controversial due to a lack of proof of value, cost,
radiation exposure, and the possibility of identifying 'incidental' anomalies that could lead to
additional tests. CT scans are quite effective at detecting both acute and chronic problems in the lung
parenchyma, or inner workings. It's especially essential in this scenario because two-dimensional x-
rays don't reveal faults like these. A variety of methods are employed depending on the suspected
condition.

A CT pulmonary angiography is a medical test that determines whether or not a person has a
pulmonary embolism. Computed tomography is used to create a picture of the pulmonary arteries. It
is a preferred choice of imaging in the diagnosis of PE due to its less intrusive nature for the patient,
who just requires a cannula for the scan. Before requesting this test, the referring practitioner is likely
to order a D-dimer blood test and a chest X-ray to rule out any other possible differential diagnosis.
With subsecond rotation and multi-slice CT (up to 64 slices), high resolution and speed can be achieved
at the same time, allowing for good imaging of the coronary arteries (cardiac CT angiography).

CT is a sensitive method for diagnosing abdominal diseases. It's extensively used to determine a
cancer's stage and track its progression. It can also help you figure out what's causing your severe
stomach ache (especially of the lower quadrants, whereas ultrasound is the preferred first line
investigation for right upper quadrant pain). Renal stones, appendicitis, pancreatitis, diverticulitis,
abdominal aortic aneurysm, and intestinal blockage are just a few of the conditions that CT can identify
and diagnose. CT is also the first line of defence for detecting solid organ injury after a trauma.

Fig. CT Scan Process

Advantages of CT over projection radiography-

For starters, CT completely eliminates the superimposition of images of structures outside the area of
interest. Second, CT's inherent high contrast resolution allows for the detection of alterations between
tissues with physical density differences of less than 1%. Third, data from a single CT imaging process
consisting of multiple contiguous or one helical scan can be shown as images in the axial, coronal, or
sagittal planes, depending on the diagnostic purpose. This is known as multiplanar reformatted
imaging.

MAGNETIC RESONANCE IMAGING (MRI)

Magnetic resonance imaging (MRI) or nuclear magnetic resonance imaging (NMRI) is a medical
imaging technology used to visualise the structure and function of the body. It is most widely
employed in radiology and offers detailed views of the body in any plane. MRI has a far higher contrast
between the body's soft tissues than computed tomography (CT), making it particularly effective in
neurological (brain), musculoskeletal, cardiovascular, and oncological (cancer) imaging. Unlike CT, it
does not employ ionising radiation, instead relying on a strong magnetic field to align the nuclear
magnetization of (typically) hydrogen atoms in the body's water. Radiofrequency fields are used to
change the alignment of this magnetization in a systematic way, causing the hydrogen nuclei to form
a rotating magnetic field that can be detected by the scanner.

Additional magnetic fields can be used to alter this signal, allowing enough information to be gathered
to rebuild a body image. MRI is a relatively new technology, having only been around for almost 30
years. The first magnetic resonance image was published in 1973, and the first human research was
conducted on July 3, 1977. The knowledge gained from the research of nuclear magnetic resonance
was used to develop magnetic resonance imaging. Nuclear magnetic resonance imaging was the name
given to the method in its early years (NMRI). However, because the term nuclear was once connected
with ionising radiation exposure, it is now commonly referred to as MRI.

Non-medical devices that operate on the same principles are still referred to as NMRI by scientists.
Sometimes the term Magnetic Resonance Tomography (MRT) is used. Paul Lauterbur, one of the
pioneers of modern MRI, coined the term "zeugmatography" to describe the technology, which is
derived from the Greek word zeugmatos, which means "to unite." The term alluded to the interaction
of static, radiofrequency, and gradient magnetic fields that are required to generate an image, but it
was never accepted.

MRI physics, in a nutshell: the hydrogen nuclei (i.e., protons), found in abundance in the water
molecules of the human body, align with the strong main magnetic field when a person lies in the scanner.

The protons are then pushed out of alignment with the main field by a second electromagnetic field
that oscillates at radiofrequencies and is perpendicular to the primary field. As the protons drift back
into alignment with the main field, they release a detectable radiofrequency signal. The varied
architecture of the body can be shown because protons in different tissues of the body (e.g., fat vs.
muscle) realign at different speeds. Intravenous contrast chemicals can be used to enhance the
appearance of blood vessels, malignancies, and inflammation. In the case of arthrograms, MR pictures
of joints, contrast agents can also be injected directly into the joint.
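
The radiofrequency fields involved follow from the standard Larmor relationship f = (gamma/2*pi) * B0, with gamma/2*pi approximately 42.58 MHz per tesla for protons; the short Python check below (added for illustration) shows why clinical scanners operate in the radio band:

GAMMA_BAR_MHZ_PER_T = 42.58   # gyromagnetic ratio of 1H divided by 2*pi (MHz per tesla)

for b0 in (1.5, 3.0):         # common clinical field strengths in tesla
    f = GAMMA_BAR_MHZ_PER_T * b0
    print(b0, "T ->", round(f, 1), "MHz")   # 1.5 T -> 63.9 MHz; 3.0 T -> 127.7 MHz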

MRI, unlike CT scanning, does not use ionising radiation and is considered a relatively safe method.
Due to the effects of the high magnetic field and powerful radiofrequency pulses, patients with certain
metal implants and cardiac pacemakers are unable to undergo an MRI scan. MRI may image every
region of the body, but it's especially beneficial for neurological diseases, muscle and joint disorders,
tumour evaluation, and showing anomalies in the heart and blood arteries. Applications MRI is utilised
in clinical practise to differentiate pathologic tissue (such as a brain tumour) from normal tissue. An
MRI scan has the advantage of being completely painless for the patient. In the radio frequency band,
it uses powerful magnetic fields and non-ionizing radiation.

In comparison, CT scans and standard X-rays expose patients to high amounts of ionising radiation,
which can increase the risk of cancer, particularly in a baby. While CT has good spatial resolution (the
ability to differentiate two structures separated by an arbitrarily small distance), MRI has similar
resolution but significantly superior contrast resolution (the ability to distinguish the differences
between two arbitrarily similar but not identical tissues). The extensive library of pulse sequences
included in modern medical MRI scanners, each of which is tailored to create picture contrast based
on the chemical sensitivity of MRI, is the foundation of this capacity.

Fig. MRI Scan pipeline

f-MRI (FUNCTIONAL MAGNETIC RESONANCE IMAGING)

1. Principle:

a. Functional magnetic resonance imaging can show which part of the brain is active, or
functioning, in response to the patient performing a given task, by recording changes in blood flow.
b. Atoms and molecules exhibit magnetic resonance, emitting tiny radio-wave signals with
movement, because they contain protons.
c. fMRI is used in various behavioural analyses by exploiting the blood-oxygen-level-dependent
(BOLD) signal.
d. Using the BOLD signal, regions of a functioning brain are mapped from the changes in blood oxygen.
e. The main method used to generate images is contrast imaging.
f. Haemoglobin in blood carries oxygen (as oxyhemoglobin) around the brain; when the oxygen is used
up, it becomes deoxyhemoglobin. Where the oxygen is being used up shows the site of activity in
the brain.
g. The picture is made by monitoring the ratio of the tiny wave frequencies between these
two states whilst the patient carries out a task, e.g., tapping a finger, which highlights the
area of the brain functioning to carry out this task.

Fig. A fMRI machine

fMRI Scan-

a. An fMRI scan is a functional magnetic resonance imaging scan.


b. It measures and maps the brain’s activity.
c. An fMRI scan uses the same technology as an MRI scan.
d. An MRI is a non-invasive test that uses a strong magnetic field and radio waves to create an
image of the brain.
e. The image an MRI scan produces is just of organs/tissue, but an fMRI will produce an image
showing the blood flow in the brain. By showing the blood flow it will display which parts of the
brain are being stimulated.

Improvement in fMRI-

a. fMRI has improved over the years


b. Example: spin-echo pulse sequences and increased magnetic field strength have been introduced,
essentially to produce better-recorded images.
c. Various precautions are taken to control variables during measurement.

2. FUNCTIONS OF fMRI:

a. fMRI imaging can do comparisons between different specimen groups both quantitatively and
qualitatively.
b. We can compare between: -

i. Active Brain and resting brain


ii. Developing Brain and aging brain
iii. Defective Brain and damaged brain.

a. Comparisons Done by fMRI with quantitation:

Several methods can be used for fMRI comparisons.

With a pseudo-color scale in the acquired images, different levels of oxygen usage can be determined.

Example- Four-color scale

i. The highest level of oxygen uptake is indicated by red color


ii. An intermediate level of uptake of oxygen indicated by yellow color
iii. The normal level of oxygen is shown by green color
iv. Lower than normal level of oxygen is indicated by blue color

The volumes of the differently colored regions in the brain can be determined by using these types of
color scales (as shown in the fMRI image below).

b. Comparisons done by fMRI with qualitative evaluation:

i. fMRI can determine when the brain becomes activated during a particular type of movement.
ii. The areas activated during any activity might be nonspecific, so investigators must
carefully consider each site of activation to determine its significance.
iii. To do so, the investigator must take at least 6 individuals for accurate comparisons
between the brain activities of different individuals.

3. BRAIN ACTIVITY MEASUREMENT by fMRI:

a. fMRI looks at blood flow in the brain to detect areas of activity.


b. The main source of energy for the brain is glucose, but glucose is not stored in the brain. So, when
parts of the brain need energy to perform an action, more blood flows in to transport glucose
to the active areas; thus, more oxygen-rich blood enters the area.
c. For example, when we are speaking, glucose- and oxygen-rich blood flows to the part of
our brain designated for speaking.
d. The brain activity is mapped in volume elements called voxels (each representing thousands of
neurons). Color is then added to the image to create a map of the brain.
e. The patient is asked to perform a specific task during an fMRI scan. This increases oxygen-
rich blood flow to a certain part of the brain.

Tasks asked to perform-

i. Tap the thumb against their fingers


ii. Look at pictures
iii. Answer questions on a screen
iv. Think about actions based off a picture (ex: they see a picture of a chair and think about
actions like sit on the chair, buy a chair like this, design of chair), etc.
v. For the tasks where the patient is asked a question, most of the time the patient is told
to just think about the answer, so that the speech part of the brain is not activated.

Fig. Images of brain during speech, finger tapping and listening

4. Advantages:

a. fMRI helps map a patient's brain before they go into brain surgery.
b. fMRI maps help doctors understand the regions of the brain linked to critical functions
such as speaking, walking, sensing, or planning.
c. The scan provides information about the makeup of an individual's brain, helping to prevent
serious injuries.
d. It also helps determine whether surgery is even possible.
e. One more advantage is that the image produced by the fMRI scan is very high resolution.

5. Disadvantage:

a. fMRI studies of brain cancer, lesions and other brain pathologies, in both humans and
animals, remain to be explored.
b. With fMRI, interpretation of cognitive studies is difficult; this can only be
overcome by imaging large numbers of samples.
c. Getting an fMRI scan is a very expensive procedure.
d. A minor negative is that the machine can only capture a clear image if the person being
scanned stays completely still.
e. Only blood flow can be observed, not the activity of individual neurons.

Difference between MRI and fMRI-

Each of the 5-20 sequences in a normal MRI examination is chosen to deliver a certain type of
information about the subject tissues. The interpreting physician then synthesises this information.
Functional MRI (fMRI), in contrast, measures the brain's signal alterations resulting from changing
neuronal activity. The brain is scanned quickly (typically once every 2-3 seconds) but at a low
resolution. The BOLD (blood-oxygen-level-dependent) effect occurs when brain activity increases,
causing changes in the MR signal via T2* variations. Increased brain activity raises oxygen demand,
which the vascular system compensates for by increasing the amount of oxygenated hemoglobin compared
to deoxygenated hemoglobin.

The vascular response causes a signal rise that is related to brain activity because deoxygenated
hemoglobin attenuates the MR signal. Current study is looking into the exact nature of the link
between brain activity and the BOLD signal. The BOLD effect also enables the creation of high-
resolution 3D maps of neural tissue's venous vasculature. While the BOLD signal is the most commonly
used tool for neuroscience research in humans, the flexible nature of MR imaging allows for the signal
to be sensitised to various characteristics of the blood supply.

POSITRON EMISSION TOMOGRAPHY (PET)

a. PET is an imaging technique.
b. It uses radioactive substances called radiotracers.
c. It consists of a camera, detectors, etc. (as shown in the image of the PET machine).
d. PET is a non-invasive, diagnostic imaging technique for measuring the functional activity of cells in
the human body.
e. It was the FIRST to give information about BRAIN function.
f. It works by detecting positrons.
g. A tracer is introduced into the body.

Fig. PET Scan of Brain of normal human being and alcoholic, smoker, Non-smoker human being.

1. Principle Of PET SCAN:

a. Positron emission occurs when a proton-rich isotope decays: a proton decays to
a neutron, a positron, and a neutrino. After travelling a short distance, the emitted
positron encounters an electron from the surrounding environment.
b. The two particles combine and annihilate each other, resulting in the emission of two
gamma rays in opposite directions, of 0.511 MeV (511 keV) each (as shown in the diagram
below).

2. Principle Of Image Acquisition:


POSITRON EMISSION: Image acquisition is based on the external detection, in coincidence,
of the emitted gamma rays; a valid annihilation event requires a coincidence within 12
nanoseconds between two detectors on opposite sides of the scanner. For accepted coincidences,
lines of response connecting the coincidence detectors are drawn through the object and used in
the image reconstruction. Any scanner requires that the radioisotope, in the field of view, does
not redistribute during the scan.
3. Methodology:
a. A radioisotope attached to a drug is injected into the body as a tracer. Gamma rays
are then emitted and detected to form a three-dimensional image.
b. PET scans use radiolabelled molecular probes, since different tissues take them up at different rates.
c. For scanning, a short-lived radioactive tracer isotope is injected into the living subject,
generally into the blood circulation.
d. Each tracer atom is chemically incorporated into a biologically active molecule.
e. There is a waiting period during which the active molecule becomes concentrated in the
tissues of interest; the subject is then placed in the scanner.
f. The molecule commonly used for this purpose is F-18-labelled fluorodeoxyglucose (FDG), a sugar analogue.
g. During the scanning process, as the tracer decays, a record of tissue concentration is formed.
h. When the radioisotope undergoes beta decay it emits a positron.
i. The emitted positron travels in tissue for a short distance, during which time it loses kinetic
energy, until it interacts with an electron.
j. The encounter annihilates both electron and positron.
k. The resulting gamma rays are detected when they reach a scintillator in the scanning device,
creating a burst of light which is detected by a photomultiplier tube (as shown in the picture below).

Fig. A PET scan methodology

4. PET Applications:
a. This technique is used to see and measure changes in metabolic processes.
b. It reveals the size, shape, position, and function of organs.
c. It is used to diagnose cancer and the spread of cancer.
d. It is used to visualize the chemical composition of various regions, the flow of blood, etc.
e. In imaging tumors and metastases.
f. It is also used to understand the function of the brain and heart, and helps in the development of
drugs.
g. With the help of PET, different biochemical processes, and also how proteins are expressed, can be
detected.
h. It provides very detailed information at the molecular level, even before disease is visible anatomically.
5. Advantages:
a. Dementias, such as Alzheimer's disease, can be diagnosed with PET.
b. Various neurological diseases, such as Parkinson's disease and Huntington's disease, can be
diagnosed with this technique.
c. Brain disorders like epilepsy can also be diagnosed.
d. Specific surgical sites can be identified before performing any surgery on the brain.
e. Any kind of blood clotting or bleeding in the brain can also be detected.
f. Lesions or masses in the lungs can be detected by this technique.
6. Limitations:
a. The tracers are radioactive, so the elderly and pregnant are unable to use them due to the risks
posed by radiation.
b. Another disadvantage of PET is that it requires very costly cyclotrons to produce the
short-lived radionuclides used for scanning.

8. METHODS IN FIELD BIOLOGY

1. Population density and size:

a. The size of a population is generally expressed as the number of individuals in the population.


b. The population size at any given place is determined by the processes of birth, death,
immigration (arrival of new animals) and emigration (departure of animals).
c. Population density is the number of individuals per unit area or per unit volume of the environment.
d. Both population size and density describe the current status of a population and the possible
effects that can alter its potential.
e. The larger a population, the more stable it is and the more genetic variability it is likely to have.
f. Larger populations have more potential to adapt to changes in the environment through
natural selection.
g. A member of a low-density population, with sparsely spread-out organisms, faces trouble in
finding a mate to reproduce, whereas the opposite holds in a high-density population.

2. Methods of estimating population density of animals and plants:

a. Quadrat method:

i. A quadrat is a square frame traditionally used in ecology to isolate a standard unit
of area, to study the distribution of an item over a large area.
ii. Once the quadrat is placed, researchers count the number of individuals within the
boundaries.
iii. A quadrat sample ensures the numbers recorded are representative of the overall
habitat.
iv. At the end, the data can be used to estimate the population size and population density
within the entire habitat.

Fig. A quadrat.

The following expression can be used to determine population using quadrat method
N = (A/a) x n
where:
N = estimated total population size
A = total study area
a = area of the sampling quadrat
n = average number of individuals per quadrat (population density)

E.g., let's take the example of a population of sunflowers in a given area of land. The quadrat used
has an area of 0.50 square meters and the entire study area is 100 square meters. The average number
of sunflowers per quadrat is 5. Estimate the number of sunflowers in the entire study area.
Solution: with the help of the quadrat equation:
N = (A/a) x n
where:
N = estimated total population size = ?
A = total study area = 100
a = area of the sampling quadrat = 0.50
n = average number of individuals per quadrat (population density) = 5
N = (100/0.50) x 5
N = 200 x 5
N = 1000
There are approximately 1000 sunflowers in the 100 square meter area of land.
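
The same calculation, expressed as a small Python helper (an illustrative sketch of the formula above):

def quadrat_population(total_area, quadrat_area, mean_per_quadrat):
    # N = (A / a) x n
    return (total_area / quadrat_area) * mean_per_quadrat

# The sunflower example: A = 100 m^2, a = 0.50 m^2, n = 5 per quadrat
print(quadrat_population(100, 0.50, 5))   # -> 1000.0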

b. Mark-recapture method:

i. For mobile organisms (mammals, birds, fish, etc.) the mark-recapture method is often used
to determine population size.
ii. Under this method a sample of animals is captured and marked using tags, bands, paint,
or other body markings.
iii. Once they are marked, they are released back into the environment and allowed to mix
with the rest of the population.
iv. After some time a new sample is collected, which may or may not contain the previously
marked individuals.
v. Those that are unmarked are marked, those that were marked previously are documented,
and then the sample is released again.
vi. The numbers of marked and unmarked individuals are documented as a ratio, which is then
used to estimate the total population size.

This can be mathematically expressed by using following equation.

N = (MxC) / R

Where,

N = estimated Number of individuals in the population


M = number of individuals captured and Marked
C = total number Captured the second time (with and without a mark)
R= number of individuals Recaptured (those with a mark)

E.g., let's consider a population of snails. The snails come out once the sprinkler is turned on;
most of the time they remain hidden. 20 of them were caught and marked. After a week, during which
they had a chance to disperse into the population, one person catches 15, out of which 6 have
marks on them. Calculate the estimated size of the population.

Solution: using the above-mentioned formula;

N = (MxC) / R

Where,

N = estimated Number of individuals in the population = ?


M = number of individuals captured and Marked = 20
C = total number Captured the second time (with and without a mark) = 15
R= number of individuals Recaptured (those with a mark) = 6

N = (20x 15)/6
N= 300/ 6
N= 50
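
The same estimate (often called the Lincoln-Petersen estimator) as a small Python helper, added for illustration:

def mark_recapture_estimate(marked, caught, recaptured):
    # N = (M x C) / R
    return (marked * caught) / recaptured

# The snail example: M = 20 marked, C = 15 caught later, R = 6 recaptured
print(mark_recapture_estimate(20, 15, 6))   # -> 50.0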

c. Species distribution:

i. Species distribution, or dispersion, is also one of the factors known to determine the
number and density of individuals in an area.
ii. It refers to how the individuals in a population are distributed in space at a given time.
iii. Species may be distributed in uniform, random, or clumped patterns (a simple quantitative
check is sketched below).
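
One common way to classify these patterns from quadrat counts (a standard approach, sketched here in Python with made-up counts) is the variance-to-mean ratio, or index of dispersion: a value near 1 suggests a random (Poisson) pattern, below 1 a uniform pattern, and above 1 a clumped pattern.

import numpy as np

def dispersion_index(counts):
    # Variance-to-mean ratio of per-quadrat counts.
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

print(round(dispersion_index([5, 5, 4, 5, 6, 5]), 2))    # 0.08 -> uniform
print(round(dispersion_index([0, 12, 1, 0, 15, 2]), 2))  # 8.96 -> clumped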

Fig. Uniform, random and clumped distribution patterns

3. Ranging Patterns:

Ranging is the process of fixing intermediate points on a straight line between two end points. Ranging
can be done by the naked eye, as direct observation, or with a line ranger, as indirect observation.

Ranging can be carried out in the following three ways:

a. Direct Ranging:

i. In direct ranging, intermediate ranging rods are fixed on a straight line from one end to
the other.
ii. It is usually used when the two ends are intervisible.

Fig. Direct ranging using naked eye

iii. Ranging can also be done using a line ranger.
iv. A line ranger is an instrument which joins the intermediate points using two plane mirrors.
v. Instead of plane mirrors, an isosceles prism can also be used to reflect the incident
rays.

Fig. Direct ranging using a line ranger

b. Indirect or Reciprocal Ranging:

i. In some cases the two ends are not intervisible but ranging still has to be performed;
in that case indirect ranging is preferred.
ii. When the land is irregular (high ground or a hill, or when the ends are too far apart)
and the intermediate points cannot be fixed using the naked eye, they are fixed using
indirect techniques, also known as reciprocal ranging.

Fig. Indirect ranging

c. Remote ranging

i. Remote sensing is one of the most widely used emerging techniques for detecting and
monitoring the physical characteristics of an area.
ii. Remote sensing measures reflected and emitted radiation at a distance, from satellites
or aircraft.
iii. Special cameras are used to collect remotely sensed images of the Earth's surface and
allow one to see much more than is possible when standing on the ground.
iv. Ships are equipped with sonar systems which create images of the ocean floor without
actually going to the bottom of the ocean.
v. Some satellites have cameras which take images of temperature changes in the oceans.

4. Sampling methods in the study of behaviour:

Ethology is the study of animal behaviour in response to natural and environmental stimuli. This
study can be carried out in the natural habitat or in the laboratory. Ethology can be helpful for
other behavioural studies in fields such as ecology, environmental science, neurology, physiology,
psychology and evolution.

Behavioural Sampling Methods-

a. Ad libitum: one person stays with the animals and records their behaviour for a certain period of
time.
b. Focal animal: one person stays focused on one specific animal of the group and studies its
behaviour.
c. Instantaneous sampling: one person records the behaviour of all animals of the group at preset
instants.
d. All occurrences: all occurrences of the behaviour of all the animals in a group are recorded.
e. Sequence: one person studies the chain of behavioural sequences in an animal.
f. One-zero sampling: records whether a specific behaviour occurs within a given unit of time.

5. Sampling methods in habitat characterization:

a. Ground-based sampling:

Ground-based sampling includes measurements of the percent cover, height, density, and biomass of
trees, shrubs, grasses, forbs, and dead wood. The ground-based sampling methods include:

i. Random sampling: a type of probability sampling where every individual in the target
population has an equal chance of being selected. Sampling units are marked with names
or numbers and randomly selected, and the data describe the habitat elements (a minimum
of 3 experimental areas).
ii. Stratified sampling: the different types of animals that make up the target population are
grouped into strata, in proportion to their role in the functioning of the ecosystem, and
sampled accordingly.
iii. Opportunity sampling: organisms are picked, as per convenience, from the population that
is available at the time of the experiment.
iv. Systematic sampling: a list of all members of the population is made and organisms are
selected from it at regular, predetermined intervals.

b. Remote Sensing method:

Some crucial details about experimental organisms, such as habitat availability and the efficiency of
the organisms, are collected through remote sensing. It helps in classifying areas based on a variety
of physiographic characteristics. It also helps in documenting site-specific information, such as
winter roosts, vernal pools, and breeding grounds. The methods include:

i. Aerial photography: monitoring and inventorying programs generally use aerial
photography for creating maps. High-resolution cameras are used to locate features with
good accuracy. This system is often used in Geographic Information Systems (GISs).
ii. Orthophoto maps: used for monitoring large-scale projects (covering 8,000-400,000 ha). These
are geometrically corrected aerial photos with topographic information marked on them.
iii. Satellite imagery: one of the popular methods for describing gross vegetation types
over large geographic areas. The LANDSAT program is widely used for detailed satellite
imagery; it is a series of satellites in orbit around the earth that collect
environmental data about the earth's surface.
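
A standard calculation performed on such satellite imagery is the Normalized Difference Vegetation Index, NDVI = (NIR - Red) / (NIR + Red), widely used to map vegetation cover; the Python sketch below uses made-up reflectance values for illustration:

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red), computed from surface reflectances
    return (nir - red) / (nir + red)

print(round(ndvi(0.50, 0.08), 2))   # 0.72 -> dense green vegetation
print(round(ndvi(0.20, 0.18), 2))   # 0.05 -> bare soil or sparse cover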

Remote Sensing Methods-

When the physical characteristics of an area are measured on the basis of reflected and emitted
radiation at a distance, the technique is known as remote sensing. Remote sensing is done by satellite
or aircraft. Highly specialized cameras are used to collect remotely sensed images. These cameras
capture large areas, so one can see far more than when standing on the ground. Images of the ocean
floor are taken with the help of ship-borne sonar systems. With the help of these instruments the
temperature of the ocean can also be measured. With the help of remote sensing one can easily locate
large forest fires, track clouds, follow the progress of city development over a period of time, and
discover and map the rugged topography of the ocean floor.

There are two types of remote sensing mechanisms:

a. Active remote sensing
b. Passive remote sensing

The difference between active and passive remote sensing lies in the instruments' source of
radiation: in active remote sensing the instruments operate their own source of emission or light,
whereas in passive remote sensing the instrument uses reflected emissions.

a. Active remote sensing:

An active sensor directs its own signal at the object and then measures the signal received back.
Active sensors commonly employ microwave radiation, which is largely unaffected by weather
conditions. In active remote sensing, the transmitted light or waves are little affected by distance,
height, climatic conditions or the atmosphere. Some of the known devices used for remote sensing
under this head are:

i. Radar: based on radio signals. It has an antenna that emits pulses. When an obstacle lies
in the path, the energy from the radar scatters back to the sensor. Since the travel time of
the emitted pulse can be measured, one can easily estimate the distance of the target
(a worked range calculation follows this list).
ii. Lidar: it determines distance with the help of light. It transmits light pulses and
measures the quantity retrieved. The distance to the target object is obtained from the
pulse's travel time and the speed of light.
iii. Laser altimeter: it measures elevation with the help of lidar.
iv. Ranging instrument: it helps to estimate range with the help of two identical
devices on different platforms that send signals to each other.
v. Sounder: it determines weather conditions vertically by emitting pulses, in which case it
falls into the active category.
vi. Scatterometer: a specific device that is used to measure backscattered radiation.
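
Both radar and lidar recover range from the two-way travel time of a pulse, distance = c x t / 2 (the pulse travels out and back); a minimal Python illustration, with an assumed echo time:

c = 2.998e8   # propagation speed in m/s (radio and light pulses travel at ~c)

def range_from_echo(round_trip_time_s):
    # The pulse travels out and back, so distance = c * t / 2.
    return c * round_trip_time_s / 2

print(round(range_from_echo(6.7e-6)))   # an echo after 6.7 microseconds -> ~1004 m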

b. Passive remote sensing:

Passive remote sensing depends on natural energy reflected or emitted by the target object. Such a
device is completely dependent on strong sunlight; without adequate illumination, nothing is
reflected and the device fails to capture a signal.

It consists of multispectral or hyperspectral sensors that measure the desired quantity using
multiple band combinations. The bands used are the visible, IR (infrared), NIR (near-infrared),
TIR (thermal infrared) and microwave bands.

Some of the known devices that are used for remote sensing under this head are

i. Spectrophotometer: analyzes the spectral bands.
ii. Radiometer: determines the radiation emitted by the object in a particular band, such as
visible, infrared or microwave.
iii. Spectroradiometer: analyzes several band ranges.
iv. Hyperspectral radiometer: an instrument with very high resolution that can distinguish many
narrow spectral bands in the visible, NIR and MIR (mid-infrared) regions.
v. Imaging radiometer: produces images by scanning the object.
vi. Sounder: senses atmospheric conditions.
vii. Accelerometer: detects changes in speed per unit of time.

Fig. Active and passive remote sensing
