Big Data Analytics Lab Manual
VISION
➢ To achieve high quality in technical education that provides the skills and attitude to adapt to
the global needs of the Information Technology sector, through academic and research
excellence.
MISSION
➢ To equip the students with the skills for problem solving and to improve the teaching-learning
pedagogy by using innovative techniques.
➢ To strengthen the knowledge base of the faculty and students with motivation towards
possession of effective academic skills and relevant research experience.
➢ To promote the necessary moral and ethical values among the engineers, for the betterment
of the society.
PROGRAMME EDUCATIONAL OBJECTIVES (PEOs)
➢ To create and sustain a community of learning in which students acquire knowledge and learn
to apply it professionally, with due consideration for ethical, ecological and economic issues.
➢ To provide knowledge-based services that satisfy the needs of society and the industry, by
providing hands-on experience in various technologies in the core field.
➢ To enable students to design, experiment, analyze and interpret problems in the core field with
the help of other multi-disciplinary concepts wherever applicable.
➢ To educate students to disseminate research findings with good soft skills and to become
successful entrepreneurs.
PROGRAM SPECIFIC OUTCOMES (PSOs)
After the completion of the course, B.E. Artificial Intelligence and Data Science (AI&DS)
graduates will have the following Program Specific Outcomes:
1. Fundamentals and critical knowledge of the Computer System: Able to understand the
working principles of the computer system and its components, and to apply this knowledge to
build, assess, and analyze its software and hardware aspects.
3. Applications of Computing Domain & Research: Able to use the professional, managerial,
interdisciplinary skill set, and domain specific tools in development processes, identify the
research gaps, and provide innovative solutions to them.
PROGRAM OUTCOMES (POs)
Engineering Graduates should possess the following:
2. Problem analysis: Identify, formulate, review research literature, and analyze complex
engineering problems reaching substantiated conclusions using first principles of
mathematics, natural sciences, and engineering sciences.
5. Modern tool usage: Create, select, and apply appropriate techniques, resources, and
modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.
6. The engineer and society: Apply reasoning informed by the contextual knowledge to
assess societal, health, safety, legal and cultural issues and the consequent responsibilities
relevant to the professional engineering practice.
8. Ethics: Apply ethical principles and commit to professional ethics and responsibilities and
norms of the engineering practice.
11. Project management and finance: Demonstrate knowledge and understanding of the
engineering and management principles and apply these to one’s own work, as a member
and leader in a team, to manage projects and in multi-disciplinary environments.
12. Life-long learning: Recognize the need for, and have the preparation and ability to engage
in independent and life-long learning in the broadest context of technological change.
GENERAL LABORATORY INSTRUCTIONS
1. Students are advised to come to the laboratory at least 5 minutes before the starting time;
those who come more than 5 minutes late will not be allowed into the lab.
2. Plan your task properly well before the commencement of the session, and come prepared to
the lab with the synopsis / program / experiment details.
3. Students should enter the laboratory with:
a. Laboratory observation notes with all the details (Problem statement, Aim, Algorithm,
Procedure, Program, Expected Output, etc.,) filled in for the lab session.
b. Laboratory Record updated up to the last session's experiments, and any other materials
needed in the lab.
c. Proper Dress code and Identity card.
4. Sign in the laboratory login register, write the TIME-IN, and occupy the computer system allotted
to you by the faculty.
5. Execute your task in the laboratory, and record the results / output in the lab observation note
book, and get certified by the concerned faculty.
6. All the students should be polite and cooperative with the laboratory staff, must maintain the
discipline and decency in the laboratory.
7. Computer labs are established with sophisticated and high-end branded systems, which should be
utilized properly.
8. Students must keep their mobile phones in SWITCHED OFF mode during the lab sessions.
Misuse of the equipment, misbehaviors with the staff and systems etc., will attract severe
punishment.
9. Students must take the permission of the faculty in case of any urgency to go out; anybody
found loitering outside the lab / class without permission during working hours will be treated
seriously and punished appropriately.
10. Students should LOG OFF / SHUT DOWN the computer system before leaving the lab after
completing the task (experiment) in all aspects, and must ensure the system / seat is left
properly.
List of Experiments
1. Install, configure and run Python, NumPy and Pandas.
2. Install, configure and run Hadoop and HDFS.
3. Visualize data using basic plotting techniques in Python.
4. Implement NoSQL Database Operations: CRUD operations, Arrays using MongoDB.
5. Implement Functions: Count – Sort – Limit – Skip – Aggregate using MongoDB.
6. Implement word count / frequency programs using MapReduce.
7. Implement a MapReduce program that processes a dataset.
8. Implement clustering techniques using Spark.
9. Implement an application that stores big data in MongoDB / Pig using Hadoop / R.
BIG DATA ANALYTICS LAB
EXPERIMENT: 1
Install, Configure and Run Python, NumPy and Pandas
PROGRAM:
AIM: To install, configure and run Python, NumPy and Pandas.
How to Install Anaconda on Windows?
Anaconda is an open-source distribution that bundles Jupyter, Spyder and other tools used for large-scale
data processing, data analytics and heavy scientific computing. Anaconda supports the R and Python
programming languages. Spyder (a sub-application of Anaconda) is used for Python; OpenCV for Python
will work in Spyder. Package versions are managed by the package management system called conda.
To begin working with Anaconda, one must get it installed first. Follow the below instructions to
Download and install Anaconda on your system:
Download and install Anaconda:
Head over to anaconda.com and install the latest version of Anaconda. Make sure to download the
"Python 3.7 Version" for the appropriate architecture.
Select Installation Type: Select Just Me if you want the software to be used by a single User
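After installation, a quick sanity check (a minimal sketch; run it from the Anaconda Prompt or a Jupyter cell) confirms that Python, NumPy and Pandas are all importable; the exact version numbers printed will depend on your installation:

import sys
import numpy as np
import pandas as pd

# Print interpreter and library versions to confirm the installation
print("Python:", sys.version.split()[0])
print("NumPy:", np.__version__)
print("Pandas:", pd.__version__)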
import pandas as pd

# Load the dataset into a DataFrame
dataset1 = pd.read_csv("crime.csv")
dataset1

dataset1.head()      # first 5 rows
dataset1.tail()      # last 5 rows
dataset1.head(10)    # first 10 rows
dataset1.tail(10)    # last 10 rows

type(dataset1)       # -> pandas.core.frame.DataFrame

dataset1.shape       # (rows, columns)

# value_counts() helps to find how many times each value in a particular column is repeated
dataset1['Robbery'].value_counts()

# Summary statistics (on recent pandas versions, pass numeric_only=True
# if the frame contains non-numeric columns)
dataset1.skew()
dataset1.var()
dataset1.kurtosis()
print(dataset1.dtypes)
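If crime.csv is not at hand, the same calls can be exercised on a small synthetic frame; the column names below are stand-ins chosen to match the commands above, not the real dataset:

import pandas as pd

# Miniature stand-in for crime.csv (hypothetical values)
dataset1 = pd.DataFrame({
    'Year': [2018, 2019, 2020, 2021],
    'Murder': [12, 15, 9, 11],
    'Assault': [230, 250, 200, 215],
    'Robbery': [40, 40, 35, 38],
})
print(dataset1.shape)                      # (4, 4)
print(dataset1['Robbery'].value_counts())  # the value 40 repeats twice
print(dataset1.skew(numeric_only=True))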
NUMPY
NumPy is the core library for scientific and numerical computing in Python. It provides a
high-performance multi-dimensional array object and tools for working with arrays.
NumPy's main object is the multidimensional array: a table of elements (usually numbers), all of
the same type, indexed by positive integers.
import numpy
arr = numpy.array([1, 2, 3, 4, 5])
print(arr)
NumPy is usually imported under the np alias:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
print(arr)
import numpy as np
print(np.__version__)   # check the installed NumPy version
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
print(arr)
print(type(arr))
type(): This built-in Python function tells us the type of the object passed to it. As the code
above shows, arr is of type numpy.ndarray.
To create an ndarray, we can pass a list, tuple or any array-like object into the array() method, and it
will be converted into an ndarray:
Dimensions in Arrays
A dimension in arrays is one level of array depth (nested arrays).
0-D Arrays
0-D arrays, or Scalars, are the elements in an array. Each value in an array is a 0-D array.
1-D Arrays
These are the most common and basic arrays.
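A minimal sketch illustrating these dimension levels with ndim, which reports the number of dimensions of an array:

import numpy as np

arr0 = np.array(42)               # 0-D array (a scalar)
arr1 = np.array([1, 2, 3, 4, 5])  # 1-D array

print(arr0.ndim)  # 0
print(arr1.ndim)  # 1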
2-D Arrays
An array that has 1-D arrays as its elements is called a 2-D array.
These are often used to represent a matrix or a 2nd-order tensor.
#Create a 2-D array containing two arrays with the values 1,2,3 and 4,5,6:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6]])
print(arr)
3-D Arrays
An array that has 2-D arrays (matrices) as its elements is called a 3-D array.
These are often used to represent a 3rd-order tensor.
#Create a 3-D array with two 2-D arrays, both containing two arrays with the values 1,2,3 and 4,5,6:
import numpy as np
arr = np.array([[[1, 2, 3], [4, 5, 6]], [[1, 2, 3], [4, 5, 6]]])
print(arr)
#Get third and fourth elements from the following array and add them.
import numpy as np
arr = np.array([1, 2, 3, 4])
print(arr[2] + arr[3])
OUTPUT:
Record Notes
EXPERIMENT: 2
Install, Configure and Run Hadoop and HDFS
PROGRAM:
AIM: To install, configure and run Hadoop and HDFS.
HADOOP INSTALLATION ON WINDOWS
1. Prerequisites
Hardware Requirement
* RAM — Min. 8GB, if you have SSD in your system then 4GB RAM would also work.
* CPU — Min. Quad core, with at least 1.80GHz
2. JRE 1.8 — Offline installer for JRE
3. Java Development Kit — 1.8
4. A software for unzipping, like 7-Zip or WinRAR
* I will be using 64-bit Windows for the process; please check and download the version
(x86 or x64) supported by your system for all the software.
5. Download the Hadoop zip
* I am using Hadoop 2.9.2; you can use any other STABLE version of Hadoop.
Once we have downloaded all the above software, we can proceed with the next steps of installing
Hadoop.
2. Unzip and Install Hadoop
After Downloading the Hadoop, we need to Unzip the hadoop-2.9.2.tar.gz file.
Now we can organize our Hadoop installation: create a folder and move the final extracted
files into it.
Please note, while creating folders, DO NOT ADD SPACES IN THE FOLDER
NAME (it can cause issues later).
I have placed my Hadoop in the D: drive; you can use C: or any other drive.
3. Setting Up Environment Variables
Another important step in setting up a work environment is to set your system's environment
variables.
To edit environment variables, go to Control Panel > System > click on the "Advanced system
settings" link.
Alternatively, we can right-click on the This PC icon, click on Properties, and click on the
"Advanced system settings" link.
Or, the easiest way is to search for "Environment Variables" in the search bar.
Now as shown, add JAVA_HOME in variable name and path of Java(jdk) in Variable Value.
Click OK and we are half done with setting JAVA_HOME.
Now as shown, add HADOOP_HOME in variable name and path of Hadoop folder in Variable
Value.
Click OK and we are half done with setting HADOOP_HOME.
Note:- If you want the path to be set for all users, you need to select "New" under System Variables.
3.3 Setting Path Variable
The last step in setting environment variables is setting Path in System Variables: edit Path and
add new entries for %JAVA_HOME%\bin and %HADOOP_HOME%\bin.
4.1 Creating Data Folders
Create a new folder named data inside the Hadoop home directory. Once the data folder is
created, we need to create 2 new folders inside it, namely namenode and datanode.
These folders are important because the files on HDFS reside inside the datanode.
4.2 Editing Configuration Files
Now we need to edit the following config files in Hadoop to configure it
(we can find these files in Hadoop -> etc -> hadoop):
* core-site.xml
* hdfs-site.xml
* mapred-site.xml
* yarn-site.xml
* hadoop-env.cmd
4.2.1 Editing core-site.xml
Right click on the file, select edit and paste the following content within <configuration>
</configuration> tags.
Note:- The part below already has the configuration tag; we need to copy only the part inside it.
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
4.2.2 Editing hdfs-site.xml
Right click on the file, select edit and paste the following content within
<configuration></configuration>tags.
Note:- The part below already has the configuration tag; we need to copy only the part inside it.
Also replace the paths below with the paths of the namenode and datanode folders that we created
recently (step 4.1), if yours differ.
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>C:\hadoop\data\namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>C:\hadoop\data\datanode</value>
</property>
</configuration>
4.2.3 Editing mapred-site.xml
Right click on the file, select edit and paste the following content within <configuration>
</configuration> tags.
Note:- The part below already has the configuration tag; we need to copy only the part inside it.
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
4.2.4 Editing yarn-site.xml
Right click on the file, select edit and paste the following content within <configuration>
</configuration> tags.
Note:- The part below already has the configuration tag; we need to copy only the part inside it.
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
4.2.5 Verifying hadoop-env.cmd
Right click on the file, select edit and check if the JAVA_HOME is set correctly or not.
We can replace the JAVA_HOME variable in the file with the actual JAVA_HOME that we
configured in the System Variable:
set JAVA_HOME=%JAVA_HOME%
or set it to the explicit JDK path (avoid paths containing spaces, such as C:\Program Files,
since the Hadoop scripts do not handle them well).
5.1 Formatting the Namenode
Before starting Hadoop we need to format the namenode. For this we need to open a NEW Command
Prompt and run the command below:
hadoop namenode -format
Note:- This command formats all the data in the namenode, so it's advisable to use it only at the
start; do not run it every time you start the Hadoop cluster, to avoid data loss.
5.2 Launching Hadoop
Now we need to start a new Command Prompt (remember to run it as administrator to avoid
permission issues) and execute the command below:
start-all.cmd
Note:- We can verify that all the daemons are up and running using the jps command in a new cmd
window; it should list the NameNode, DataNode, ResourceManager and NodeManager processes.
6. Running Hadoop (Verifying Web UIs)
6.1 Namenode
Open localhost:50070 in a browser tab to verify namenode health.
6.2 Resource Manager
Open localhost:8088 in a browser tab to check the Resource Manager details.
6.3 Datanode
Open localhost:50075 in a browser tab to check the datanode.
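The same web UIs can also be probed programmatically. A sketch in Python, assuming the cluster started above is running on the default Hadoop 2.x ports:

import urllib.request

# Default Hadoop 2.x web UI ports: NameNode 50070, ResourceManager 8088, DataNode 50075
for name, port in [("NameNode", 50070), ("ResourceManager", 8088), ("DataNode", 50075)]:
    try:
        with urllib.request.urlopen("http://localhost:%d" % port, timeout=5) as resp:
            print("%s UI reachable (HTTP %d)" % (name, resp.status))
    except OSError as err:
        print("%s UI not reachable: %s" % (name, err))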
OUTPUT:
Record Notes
EXPERIMENT: 3
Visualize Data Using Basic Plotting Techniques in Python
PROGRAM:
AIM: To visualize data using basic plotting techniques in Python.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Load the crime dataset
crime = pd.read_csv('crime.csv')
crime

# Line plot of Murder vs Assault
plt.plot(crime.Murder, crime.Assault);

# Scatter plot (recent seaborn versions require the keyword arguments x= and y=)
sns.scatterplot(x=crime.Murder, y=crime.Assault, hue=crime.Murder, s=100);

plt.figure(figsize=(12, 6))
plt.title('Murder Vs Assault')
sns.scatterplot(x=crime.Murder, y=crime.Assault, hue=crime.Murder, s=100);

# crime_bar was undefined in the original; aggregating by Year is one plausible construction
crime_bar = crime.groupby('Year').sum()
plt.bar(crime_bar.index, crime_bar.Robbery);

sns.barplot(x='Robbery', y='Year', data=crime);
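Two more basic techniques round out the experiment (a minimal sketch; it assumes the crime frame loaded above):

import matplotlib.pyplot as plt

# Histogram: distribution of murder rates across the dataset
plt.figure()
plt.hist(crime.Murder, bins=10)
plt.xlabel('Murder rate')
plt.ylabel('Frequency')
plt.title('Distribution of Murder Rates')
plt.savefig('murder_hist.png')  # basic plots can also be saved to a file
plt.show()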
OUTPUT:
Record Notes
EXPERIMENT: 4
Implement NoSQL Database Operations: CRUD Operations, Arrays Using MongoDB
PROGRAM:
AIM: To implement CRUD and array operations using the MongoDB NoSQL database.
TITLE: Basic CRUD operations in MongoDB.
CRUD operations refer to the basic Insert, Read, Update and Delete operations.
Inserting a document into a collection (Create)
➢ The command db.collection.insert() will perform an insert operation of a document into a
collection.
➢ Let us insert a document into a student collection. You must be connected to a database
before doing any insert. It is done as follows:
db.student.insert({
    regNo: "3014",
    name: "Test Student",
    course: { courseName: "MCA", duration: "3 Years" },
    address: {
        city: "Bangalore",
        state: "KA",
        country: "India"
    }
})
An entry has been made into the collection called student.
Querying a document from a collection (Read)
➢ To retrieve documents, use db.collection.find(). For example, to read back the student we just
inserted:
db.student.find({ regNo: "3014" })
Updating a document in a collection (Update)
➢ In order to update specific field values of a document in a collection, run
db.collection_name.update() with a query and an update operator. For example, to change the
student's name:
db.student.update({ regNo: "3014" }, { $set: { name: "Updated Student" } })
Deleting a document from a collection (Delete)
➢ To delete a document, use db.collection.remove() with a matching query:
db.student.remove({ regNo: "3014" })
Note that after running the remove() method, the entry has been deleted from the student collection.
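The experiment title also calls for array operations. A minimal sketch of the same flow with an array-valued field, written with the pymongo driver; the subjects field, database name, and connection URL are assumptions for illustration:

from pymongo import MongoClient

# Connect to a local MongoDB instance (default port assumed)
client = MongoClient("mongodb://localhost:27017/")
db = client["test"]

# Create: insert a document containing an array field (hypothetical subjects)
db.student.insert_one({
    "regNo": "3015",
    "name": "Array Student",
    "subjects": ["Big Data", "Python"],
})

# Update: $push appends to an array, $pull removes matching elements
db.student.update_one({"regNo": "3015"}, {"$push": {"subjects": "MongoDB"}})
db.student.update_one({"regNo": "3015"}, {"$pull": {"subjects": "Python"}})

# Read: a query on an array field matches documents whose array contains the value
print(db.student.find_one({"subjects": "MongoDB"}))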
OUTPUT:
EXPERIMENT: 5
Implement Functions: Count – Sort – Limit – Skip – Aggregate Using MongoDB
PROGRAM:
AIM: To implement the count, sort, limit, skip, and aggregate functions using MongoDB.
1. COUNT
How do you get the number of Debit and Credit transactions? One way to do it is by using the
count() function, as below:
> db.transactions.count({cr_dr : "D"});
or by using countDocuments(), which is the preferred method in recent MongoDB versions:
> db.transactions.countDocuments({cr_dr : "D"});
2. SORT
Definition
$sort
Sorts all input documents and returns them to the pipeline in sorted order.
The $sort stage has the following prototype form:
{ $sort: { <field1>: <sort order>, <field2>: <sort order> ... } }
$sort takes a document that specifies the field(s) to sort by and the respective sort order.
<sort order> can have one of the following values:
Value : Description
1 : Sort ascending.
-1 : Sort descending.
{ $meta: "textScore" } : Sort by the computed textScore metadata in descending order. See
Text Score Metadata Sort for an example.
If sorting on multiple fields, sort order is evaluated from left to right. For example, in the
form above, documents are first sorted by <field1>. Then documents with the same <field1>
values are further sorted by <field2>.
Behavior
Limits
You can sort on a maximum of 32 keys.
Sort Consistency
MongoDB does not store documents in a collection in a particular order. When sorting on a
field which contains duplicate values, documents containing those values may be returned
in any order.
If a consistent sort order is desired, include at least one field in your sort that contains
unique values. The easiest way to guarantee this is to include the _id field in your sort query.
Consider the following restaurants collection:
db.restaurants.insertMany( [
  { "_id" : 1, "name" : "Central Park Cafe", "borough" : "Manhattan"},
  { "_id" : 2, "name" : "Rock A Feller Bar and Grill", "borough" : "Queens"},
  { "_id" : 3, "name" : "Empire State Pub", "borough" : "Brooklyn"},
  { "_id" : 4, "name" : "Stan's Pizzaria", "borough" : "Manhattan"},
  { "_id" : 5, "name" : "Jane's Deli", "borough" : "Brooklyn"},
] )
The following command uses the $sort stage to sort on the borough field:
db.restaurants.aggregate(
  [
    { $sort : { borough : 1 } }
  ]
)
In this example, sort order may be inconsistent, since the borough field contains duplicate
values for both Manhattan and Brooklyn. Documents are returned in alphabetical order by
borough, but the order of those documents with duplicate values for borough might not be
the same across multiple executions of the same sort. For example, here are the results from
two different executions of the above command:
{ "_id" : 3, "name" : "Empire State Pub", "borough" : "Brooklyn" }
{ "_id" : 5, "name" : "Jane's Deli", "borough" : "Brooklyn" }
{ "_id" : 1, "name" : "Central Park Cafe", "borough" : "Manhattan" }
{ "_id" : 4, "name" : "Stan's Pizzaria", "borough" : "Manhattan" }
{ "_id" : 2, "name" : "Rock A Feller Bar and Grill", "borough" : "Queens" }
{ "_id" : 5, "name" : "Jane's Deli", "borough" : "Brooklyn" }
{ "_id" : 3, "name" : "Empire State Pub", "borough" : "Brooklyn" }
{ "_id" : 4, "name" : "Stan's Pizzaria", "borough" : "Manhattan" }
{ "_id" : 1, "name" : "Central Park Cafe", "borough" : "Manhattan" }
{ "_id" : 2, "name" : "Rock A Feller Bar and Grill", "borough" : "Queens" }
While the values for borough are still sorted in alphabetical order, the order of the documents
containing duplicate values for borough (i.e. Manhattan and Brooklyn) is not the same.
To achieve a consistent sort, add a field which contains exclusively unique values to the sort.
The following command uses the $sort stage to sort on both the borough field and the _id field:
db.restaurants.aggregate(
  [
    { $sort : { borough : 1, _id: 1 } }
  ]
)
Since the _id field is always guaranteed to contain exclusively unique values, the returned
sort order will always be the same across multiple executions of the same sort.
Examples
Ascending/Descending Sort
For the field or fields to sort by, set the sort order to 1 or -1 to specify an ascending or
descending sort respectively, as in the following example:
db.users.aggregate(
  [
    { $sort : { age : -1, posts: 1 } }
  ]
)
3. LIMIT
$limit
Limits the number of documents passed to the next stage in the pipeline.
The $limit stage has the following prototype form:
{ $limit: <positive 64-bit integer> }
$limit takes a positive integer that specifies the maximum number of documents to pass along.
For example, the following operation returns at most 2 documents from the restaurants
collection used above:
db.restaurants.aggregate(
  [
    { $sort : { borough : 1, _id : 1 } },
    { $limit : 2 }
  ]
)
Note:- When $limit is used together with $sort, place $sort before $limit so that the limit is
applied to the sorted result.
4. SKIP
$skip
Skips over the specified number of documents that pass into the stage and passes the
remaining documents to the next stage in the pipeline.
The $skip stage has the following prototype form:
{ $skip: <positive 64-bit integer> }
$skip takes a positive integer that specifies the maximum number of documents to skip.
For example, the following operation skips the first 2 documents of the sorted restaurants
collection and returns the rest:
db.restaurants.aggregate(
  [
    { $sort : { borough : 1, _id : 1 } },
    { $skip : 2 }
  ]
)
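5. AGGREGATE
All of the stages above can be chained into a single aggregation pipeline. A minimal pymongo sketch reusing the restaurants collection from the sort examples (the database name and connection URL are assumptions):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["test"]

# One pipeline combining sort, skip, and limit
pipeline = [
    {"$sort": {"borough": 1, "_id": 1}},  # deterministic order
    {"$skip": 1},                          # drop the first document
    {"$limit": 3},                         # pass at most three documents on
]
for doc in db.restaurants.aggregate(pipeline):
    print(doc)

# An aggregate-style count: $group with $sum: 1 counts documents per borough
for doc in db.restaurants.aggregate([{"$group": {"_id": "$borough", "count": {"$sum": 1}}}]):
    print(doc)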
OUTPUT:
Record Notes
EXPERIMENT: 6
Implement Word Count / Frequency Programs Using MapReduce
PROGRAM:
AIM: To implement a word count / frequency program using MapReduce.
We use the Hadoop Streaming API to pass data between our Map and Reduce code
via STDIN (standard input) and STDOUT (standard output).
Note: Make sure both files have execute permission:
chmod +x /home/hduser/mapper.py
chmod +x /home/hduser/reducer.py
Mapper program
mapper.py

#!/usr/bin/env python
"""mapper.py"""
import sys

# input comes from STDIN (standard input)
for line in sys.stdin:
    # remove leading and trailing whitespace
    line = line.strip()
    # split the line into words
    words = line.split()
    # increase counters
    for word in words:
        # write the results to STDOUT (standard output);
        # what we output here will be the input for the
        # Reduce step, i.e. the input for reducer.py
        # tab-delimited; the trivial word count is 1
        print('%s\t%s' % (word, 1))
Reducer program
reducer.py

#!/usr/bin/env python
"""reducer.py"""
import sys

current_word = None
current_count = 0

# input comes from STDIN (the sorted output of mapper.py)
for line in sys.stdin:
    word, count = line.strip().split('\t', 1)
    try:
        count = int(count)
    except ValueError:
        # count was not a number, so silently ignore this line
        continue
    # the sort step guarantees that equal words are adjacent
    if current_word == word:
        current_count += count
    else:
        if current_word is not None:
            # write result to STDOUT
            print('%s\t%s' % (current_word, current_count))
        current_word = word
        current_count = count

# do not forget to output the last word
if current_word is not None:
    print('%s\t%s' % (current_word, current_count))
We can test the pipeline locally (outside Hadoop) by piping sample text through the mapper, sort,
and reducer:
hduser@ubuntu:~$ echo "foo foo quux labs foo bar quux" | /home/hduser/mapper.py | sort -k1,1 |
/home/hduser/reducer.py
bar 1
foo 3
labs 1
quux 2
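The same frequencies can be sanity-checked with plain Python, outside of Hadoop (a quick sketch using collections.Counter):

from collections import Counter

text = "foo foo quux labs foo bar quux"
# Counter reproduces what the mapper / sort / reducer pipeline computes
for word, count in sorted(Counter(text.split()).items()):
    print('%s\t%s' % (word, count))
# prints bar 1, foo 3, labs 1, quux 2 (tab-separated, one per line)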
OUTPUT:
Record Notes
EXPERIMENT: 7
Implement a MapReduce Program that Processes a Dataset
PROGRAM:
AIM: To implement a MapReduce program that processes a dataset.
The Python program reads the data from a dataset (stored in the file data.csv - wine quality).
The mapped data is stored in shuffled.pkl by mapper.py.
The contents of shuffled.pkl are reduced using reducer.py.
Mapper Program

import pandas as pd
import pickle

data = pd.read_csv('data.csv')

# Slicing data into four chunks, imitating the splits fed to parallel mappers
slice1 = data.iloc[0:399, :]
slice2 = data.iloc[400:800, :]
slice3 = data.iloc[801:1200, :]
slice4 = data.iloc[1201:, :]

def mapper(data):
    # emit (key, value) pairs: (wine quality class, volatile acidity)
    # (the mapper body was truncated in the source; column names assume
    # the standard wine-quality CSV)
    mapped = []
    for _, row in data.iterrows():
        mapped.append((row['quality'], row['volatile acidity']))
    return mapped

map1 = mapper(slice1)
map2 = mapper(slice2)
map3 = mapper(slice3)
map4 = mapper(slice4)

# Shuffle step: group the mapped values by key (quality class)
shuffled = {
    3.0: [],
    4.0: [],
    5.0: [],
    6.0: [],
    7.0: [],
    8.0: [],
}
for i in [map1, map2, map3, map4]:
    for j in i:
        shuffled[j[0]].append(j[1])

# Persist the shuffled output for reducer.py
with open('shuffled.pkl', 'wb') as file:
    pickle.dump(shuffled, file)

print("Data has been mapped. Now, run reducer.py to reduce the contents in shuffled.pkl file.")
Reducer Program

import pickle

# Load the shuffled (grouped) data produced by mapper.py
file = open('shuffled.pkl', 'rb')
shuffled = pickle.load(file)

def reduce(shuffled_dict):
    # Reduce step: average the values collected under each key
    reduced = {}
    for i in shuffled_dict:
        reduced[i] = sum(shuffled_dict[i]) / len(shuffled_dict[i])
    return reduced

final = reduce(shuffled)
print("Average volatile acidity in different classes of wine: ")
for i in final:
    print(i, ':', final[i])
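As a cross-check, pandas can compute the same per-class averages directly (assuming the same wine-quality column names as in the mapper above):

import pandas as pd

data = pd.read_csv('data.csv')
# groupby + mean reproduces the map / shuffle / reduce average in one step
print(data.groupby('quality')['volatile acidity'].mean())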
OUTPUT:
EXPERIMENT: 8
Implement Clustering Techniques Using Spark
PROGRAM:
AIM: To implement a clustering technique (k-means) using Spark.
# Loads data.
dataset = spark.read.format("libsvm").load("data/mllib/sample_kmeans_data.txt")
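The line above only loads the sample data; a complete run needs a SparkSession and a model fit. A minimal sketch following the standard PySpark MLlib k-means example (sample_kmeans_data.txt ships with the Spark distribution under data/mllib):

from pyspark.sql import SparkSession
from pyspark.ml.clustering import KMeans
from pyspark.ml.evaluation import ClusteringEvaluator

# Create (or reuse) the SparkSession that the load statement above assumes
spark = SparkSession.builder.appName("KMeansExample").getOrCreate()

# Load the data in libsvm format (label + sparse feature vector per line)
dataset = spark.read.format("libsvm").load("data/mllib/sample_kmeans_data.txt")

# Train a k-means model with k = 2 clusters and a fixed seed for reproducibility
kmeans = KMeans().setK(2).setSeed(1)
model = kmeans.fit(dataset)

# Assign each point to a cluster (adds a 'prediction' column)
predictions = model.transform(dataset)

# Evaluate clustering quality with the silhouette score
evaluator = ClusteringEvaluator()
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " + str(silhouette))

# Show the learned cluster centers
print("Cluster Centers: ")
for center in model.clusterCenters():
    print(center)

spark.stop()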
OUTPUT:
Record Notes
EXPERIMENT: 9
Implement an Application that Stores Big Data in MongoDB / Pig Using Hadoop / R
PROGRAM:
AIM: To implement an application using R (Shiny).
If you type fluidPage() in the R console, you will see that the method returns a tag <div
class="container-fluid"></div>.
• selectInput() – This method is used for creating a dropdown HTML element that has various
choices to select from.
• numericInput() – This method creates an input area for entering a number.
• radioButtons() – This provides radio buttons for the user to select an input.
Layout methods
The various layout features available in Bootstrap are implemented by R Shiny. The components are:
Panels
These are methods that group elements together into a single panel. These include:
• absolutePanel()
• inputPanel()
• conditionalPanel()
• headerPanel()
• fixedPanel()
Layout functions
These organize the panels for a particular layout. These include:
• fluidRow()
• verticalLayout()
• flowLayout()
• splitLayout()
• sidebarLayout()
Output methods
These methods are used for displaying R output components such as images, tables and plots.
They include:
• plotOutput() – used for displaying R plots
• tableOutput() – used for displaying tables
• textOutput() – used for displaying text output
Server function
After you have created the appearance of the application and the ways to take input values from the
user, it is time to set up the server. The server functions help you to write the server-side code for the
Shiny app. You can create functions that map the user inputs to the corresponding outputs. This
function is called by the web browser when the application is loaded.
It takes input and output parameters, and its return value is ignored. An optional session
parameter is also accepted by this method.
library(shiny)
runExample("01_hello")
OUTPUT: