Business Disaster Recovery Guide

The document discusses why organizations should have disaster recovery plans and the key components of such plans. It notes that threats like natural disasters, cyberattacks, and human errors can cause downtime and data loss, costing businesses revenue and reputation damage. An effective disaster recovery plan outlines how to backup data, protect equipment, assign roles, communicate with vendors, and get systems back online quickly to reduce downtime costs. The document provides a detailed example of the elements an organization should include in their disaster recovery plan, such as backup checks, asset inventories, and vendor contact information.


MS-07/2019 (II Sem.)
1. Why should every organization have a disaster recovery plan to protect itself? What are
the main components of a disaster recovery plan?

Ans. Threats are Real: Threats to your business's data come in all shapes and sizes, from
natural disasters to human error to cyber criminals. Today, ransomware is a top cyber threat
that is wreaking havoc on IT systems. Preventative measures can only go so far when a disaster
actually strikes, so companies should have plans in place to get their systems back online
quickly.

The Cost of Downtime: Downtime can happen to any business, at any time, for any reason. For
many companies, downtime means lost revenue. But the cost of downtime spans beyond
monetary. Downtime can cost your business damage to its reputation and, as a result, loss of
customers. In some cases, downtime can be life or death, like when a hospital experienced an
extended outage of its Electronic Health Record. The longer you are offline, the more damage
your company will experience. Having a disaster recovery plan in place helps businesses to
outline parameters for getting back online while reducing the overall impact of downtime. This
can include having a failover plan to get up and running through a secondary site.

Data Security: The backbone of most companies is the data that they generate, collect and
store. Thus, protecting this data should be a top priority. Having a disaster recovery plan in
place helps businesses to avoid catastrophic data loss. By outlining strict regimens for the
backup and recovery of data, a disaster recovery plan ensures that your data isn't only safe
from an attack or outage; it is also handled securely.

Human Error: Disasters are not always natural or cyber related. Oftentimes downtime can be
caused by something as simple as human error. And while we can't eradicate human
error entirely, we can take steps to reduce it and ensure that recovery time is fast. By deploying
a disaster recovery plan, companies can make sure that employees are all on the same page
when it comes to handling a disaster. This can also help in mitigating the risk of a human-error
disaster occurring in the first place.
The Bottom Line: Though we try our best to prepare and prevent the worst from happening,
there are things that are out of our control. The best course of action is to have a plan in place
to reduce the impact of a disaster on your business. Companies that do not have a disaster
recovery plan in place often never recover from the costs of downtime. Don't put your
business in that position. Find out more about how you can develop a DR plan.

While you don’t necessarily need to start boarding up your windows or raiding Publix for water
and batteries, you do need to start thinking proactively about how you’re preparing your
business for a potential major hurricane. In the past, hurricanes have left businesses down for a
few days or weeks, cutting off phone service and electricity and wiping out servers and
workstations. Ensuring that your assets, data and hardware are protected is only part of a
disaster recovery plan – the rest is determining a process for how quickly you can be back up
and running. Rather than scrambling to put the pieces back together after a major storm, it’s
time to put a plan in place. Here are the seven key elements of a business disaster recovery plan.

Communication plan and role assignments: When it comes to a disaster, communication is
essential. A plan puts all employees on the same page and clearly outlines all communication.
Documents should have all updated employee contact information, and employees should
understand exactly what their role is in the days following the disaster. Tasks like setting up
workstations, assessing damage, redirecting phones and other work will need assigned owners
if you don’t have some sort of technical resource to help you sort through everything.

Plan for your equipment: It’s important you have a plan for how to protect your equipment
when a major storm is approaching. You’ll need to get all equipment off the floor, moved into a
room with no windows and wrapped securely in plastic to ensure that no water can get to the
equipment. It’s obviously best to completely seal equipment to keep it safe from flooding, but
sometimes in cases of extreme flooding this isn’t an option.

Data continuity system: As you create your disaster recovery plan, you’ll want to explore
exactly what your business requires in order to run. You need to understand exactly what your
organization needs operationally, financially, with regard to supplies, and with communications.
Whether you’re a large consumer business that needs to fulfill shipments and communicate
with its customers about those shipments, or a small business-to-business organization with
multiple employees, you should document what your needs are so that you can make the
plans for backup and business continuity and have a full understanding of the needs and
logistics surrounding those plans.

Backup check: Make sure that your backups are running, and include an additional full
local backup of all servers and data in your disaster preparation plan. Run backups as far in
advance as possible and make sure that they’re stored in a location that will not be
impacted by the disaster. It is also prudent to place that backup on an external hard drive that
you can take with you offsite, just as an additional measure should anything happen.
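The pre-storm backup check described above can be partly automated. The sketch below is a minimal illustration only; the path, age threshold, and function name are assumptions for the example, not part of the guide, and a real check would also verify backup integrity, not just freshness.

```python
import os
import time

# Hypothetical values for illustration -- adjust to your environment.
BACKUP_PATH = "/mnt/external_drive/full_backup.tar.gz"
MAX_AGE_HOURS = 24  # flag any backup older than one day

def backup_is_recent(path: str, max_age_hours: float) -> bool:
    """Return True if the backup file exists and was modified recently."""
    if not os.path.exists(path):
        return False
    age_seconds = time.time() - os.path.getmtime(path)
    return age_seconds <= max_age_hours * 3600

if backup_is_recent(BACKUP_PATH, MAX_AGE_HOURS):
    print("Backup is up to date.")
else:
    print("WARNING: backup missing or stale -- run a full backup now.")
```

A check like this can be scheduled to run daily so a stale backup is noticed well before a storm arrives.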

Detailed asset inventory: In your disaster preparation plan, you should have a detailed
inventory of workstations, their components, servers, printers, scanners, phones, tablets and
other technologies that you and your employees use on a daily basis. This will give you a quick
reference for insurance claims after a major disaster by providing your adjuster with a simple
list (with photos) of any inventory you have.
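A detailed inventory like the one described can be kept as a simple structured file. The sketch below is only an illustration; the column names and sample records are invented for the example, and you should tailor the fields to what your insurer actually asks for.

```python
import csv
import io

# Illustrative columns only -- adapt to your adjuster's requirements.
FIELDS = ["asset_tag", "type", "model", "serial_number", "location", "photo_file"]

inventory = [
    {"asset_tag": "WS-001", "type": "workstation", "model": "ExampleCo X1",
     "serial_number": "SN12345", "location": "Front office", "photo_file": "ws-001.jpg"},
    {"asset_tag": "SRV-001", "type": "server", "model": "ExampleCo R2",
     "serial_number": "SN67890", "location": "Server room", "photo_file": "srv-001.jpg"},
]

# Write the inventory as CSV so it can be handed directly to an adjuster.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(inventory)
print(buf.getvalue())
```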

Pictures of the office and equipment (before and after prep): In addition to the photos that
you should have of individual inventory items, you’ll want to take photos of the office and your
equipment to prove that those items were actively in use by your employees and that you
exercised the necessary diligence to move your equipment out of harm’s way to prepare for
the storm.

Vendor communication and service restoration plan: After a storm passes, you’ll want to get
back up and running as quickly as possible. Make sure that you include vendor communication
as part of your plan. Check with your local power provider to assess the likelihood of power
surges or outages while damage is repaired in the area. You’ll also want to include checking
with your phone and internet providers on restoration and access.

These considerations are a great foundation for a complete disaster recovery plan, but make
sure that you are paying attention to the details within each section of your plan. The logistics
of testing backups and performing as many backups as possible before the storm are also
important, in addition to the granular details of how you’ll communicate with vendors, account
for your assets and ensure that you’re back up and running as quickly as possible. If you’re a
little overwhelmed in considering these details, you can engage an external resource to help
you put a disaster plan in place so that you’re prepared for any storms that might come your
way during hurricane season.

2. “Information handling in an organization should be a systematic process”. Explain the
concept of information systems in management in the light of this statement.

Ans. The new ISO 9001, scheduled for publication in late 2015, introduces the term
“knowledge.” As knowledge was not addressed by the previous version of ISO 9001, the depth
of this topic and the approach to it are new. The international standard ISO/DIS 9001:2015—
“Quality management systems—Requirements” defines requirements for the handling of
organizational knowledge in the following four phases, which are analogous to the plan-do-
check-act (PDCA) cycle:

1. Determine the knowledge necessary for the operation of processes and for achieving
conformity of products and services.

2. Maintain knowledge and make it available to the extent necessary.

3. Consider the current organizational knowledge and compare it to changing needs and trends.

4. Acquire the necessary additional knowledge.

Resources are available to explain how organizations can implement these requirements and
what aspects they have to consider in this context.

By introducing the term “knowledge,” the new ISO 9001 aims to raise awareness within
organizations of managing and linking know-how to position them for the future. As
knowledge is a very broad subjective area with individual definitions, each organization must
define the term for itself. Depending on the size and type of organizations, their approaches to
the topic of knowledge can be completely different. A large-scale car manufacturer, for
example, will define other focus areas than a legal firm or tax consultancy. The new
requirements do not aim at establishing bureaucratic information or documentation
management, but at ensuring a systematic process for handling organizational knowledge in
conformity with the quality management framework conditions.

The four phases that define the requirements for handling organizational knowledge provide
guidance, and establishing knowledge and competence goals at the start of the process makes
good sense. To do so, organizations should, for instance, determine knowledge of customer
expectations and requirements, and of particular production and service-provision processes.
Subsequently, they can plan how they can achieve the identified goals and objectives by means
of training, learning on the job, or e-learning.

In phase two, the organizations should determine specific methods to exchange knowledge in-
house and to maintain this knowledge. Possibilities include employees passing on their
experience from completed projects or failures to their colleagues in the style of “lessons
learned.” Employees leaving the company or refusing to share their experience and know-how
represent a major risk of loss of knowledge. Organizations wishing to avoid these risks can
collect and maintain the available know-how, for example through the use of wikis or other
dedicated knowledge-exchange resources.

In phase three, the organization must evaluate new knowledge, such as that communicated in
training, interview employees on their status of knowledge where appropriate, and identify
opportunities for improvement. Another major challenge involves monitoring changes in the
market or in technology and analyzing the extent to which they influence the knowledge that
the organization requires.

Once the organization identifies opportunities for improvement in certain areas, targeted
measures should be taken in phase four. Depending on the individual situation, companies may
further enhance their relations with clients, suppliers, and service providers, or improve their
mechanisms for keeping their knowledge secure. It may prove a good idea, for example, to
renew the validity of functions critical for knowledge or to improve the protection of existing
know-how by filing patents. In addition to continued in-house training, organizations can also
use external sources including newsletters, specialist magazines, memberships in associations,
or important partnerships to expand their knowledge. By introducing the subject of
"knowledge," the new ISO 9001 raises within organizations the awareness of sustainable and
future-oriented success factors.

Information systems (IS) are formal, socio-technical, organizational systems designed to collect,
process, store, and distribute information. In a socio-technical perspective, information systems
are composed of four components: task, people, structure (or roles), and technology.

A computer information system is a system composed of people and computers that processes
or interprets information. The term is also sometimes used in more restricted senses to refer to
only the software used to run a computerized database or to refer to only a computer system.

Information Systems is an academic study of systems with a specific reference to information
and the complementary networks of hardware and software that people and organizations use
to collect, filter, process, create and also distribute data. An emphasis is placed on an
information system having a definitive boundary, users, processors, storage, inputs, outputs
and the aforementioned communication networks.

Any specific information system aims to support operations, management and decision-making.
An information system is the information and communication technology (ICT) that an
organization uses, and also the way in which people interact with this technology in support of
business processes.

Some authors make a clear distinction between information systems, computer systems,
and business processes. Information systems typically include an ICT component but are not
purely concerned with ICT, focusing instead on the end use of information technology.
Information systems are also different from business processes. Information systems help to
control the performance of business processes.

Alter argues for advantages of viewing an information system as a special type of work system.
A work system is a system in which humans or machines perform processes and activities using
resources to produce specific products or services for customers. An information system is a
work system whose activities are devoted to capturing, transmitting, storing, retrieving,
manipulating and displaying information.
As such, information systems inter-relate with data systems on the one hand and activity
systems on the other. An information system is a form of communication system in which data
represent and are processed as a form of social memory. An information system can also be
considered a semi-formal language which supports human decision making and action.

3. “With so many readymade and customized software packages available, the need for a
manager is to learn to use them effectively rather than to learn to program them”. Do you agree?

Ans. When someone has an idea for a new function to be performed by a computer, how does
that idea become reality? If a company wants to implement a new business process and needs
new hardware or software to support it, how do they go about making it happen? In this
chapter, we will discuss the different methods of taking those ideas and bringing them to
reality, a process known as information systems development.

Programming:

As we learned in chapter 2, software is created via programming. Programming is the process
of creating a set of logical instructions for a digital device to follow using a programming
language. The process of programming is sometimes called “coding” because the syntax of a
programming language is not in a form that everyone can understand – it is in “code.”

The process of developing good software is usually not as simple as sitting down and writing
some code. True, sometimes a programmer can quickly write a short program to solve a need.
But most of the time, the creation of software is a resource-intensive process that involves
several different groups of people in an organization. In the following sections, we are going to
review several different methodologies for software development.

Systems-Development Life Cycle

The first development methodology we are going to review is the systems-development life
cycle (SDLC). This methodology was first developed in the 1960s to manage the large software
projects associated with corporate systems running on mainframes. It is a very structured and
risk-averse methodology designed to manage large projects that included multiple
programmers and systems that would have a large impact on the organization.

SDLC waterfall

Various definitions of the SDLC methodology exist, but most contain the following phases.

1. Preliminary Analysis. In this phase, a review is done of the request. Is creating a solution
possible? What alternatives exist? What is currently being done about it? Is this project
a good fit for our organization? A key part of this step is a feasibility analysis, which
includes an analysis of the technical feasibility (is it possible to create this?), the
economic feasibility (can we afford to do this?), and the legal feasibility (are we allowed
to do this?). This step is important in determining if the project should even get started.

2. System Analysis. In this phase, one or more system analysts work with different
stakeholder groups to determine the specific requirements for the new system. No
programming is done in this step. Instead, procedures are documented, key players are
interviewed, and data requirements are developed in order to get an overall picture of
exactly what the system is supposed to do. The result of this phase is a system-
requirements document.
3. System Design. In this phase, a designer takes the system-requirements
document created in the previous phase and develops the specific technical details
required for the system. It is in this phase that the business requirements are translated
into specific technical requirements. The design for the user interface, database, data
inputs and outputs, and reporting are developed here. The result of this phase is a
system-design document. This document will have everything a programmer will need
to actually create the system.

4. Programming. The code finally gets written in the programming phase. Using the
system-design document as a guide, a programmer (or team of programmers) develops
the program. The result of this phase is an initial working program that meets the
requirements laid out in the system-analysis phase and the design developed in the
system-design phase.

5. Testing. In the testing phase, the software program developed in the previous phase is
put through a series of structured tests. The first is a unit test, which tests individual
parts of the code for errors or bugs. Next is a system test, where the different
components of the system are tested to ensure that they work together properly.
Finally, the user-acceptance test allows those that will be using the software to test the
system to ensure that it meets their standards. Any bugs, errors, or problems found
during testing are addressed and then tested again.

6. Implementation. Once the new system is developed and tested, it has to be
implemented in the organization. This phase includes training the users, providing
documentation, and conversion from any previous system to the new system.
Implementation can take many forms, depending on the type of system, the number
and type of users, and how urgent it is that the system become operational. These
different forms of implementation are covered later in the chapter.

7. Maintenance. This final phase takes place once the implementation phase is complete.
In this phase, the system has a structured support process in place: reported bugs are
fixed and requests for new features are evaluated and implemented; system updates
and backups are performed on a regular basis.

The SDLC methodology is sometimes referred to as the waterfall methodology to represent
how each step is a separate part of the process; only when one step is completed can another
step begin. After each step, an organization must decide whether to move to the next step or
not. This methodology has been criticized for being quite rigid. For example, changes to the
requirements are not allowed once the process has begun. No software is available until after
the programming phase.

Again, SDLC was developed for large, structured projects. Projects using SDLC can sometimes
take months or years to complete. Because of its inflexibility and the availability of new
programming techniques and tools, many other software-development methodologies have
been developed. Many of these retain some of the underlying concepts of SDLC but are not as
rigid.

Rapid Application Development

The RAD methodology

Rapid application development (RAD) is a software-development (or systems-development)
methodology that focuses on quickly building a working model of the software, getting
feedback from users, and then using that feedback to update the working model. After several
iterations of development, a final version is developed and implemented.

The RAD methodology consists of four phases:


1. Requirements Planning. This phase is similar to the preliminary-analysis, system-
analysis, and design phases of the SDLC. In this phase, the overall requirements for the
system are defined, a team is identified, and feasibility is determined.

2. User Design. In this phase, representatives of the users work with the system analysts,
designers, and programmers to interactively create the design of the system. One
technique for working with all of these various stakeholders is the so-called JAD session.
JAD is an acronym for joint application development. A JAD session gets all of the
stakeholders together to have a structured discussion about the design of the system.
Application developers also sit in on this meeting and observe, trying to understand the
essence of the requirements.

3. Construction. In the construction phase, the application developers, working with the
users, build the next version of the system. This is an interactive process, and changes
can be made as developers are working on the program. This step is executed in parallel
with the User Design step in an iterative fashion, until an acceptable version of the
product is developed.

4. Cutover. In this step, which is similar to the implementation step of the SDLC, the
system goes live. All steps required to move from the previous state to the use of the
new system are completed here.

As you can see, the RAD methodology is much more compressed than SDLC. Many of the SDLC
steps are combined and the focus is on user participation and iteration. This methodology is
much better suited for smaller projects than SDLC and has the added advantage of giving users
the ability to provide feedback throughout the process. SDLC requires more documentation and
attention to detail and is well suited to large, resource-intensive projects. RAD makes more
sense for smaller projects that are less resource-intensive and need to be developed quickly.

Agile Methodologies

Agile methodologies are a group of methodologies that utilize incremental changes with a focus
on quality and attention to detail. Each increment is released in a specified period of time
(called a time box), creating a regular release schedule with very specific objectives. While
considered a separate methodology from RAD, they share some of the same principles:
iterative development, user interaction, ability to change. The agile methodologies are based
on the “Agile Manifesto,” first released in 2001.

The characteristics of agile methods include:

 small cross-functional teams that include development-team members and users;

 daily status meetings to discuss the current state of the project;

 short time-frame increments (from days to one or two weeks) for each change to be
completed; and

 at the end of each iteration, a working project is completed to demonstrate to the
stakeholders.

The goal of the agile methodologies is to provide the flexibility of an iterative approach while
ensuring a quality product.

Lean Methodology

The lean methodology


One last methodology we will discuss is a relatively new concept taken from the business
bestseller The Lean Startup, by Eric Ries. In this methodology, the focus is on taking an initial
idea and developing a minimum viable product (MVP). The MVP is a working software
application with just enough functionality to demonstrate the idea behind the project. Once the
MVP is developed, it is given to potential users for review. Feedback on the MVP is generated in
two forms: (1) direct observation and discussion with the users, and (2) usage statistics
gathered from the software itself. Using these two forms of feedback, the team determines
whether they should continue in the same direction or rethink the core idea behind the project,
change the functions, and create a new MVP. This change in strategy is called a pivot. Several
iterations of the MVP are developed, with new functions added each time based on the
feedback, until a final product is completed.

The biggest difference between the lean methodology and the other methodologies is that the
full set of requirements for the system are not known when the project is launched. As each
iteration of the project is released, the statistics and feedback gathered are used to determine
the requirements. The lean methodology works best in an entrepreneurial environment where
a company is interested in determining if their idea for a software application is worth
developing.

Sidebar: The Quality Triangle



When developing software, or any sort of product or service, there exists a tension between
the developers and the different stakeholder groups, such as management, users, and
investors. This tension relates to how quickly the software can be developed (time), how much
money will be spent (cost), and how well it will be built (quality). The quality triangle is a simple
concept. It states that for any product or service being developed, you can only address two of
the following: time, cost, and quality.

So what does it mean that you can only address two of the three? It means that you cannot
complete a low-cost, high-quality project in a small amount of time. However, if you are willing
or able to spend a lot of money, then a project can be completed quickly with high-quality
results (through hiring more good programmers). If a project’s completion date is not a priority,
then it can be completed at a lower cost with higher-quality results. Of course, these are just
generalizations, and different projects may not fit this model perfectly. But overall, this model
helps us understand the tradeoffs that we must make when we are developing new products
and services.

4. What are the factors that differentiate traditional logic and fuzzy logic? Explain the
concept of membership characteristic function and also describe the business applications of
fuzzy logic.

Ans. Fuzzy Logic is a class of logic systems in which truth values can range anywhere between
0 and 1, rather than being restricted to the two values of traditional logic, and a given degree
does not specifically correspond to any particular meaning.

There are numerous selections of T-Norms and S-Norms (AND functions and OR Functions) that
may fall into a Fuzzy Logic.
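As a brief illustrative sketch (the specific membership values and the "warm"/"humid" example are invented for this answer), common T-norms and S-norms, together with a triangular membership characteristic function, can be written as:

```python
# Membership degrees are values in [0, 1].
def t_norm_min(a, b):      # minimum (Gödel) T-norm: fuzzy AND
    return min(a, b)

def t_norm_product(a, b):  # product T-norm: another fuzzy AND
    return a * b

def s_norm_max(a, b):      # maximum S-norm: fuzzy OR
    return max(a, b)

def triangular_membership(x, a, b, c):
    """Triangular membership characteristic function: 0 outside [a, c],
    rising linearly to 1 at the peak b, then falling back to 0."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Example: degree to which a room is "warm" = 0.7, "humid" = 0.4
warm, humid = 0.7, 0.4
print(t_norm_min(warm, humid))        # fuzzy AND: 0.4
print(s_norm_max(warm, humid))        # fuzzy OR: 0.7
print(triangular_membership(25, 15, 25, 35))  # temperature 25 at the peak: 1.0
```

Note the contrast with traditional logic: instead of a crisp set membership of 0 or 1, the characteristic function assigns each element a degree of membership anywhere in [0, 1].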

Specifically, Probability does not have anything to do with knowledge, but helps to reason
about prediction.

Fuzzy Logic can be used with Probability Theory, with a kind of fuzzy knowledge called
confidence, but it doesn’t apply to knowledge as a whole. To understand the relationship
between knowledge and fuzzy logic you must think of something like a Paraconsistent Fuzzy
Logic (see page 77) that handles unknowns alongside contradictions.

Degrees of truth are used in the analysis of past outcomes based on observational
evidence, and are usually based on statistical beliefs (or gut feel) that have been formed
through inductive reasoning.

On the other hand, prediction of future occurrence and probabilities go hand in hand, and are
often more complicated than first considered.

A statistical prediction of a future occurrence would give information like:

 With 95% confidence

 It will rain 2.1 inches

 With a 0.5 inch deviation

Much of that is more complicated than straight-up probability analysis.
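Assuming the rainfall forecast above follows a normal distribution (an assumption made for this illustration), the stated 95% confidence band can be computed directly from the point forecast and the deviation:

```python
# 95% interval for the rainfall forecast described above (assumed normal).
point_forecast = 2.1   # inches
std_dev = 0.5          # inch deviation
z_95 = 1.96            # two-sided 95% critical value of the standard normal

lower = point_forecast - z_95 * std_dev
upper = point_forecast + z_95 * std_dev
print(f"95% interval: {lower:.2f} to {upper:.2f} inches")  # 1.12 to 3.08 inches
```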

The analysis of probabilities would be something more set-theoretic, like: over 6 rolls of a fair
die, there is a (3/6)·(3/6)·(3/6)·(3/6)·(3/6)·(3/6) = (3/6)^6 = 1/64 probability that every roll of
the dice comes up an even number.
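That dice calculation can be checked with exact rational arithmetic:

```python
from fractions import Fraction

# Probability that one fair die comes up even: 3 favorable faces out of 6.
p_even = Fraction(3, 6)

# Six independent rolls all even: multiply the per-roll probabilities.
p_all_even = p_even ** 6
print(p_all_even)  # 1/64
```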

Yes, an “algebra of probabilities” acts like a fuzzy logic, wherein each occurrence can be ANDed
(picking multiplication as the T-Norm) with the previous if the occurrences are independent of
each other.

I’m a statistician and I have some distrust about the utility of fuzzy logic (but I don’t have a
strong opinion about it; I am open to changing my mind): from what I’ve seen, I think many
fuzzy logic problems can be reformulated in terms of statistical models, and it seems that many
authors who have written about fuzzy logic didn’t have a good knowledge of statistics.

But I think there are some good ideas in fuzzy logic that are useful, and there are good reasons
to try to increase the cooperation between the two communities, since they are working on
very similar problems.
I think the difference between the traditional statistical confusion matrix and fuzzy confusion
matrix can be useful to understand my point.

Traditional Confusion Matrix:

Fuzzy Confusion Matrix:

In the classical confusion matrix, the real and predicted functions can assume, in the binary
case, just two values (0 or 1), while in the fuzzy case the functions can take values between
0 and 1.

In statistics, the reality is unique (0 or 1), but the predicted probabilities (the output of all
statistical classifiers) lie between 0 and 1.

In my thesis (before even knowing about fuzzy logic) I tried a hybrid approach in the
validation phase (not in the testing one!), using a real function that can assume just 0 and 1,
but using the predicted probabilities in the predicted function; the result is a Probabilistic
Confusion Matrix, which can be seen as a special case of the Fuzzy Confusion Matrix.
Probabilistic Confusion Matrix:

The models chosen with the Probabilistic Confusion Matrix outperformed the ones chosen
with the Traditional Confusion Matrix on the test set in all the simulations. (I’ve tried other
more complex variations of this idea, but that’s the basis.)
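As a rough sketch of the distinction (not the author's exact construction; the function names and sample data are invented for illustration), a classical confusion matrix thresholds predictions into hard 0/1 counts, while a probabilistic, fuzzy-style matrix accumulates the predicted probabilities themselves:

```python
def classical_confusion(y_true, y_pred, threshold=0.5):
    """Hard counts: threshold each predicted probability to 0 or 1 first."""
    tp = fp = fn = tn = 0
    for t, p in zip(y_true, y_pred):
        hard = 1 if p >= threshold else 0
        if t == 1 and hard == 1:
            tp += 1
        elif t == 0 and hard == 1:
            fp += 1
        elif t == 1 and hard == 0:
            fn += 1
        else:
            tn += 1
    return tp, fp, fn, tn

def probabilistic_confusion(y_true, y_pred):
    """Soft counts: accumulate the predicted probabilities themselves."""
    tp = sum(p for t, p in zip(y_true, y_pred) if t == 1)
    fn = sum(1 - p for t, p in zip(y_true, y_pred) if t == 1)
    fp = sum(p for t, p in zip(y_true, y_pred) if t == 0)
    tn = sum(1 - p for t, p in zip(y_true, y_pred) if t == 0)
    return tp, fp, fn, tn

y_true = [1, 1, 0, 0]          # reality is crisp: 0 or 1
y_pred = [0.9, 0.6, 0.2, 0.4]  # classifier outputs lie between 0 and 1
print(classical_confusion(y_true, y_pred))      # (2, 0, 0, 2)
print(probabilistic_confusion(y_true, y_pred))  # soft cell values
```

In the soft version, a confident correct prediction (0.9 for a true 1) contributes more to the true-positive cell than a barely-correct one (0.6), which is the extra information the hard matrix throws away.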

5. Write short notes on the following:

(i) TCP/IP

TCP/IP, or the Transmission Control Protocol/Internet Protocol, is a suite of
communication protocols used to interconnect network devices on the internet. TCP/IP can
also be used as a communications protocol in a private network (an intranet or an extranet).

The entire internet protocol suite -- a set of rules and procedures -- is commonly referred to as
TCP/IP, though other protocols are also included in the suite.

TCP/IP specifies how data is exchanged over the internet by providing end-to-end
communications that identify how it should be broken into packets, addressed, transmitted,
routed and received at the destination. TCP/IP requires little central management, and it is
designed to make networks reliable, with the ability to recover automatically from the failure of
any device on the network.

The two main protocols in the internet protocol suite serve specific functions. TCP defines how
applications can create channels of communication across a network. It also manages how a
message is assembled into smaller packets before they are transmitted over the internet
and reassembled in the right order at the destination address.
IP defines how to address and route each packet to make sure it reaches the right destination.
Each gateway computer on the network checks this IP address to determine where to forward
the message.
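As an illustration of this division of labor, the hedged sketch below uses Python's standard socket API: the IP address and port (127.0.0.1:50007, chosen arbitrarily here) identify the endpoint, while TCP provides the reliable, ordered byte stream between the two applications.

```python
import socket
import threading

def run_server(ready):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP over IPv4
    srv.bind(("127.0.0.1", 50007))   # IP address + port identify the endpoint
    srv.listen(1)
    ready.set()
    conn, _addr = srv.accept()
    data = conn.recv(1024)           # TCP delivers the bytes reliably, in order
    conn.sendall(b"echo: " + data)
    conn.close()
    srv.close()

ready = threading.Event()
threading.Thread(target=run_server, args=(ready,), daemon=True).start()
ready.wait()

# Client side: open a TCP channel to the server's IP address and port.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 50007))
cli.sendall(b"hello")
reply = cli.recv(1024)
cli.close()
print(reply.decode())
```

The application never sees the individual IP packets; segmentation, routing and reassembly happen below the socket interface.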

(ii) Warehouse management

A warehouse management system (WMS) is software and processes that allow organizations to
control and administer warehouse operations from the time goods or materials enter a
warehouse until they move out. Operations in a warehouse include inventory management,
picking processes and auditing.

For example, a WMS can provide visibility into an organization's inventory at any time and
location, whether in a facility or in transit. It can also manage supply chain operations from the
manufacturer or wholesaler to the warehouse, then to a retailer or distribution center. A WMS
is often used alongside or integrated with a transportation management system (TMS) or an
inventory management system.

Types of warehouse management systems

Warehouse management systems come in a variety of types and implementation methods, and
the type typically depends on the size and nature of the organization. They can be stand-alone
systems or modules in a larger enterprise resource planning (ERP) system or supply chain
execution suite.

They can also vary widely in complexity. Some small organizations may use a simple series of
hard copy documents or spreadsheet files, but most larger organizations -- from small and
medium-sized businesses (SMBs) to enterprise companies -- use complex WMS software. Some
WMS setups are designed specifically for the size of the organization, and many vendors have
versions of WMS products that can scale to different organizational sizes. Some organizations
build their own WMS from scratch, but it's more common to implement a WMS from an
established vendor.

A WMS can also be designed or configured for the organization's specific requirements; for
example, an e-commerce vendor might use a WMS that has different functions than a brick-
and-mortar retailer. Additionally, a WMS may also be designed or configured specifically for the
types of goods the organization sells; for example, a sporting goods retailer would have
different requirements than a grocery chain.

(iii) Group Decision Support Systems (GDSS).

A group decision support system (GDSS) is an interactive computer-based system that facilitates
a number of decision-makers (working together as a group) in finding solutions to problems that
are unstructured in nature. It is designed to take input from multiple users interacting
simultaneously with the system to arrive at a decision as a group.

The tools and techniques provided by a group decision support system improve the quality and
effectiveness of group meetings. Groupware and web-based tools for electronic meetings
and videoconferencing also support some of the group decision-making process, but their main
function is to make communication possible between the decision makers.

In a group decision support system (GDSS) electronic meeting, each participant is provided with
a computer. The computers are connected to each other, to the facilitator’s computer and to
the file server. A projection screen is available at the front of the room. The facilitator and the
participants can both project digital text and images onto this screen.

A group decision support system (GDSS) meeting comprises different phases, such as idea
generation, discussion, voting, vote counting and so on. The facilitator manages and controls
the execution of these phases. The use of various software tools in the meeting is also
controlled by the facilitator.

Components of Group Decision Support System (GDSS)

A group decision support system (GDSS) is composed of three main components, namely
hardware, software tools, and people.

 Hardware: It includes electronic hardware like computers, equipment used for
networking, electronic display boards and audio-visual equipment. It also includes
the conference facility, including the physical setup – the room, the tables and the
chairs – laid out in such a manner that they can support group discussion and
teamwork.

 Software Tools: It includes various tools and techniques, such as electronic
questionnaires, electronic brainstorming tools, idea organizers, tools for setting
priority, policy formation tools, etc. The use of these software tools in a group
meeting helps the group decision makers to plan, organize ideas, gather information,
establish priorities, take decisions and document the meeting proceedings. As a
result, meetings become more productive.

 People: It comprises the members participating in the meeting, a trained
facilitator who helps with the proceedings of the meeting, and an expert staff to
support the hardware and software. The GDSS components together provide a
favorable environment for carrying out group meetings.

Features of Group Decision Support System (GDSS)

 Ease of Use: It consists of an interactive interface that makes working with GDSS
simple and easy.

 Better Decision Making: It provides the conference room setting and various
software tools that facilitate users at different locations to make decisions as a group
resulting in better decisions.

 Emphasis on Semi-structured and Unstructured Decisions: It provides important
information that assists middle and higher level management in making semi-structured
and unstructured decisions.

 Specific and General Support: The facilitator controls the different phases of the
group decision support system meeting (idea generation, discussion, voting, vote
counting, etc.), what is displayed on the central screen, and the type of ranking and
voting that takes place. In addition, the facilitator also provides general support
to the group and helps them to use the system.
 Supports all Phases of Decision Making: It can support all four phases of
decision making, viz. intelligence, design, choice and implementation.

 Supports Positive Group Behavior: In a group meeting, as participants can share
their ideas more openly without the fear of being criticized, they display more
positive group behavior towards the subject matter of the meeting.

Group Decision Support System (GDSS) Software Tools

Group decision support system software tools help the decision makers in organizing their
ideas, gathering required information and setting and ranking priorities. Some of these tools
are as follows:

 Electronic Questionnaire: The information generated using the questionnaires helps
the organizers of the meeting to identify the issues that need immediate attention,
thereby enabling the organizers to create a meeting plan in advance.

 Electronic Brainstorming Tools: They allow the participants to simultaneously
contribute their ideas on the subject matter of the meeting. As the identity of each
participant remains secret, individuals participate in the meeting without the fear of
criticism.

 Idea Organizer: It helps in bringing together, evaluating and categorizing the ideas
that are produced during the brainstorming activity.

 Tools for Setting Priority: It includes a collection of techniques, such as simple
voting, ranking in order and some weighted techniques that are used for voting and
setting priorities in a group meeting.
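As one illustration of a rank-order technique of the kind such tools provide, the sketch below scores ideas Borda-style: each participant ranks the ideas, and an idea earns more points the higher it appears in a ranking. The idea names are invented for the example.

```python
def borda_scores(rankings):
    """rankings: one ordering of ideas per participant, best idea first.
    An idea in position i of an n-item ranking earns n - i points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for position, idea in enumerate(ranking):
            scores[idea] = scores.get(idea, 0) + (n - position)
    return scores

# Three participants rank three ideas from a brainstorming session.
votes = [
    ["cut costs", "new product", "training"],
    ["new product", "cut costs", "training"],
    ["cut costs", "training", "new product"],
]
scores = borda_scores(votes)
print(max(scores, key=scores.get))  # cut costs
```

Simple voting would just count first places; weighted techniques like this one also use the lower-ranked preferences, which often produces a less contentious group priority list.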

 Policy Formation Tool: It provides necessary support for converting the wordings of
policy statements into an agreement.
