Statistics plays a vital role in virtually every domain. It provides methods for collecting data in any field and for analyzing that data with statistical techniques, and its importance and range of applications today are considerable. For example, governments use statistics to plan in the economic sector, business owners rely on data and customer feedback to expand their operations, politicians cite statistics to demonstrate their accomplishments, and scholars use statistics to support their research papers. The applications of statistics are, in short, endless.
History
Statistics has been a familiar term throughout history, ancient and medieval alike, yet some questions about it remain open, including the origin of the word "statistics" itself. Several views exist: one traces it to the Latin word 'status'; another points to the Italian 'statista'; some scholars argue for the German 'statistik'; and still others trace it to the French 'statistique.'
In the past, statistics was concerned mainly with the collection of data, which was maintained for the welfare of everyone in a given area. Various calculations on such data were then used to make predictions of one kind or another.
What is Statistics?
The term "statistics" is used in two senses: plural and singular. In the plural sense, it refers to the data themselves, quantitative as well as qualitative, collected with statistical analysis in mind.
In the singular sense, it refers to the scientific method of collecting, presenting, and analyzing data, which brings the major characteristics of that data into the limelight.
Statistics – Application
Here are some of the applications of statistics in different fields.
Business Management
In the past, managers made all the major decisions largely by judgement, using techniques such as trial and error. Today, quantitative methods are widely used for drawing inferences, and statistical decision theory has become an important component of business management.
Economics
Economics has strong statistical roots, and the two disciplines are closely linked. Major areas of overlap include index numbers, demand analysis, and time-series analysis, and the branch of economics that interacts most closely with statistics is econometrics.
Statistical methods are used for conducting surveys and analyzing the resulting data. One of the primary uses of statistics in economics is regression analysis, which helps project the future demand for various goods and future sales, supporting economic profit and growth.
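As a rough illustration of how regression can project future demand, the sketch below fits a simple linear trend to invented yearly sales figures with NumPy's least-squares polynomial fit. The numbers, years, and variable names are assumptions made for the example, not taken from the text.

```python
import numpy as np

# Hypothetical yearly sales of a good (units sold); purely illustrative data.
years = np.array([2018, 2019, 2020, 2021, 2022, 2023])
sales = np.array([120, 135, 150, 160, 178, 190])

# Fit a straight line sales = a*year + b by ordinary least squares.
a, b = np.polyfit(years, sales, deg=1)

# Project demand for the next two years from the fitted trend.
for year in (2024, 2025):
    print(f"projected sales in {year}: {a * year + b:.0f} units")
```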
Statistics in Commerce and Growth
Businesses expand by making the best possible use of statistical processes. Decisions about wages, salaries, raw materials, and similar inputs must be well informed in order to maximize profit, and careful data collection together with expert consultation are the prerequisites for achieving it.
Some of the important statistical methods in this field are sampling, time-series analysis, statistical quality control, and so on.
Limitations
Statistical methods have important limitations, and it is necessary to be aware of them before applying the methods. Some of the primary limitations of statistics are:
Statistics deals with aggregates rather than with individual observations; a single value on its own is not the subject of statistical study.
It deals with quantitative data. Qualitative data can still be used, but only after it is converted to quantitative form by attaching numerical values and descriptions to the qualitative categories.
Specific projections, such as sales, price, or quantity, rest on a set of assumed conditions. If these conditions turn out to be wrong or are violated, the projections and the conclusions drawn from them are likely to be inaccurate.
Statistical inference relies on random sampling. Ignoring the rules of sampling can lead to wrong results and erroneous conclusions, so it is advisable to consult experts before settling on a sampling scheme.
Solved Questions on Introduction to Statistics
Question: What are the different branches of statistics?
Answer: There are two main branches of statistics: descriptive and inferential. Descriptive statistics summarizes and presents data, while inferential statistics uses sample data to draw conclusions about a larger population.
Statistics is the science of collecting, analyzing, presenting,
and interpreting data. Governmental needs for census data as
well as information about a variety of economic activities
provided much of the early impetus for the field of statistics.
Currently the need to turn the large amounts of data available
in many applied fields into useful information has stimulated
both theoretical and practical developments in statistics.
Data are the facts and figures that are collected, analyzed, and
summarized for presentation and interpretation. Data may be
classified as either quantitative or qualitative. Quantitative
data measure either how much or how many of something,
and qualitative data provide labels, or names, for categories of
like items. For example, suppose that a particular study is
interested in characteristics such as age, gender, marital
status, and annual income for a sample of 100 individuals.
These characteristics would be called the variables of the
study, and data values for each of the variables would be
associated with each individual. Thus, the data values of 28,
male, single, and $30,000 would be recorded for a 28-year-
old single male with an annual income of $30,000. With 100
individuals and 4 variables, the data set would have 100 × 4 =
400 items. In this example, age and annual income are
quantitative variables; the corresponding data values indicate
how many years and how much money for each individual.
Gender and marital status are qualitative variables. The labels
male and female provide the qualitative data for gender, and
the labels single, married, divorced, and widowed indicate
marital status.
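A minimal sketch of how such a data set might be represented in code is shown below, using pandas (an assumed library choice; the individual records are invented stand-ins for the 100-person sample described above).

```python
import pandas as pd

# A few invented records with the four variables from the example:
# age and annual income are quantitative; gender and marital status are qualitative.
data = pd.DataFrame({
    "age":            [28, 45, 33, 61],
    "gender":         ["male", "female", "female", "male"],
    "marital_status": ["single", "married", "divorced", "widowed"],
    "annual_income":  [30000, 52000, 41000, 27000],
})

# Each row is one individual; rows x columns gives the number of data items.
print(data.shape[0] * data.shape[1], "data items")   # 4 individuals x 4 variables = 16
print(data.dtypes)                                   # numeric vs. label (object) columns
```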
Sample survey methods are used to collect data from
observational studies, and experimental design methods are
used to collect data from experimental studies. The area of
descriptive statistics is concerned primarily with methods of
presenting and interpreting data using graphs, tables, and
numerical summaries. Whenever statisticians use data from a
sample—i.e., a subset of the population—to make statements
about a population, they are performing statistical inference.
Estimation and hypothesis testing are procedures used to
make statistical inferences. Fields such as health
care, biology, chemistry, physics, education, engineering,
business, and economics make extensive use of
statistical inference.
Methods of probability were developed initially for
the analysis of gambling games. Probability plays a key role in
statistical inference; it is used to provide measures of the
quality and precision of the inferences. Many of the methods
of statistical inference are described in this article. Some of
these methods are used primarily for single-variable studies,
while others, such as regression and correlation analysis, are
used to make inferences about relationships among two or
more variables.
Descriptive Statistics
Descriptive statistics are tabular, graphical, and numerical
summaries of data. The purpose of descriptive statistics is
to facilitate the presentation and interpretation of data. Most
of the statistical presentations appearing in newspapers and
magazines are descriptive in nature. Univariate methods of
descriptive statistics use data to enhance the understanding of
a single variable; multivariate methods focus on using
statistics to understand the relationships among two or more
variables. To illustrate methods of descriptive statistics, the
previous example in which data were collected on the age,
gender, marital status, and annual income of 100 individuals
will be examined.
Tabular methods
The most commonly used tabular summary of data for a
single variable is a frequency distribution. A frequency
distribution shows the number of data values in each of
several nonoverlapping classes. Another tabular summary,
called a relative frequency distribution, shows the fraction,
or percentage, of data values in each class. The most common
tabular summary of data for two variables is a cross
tabulation, a two-variable analogue of a frequency
distribution.
For a qualitative variable, a frequency distribution shows the
number of data values in each qualitative category. For
instance, the variable gender has two categories: male and
female. Thus, a frequency distribution for gender would have
two nonoverlapping classes to show the number of males and
females. A relative frequency distribution for this variable
would show the fraction of individuals that are male and the
fraction of individuals that are female.
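A small sketch of how a frequency and relative frequency distribution for a qualitative variable such as gender could be computed is shown below; the labels and counts are invented, and the use of Python's standard library is an assumption.

```python
from collections import Counter

# Invented gender labels for a small sample.
gender = ["male", "female", "female", "male", "female",
          "male", "male", "female", "female", "male"]

freq = Counter(gender)                            # frequency distribution: counts per class
n = len(gender)
rel_freq = {g: c / n for g, c in freq.items()}    # relative frequency: fraction per class

print(freq)       # e.g. Counter({'male': 5, 'female': 5})
print(rel_freq)   # e.g. {'male': 0.5, 'female': 0.5}
```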
Constructing a frequency distribution for a quantitative
variable requires more care in defining the classes and the
division points between adjacent classes. For instance, if the
age data of the example above ranged from 22 to 78 years, the
following six nonoverlapping classes could be used: 20–29,
30–39, 40–49, 50–59, 60–69, and 70–79. A frequency
distribution would show the number of data values in each of
these classes, and a relative frequency distribution would
show the fraction of data values in each.
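Using the same six age classes, a frequency distribution for a quantitative variable could be tabulated roughly as below with pandas.cut (the ages themselves are made up for illustration).

```python
import pandas as pd

# Invented ages ranging from 22 to 78, as in the example.
ages = pd.Series([22, 25, 31, 37, 42, 47, 53, 58, 64, 69, 71, 78])

# Six nonoverlapping classes: 20-29, 30-39, ..., 70-79.
bins = [20, 30, 40, 50, 60, 70, 80]
labels = ["20-29", "30-39", "40-49", "50-59", "60-69", "70-79"]
classes = pd.cut(ages, bins=bins, labels=labels, right=False)  # right=False => [20, 30), ...

freq = classes.value_counts().sort_index()   # frequency distribution
rel_freq = freq / len(ages)                  # relative frequency distribution
print(pd.DataFrame({"frequency": freq, "relative frequency": rel_freq}))
```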
A cross tabulation is a two-way table with the rows of the table
representing the classes of one variable and the columns of
the table representing the classes of another variable. To
construct a cross tabulation using the variables gender and
age, gender could be shown with two rows, male and female,
and age could be shown with six columns corresponding to
the age classes 20–29, 30–39, 40–49, 50–59, 60–69, and
70–79. The entry in each cell of the table would specify the
number of data values with the gender given by the row
heading and the age given by the column heading. Such a
cross tabulation could be helpful in understanding the
relationship between gender and age.
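A cross tabulation of gender against age class could be built as in the sketch below with pandas.crosstab; the records are invented and the library choice is an assumption.

```python
import pandas as pd

# Invented individuals with gender and age.
df = pd.DataFrame({
    "gender": ["male", "female", "female", "male", "female", "male"],
    "age":    [24, 35, 47, 52, 66, 71],
})

# Assign each age to one of the six classes used above.
df["age_class"] = pd.cut(df["age"],
                         bins=[20, 30, 40, 50, 60, 70, 80],
                         labels=["20-29", "30-39", "40-49", "50-59", "60-69", "70-79"],
                         right=False)

# Rows: gender classes; columns: age classes; cells: counts of individuals.
print(pd.crosstab(df["gender"], df["age_class"]))
```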
Graphical methods
A number of graphical methods are available for describing
data. A bar graph is a graphical device for depicting qualitative
data that have been summarized in a frequency distribution.
Labels for the categories of the qualitative variable are shown
on the horizontal axis of the graph. A bar above each label is
constructed such that the height of each bar is proportional to
the number of data values in the category. A bar graph of the
marital status for the 100 individuals in the above example is
shown in Figure 1. There are 4 bars in the graph, one for each
class. A pie chart is another graphical device for summarizing
qualitative data. The size of each slice of the pie is
proportional to the number of data values in the
corresponding class. A pie chart for the marital status of the
100 individuals is shown in Figure 2.
Figure 1: A bar graph showing the marital status of 100 individuals.
Figure 2: A pie chart for the marital status of 100 individuals.
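The figures themselves are not reproduced here, but a bar graph and pie chart like those described could be drawn roughly as follows with matplotlib. The marital-status counts are invented placeholders, not the data behind Figures 1 and 2.

```python
import matplotlib.pyplot as plt

# Invented marital-status counts for 100 individuals.
labels = ["single", "married", "divorced", "widowed"]
counts = [35, 45, 12, 8]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bar graph: category labels on the horizontal axis, bar heights proportional to counts.
ax1.bar(labels, counts)
ax1.set_title("Marital status (bar graph)")
ax1.set_ylabel("Number of individuals")

# Pie chart: slice sizes proportional to the counts in each class.
ax2.pie(counts, labels=labels, autopct="%1.0f%%")
ax2.set_title("Marital status (pie chart)")

plt.tight_layout()
plt.show()
```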
A histogram is the most common graphical presentation
of quantitative data that have been summarized in a frequency
distribution. The values of the quantitative variable are shown
on the horizontal axis. A rectangle is drawn above each class
such that the base of the rectangle is equal to the width of the
class interval and its height is proportional to the number of
data values in the class.
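A histogram for the age data could likewise be sketched as below, with bin edges matching the class intervals so each rectangle's base equals the class width; the ages are invented and matplotlib is an assumed choice.

```python
import matplotlib.pyplot as plt

# Invented ages between 22 and 78.
ages = [22, 25, 31, 37, 42, 47, 47, 53, 58, 58, 64, 69, 71, 78]

# Bin edges reproduce the classes 20-29, 30-39, ..., 70-79.
plt.hist(ages, bins=[20, 30, 40, 50, 60, 70, 80], edgecolor="black")
plt.xlabel("Age")
plt.ylabel("Number of individuals")
plt.title("Histogram of ages")
plt.show()
```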
Inferential statistics
Main article: Statistical inference
Statistical inference is the process of using data analysis to deduce
properties of an underlying probability distribution.[49] Inferential statistical
analysis infers properties of a population, for example by testing
hypotheses and deriving estimates. It is assumed that the observed data
set is sampled from a larger population. Inferential statistics can be
contrasted with descriptive statistics. Descriptive statistics is solely
concerned with properties of the observed data, and it does not rest on the
assumption that the data come from a larger population.
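As a small illustration of inference from a sample to a population, the sketch below runs a one-sample t test on invented data, asking whether the population mean plausibly equals a hypothesized value. The use of SciPy and all of the numbers are assumptions made for the example.

```python
import numpy as np
from scipy import stats

# Invented sample of annual incomes (in dollars) drawn from some population.
sample = np.array([29500, 31200, 28700, 30400, 32100, 29900, 30800, 31500])

# Null hypothesis: the population mean income is $30,000.
t_stat, p_value = stats.ttest_1samp(sample, popmean=30000)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A small p-value (say, below 0.05) would be evidence against the null hypothesis;
# a large one means the sample is consistent with a population mean of $30,000.
```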
Applications
Applied statistics, theoretical statistics and mathematical statistics
Applied statistics comprises descriptive statistics and the application of inferential
statistics.[62][63] Theoretical statistics concerns the logical arguments underlying
justification of approaches to statistical inference, as well as
encompassing mathematical statistics. Mathematical statistics includes not only
the manipulation of probability distributions necessary for deriving results related
to methods of estimation and inference, but also various aspects
of computational statistics and the design of experiments.
Statistical consultants can help organizations and companies that don't have in-
house expertise relevant to their particular questions.
Machine learning and data mining
Machine learning models are statistical and probabilistic models that capture
patterns in the data through use of computational algorithms.
Statistics in academia
Statistics is applicable to a wide variety of academic disciplines,
including natural and social sciences, government, and business. Business
statistics applies statistical methods in econometrics, auditing and production and
operations, including services improvement and marketing research.[64] In the field
of biological sciences, the 12 most frequent statistical tests are: Analysis of
Variance (ANOVA), Chi-Square Test, Student’s T Test, Linear Regression,
Pearson’s Correlation Coefficient, Mann-Whitney U Test, Kruskal-Wallis Test,
Shannon’s Diversity Index, Tukey’s Test, Cluster Analysis, Spearman’s Rank
Correlation Test and Principal Component Analysis.[65]
A typical statistics course covers descriptive statistics, probability, binomial
and normal distributions, test of hypotheses and confidence intervals, linear
regression, and correlation.[66] Modern fundamental statistics courses for
undergraduate students focus on correct test selection, interpretation of results,
and the use of free statistical software.[65]
Statistical computing
[Image: gretl, an example of an open source statistical package]
Main article: Computational statistics
The rapid and sustained increases in computing power starting from the second
half of the 20th century have had a substantial impact on the practice of
statistical science. Early statistical models were almost always from the class
of linear models, but powerful computers, coupled with suitable
numerical algorithms, caused an increased interest in nonlinear models (such
as neural networks) as well as the creation of new types, such as generalized
linear models and multilevel models.
Increased computing power has also led to the growing popularity of
computationally intensive methods based on resampling, such as permutation
tests and the bootstrap, while techniques such as Gibbs sampling have made
use of Bayesian models more feasible. The computer revolution has implications
for the future of statistics with a new emphasis on "experimental" and "empirical"
statistics. A large number of both general and special purpose statistical
software packages are now available. Examples of available software capable of
complex statistical computation include programs such as Mathematica, SAS, SPSS,
and R.
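A minimal sketch of one of the computationally intensive resampling methods mentioned above, the bootstrap, is given below: it estimates the standard error of a sample mean by resampling with replacement. The data, seed, and number of replicates are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sample of observations.
sample = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.7, 4.4, 5.2, 4.8])

# Draw many bootstrap resamples (with replacement) and record each resample's mean.
boot_means = [rng.choice(sample, size=len(sample), replace=True).mean()
              for _ in range(5000)]

print("sample mean:          ", sample.mean())
print("bootstrap std. error: ", np.std(boot_means, ddof=1))
print("approx. 95% interval: ", np.percentile(boot_means, [2.5, 97.5]))
```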
Statistics applied to mathematics or the arts
Traditionally, statistics was concerned with drawing inferences using a semi-
standardized methodology that was "required learning" in most sciences.
This tradition has changed with the use of statistics in non-inferential
contexts. What was once considered a dry subject, taken in many fields as a
degree requirement, is now viewed enthusiastically. Initially derided by
some mathematical purists, it is now considered essential methodology in certain
areas.
In number theory, scatter plots of data generated by a distribution function
may be transformed with familiar tools used in statistics to reveal underlying
patterns, which may then lead to hypotheses.
Methods of statistics, including predictive methods in forecasting, are
combined with chaos theory and fractal geometry to create video works that
are considered to have great beauty.
The process art of Jackson Pollock relied on artistic experiments whereby
underlying distributions in nature were artistically revealed. With the
advent of computers, statistical methods were applied to formalize such
distribution-driven natural processes to make and analyze moving video art.
Methods of statistics may be used predictively in performance art, as in a
card trick based on a Markov process that only works some of the time, the
occasion of which can be predicted using statistical methodology.
Statistics can be used predictively to create art, as in the statistical
or stochastic music invented by Iannis Xenakis, where the music is
performance-specific. Though this type of artistry does not always come out
as expected, it does behave in ways that are predictable and tunable using
statistics.
Specialized disciplines
Main article: List of fields of application of statistics
Statistical techniques are used in a wide range of types of scientific and social
research, including: biostatistics, computational biology, computational
sociology, network biology, social science, sociology and social research. Some
fields of inquiry use applied statistics so extensively that they have specialized
terminology. These disciplines include:
Actuarial science (assesses risk in the insurance and finance
industries)
Applied information economics
Astrostatistics (statistical evaluation of astronomical data)
Biostatistics
Chemometrics (for analysis of data from chemistry)
Data mining (applying statistics and pattern recognition to
discover knowledge from data)
Data science
Demography (statistical study of populations)
Econometrics (statistical analysis of economic data)
Energy statistics
Engineering statistics
Epidemiology (statistical analysis of disease)
Geography and geographic information systems, specifically
in spatial analysis
Image processing
Jurimetrics (law)
Medical statistics
Political science
Psychological statistics
Reliability engineering
Social statistics
Statistical mechanics
In addition, there are particular types of statistical analysis that have also
developed their own specialised terminology and methodology:
Bootstrap / jackknife resampling
Statistician is one of the top 10 fastest-growing jobs in the US. Going by the rate at which the world is generating
and collecting data, it is no surprise that the expertise of those who can
effectively analyze this data is in great demand. Statistical analysis experts
help collect, study and extract relevant information from vast and complex
data. This information is then applied to validate and further research,
make sound business decisions and drive public initiatives.
Here are the top 6 applications of statistical analysis.
1. Research Interpretations and Conclusions
Statistics forms an important part of most sciences, helping researchers
test hypotheses, confirm (or reject) theories, and arrive at reliable
conclusions. The data generated from experiments and studies is never
straightforward — one has to take into account randomness and
uncertainty, eliminate coincidences and arrive at the most accurate
findings. Statistical analysis helps reduce or eliminate errors so that
researchers can confidently draw conclusions that will then direct further
research.
2. Meta-Analysis of Literature Reviews
Before a researcher or scientist embarks on new research, it is customary
to perform a comprehensive literature search of all the available published
information on a specific topic. However, it is always difficult to make one
definitive conclusion from multiple studies, especially if the studies follow
different research methodologies, have been published in different journals
(leading to publication bias), or are spread over a large time range. A
statistical analysis of these studies helps extract the common truth
underlying all these studies, or uncover a hidden pattern or relationship.
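One standard way to combine estimates from multiple studies is a fixed-effect, inverse-variance weighted average. The sketch below applies it to invented effect sizes and standard errors; this is a generic method used for illustration, not one prescribed by the text.

```python
import math

# Invented effect estimates and standard errors from four hypothetical studies.
effects = [0.30, 0.45, 0.25, 0.38]
std_errs = [0.10, 0.15, 0.12, 0.20]

# Fixed-effect meta-analysis: weight each study by the inverse of its variance.
weights = [1 / se**2 for se in std_errs]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```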
3. Clinical Trial Design
One of the most important applications of statistical analysis is in
designing clinical trials. When a new drug or treatment is discovered, it has
to first be tested on a group/groups of people to understand its efficacy and
safety. A clinical trial involves selecting a population/sample size, defining
the time range over which to monitor the treatment, designing the phases,
and selecting parameters that will help decide how effective the treatment
is and if it is better than an existing one. Biostatisticians can take on the task
of performing a statistical analysis of the study, helping not only to design it
but also analyze and determine the outcomes.
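One piece of such a design, choosing the sample size, is sketched below using the standard normal-approximation formula for comparing two proportions. The function name, response rates, and power target are assumptions made for the example; real trial designs involve many more considerations.

```python
import math
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-arm sample size for detecting a difference between two
    proportions with a two-sided test (normal-approximation formula)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Hypothetical trial: 60% response on the new treatment vs. 45% on the existing one.
print(sample_size_two_proportions(0.60, 0.45), "patients per arm")
```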
4. Designing Surveys
Do people who go to the gym lead a healthier, happier life? How safe is the
city of New York? How effective is your HIV-awareness programme?
Questions like these cannot be answered without the help of statistics.
Surveys require careful design and implementation, considerations about
the survey format, accounting for bias and fatigue, etc. Data collected from
surveys have to be carefully studied by statistical analysis experts who also
use their own discretion and experience to derive the most meaningful
information from a survey. Through surveys, governments can determine
the effectiveness of an initiative, businesses can understand the response
to a particular product, and social scientists can perform quantitative
research.
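One small quantitative piece of survey analysis is the margin of error for an estimated proportion under simple random sampling. The sketch below uses the standard formula; the respondent count and percentage are invented, and the helper function is hypothetical.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a survey proportion (simple random sampling)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical survey: 62% of 1,000 respondents report feeling safe in their city.
p_hat, n = 0.62, 1000
moe = margin_of_error(p_hat, n)
print(f"estimate: {p_hat:.0%} +/- {moe:.1%}")   # roughly 62% +/- 3.0%
```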
5. Epidemiological Studies
Epidemiological studies help determine the link between the cause and
effect of a disease, especially in outbreaks and epidemics. A statistical
analysis involves identifying the most likely cause of a disease — for
example, the link between smoking and lung cancer. This information is
used to develop public health policies and implement preventive healthcare
programmes. Data visualization and statistical analysis also played an
important role in understanding the Ebola epidemic in West Africa.
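A common way to quantify such a cause-and-effect link is a 2x2 exposure-disease table with a chi-square test and a relative risk. The cohort counts below are invented, and the use of SciPy is an assumption; the sketch only illustrates the general approach.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented cohort counts:       disease   no disease
table = np.array([[ 90,  910],   # exposed (e.g., smokers)
                  [ 30,  970]])  # unexposed (e.g., non-smokers)

chi2, p, dof, expected = chi2_contingency(table)

# Relative risk: risk of disease among the exposed divided by risk among the unexposed.
risk_exposed   = table[0, 0] / table[0].sum()
risk_unexposed = table[1, 0] / table[1].sum()
print(f"chi-square = {chi2:.1f}, p = {p:.2g}")
print(f"relative risk = {risk_exposed / risk_unexposed:.1f}")
```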
6. Statistical Modeling
Statistical modeling involves building predictive models based on pattern
recognition and knowledge discovery. It is used in environmental and
geographical studies, predicting election outcomes, survival analysis of
populations, and more. Meteorologists use statistical tools to help them
predict the weather. The line between statistical modelling and machine
learning is becoming increasingly blurry — Robert Tibshirani, a statistician
at Stanford, called machine learning "glorified statistics".
Here is an example of The Economist’s statistical model for predicting the
US mid-term elections.
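The sketch below is not The Economist's model; it is a toy Monte Carlo illustration of the general idea behind such predictive models: take a polled vote share with some assumed uncertainty and simulate many elections to estimate a win probability. All inputs are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented inputs: a candidate polls at 52% with an assumed 3-point standard error.
poll_share, poll_sd = 0.52, 0.03
n_simulations = 100_000

# Each simulated election draws a "true" vote share from the polling distribution.
simulated_shares = rng.normal(poll_share, poll_sd, size=n_simulations)
win_probability = (simulated_shares > 0.50).mean()

print(f"estimated probability of winning: {win_probability:.1%}")
```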
Since statisticians are also among the highest-paid professionals, not all
organizations can afford to have them in-house, full-time. Smaller
businesses, nonprofits, government agencies and advocates, researchers
and startups are increasingly outsourcing their statistical analysis work
to freelance statisticians, who can work on smaller budgets and time frames.
Multivariate statistics
Statistical classification
Structured data analysis (statistics)
Structural equation modelling
Survey methodology
Survival analysis
Statistics in various sports, particularly baseball – known as sabermetrics –
and cricket
Statistics is a key basic tool in business and manufacturing as well. It is used
to understand measurement systems variability, control processes (as
in statistical process control or SPC), for summarizing data, and to make data-
driven decisions. In these roles, it is a key tool, and perhaps the only reliable tool.
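As a rough sketch of statistical process control, the example below computes simplified Shewhart-style control limits from invented baseline measurements and flags new points outside them. Real SPC charts often estimate limits from moving ranges or subgroups; the data and thresholds here are assumptions for illustration.

```python
import numpy as np

# Invented baseline measurements from a period when the process was in control.
baseline = np.array([10.02, 9.98, 10.01, 10.00, 9.97, 10.03, 9.99, 10.01, 9.96, 10.02])

# Simplified control limits: centre line +/- 3 standard deviations from the baseline.
centre = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma

# New measurements to monitor; the fourth one drifts high.
new_points = [10.00, 9.99, 10.02, 10.15]
for i, x in enumerate(new_points, start=1):
    status = "out of control" if (x > ucl or x < lcl) else "in control"
    print(f"point {i}: {x:.2f} -> {status}")
```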
INTRODUCTION
By identifying statistical trends and trails, health care providers can monitor
local conditions and compare them to state, national, and international trends.
Health statistics provide empirical data to assist in the allocation of public
and private funds and help to determine how research efforts should be
focused. Whether considering disease incidence, accidents, cure rates,
physician or hospital fees, malpractice, mortality, drugs, treatments, or
medical devices, the primary source for statistical health data most often
appears on government, international organization, or professional
association web sites. Credibility is critical when using statistics; the
following web sites offer authoritative data.
Agency for Healthcare Research & Quality, Data & Surveys
http://www.ahcpr.gov/data/
The Agency for Healthcare Research & Quality is an agency of the U.S.
Department of Health and Human Services whose goal is to “improve the
quality, safety, efficiency, and effectiveness of health care for all Americans”
by aiding the decision making process. The Agency for Healthcare Research
& Quality plans to do this by providing information and evidence on
treatment efficacy, individual patient applicability, and cost. The range of
statistical offerings is wide, covering safety net monitoring, medical
expenditure spending, healthcare cost and utilization, hospital statistics, and
HIV and AIDS statistics. The Healthcare Cost and Utilization section
provides treatment outcomes at the national, state, and local levels. Besides
recommendations on topics such as vision screening for children under five
years old or the relationship of fewer numbers of nursing staff to poor patient
outcomes, this site also offers prevention tools for the clinician to use in
selecting medical services by age and gender. This site is searchable by
keyword. A publication catalog is available online
(http://www.ahcpr.gov/news/pubcat/pubcat.htm), or a print copy can be
ordered using the online order form
(http://www.ahcpr.gov/news/pubcat/c_order.htm). Since many of the
documents included in the electronic catalog are still print based, a print copy
of the catalog would be useful to order reports.
Medical Expenditure Panel Survey, Data and Publications
http://www.meps.ahcpr.gov/Data_Public.htm
A subset of the Agency for Healthcare Research and Quality, the Medical
Expenditure Panel Survey (MEPS) contains data covering both employer-
based health insurance and detailed household and individual data on health
status and health care usage. Data on specific medical conditions, location of
patient visitations, and general demographics are available. This data is
presented in a tabular format or by establishing parameters using their online
statistical tool. An example of an available data report is, “Health Insurance
Coverage and Income Levels for the U.S. Noninstitutionalized Population
under Age 65.” The reports can be searched by keyword and sorted using
variables such as title or year.
National Center for Health Care Statistics http://www.cdc.gov/nchs
With the goal of improving the health of the American people by guiding
actions and policies, the National Center for Health Care Statistics offers a
wide range of healthcare statistics. This web site is searchable and offers
reports organized by popular topics. Data is arranged by geographic area,
gender, or disease. In addition to specific medical conditions, trends in causes
of death in the elderly, the changing profile of nursing home residents, or the
latest data on emergency room visits are available. Some historical statistical
documents, such as the Vital Statistics series going back to the early 1960s,
can also be viewed.
National Trauma Data Bank (NTDB)
http://www.facs.org/trauma/ntdb.html
Sponsored by the American College of Surgeons, this large data bank
contains information from over 730,000 cases in 36 states representing 27%
of U.S. Level I and II trauma centers. Topical data on injuries associated with
the greatest average stay in the ICU, the number of deaths caused by motor
vehicle collisions, gunshot wounds, accidental falls, or the types of trauma
seen in various age groups can be used in the medical decision making
process. An annual report is available in PDF format for those that wish to
browse a hard copy. Available data can be downloaded and displayed in a
variety of graphical formats.
Statistics Canada http://www.statcan.ca/start.html
Chock full of demographic information about Canada's economy,
environment, people, and government, this web resource includes health
related statistics. Specific health measures are provided in tabular format
under the categories of determinants, resources and use, and status.
Numerous reports are available for download, for example, the Health
Reports series, the Canadian Community Health Survey, and the Guide to
Health Statistics. The Statistical Profile of Canadian Communities is a
searchable data set in which the demographic scores of two communities can be
compared side-by-side and changes between 1996 and 2001 observed. This web site
is useful for comparison with similar U.S. health statistics.
U.S. Census Bureau, Health Insurance Data
http://www.census.gov/hhes/www/hlthins.html
The U.S. Census Bureau collects health insurance data from the Current
Population Survey (CPS). Health insurance data is also collected from the
Survey of Income and Program Participation (SIPP).
The CPS is useful for viewing a snapshot of health insurance coverage at any
point in time. It is aimed at providing overview information at the state and
national level. The Annual Social and Economic Supplement to the CPS
surveys individuals in about 78,000 households and includes health insurance
questions in the survey. The CPS Annual Social and Economic Supplement is
probably the most widely used data set on health insurance coverage in the
United States.
The SIPP is a longitudinal survey which tracks specific persons and their
health insurance coverage over 3 or 4 years. The SIPP is most useful for
analyzing health insurance changes over time. A specific example of what is
available at this web site is a table charting low income, uninsured children
listed by state.
World Health Organization Statistical Information System
(WHOSIS) http://www3.who.int/whosis/menu.cfm
For an international perspective, the World Health Organization has gathered
health related statistics from 192 countries. Data is arranged by topic, disease,
or geographic region for ease of access. Their Core Health Indicators provide
a quick and easy comparison between various countries on a number of
health indicators such as life expectancy, birth rate, mortality rate, and health
care expenditures. For example, one can find the estimated number of cases
of Hepatitis A by continent as well as the number of new cases per year.
SUMMARY
Healthcare providers have reliable and multifaceted information from several
authoritative sources through their Internet connections. Within the reach of
the mouse and keyboard, these rich resources can illuminate a clinician's
local experiences, offer comparisons in healthcare delivery and outcomes,
and provide validated evidence-based practices.
CONTRIBUTOR INFORMATION
Barbara A. Bartkowiak,
Brian J. Finnegan,
Earthquake statistics are facts recorded in the aftermath of seismic events. They are the concern
of seismologists, geologists, engineers, government officials, insurers, and statisticians among
others. Earthquakes provide special opportunities to learn about the makeup of the solid Earth
[47]. Statistics and statisticians are involved because of the large amount of and many
forms of data that become available following an earthquake as well as the related scientific and
social questions arising. Statistical methods have played an important role in seismology for
many years in part because of the pathbreaking efforts of Harold Jeffreys (see Bolt [9]).
Concerning Jeffreys’ work, Hudson [30] has written: ‘‘The success of the Jeffreys–Bullen travel
time tables was due in large part to Jeffreys’ consistent use of sound statistical methods.’’ In
particular, Jeffreys’ methods were robust and resistant, i.e., dealt with nongaussian distributions
and outliers. Bolt [8] extended them to the linear regression case. Statistics enters for a variety of
reasons. For example, the basic quantity of concern may be a probability model or a risk.
Further, the data sets are often massive and of many types. Also there is a substantial inherent
variability and measurement error. In response, these days seismologists and seismic engineers
continually set down stochastic models. Consider, for example, the Next Generation of
Attenuation (NGA) (Stewart [64]). Such models need to be fitted, assessed, and revised. Inverse
problems with the basic parameters defined indirectly need to be solved (O’Sullivan [53]; Stark
[63]). Experiments need to be designed. In many cases researchers employ simulations and
massive databases of such have been developed (see Olsen and Ely [52]). It can be noted that
new statistical techniques often find immediate application in seismology particularly and in
geophysics generally. In parallel, problems arising in seismology and earthquake engineering
have led to the development of new statistical techniques. Seismology underwent the ‘‘digital
revolution’’ in the nineteen-fifties and continually poses problems exceeding the capabilities of
the day’s computers. Its participants have turned up a variety of empirical laws (Kanamori [37]).
These prove useful for extrapolation to situations with few data (e.g., Huyse [32], Zhuang [78],
and Amorese [3]). Physical theories find important application (Aki and Richards [1]). The
subject matter developed leads to hazard estimation (Wesnowski [76]); improved seismic design
(Naeim [48]; Mendes-Victor [47]); and earthquake prediction (Zechar [77]; Lomnitz [42]; Harte and
Vere-Jones [28]; Luen and Stark [43]).
Statistics refers both to quantitative data and to the classification of such data in
accordance with probability theory and the application to them of methods such as
hypothesis testing. Health statistics include both empirical data and estimates related to
health, such as mortality, morbidity, risk factors, health service coverage, and health
systems.
The production and dissemination of health statistics is a core WHO activity mandated
to WHO by its Member States in its Constitution. WHO programmes compile and
disseminate a broad range of statistics that play a key role in advocacy for health
issues, monitoring and evaluation of health programmes and provision of technical
assistance to countries.