Ombo 401 - WSD
Objectives
After going through this unit, you will be able to:
recognize the work system in an organization and its essential elements;
calculate the RPN (Risk Priority Number) for problem solving;
state various work methods and the problem-solving approach to them;
identify the relationship between work design and productivity;
differentiate between various productivity models and compare them.
Structure
1.1 Introduction
1.2 Definition of work system
1.3 Special Cases
1.4 Work System Framework
1.5 Work System Life Cycle Model
1.6 Work System Method
1.7 A problem-solving approach
1.8 Work Design & Productivity
1.9 Productivity Models
1.10 Models of National Economy
1.11 The Work Place Design
1.12 Keywords
1.13 Summary
1.1 INTRODUCTION
Among the goals of economic policy is a rising standard of living, and it is generally understood that
the means to that end is rising productivity. Productivity relates the quantity of goods and services
produced, and the income generated by that production, to the amount of labor (e.g., hours
worked or number of workers) required to produce it. The most commonly used measure of the living standard
of a nation is simply the ratio of that income to the total population, without regard to how the
income is actually distributed. If a relatively small share of a nation’s population works, there will be
a large difference between the level of productivity and that measure of the national standard of
living.
Productivity varies over time, and it varies across countries as well. The link between productivity
and living standards is not a direct one; therefore, countries with a high level of productivity may not
necessarily have the highest standard of living. Gross domestic product (GDP) per capita can rise in
the absence of an increase in productivity if (1) employees increase the number of hours they work
(hours per employee); (2) the share of the labor force that is employed rises (i.e., the unemployment
rate drops); or (3) the share of the population that is in the labor force rises (presuming that the
share of any new jobseekers who get jobs is at least as large as the share of those already in the
labor force who have jobs).
The standard measure of the production of goods and services for a nation is gross domestic product
(GDP). GDP measures the total value of goods and services produced within a nation’s borders.
Productivity is a measure of how much work is required to produce it. The most basic unit of labor is
the hour, thus productivity can be measured as GDP divided by the total number of hours worked.
Productivity may also be measured as the average contribution of each employee to total
production, or simply GDP divided by employment. The broadest measure of the living standard of a
nation is GDP divided by the total population. Per capita GDP says nothing about how those
national resources are distributed.
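To make these ratios concrete, here is a minimal Python sketch (all figures are invented, purely for illustration) that computes the three measures just described: GDP per hour worked, GDP per employed person, and GDP per capita.

```python
# Hypothetical national-accounts figures (illustrative only).
gdp = 2_500_000_000_000.0        # total GDP in currency units
hours_worked = 40_000_000_000.0  # total hours worked in the year
employment = 20_000_000.0        # number of employed persons
population = 50_000_000.0        # total population

labour_productivity_per_hour = gdp / hours_worked  # GDP per hour worked
output_per_worker = gdp / employment               # GDP per employed person
gdp_per_capita = gdp / population                  # broadest living-standard measure

print(f"GDP per hour worked : {labour_productivity_per_hour:,.2f}")
print(f"GDP per worker      : {output_per_worker:,.2f}")
print(f"GDP per capita      : {gdp_per_capita:,.2f}")
```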
1.2 DEFINITION OF WORK SYSTEM
A work system is a system in which human participants and/or machines perform processes and
activities using information, technology, and other resources to produce products/services for
internal or external customers. Typical business organizations contain work systems that procure
materials from suppliers, produce products, deliver products to customers, find customers, create
financial reports, hire employees, coordinate work across departments, and perform many other
functions.
1.3 SPECIAL CASES
The work system concept is like a common denominator for many of the types of systems that
operate within or across organizations. Operational information systems, service systems, projects,
supply chains, and ecommerce web sites can all be viewed as special cases of work systems.
An information system is a work system whose processes and activities are devoted to processing
information. A service system is a work system that produces services for its customers. A project is
a work system designed to produce products and then go out of existence. A supply chain is an inter-
organizational work system devoted to procuring materials and other inputs required to produce a
firm’s products. An ecommerce web site can be viewed as a work system in which a buyer uses a
seller’s web site to obtain product information and perform purchase transactions. The relationship
between work systems in general and the special cases implies that the same basic concepts apply
to all of the special cases, which also have their own specialized vocabulary. In turn, this implies that
much of the body of knowledge for the current information systems discipline can be organized
around a work system core.
Many specific information systems exist to produce products/services that are of direct value to
customers, e.g., information services and Internet search. Other specific information systems exist
to support other work systems, e.g., an information system that helps sales people do their work.
Many different degrees of overlap are possible between an information system and a work system
that it supports. For example, an information system might provide information for a non-
overlapping work system, as happens when a commercial marketing survey provides information to
a firm’s marketing managers. In other cases, an information system may be an integral part of a
work system, as happens in highly automated manufacturing and in ecommerce web sites. In these
situations, participants in the work system are also participants in the information system, the work
system cannot operate properly without the information system, and the information system has
little significance outside of the work system.
1.4 WORK SYSTEM FRAMEWORK
The work system framework identifies nine elements of a work system and is useful for summarizing the system, describing possible changes, and tracing how those changes might affect other parts of the work system.
This slightly updated version of the work system framework replaces “work practices” with
“processes and activities.”
Participants are people who perform the work. Some may use computers and IT
extensively, whereas others may use little or no technology. When analyzing a work system,
the more encompassing role of work system participant is more important than the more
limited role of technology user (whether or not particular participants happen to be
technology users).
Information includes codified and non-codified information used and created as participants
perform their work. Information may or may not be computerized. Data not related to the
work system is not directly relevant, making the distinction between data and information
secondary when describing or analyzing a work system. Codified knowledge recorded in
documents, software, and business rules can be viewed as a special case of information.
Technologies include tools (such as cell phones, projectors, spreadsheet software, and
automobiles) and techniques (such as management by objectives, optimization, and remote
tracking) that work system participants use while doing their work.
Products and services are the combination of physical things, information, and services that
the work system produces. This may include physical products, information products,
services, intangibles such as enjoyment and peace of mind, and social products such as
arrangements, agreements, and organizations.
Customers are people who receive direct benefit from products and services the work
system produces. They include external customers who receive the organization’s products
and/or services and internal customers who are employees or contractors working inside the
organization.
Infrastructure includes human, informational, and technical resources that the work system
relies on even though these resources exist and are managed outside of it and are shared
with other work systems. For example, technical infrastructure includes computer networks,
programming languages, and other technologies shared by other work systems and often
hidden or invisible to work system participants.
Strategies include the strategies of the work system and of the department(s) and
enterprise(s) within which the work system exists. Strategies at the department and
enterprise level may help in explaining why the work system operates as it does and
whether it is operating properly.
1.5 WORK SYSTEM LIFE CYCLE MODEL
The dynamic view of a work system starts with the work system life cycle (WSLC) model, which
shows how a work system may evolve through multiple iterations of four phases: operation and
maintenance, initiation, development, and implementation. The names of the phases were chosen
to describe both computerized and non-computerized systems, and to apply regardless of whether
application software is acquired, built from scratch, or not used at all.
This model encompasses both planned and unplanned change. Planned change occurs through a full
iteration encompassing the four phases, i.e., starting with an operation and maintenance phase,
flowing through initiation, development, and implementation, and arriving at a new operation and
maintenance phase. Unplanned change occurs through fixes, adaptations, and experimentation that
can occur within any phase. The phases include the following activities:
Initiation
Vision for the new or revised work system
Operational goals
Allocation of resources and clarification of time frames
Economic, organizational, and technical feasibility of planned changes
Development
Detailed requirements for the new or revised work system (including requirements for
information systems that support it)
As necessary, creation, acquisition, configuration, and modification of procedures,
documentation, training material, software, and hardware
Debugging and testing of hardware, software, and documentation
Implementation
Implementation approach and plan (pilot? phased? big bang?)
Change management efforts about rationale and positive or negative impacts of changes
Training on details of the new or revised information system and work system
Conversion to the new or revised work system
Acceptance testing
As an example of the iterative nature of a work system’s life cycle, consider the sales system in a
software start-up. The first sales system is the CEO selling directly. At some point the CEO can’t
do it alone, several salespeople are hired and trained, and marketing materials are produced
that can be used by someone less expert than the CEO. As the firm grows, the sales system
becomes regionalized and an initial version of sales tracking software is developed and used.
Later, the firm changes its sales system again to accommodate needs to track and control a
larger sales force and predict sales several quarters in advance. A subsequent iteration might
involve the acquisition and configuration of CRM software. The first version of the work system
starts with an initiation phase. Each subsequent iteration involves deciding that the current sales
system is insufficient; initiating a project that may or may not involve significant changes in
software; developing the resources such as procedures, training materials, and software that are
needed to support the new version of the work system; and finally, implementing the new work
system.
The pictorial representation of the work system life cycle model places the four phases at the
vertices of a rectangle. Forward and backward arrows between each successive pair of phases indicate the planned sequence of phases and allow the possibility of returning to a previous
phase if necessary. To encompass both planned and unplanned change, each phase has an
inward facing arrow to denote unanticipated opportunities and unanticipated adaptations,
thereby recognizing the importance of diffusion of innovation, experimentation, adaptation,
emergent change, and path dependence.
The work system life cycle model is iterative and includes both planned and unplanned change.
It is fundamentally different from the frequently cited system development life cycle (SDLC),
which describes projects that attempt to produce software or produce changes in a work
system. Current versions of the SDLC may contain iterations but they are basically iterations
within a project. More important, the system in the SDLC is basically a technical artifact that is
being programmed. In contrast, the system in the WSLC is a work system that evolves over time
through multiple iterations. That evolution occurs through a combination of defined projects
and incremental changes resulting from small adaptations and experimentation. In contrast with
control-oriented versions of the SDLC, the WSLC treats unplanned changes as part of a work
system’s natural evolution.
1.6 WORK SYSTEM METHOD
The work system method (WSM) was developed as a semi-formal systems analysis and design
method that business professionals (and/or IT professionals) can use for understanding and
analyzing a work system at whatever level of depth is appropriate for their particular concerns. It
evolved iteratively starting around 1997. At each stage, the then-current version was tested by
evaluating the areas of success and the difficulties experienced by MBA and EMBA students trying to
use it for a practical purpose. A version called “work-centered analysis” that was presented in a
textbook has been used by a number of universities as part of the basic explanation of systems in
organizations, to help students focus on business issues, and to help student teams
communicate. Results from analyses of real-world systems by typical employed MBA and EMBA
students indicate that a systems analysis method for business professionals should be much more
prescriptive than approaches such as Checkland’s soft system methodology, but less complex than
high-precision notations and diagramming tools for IT professionals. While not a straitjacket, it must
be at least somewhat procedural and must provide vocabulary and analysis concepts while at the
same time encouraging the user to perform the analysis at whatever level of detail is appropriate for
the task at hand.
For business professionals
In contrast to systems analysis and design methods for IT professionals who need to produce a
rigorous, totally consistent definition of a computerized system, the work system method:
encourages the user to decide how deep to go
makes explicit use of the work system framework and work system life cycle model
makes explicit use of work system principles
makes explicit use of characteristics and metrics for the work system and its elements
includes work system participants as part of the system (not just users of the software)
includes codified and non-codified information
includes IT and non-IT technologies
suggests that recommendations specify which work system improvements rely on IS
changes, which recommended work system changes don’t rely on IS changes, and which
recommended IS changes won’t affect the work system’s operational form.
Pain Reduction
It seems kind of obvious — if you’re in pain, you won’t be as productive. But you may not even
realize that your work space may be the source of your pain. Back pain, headaches, sore wrists
and forearms are just a few symptoms that can crop up if you don’t have your workspace set up
properly.
Back pain, for example, can be caused by poor posture or a cheap office chair. Think about how
much time you spend in that chair. Likely, it’s more than 20 hours per week. It’s worth the
investment to get a chair that properly supports your body. If you work from home or your
employees do a lot of brainstorming, consider including a lounge chair space and a table with
comfy chairs for collaboration.
On the other hand, if you’re standing most of the day, you may need to consider a floor mat or
thick rug. The following office has the perfect location for pacing without strain on the knees or
lounging to gather inspiration.
Sore wrists and forearms are often caused by typing too much. Since typing less is probably not
an option, you can try an ergonomic keyboard. It takes a little getting used to but can
dramatically improve the comfort of typing. You may also consider an updated mouse as older
models take a lot more pressure to click on the buttons.
Stress Reduction
Stress is another major factor that is often easily reduced by designing your work space for
productivity. In crowded offices, a key contributor to stress is a lack of personal space. Look for
ways to create space like putting up dividers between people or arranging desks so that your
back is to the back of someone instead of facing them. Below is an example of an excellent setup from offices in which employees have a private space without dividers.
Space provision for stress reduction
Personalization
Being able to personalize your work space to make it feel more like your own area with your
own individual touch has been a proven method for improving productivity. While some
managers frown upon too much decorating, the fact remains that people perform better when
they feel they have created their own space. A personalized workspace reduces stress and
makes the staff feel more connected to what they are doing and the organization as a whole.
The productivity formula is relatively simple: recruit quality workers, sustain a high level of
employee satisfaction, and efficiency will improve. A great deal of research supports this
equation and points to the role that design plays in augmenting the variables. Workers
repeatedly cite design-related factors when explaining their reasons for choosing a job or staying
with a certain company, and managers recognize that employee satisfaction and productivity
rise in aesthetically appealing workplaces. For executives, the task of redesigning office space to
reflect evolving needs might seem daunting, but interiors experts can make the transition
smooth and graceful. A typical example of manufacturing workplace system design is shown below.
Example of manufacturing workplace system design
(Source: American Society of Interior Designers - ASID)
Dimensions of productivity model comparisons (Saari 2006b)
The principle of model comparison becomes evident in the figure. There are two dimensions in the
comparison. Horizontal model comparison refers to a comparison between business models. Vertical
model comparison refers to a comparison between economic levels of activity or between the levels
of business, industry and national economy.
At all three levels of economy, that is, that of business, industry and national economy, a uniform
understanding prevails of the phenomenon of productivity and of how it should be modeled and
measured. The comparison reveals some differences that can mainly be seen to result from
differences in measuring accuracy. It has been possible to develop the productivity model of
business to be more accurate than that of national economy for the simple reason that in business
the measuring data are much more accurate (Saari 2006b).
Business Models
There are several different models available for measuring productivity. Comparing the models
systematically has proved most problematic. In terms of pure mathematics, it has not been possible to establish their differing and similar characteristics so as to understand each model in itself and in relation to the other models. This kind of comparison is possible using the
productivity model which is a model with adjustable characteristics. An adjustable model can be set
with the characteristics of the model under review after which both differences and similarities are
identifiable.
A characteristic of the productivity measurement models that surpasses all the others is the ability
to describe the production function. If the model can describe the production function, it is
applicable to total productivity measurements. On the other hand, if it cannot describe the
production function or if it can do so only partly, the model is not suitable for its task. The
productivity models based on the production function form rather a coherent entity in which
differences in models are fairly small. The differences play an insignificant role, and the solutions
that are optional can be recommended for good reasons. Productivity measurement models can differ in their characteristics from one another in six ways.
First, it is necessary to examine and clarify the differences in the names of the concepts.
Model developers have given different names to the same concepts, causing a lot of
confusion. It goes without saying that differences in names do not affect the logic of
modeling.
Model variables can differ; hence, the basic logic of the model is different. It is a question of
which variables are used for the measurement. The most important characteristic of a
model is its ability to describe the production function. This requirement is fulfilled in case
the model has the production function variables of productivity and volume. Only the
models that meet this criterion are worth a closer comparison. (Saari 2006b)
Calculation order of the variables can differ. Calculation is based on the ceteris paribus principle, which states that when calculating the impact of a change in one variable, all other variables are held constant. The order of calculating the variables has some effect on the calculation results, yet the difference is not significant.
Theoretical framework of the model can be either cost theory or production theory. In a
model based on the production theory, the volume of activity is measured by input volume.
In a model based on the cost theory, the volume of activity is measured by output volume.
Accounting technique, i.e. how measurement results are produced, can differ. In calculation,
three techniques apply: ratio accounting, variance accounting and accounting form.
Differences in the accounting technique do not imply differences in accounting results but
differences in clarity and intelligibility. Variance accounting gives the user most possibilities
for an analysis.
Adjustability of the model. There are two kinds of models, fixed and adjustable. In an adjustable model, the characteristics can be changed, and it can therefore be used to examine the characteristics of the other models. A fixed model cannot be changed; it holds constant the characteristics that the developer has built into it.
Based on the variables used, the productivity models suggested for measuring business can be grouped into three categories as follows:
Productivity index models
PPPV models
PPPR models
In 1955, Davis published a book titled Productivity Accounting in which he presented a
productivity index model. Based on Davis’ model several versions have been developed, yet, the
basic solution is always the same (Kendrick & Creamer 1965, Craig & Harris 1973, Hines 1976,
Mundel 1983, Sumanth 1979). The only variable in the index model is productivity, which implies
that the model cannot be used for describing the production function. Therefore, the model is
not introduced in more detail here.
PPPV is the abbreviation for the following variables, profitability being expressed as a function of
them: Profitability = f (Productivity, Prices, and Volume)
The model is linked to the profit and loss statement so that profitability is expressed as a
function of productivity, volume, and unit prices. Productivity and volume are the variables of a production function, and using them makes it possible to describe the real process. A change in
unit prices describes a change of production income distribution.
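As an illustration of this logic, the following Python sketch decomposes a change in profit into volume, productivity, and income-distribution (price) effects for a deliberately simplified single-product, single-input case, changing one variable at a time in the spirit of the ceteris paribus principle. The functional form, figures, and calculation order used here are illustrative assumptions, not the exact formulation of any of the PPPV models discussed below.

```python
# Simplified, hypothetical PPPV-style decomposition (single product, single input).
# Variables: productivity r = output quantity per unit of input,
#            volume V = input volume, output price p, input unit cost c.

def profit(r, V, p, c):
    """Profit = output value - input cost = p*r*V - c*V."""
    return p * r * V - c * V

period0 = dict(r=1.20, V=1000.0, p=10.0, c=9.0)   # hypothetical base period
period1 = dict(r=1.30, V=1100.0, p=10.5, c=9.2)   # hypothetical review period

# Ceteris paribus: change one variable (group) at a time, here in the order
# volume, productivity, then distribution (prices), and record each effect.
order = ["V", "r", ("p", "c")]
state = dict(period0)
effects = {}
for step in order:
    keys = step if isinstance(step, tuple) else (step,)
    before = profit(**state)
    for k in keys:
        state[k] = period1[k]
    label = ("distribution (prices)" if isinstance(step, tuple)
             else "volume" if step == "V" else "productivity")
    effects[label] = profit(**state) - before

total_change = profit(**period1) - profit(**period0)
for name, value in effects.items():
    print(f"{name} effect: {value:+.1f}")
# The three effects sum exactly to the total change in profit.
print(f"total change in profit: {total_change:+.1f}")
```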
PPPR is the abbreviation for the following function: Profitability = f (Productivity, Price Recovery)
In this model, the variables of profitability are productivity and price recovery. Only the
productivity is a variable of the production function. The model lacks the variable of volume, and
for this reason, the model cannot describe the production function. The American models of
REALST (Loggerenberg & Cucchiaro 1982, Pineda 1990) and APQC (Kendrick 1984, Brayton 1983, Genesca & Grifell 1992, Pineda 1990) belong to this category of models, but since they do not
apply to describing the production function (Saari 2000) they are not reviewed here more
closely.
Characteristics of the PPPV models

Characteristic              Saari                  Courbois & Temple     Gollop                       Kurosawa
Calculation order           1. Distribution        1. Volume             1. Volume                    1. Volume
of the variables            2. Productivity        2. Productivity       2. Productivity              2. Productivity
                            3. Volume              3. Distribution       3. Distribution              3. Distribution
Accounting technique        All changes by         All changes by        Distribution by variance     All changes by
(variance accounting,       variance accounting    accounting form       accounting; productivity     accounting form
ratio accounting, or                                                     by ratio accounting;
accounting form)                                                         volume by accounting form
Adjustability               Adjustable             Fixed                 Fixed                        Fixed
(adjustable or fixed)
PPPV models measure profitability as a function of productivity, volume and income distribution (unit prices). Such models are those of Saari, Courbois & Temple, Kurosawa and Gollop.
The table presents the characteristics of the PPPV models. All four models use the same variables, by which a change in profitability is written into formulas to be used for measurement. These variables are income distribution (prices), productivity and volume. A conclusion is that the basic logic of measurement is the same in all models. The method of implementing the measurements varies to a degree, which is why the models do not produce identical results from the same calculating material.
Even if the production function variables of productivity and volume are in the model, in practice the calculation can also be carried out in compliance with the cost function. This is the case in the models of C & T as well as Gollop. Calculating methods differ in the use of either output volume or input volume for measuring the volume of activity. The former solution complies with the cost function and the latter with the production function. It is obvious that the calculation produces different results from the same material. A recommendation is to apply calculation in accordance with the production function. According to the definition of the production function used in the productivity models of Saari and Kurosawa, productivity means the quantity and quality of output per one unit of input.
Models differ from one another significantly in their calculation techniques. Differences in
calculation technique do not cause differences in calculation results but it is rather a question of
differences in clarity and intelligibility between the models. From the comparison it is evident
that the models of Courbois & Temple and Kurosawa are purely based on calculation formulas. The calculation is based on the aggregates in the profit and loss account. Consequently, it is not well suited to analysis. The productivity model of Saari is purely based on variance accounting known from standard cost accounting. Variance accounting is applied to elementary variables,
that is, to quantities and prices of different products and inputs. Variance accounting gives the
user most possibilities for analysis. The model of Gollop is a mixed model by its calculation
technique. Every variable is calculated using a different calculation technique. (Saari 2006b)
The productivity model of Saari is the only model with alterable characteristics; hence, it is an adjustable model. Comparison with the other models has been feasible by exploiting this characteristic.
Output Measurement
Conceptually speaking, the amount of total production means the same in the national economy and in business, but for practical reasons the modeling of the concept differs in each case. In the national economy, total production is measured as the sum of value added, whereas in business it is measured by the total output value. When output is calculated by value added, all purchase inputs (energy, materials etc.) and their productivity impacts are excluded from the examination. Consequently, the production function of the national economy is written in terms of the value added produced by the labour and capital inputs.
In business, production is measured by the gross value of production, and in addition to the
producer’s own inputs (capital and labor) productivity analysis comprises all purchase inputs
such as raw-materials, energy, outsourcing services, supplies, components, etc. Accordingly, it is
possible to measure the total productivity in business which implies absolute consideration of all
inputs. Productivity measurement in business gives a more accurate result because it analyses all
the inputs used in production. (Saari 2006b)
The productivity measurement based on national accounting has been under development
recently. The method is known as KLEMS, and it takes all production inputs into consideration.
KLEMS is an abbreviation for K = capital, L = labour, E = energy, M = materials, and S = services. In
principle, all inputs are treated the same way. As for the capital input this means that it is
measured by capital services, not by the capital stock.
The problem of aggregating or combining the output and inputs is purely a measurement-technical one, and it is caused by the fixed grouping of the items. In national accounting, data need to be fed under fixed items, resulting in large aggregates of output and input which are not homogeneous, as the measurements presuppose, but include qualitative changes. There is no
fixed grouping of items in the business production model, neither for inputs nor for products,
but both inputs and products are present in calculations by their own names representing the
elementary price and quantity of the calculation material. (Saari 2006b)
For productivity analyses, the value of total production of the national economy, GNP, is
calculated with fixed prices. The fixed price calculation principle means that the prices by which
quantities are evaluated are held fixed or unchanged for a given period. In the calculation
complying with national accounting, a fixed price GNP is obtained by applying the so-called basic
year prices. Since the basic year is usually changed every 5th year, the evaluation of the output
and input quantities remains unchanged for five years. When the new basic-year prices are
introduced, relative prices will change in relation to the prices of the previous basic year, which
has a certain impact on measured productivity.
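As a small illustration of the fixed-price principle, the Python sketch below values the quantities of two hypothetical products with base-year prices, so that only quantity changes, and not price changes, affect the measured output volume. The product names and figures are invented for the example.

```python
# Hypothetical quantities of two products in the base year and a later year.
quantities = {
    "base_year": {"product_a": 100.0, "product_b": 40.0},
    "later_year": {"product_a": 110.0, "product_b": 46.0},
}
base_year_prices = {"product_a": 5.0, "product_b": 12.0}  # held fixed for the period

def fixed_price_output(qty, prices):
    """Value the quantities with (fixed) base-year prices."""
    return sum(qty[item] * prices[item] for item in qty)

out0 = fixed_price_output(quantities["base_year"], base_year_prices)
out1 = fixed_price_output(quantities["later_year"], base_year_prices)
print(f"Fixed-price output, base year : {out0:.1f}")
print(f"Fixed-price output, later year: {out1:.1f}")
print(f"Output volume change          : {(out1 / out0 - 1) * 100:.1f} %")
```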
Old basic-year prices entail inaccuracy in the production measurement. Because of market dynamics, the relative values of outputs and inputs change while the relative prices of the basic year do not react to these changes in any way. Structural changes like this will be evaluated incorrectly. Short life-cycle products will not have any basis of evaluation because they are born and die between two basic years. Gains in productivity obtained by means of elasticity are ignored if old, long-term fixed prices are used. In business models this problem does not exist, because the correct prices are available all the time. (Saari 2006b)
Workplace design can be used and is being used as a strategic instrument for changing
workplace functioning. There is, however, no “architectural determinism”: Although design is
important, design alone cannot determine workplace culture and workplace functioning. Similar
spatial solutions may be deployed in quite different ways.
Workplace design will always be local and needs to have both top-leader support and employee
participation to achieve a fitting design, and to get support for necessary changes.
The foundations of the science of ergonomics appear to have been laid within the context of the
culture of Ancient Greece. A good deal of evidence indicates that Greek civilization in the 5th
century BC used ergonomic principles in the design of their tools, jobs, and workplaces. One
outstanding example of this can be found in the description Hippocrates gave of how a surgeon's
workplace should be designed and how the tools he uses should be arranged. The archaeological
record also shows that the early Egyptian dynasties made tools and household equipment that
illustrated ergonomic principles. It is therefore questionable whether the claim by Marmaras, et al., regarding the origin of ergonomics, can be justified.
In the 19th century, Frederick Winslow Taylor pioneered the "scientific management" method,
which proposed a way to find the optimum method of carrying out a given task. Taylor found
that he could, for example, triple the amount of coal that workers were shoveling by
incrementally reducing the size and weight of coal shovels until the fastest shoveling rate was
reached. Frank and Lillian Gilbreth expanded Taylor's methods in the early 1900s to develop the
"time and motion study". They aimed to improve efficiency by eliminating unnecessary steps
and actions. By applying this approach, the Gilbreths reduced the number of motions
in bricklaying from 18 to 4.5, allowing bricklayers to increase their productivity from 120 to 350
bricks per hour.
Hot Work Permit: Hot work permits are used when heat or sparks are generated by work such as
welding, burning, cutting, riveting, grinding, drilling, and where work involves the use of pneumatic
hammers and chippers, non-explosion proof electrical equipment (lights, tools, and heaters), and
internal combustion engines.
Three types of hazardous situations need to be considered when performing hot work:
The presence of flammable materials in the equipment.
The presence of combustible materials that burn or give off flammable vapors when heated;
and
The presence of flammable gas in the atmosphere, or gas entering from an adjacent area,
such as sewers that have not been properly protected. (Portable detectors for combustible
gases can be placed in the area to warn workers of the entry of these gases.)
Cold Work Permit: Cold work permits are used in hazardous maintenance work that does not
involve “hot work”. Cold work permits are issued when there is no reasonable source of ignition,
and when all contact with harmful substances has been eliminated or appropriate precautions
taken.
Confined Space Entry Permit: Confined space entry permits are used when entering any confined
space such as a tank, vessel, tower, pit or sewer. The permit should be used in conjunction with a
“Code of Practice” which describes all important safety aspects of the operation.
Some employers use special permits to cover specific hazards such as:
extremely hazardous conditions
radioactive materials
PCBs and other dangerous chemicals
excavations
power supplies
1.12 KEYWORDS
Work Design - Work design is the application of socio-technical systems principles and techniques to the humanization of work.
Work System - A work system is a system in which human participants and/or machines perform work using information, technology, and other resources to produce products and/or services for internal or external customers.
Productivity - An economic measure of output per unit of input. Inputs include labor and capital, while output is typically measured in revenues and other GDP components such as business inventories.
Efficiency - Efficiency, in general, describes the extent to which time, effort or cost is well used for the intended task or purpose.
Work System Design - A work system is a system in which human participants and/or machines perform processes and activities using information, technology, and other resources to produce products/services for internal or external customers.
Work System Life Cycle (WSLC) - The dynamic view of a work system starts with the work system life cycle (WSLC) model, which shows how a work system may evolve through multiple iterations of four phases: operation and maintenance, initiation, development, and implementation.
1.13 SUMMARY
A work system is a system in which human participants and/or machines perform processes and
activities using information, technology, and other resources to produce products/services for
internal or external customers. Operational information systems, service systems, projects, supply chains, and ecommerce web sites can all be viewed as special cases of work systems.
The work system approach for understanding systems includes both a static view of a current (or
proposed) system in operation and a dynamic view of how a system evolves over time through
planned change and unplanned adaptations. The dynamic view of a work system starts with the
work system life cycle (WSLC) model. This model encompasses both planned and unplanned change.
Planned change occurs through a full iteration encompassing the four phases, i.e., starting with an
operation and maintenance phase, flowing through initiation, development, and implementation,
and arriving at a new operation and maintenance phase. Unplanned change occurs through fixes,
adaptations, and experimentation that can occur within any phase. The work system method (WSM)
is a systems analysis and design method that business professionals can use for understanding and
analyzing a work system at whatever level of depth is appropriate for their concerns. The work
system method will be different for different industries.
There are nine elements of a work system, namely processes and activities, participants, information, technologies, products and services, customers, environment, infrastructure, and strategies. Pain reduction, stress reduction, and personalization are three important aspects of the relationship between workplace design and the productivity of an organization's employees.
From a different point of view, hot work permits and confined space entry permits are examples of work standards.
UNIT 2 PROBLEM SOLVING TOOLS
Objectives
After going through this unit, you will be able to:
Recognize the importance of problem solving tools in a business environment
Calculate the RPN (Risk Priority Number) for problem solving
State different Quantitative and Qualitative tools for problem solving
Analyze the relationship between machine and operator
Apply different charts in practice to study the relationship between various entities involved in a process.
Structure
2.1 Introduction
2.2 Exploratory Tools
2.3 Recording and Analysis Tools
2.4 Quantitative tools
2.5 Worker Machine Relationship
2.6 Keywords
2.7 Summary
2.1 INTRODUCTION
Problem solving is a fixture in life. You must be able to solve problems. Problems pop up every day.
Sometimes they are small and sometimes they are large. The same can be said in business.
Businesses have plenty of problems and it is up to the employees to find a way to solve those
problems. Again, sometimes simple problem-solving techniques are just not going to work, because some problems require more advanced problem-solving skills and techniques applied in a scientific way.
While most people associate lean with tools and principles such as value stream mapping, one-piece
flow, Kanban, 5-S, Total Productive Maintenance and kaizen events, few people think about the
more mundane aspects of lean. Problem solving is one of the keys to a successful lean
implementation because it empowers all of those involved.
Lean manufacturing has a unique way of solving problems. It does not just look at the effect of the
problem and try to cover it with a Band-Aid. Rather, the root cause of the problem is identified and
the root cause, as well as all contributing factors, is eliminated from the system, process, or
infrastructure in order to permanently solve the problems. What is the difference in these two
approaches? Simple: when you find and rectify the root causes, the problem will be solved forever.
Even other problems occurring due to these root causes will be eliminated in this effort.
It is very clear now that we must find out the root causes of the problems before we think about
rectifying them in lean manufacturing environments. So, how should we do this? What are the tools
available to perform these tasks? Let’s look at what problem solving is about. We’ll begin by asking
the question: “What is a problem?” A good definition of a problem is a variation from a recognized
standard. In other words, you need to know how things should be before you can recognize a
possible cause for them not being that way. After a problem has been recognized, a formal problem-
solving process should be applied.
Collect Evidence of Problem (P2): This activity focuses on obtaining information/data to clearly
demonstrate that the problem does exist. In the case of team problem solving, this should be a quick
exercise since the reliability engineering function must have been looking at data in order to create
the team. The output of this activity will be a list of evidence statements (or graphs) to illustrate that
the problem exists, its size and the chronic nature of it.
Identification of Impacts or Opportunities (P3): This part of the Plan segment focuses on identifying
the benefits if this problem solving is successful. This activity needs to be thought of from two different perspectives, because Project Team work can take the form of either control work, e.g. fixing a problem
that stands in the way of expected results or pure improvement (attempting to take results to a new
level of performance). In each case the output of this activity will be a list of statements. The impact
statements and opportunity statements should be stated in terms of loss of dollars, time, “product”,
rework, processing time and/or morale.
Measurement of Problem (P4): Before problem solving proceeds, it is important for the team to do a
quick check on the issue of how valid or reliable the data is on which the team is making the decision
to tackle the problem. For the parameter(s) that are being used as evidence of the problem, is there
any information known by the team that would question the validity, accuracy or reliability of the
data? This question should be examined regardless of whether we are relying on an instrument, a recorder or
people to record information or data. If the team suspects that there are significant issues that
“cloud” the data, then these measurement problems need to be addressed, fixed and new
measures obtained before proceeding with the other segments of PDCA.
Measure(s) of Effectiveness (P5): At this point, the team needs to identify how they intend to
measure success of their problem-solving efforts. This is one of the most important steps in PDCA
and one that certainly differentiates it from traditional problem solving. The strategy is to agree on
what and how, to obtain the benchmark “before” reading, perform the PDCA activities and re-
measure or obtain the “after” measure. At that point, the team will need to decide whether they
need to recycle through PDCA in order to achieve their pre-stated objective.
Do
Generate Possible Causes (D1): To avoid falling into the mode of solution implementation or trial
and error problem solving, the team needs to start with a “blank slate” and from a fresh perspective
lay out all possible causes of the problem. From this point, the team can use data and its collective
knowledge and experience to sort through the most feasible or likely major causes. Proceeding in
this manner will help ensure that the team will ultimately get at root causes of problems and won’t
stop at the treatment of mere symptoms. The best tool to facilitate this thinking is the Cause and
Effect Diagram done by those people most knowledgeable and closest to the problem.
Broke-Need-Fixing Causes Identified, Worked On (D2): Before proceeding to carry out either an
Action Plan (for Cause Remedies) or an Experimental Test Plan, there are often parts of the process
that are “broke”. This could take on many different forms.
Write Experimental Test or Action Plan (D3/4): Depending upon the type of problem being worked
on, the PDCA strategy will take one of two different directions at this point. The direction is based on
whether it is a “data-based” problem or “data-limited” problem. Shown in the table below is the
distinction between these two strategies and in particular, the difference between an Action Plan
and Experimental Test Plan. Note that in some cases, it will be necessary to use a combination of
Action Plans and Experimental Test Plans. That is, for some cause areas an Action Plan is appropriate
and for other causes within the same problem, carrying out an Experimental Test Plan is the best
route.
Write Action Plan for Cause Remedies (D3): In order to get to the point of writing the Action
Plan, the team needs to brainstorm possible solutions or remedies for each of the “cause
areas” and reach consensus on the prioritized solutions. This work can be carried out as a
team or split into sub-teams. Either way, the entire team will have to reach agreement on
proposed remedies and agree to the Action Plan. The Action Plan will be implemented in the
Check segment.
Write Experimental Test Plan (D4): The Experimental Test Plan is a document which shows
the experimental test(s) to be carried out. This will verify whether a root cause that has been
identified really does impact the dependent variable of interest. Sometimes this can be one
test that will test all causes at once or it could be a series of tests.
Note: If there is a suspicion that there is an interaction between causes, those causes should
be included in the same test.
The Experimental Test Plan should reflect:
Time/length of test
How the cause factors will be altered during the trials
Dependent variable (variable interested in affecting) of interest
Any noise variables that must be tracked
Items to be kept constant
Everyone involved in the Experimental Test Plan(s) should be informed before the test is
run. This should include:
Purpose of the test
Experimental Test Plan (details)
How they will be involved
Key factors to ensure good results
When solutions have been worked up, the team should coordinate trial implementation
of the solutions and the “switch on/off” data analysis technique.
Resources Identified (D5): Once the Experimental Test Plan or the Action Plan is written, it
will be fairly obvious to the team what resources are needed to conduct the work. For
resources not on the team, the team should construct a list of who is needed, for what
reason, the time frame and the approximate amount of time that will be needed. This
information will be given to the Management Team.
Revised PDCA Timetable (D6): At this point, the team has a much better feel for what is to be
involved in the remainder of its PDCA activities. They should adjust the rough timetables
that had been projected in the Plan segment. This information should be updated on the
team Plan, as well as taken to the Management Team.
Management Team Review/Approval (D7): The team has reached a critical point in the PDCA
cycle. The activities they are about to carry out will have obvious impact and consequences
to the department. For this reason, it is crucial to make a presentation to the Management
Team before proceeding. This can be done by the team leader or the entire team. The
content/purpose of this presentation is:
Present team outputs to date
Explain logic leading up to the work completed to date
Present and get Management Team approval for
− Measure of Effectiveness with “before” measure
− Priority causes
− Action Plan (for Cause Remedies) or Experimental Test Plan
− Revised PDCA timetable
Check
Carry out Experimental Test or Action Plan (C1/C2): Depending upon the nature of the problem, the
team will be carrying out either of these steps:
Conduct Experimental Test Plan(s) to test and verify root causes or
Work through the details of the appropriate solutions for each cause area. Then,
through data, verify to see if those solutions were effective.
Carry out Action Plan (C1): In the case of Action Plans, where solutions have been worked up and
agreed to by the team, the “switch on/switch off” techniques will need to be used to verify that the
solutions are appropriate and effective. To follow this strategy, the team needs to identify the
dependent variable – the variable that the team is trying to impact through changes in cause factors.
Carry out Experimental Test Plan (C2): During the Check segment, the Experimental Tests to check
all of the major prioritized causes are to be conducted, data analyzed, and conclusions drawn and
agreed to by the team.
Analyze Data from Experimental or Action Plan (C3): Typically, one person from the team is assigned
the responsibility to perform the analysis of the data from the Test Plan. When necessary, this
person should use the department or plant resource available to give guidance on the proper data
analysis tools and/or the interpretation of outputs. The specific tools that should be used will
depend upon the nature of the Test Plan.
Decisions-Back to Do Stage or Proceed (C4): After reviewing the data analysis conclusions about the
suspected causes or solutions that were tested, the team needs to make a critical decision of what
action to take based on this information.
Implementation Plan to Make Change Permanent (C5): The data analysis step could have been
performed in either of the following contexts:
After the Action Plan (solutions) was carried out, data analysis was performed to see
if the dependent variable was impacted. If the conclusions were favorable, the team
could then go on to develop the Implementation Plan.
The Experimental Test Plan was conducted; data was analyzed to verify causes. If the
conclusions were favorable (significant causes identified), the team must then
develop solutions to overcome those causes before proceeding to develop the
Implementation Plan. (e.g., It was just discovered through the Test Plan that
technician differences contribute to measurement error.)
Force Field on Implementation (C6): Once the Implementation Plan is written, the team
should do a Force Field Analysis on factors pulling for and factors pulling against a successful
implementation – success in the sense that the results seen in the test situation will be
realized on a permanent basis once the solutions are implemented.
Management Team Review/Approval (C7): The team has reached a very critical point in the
PDCA cycle and needs to meet with the Management Team before proceeding. This meeting
is extremely important, because the team will be going forward with permanent changes to
be made in operations. The Management Team not only needs to approve these changes
but also the way in which they will be implemented.
Act
Carry out Implementation Plan (A1): If the team has written a complete, clear and well thought
through Implementation Plan, it will be very obvious what work needs to be done, by whom and
when to carry out the Act segment of the PDCA cycle. The team should give significant attention to ensuring that communications and training are carried out thoroughly, so department members will know
what is changing, why the change is being made and what they need to do specifically to make
implementation a success.
Post-Measure of Effectiveness (A2): After all changes have been made and sufficient time has passed
for the results of these changes to have an effect, the team needs to go out and gather data on all of
the Measures of Effectiveness. The data then needs to be analyzed to see if a significant shift has
occurred.
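How a "significant shift" is judged will depend on the measure being used, but a simple before/after comparison can be scripted. The sketch below uses hypothetical readings and assumes that a two-sample t-test (via the third-party SciPy library) is an appropriate check for the data in question; it is one possible approach, not the prescribed one.

```python
# Hypothetical before/after readings of a Measure of Effectiveness
# (e.g., daily defect counts). A two-sample t-test is only one possible check.
from scipy import stats

before = [42, 39, 44, 41, 45, 40, 43, 44]
after = [35, 33, 36, 34, 37, 32, 36, 35]

t_stat, p_value = stats.ttest_ind(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("A statistically significant shift has occurred.")
else:
    print("No significant shift detected; the team may need to recycle through PDCA.")
```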
Analyze Results vs. Team Objectives (A3): In the previous step, the team looked at whether the
Measure(s) of Effectiveness had been impacted in any significant way by the permanent
implementation of the changes. The team cannot stop here. If the answer to that question is
favorable, then the team needs to verify if the amount of improvement was large enough to meet
the team objective.
Team Feedback Gathered (A4): Once the team decision has been made that the PDCA cycle has been
successfully completed (based on Measure of Effectiveness change), the team needs to present this
information to the Management Team. Before this is done, the team leader needs to gather
feedback from the team. This feedback will be in the form of a questionnaire that all team members
(including the team leader) should fill out. The results will be tallied by the team leader and recorded
on form A3.
Management Team Close-out Meeting (A5): Before disbanding, the team needs to conduct a close-
out meeting with the Management Team. The major areas to be covered in this meeting are:
Wrap up any implementation loose ends
Review Measure of Effectiveness results, compare to team objective
Ensure team documentation is complete and in order
Share team member feedback on team experiences (standardized forms and
informal discussion)
When you have a problem, go to the place where the problem occurred and ask the question
“Why” five times. In this way, you will find the root causes of the problem and you can start
treating them and rectifying the problem.
5-Why analysis is a technique that doesn’t involve data segmentation, hypothesis testing,
regression or other advanced statistical tools, and in many cases can be completed without a
data collection plan. By repeatedly asking the question “Why” at least five times, you can peel
away the layers of symptoms which can lead to the root cause of a problem.
Here is a simple example of applying the 5-Why analysis to determine the root cause of a
problem. Let’s suppose that you received many customer returns for a particular product. Let’s
attack this problem using the five whys:
RPN = Severity × Occurrence × Detection
The second stage is to consider corrective actions which can reduce the severity or occurrence
or increase detection. Typically, you start with the higher RPN values, which indicate the most
severe problems, and work downwards. The RPN is then recalculated after the corrective actions
have been determined. The intention is to get the RPN to the lowest value.
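The calculation itself is simple to automate. The Python sketch below uses invented failure modes and ratings to compute RPN = Severity × Occurrence × Detection for each failure mode and rank them, so that corrective actions can start from the highest values.

```python
# Hypothetical FMEA worksheet: each failure mode is rated 1-10 on
# severity, occurrence, and detection.
failure_modes = [
    {"mode": "Seal leaks",         "severity": 8, "occurrence": 4, "detection": 6},
    {"mode": "Connector corrodes", "severity": 6, "occurrence": 7, "detection": 3},
    {"mode": "Label misprinted",   "severity": 2, "occurrence": 5, "detection": 2},
]

# RPN = Severity x Occurrence x Detection for each failure mode.
for fm in failure_modes:
    fm["rpn"] = fm["severity"] * fm["occurrence"] * fm["detection"]

# Work downwards from the highest RPN when planning corrective actions.
for fm in sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True):
    print(f'{fm["mode"]:<20s} RPN = {fm["rpn"]}')
```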
Conclusion
These three tools can be effectively utilized by natural work teams to resolve most problems
that could confront them as part of their day-to-day activities. None require special skills.
Instead, they rely on native knowledge, common sense, and logic. The combined knowledge, experience and skills of the team are more than adequate for success.
Constructing a Cause-and-Effect Diagram can help your team when you need to
Identify the possible root causes, the basic reasons, for a specific effect, problem, or
condition.
Sort out and relate some of the interactions among the factors affecting a particular
process or effect.
Analyze existing problems so that corrective action can be taken
Step 2 - Using a chart pack positioned so that everyone can see it, draw the SPINE and
create the EFFECT box.
Draw a horizontal arrow pointing to the right. This is the spine.
To the right of the arrow, write a brief description of the effect or outcome which results
from the process.
EXAMPLE: The EFFECT is Poor Gas Mileage
Draw a box around the description of the effect.
Step 3 - Identify the main CAUSES contributing to the effect being studied.
These are the labels for the major branches of your diagram and become categories under
which to list the many causes related to those categories.
Establish the main causes, or categories, under which other possible causes will be
listed. You should use category labels that make sense for the diagram you are
creating. Here are some commonly used categories:
> 3Ms and P - methods, materials, machinery, and people
> 4Ps - policies, procedures, people, and plant
> Environment - a potentially significant fifth category
Write the main categories your team has selected to the left of the effect box, some
above the spine and some below it.
Draw a box around each category label and use a diagonal line to form a branch
connecting the box to the spine.
Step 4 - For each major branch, identify other specific factors which may be the
CAUSES of the EFFECT
Identify as many causes or factors as possible and attach them as sub-branches of
the major branches.
EXAMPLE: The possible causes for Poor Gas Mileage are listed under the
appropriate categories in fig 2.1
Fill in detail for each cause. If a minor cause applies to more than one major cause,
list it under both.
Step 5 - Identify increasingly more detailed levels of causes and continue organizing them
under related causes or categories. You can do this by asking a series of why questions.
EXAMPLE: We’ll use a series of why questions to fill in the detailed levels for one of the
causes listed under each of the main categories.
Q: Why was the driver using the wrong gear?
A: The driver couldn't hear the engine.
Q: Why couldn't the driver hear the engine?
A: The radio was too loud. Or A: Poor hearing
Step 6 - Analyze the diagram. Analysis helps you identify causes that warrant further
investigation. Since Cause-and-Effect Diagrams identify only possible causes, you may
want to use a Pareto Chart to help your team determine the cause to focus on first.
Look at the “balance” of your diagram, checking for comparable levels of detail for most
of the categories.
> A thick cluster of items in one area may indicate a need for further study.
> A main category having only a few specific causes may indicate a need for further
identification of causes.
> If several major branches have only a few sub-branches, you may need to combine
them under a single category.
Look for causes that appear repeatedly. These may represent root causes.
Look for what you can measure in each cause so you can quantify the effects of any
changes you make.
Most importantly, identify and circle the causes that you can act on.
CHECKSHEET
A check sheet is a simple form you can use to collect data in an organized manner and easily
convert it into readily useful information. With a check sheet, you can:
Collect data with minimal effort.
Convert raw data into useful information.
Translate opinions about what is happening into facts about what is actually happening.
In other words, “I think the problem is . . .” becomes “The data says the problem is . . .”
Check Sheet for Defective Items (Date: 23/11/2012)
Work Station   Tally        Total Defects
A              1111 1111    10
B              1111 11      7
C              11           2
D              111          3
E              1111 1       6
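A check sheet of this kind is easy to build up from raw observations. The short Python sketch below is illustrative only; the stream of observations is assumed data, with each entry recording the work station at which a defective item was found.

    # Minimal sketch (illustrative only): building check-sheet totals from raw observations.
    from collections import Counter

    observations = ["A"] * 10 + ["B"] * 7 + ["C"] * 2 + ["D"] * 3 + ["E"] * 6
    totals = Counter(observations)

    print("Work Station   Total Defects")
    for station in sorted(totals):
        print(f"{station:12s}   {totals[station]}")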
Pareto Analysis
Imagine that you've just stepped into a new role as head of department. Unsurprisingly,
you've inherited a whole host of problems that need your attention.
Ideally, you want to focus your attention on fixing the most important problems. But how do
you decide which problems you need to deal with first? And are some problems caused by
the same underlying issue?
Pareto Analysis is a simple technique for prioritizing possible changes by identifying the
problems that will be resolved by making these changes. By using this approach, you can
prioritize the individual changes that will most improve the situation.
Pareto Analysis uses the Pareto Principle – also known as the "80/20 Rule" – which is the
idea that 20% of causes generate 80% of results. With this tool, we're trying to find the 20%
of work that will generate 80% of the results that doing all the work would deliver.
For each problem, identify its fundamental cause. (Techniques such as Brainstorming,
the 5 Whys, Cause and Effect Analysis, and Root Cause Analysis will help with this.)
Step 6: Act
Now you need to deal with the causes of your problems, dealing with your top-priority
problem, or group of problems, first.
Keep in mind that low scoring problems may not even be worth bothering with - solving
these problems may cost you more than the solutions are worth.
Note: While this approach is great for identifying the most important root cause to deal
with, it doesn't consider the cost of doing so. Where costs are significant, you'll need to use
techniques such as Cost/Benefit Analysis, and use IRRs and NPVs to determine which
changes you should implement.
He decides to score each problem by the number of complaints that the center has received
for each one. (In the table below, the second column shows the problems he has listed in
step 1 above, the third column shows the underlying causes identified in step 2, and the
fourth column shows the number of complaints about each problem counted in step 3.)
Pareto Analysis for Service Center
#  Problem (Step 1)                                                  Cause (Step 2)                        Score (Step 3)
1  Phones aren't answered quickly enough.                            Too few service center staff.         15
2  Staff seem distracted and under pressure.                         Too few service center staff.          6
3  Engineers don't appear to be well organized. They need            Poor organization and preparation.     4
   second visits to bring extra parts.
4  Engineers don't know what time they'll arrive. This means         Poor organization and preparation.     2
   that customers may have to be in all day for an engineer
   to visit.
5  Service center staff don't always seem to know what               Lack of training.                     30
   they're doing.
6  When engineers visit, the customer finds that the problem         Lack of training.                     21
   could have been solved over the phone.
Jack then groups problems together (steps 4 and 5). He scores each group by the number of
complaints, and orders the list as follows:
Lack of training (items 5 and 6) – 51 complaints.
Too few service center staff (items 1 and 2) – 21 complaints.
Poor organization and preparation (items 3 and 4) – 6 complaints.
Figure 1: Pareto chart of complaint counts by cause – Lack of Training (51), Too Few Service Center Staff (21), Poor Organization & Preparation (6).
As you can see from figure 1 above, Jack will get the biggest benefits by providing staff with
more training. Once this is done, it may be worth looking at increasing the number of staff in
the call center. It's possible, however, that this won't be necessary: the number of
complaints may decline, and training should help people to be more productive.
By carrying out a Pareto Analysis, Jack is able to focus on training as an issue, rather than
spreading his effort over training, taking on new staff members, and possibly installing a new
computer system to help engineers be more prepared.
Score each alternative for each criterion (again 1-10, with 10 being the best).
For each alternative, multiply each score by the corresponding criterion weight and
add these products in the last column. This is the total score for each alternative.
#   Relative Weights   Score   Score   Sum of Weight x Score / Rank
1   …
2
3
4
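A minimal Python sketch of this weighted-scoring calculation is shown below. The criteria, weights and alternative scores are assumptions made purely for illustration (scores 1-10, with 10 the best).

    # Minimal sketch (illustrative only) of a criteria-based decision matrix.
    criteria_weights = {"Cost": 5, "Quality": 4, "Delivery time": 2}

    alternatives = {
        "Option A": {"Cost": 7, "Quality": 8, "Delivery time": 6},
        "Option B": {"Cost": 9, "Quality": 5, "Delivery time": 8},
    }

    totals = {
        name: sum(criteria_weights[c] * score for c, score in scores.items())
        for name, scores in alternatives.items()
    }

    # The alternative with the highest weighted total ranks first.
    for name, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: total weighted score = {total}")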
Sensitivity Analysis
Sensitivity analysis answers the question: “how sensitive is the end result to changes in various
factors affecting it?” Accordingly, sensitivity analysis can help us to decide between alternate
courses of action based on those factors. For example, one set of data may suggest the validity
of a particular decision but, because of the high sensitivity to changes in one or more factors,
another decision may become more appealing if those factors are considered in the decision-
making process. Sensitivity analysis can be used effectively in combination with other
quantitative methods when input data is questionable.
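A one-factor-at-a-time sensitivity check can be sketched very simply. The Python example below is illustrative only: the profit model, base-case figures and the +/-10% variation are all assumptions, not data from any particular case.

    # Minimal sketch (illustrative only) of a one-factor-at-a-time sensitivity analysis.
    def profit(price, volume, unit_cost, fixed_cost):
        return price * volume - unit_cost * volume - fixed_cost

    base = {"price": 50.0, "volume": 1000, "unit_cost": 30.0, "fixed_cost": 12000.0}
    base_profit = profit(**base)

    for factor in base:
        for change in (-0.10, +0.10):
            varied = dict(base, **{factor: base[factor] * (1 + change)})
            delta = profit(**varied) - base_profit
            print(f"{factor:10s} {change:+.0%} -> profit changes by {delta:+,.0f}")
    # The factors producing the largest swings are the ones the decision is most
    # sensitive to, and therefore the ones worth estimating most carefully.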
Because of the large number of iterations, independent variables, their values and the value
probabilities, Monte Carlo simulation is virtually impossible without computers with substantial
processing power. These have become widely available only in the last 10-15 years, and we are
seeing more and more use of this method of decision making.
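For small problems the idea can still be shown on an ordinary computer. The Python sketch below is illustrative only: it approximates the probability that a two-task project finishes within 10 days, and the triangular-distribution parameters are assumptions chosen purely for the example.

    # Minimal Monte Carlo sketch (illustrative only).
    import random

    def simulate_once():
        task1 = random.triangular(3, 7, 4)   # (low, high, mode) duration in days
        task2 = random.triangular(2, 8, 5)
        return task1 + task2

    trials = 100_000
    on_time = sum(1 for _ in range(trials) if simulate_once() <= 10)
    print(f"Estimated P(finish within 10 days) = {on_time / trials:.3f}")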
Synchronous servicing: the case with a fixed machine cycle time in which the worker loads/unloads
the machine at regular intervals (both worker and machine are utilized simultaneously). Ideally,
several machines can be serviced by one operator (machine coupling).
In an ideal case:
N = (R + m)/R
Where: N = number of machines that can be serviced by one operator
R = loading/unloading time per machine = 1
m = machine cycle time (automatic run time) = 2
Operator vs. Machine Load (multiple activity chart): the operator loads/unloads Machine #1, then Machine #2, then Machine #3 in turn, while each machine runs automatically between servicings.
In real life, the operator will be able to service fewer machines because of w = worker (walk)
time:
N ≤ (R + m)/(R + w)
Also, N is typically non-integer; a decision (typically an economic one, based on the lowest unit
cost) must then be made regarding who (worker or machine) will be idle.
EX: Consider a walk time of 0.1 min (with R = 1.0 and m = 2.0). Also, the operator earns
$10.00/hr and the machine costs $20.00/hr to run. Then N = (R + m)/(R + w) = 3/1.1 = 2.7.
2.6 KEYWORDS
Problem solving tools-Its purpose is to identify, correct and eliminate recurring problems, and it is
useful in product and process improvement. It establishes a permanent corrective action based on
statistical analysis of the problem (when appropriate) and focuses on the origin of the problem by
determining its root causes.
Monte Carlo simulation - A problem-solving technique used to approximate the probability of certain
outcomes by running multiple trial runs, called simulations, using random variables.
Pareto analysis- Pareto Analysis is a simple technique for prioritizing possible changes by identifying
the problems that will be resolved by making these changes.
Why-Why analysis-It is a method of questioning that leads to the identification of the root cause(s)
of a problem
Cause and effect analysis-This diagram is a causal diagram created by Kaoru Ishikawa (1968) that
shows the causes of a specific event. Common uses of the Ishikawa diagram are product design and
quality defect prevention, to identify potential factors causing an overall effect.
PDCA cycle-PDCA (plan–do–check–act or plan–do–check–adjust) is an iterative four-step
management method used in business for the control and continuous improvement of processes
and products.
Multi activity chart - Multiple activity charts illustrate parallel activities and time relationships
between two or more 'resources', i.e. workers, plant/equipment or materials. They are useful where
the interactions between workers, plant/equipment and materials repeat in periodic cycles, e.g.
concreting operations on construction sites.
2.7 SUMMARY
Cause-focused brainstorming and decision-making incorporate both data and experience to identify
and resolve issues. The C&E Matrix uses the voice of the customer (and experience and collected
data) to determine where we should focus our improvement efforts.
The Criteria-based Decision Matrix allows team members to debate and quantify the importance of
selected criteria and identify the best possible option after reviewing how each alternative
addresses the issues and concerns. The 5 Whys can be used individually or as a part of the fishbone
diagram.
UNIT 3 OPERATION ANALYSES
Objectives
After going through this unit, you will be able to:
schedule jobs in an appropriate sequence.
analyze the product design for best suitability according to market.
state application of various tools and techniques used in operations.
design the specifications and tolerances for a product or process.
recognize the critical factors in material handling.
Structure
3.1 Introduction
3.2 Operations purpose
3.3 Part/Product Design
3.4 Specification and Tolerance
3.5 Manufacturing and Process Sequencing
3.6 Setup and Tools
3.7 Material Handling
3.8 Work Design
3.9 Keywords
3.10 Summary
3.1 INTRODUCTION
customer service may be timed or tracked during the observational period to produce statistical
information for the report. Employees are commonly asked to perform tasks as they normally
would and try to ignore the presence of the evaluators. On-site observation may last a day or
several weeks, depending on the size of the company.
Companies and organizations making products and delivering them, for profit or not for profit, rely
on a handful of processes to get their products manufactured properly and delivered on time.
Each of these processes acts as an operation for the company, and to the company each is essential.
That is why managers find operations management so appealing. We begin this section by looking
at what operations are. Operations strategy provides an overall direction that serves as the
framework for carrying out all the organization's functions.
Have you ever imagined a car without gears or a steering wheel? What remains of utmost
importance to you is to drive the vehicle from one location to another for whatever purpose you
wish, but this is only possible when each and every part of the car works together.
Creation
The foundation of every production and operations department is the creation of goods or
services. Traditionally, production included the physical assembly of goods, but production
can also include data-based goods such as websites, analysis services and order processing
services.
Customer Service
In many companies, the production and operations department contains the customer-facing
customer service function that addresses the needs of the customer after the purchase
of goods or services. The support function is usually provided through phone, online or mail-
based support.
Profit
The main function of the production and operations department is to produce a product or
service that creates profit and revenue for the company. Actualization of profit requires
close monitoring of expenses, production methodology and cost of inputs.
Evaluation
Every production and operations department must function as a self-evaluating entity that
monitors the quality, quantity, and cost of goods produced. Analysis usually takes the form
of statistical metrics, production evaluation and routine reporting.
Tasks
Common task functions in a production and operation department include forecasting,
scheduling, purchasing, design, maintenance, people management, flow analysis, reporting,
assembly, and testing.
Fulfillment
Production and operations departments typically function as a fulfillment entity that
ensures the timely delivery of the output from production to customers. Traditionally,
fulfillment is a shipping- and mailing-based function, but it can be electronic for a data-
driven product.
Analysis
Standard analysis functions in a production and operations department include critical path
analysis, stock control analysis, utilization analysis, capacity analysis, just-in-time analysis of
inputs, quality metrics analysis and break-even analysis.
You have heard this often, usually as "Keep it simple!" But for good designers, just keeping it
simple is not enough. If you only keep things simple, you will still have complicated designs.
You must simplify, simplify, simplify. So, what makes a design simple? Can your intuition alone
judge simplicity? Will you know it when you see it? The less thought and the less knowledge a
device requires, the simpler it is. This applies equally to its production, testing and use. Use these
criteria - how much thought and how much knowledge it requires - to judge your designs, best done
by comparing one solution to another. Of course, it may take a lot of thought and knowledge to get
to a design requiring little of either; that is design. Product design is the process of creating a
new product to be sold by a business to its customers.
It is the efficient and effective generation and development of ideas through a process that
leads to new products. In a systematic approach, product designers conceptualize
and evaluate ideas, turning them into tangible products. The product designer's role is to
combine art, science, and technology to create new products that other people can use. Their
evolving role has been facilitated by digital tools that now allow designers to communicate,
visualize, and analyze ideas in a way that would have taken greater manpower in the past.
Product design is sometimes confused with industrial design and has recently become a broad
term inclusive of service, software, and physical product design. Industrial design is concerned
with bringing artistic form and usability, usually associated with craft design, together to mass
produce goods.
Mechanical part design constitutes a vast majority of industrial product-related design and
manufacturing work throughout the different branches of engineering. Often, if not always, it is
the first step in product manufacturing, often coupled with and simultaneously done with
preliminary product-related calculations. For example, when designing a mechanical gearbox
(gear-chain) system, the whole design work consists of a simultaneous sizing and material
selection procedure together with the mechanical design of each gear and axle part in 2D draft as
well as in 3D modeling.
Specifications
Specifications are an important part of the design and construction process. Specifications
are a valuable tool during the design stage, are part of the contract documentation, and
have a key role in the efficiency of project fulfillment.
A specification is:
a tool for conveying the required quality of products (both fabric and services) and
their installation for a particular project; and
a means of drawing together all the relevant information and standards that apply
to the work to be constructed.
The best guarantee that the project specification will meet these high standards is to base it
on a sound master specification.
But these standards are based on ad-hoc conventions collected from years of engineering
practice rather than on mathematical principles. This leads to two major problems: (1)
miscommunication and misinterpretation of design specifications by manufacturers,
inspection departments, or suppliers, and (2) unavailability of full three-dimensional analysis
of tolerance stackups involving all types of dimensional and geometric variations.
Comprehensive 3D analysis of stack-ups is only possible if a mathematical model of such
variations exists. The attempt to “retrofit” an “official” math model to the tolerance
standard has not gone far enough. Researchers have proposed replacing the standard
completely—a proposition unacceptable to industry because the valuable empirical
knowledge contained in the current standard will be lost.
Tolerances
As products have become increasingly sophisticated and geometrically more complex, the
need to better specify regions of dimensional acceptability has become more apparent.
Traditional tolerance schemes are limited to translation (linear) accuracies that get
appended to a variety of features. In many cases bilateral, unilateral, and limiting tolerance
specifications are adequate for both location as well as feature size specification.
Unfortunately, adding a translation zone or region to all features is not always the best
alternative. In many cases in order to reduce the acceptable region, the tolerance
specification must be reduced to ensure that mating components fit properly.
The result can be a higher than necessary manufacturing cost. In this chapter, we will
introduce geometric tolerances, how they are used, and how they are interpreted.
In 1973, the American National Standards Institute, Inc. (ANSI) introduced a system for
specifying dimensional control called Geometric Dimensioning and Tolerancing (GD&T). The
system was intended to provide a standard for specifying and interpreting engineering
drawings and was referred to as ANSI Y14.5 – 1973. In 1982, the standard was further
enhanced and a new standard, ANSI Y14.5 – 1982, was born. In 1994, the standard was further
evolved to include formal mathematical definitions of geometric dimensioning and tolerancing
and became ASME Y14.5 – 1994. A diameter symbol placed in front of the tolerance value
denotes that the tolerance is applied to the whole diameter (a cylindrical tolerance zone).
The meaning of the modifier will be discussed later and is omitted here.
Following the modifier there can be from zero to several datums (Figure 3-1 and Figure 3-2). For
tolerances such as straightness, flatness, roundness, and cylindricity, the tolerance is
internal; no external reference feature is needed. In this case, no datum is used in the
feature control frame.
A datum is a plane, surface, point(s), line, axis, or other information source on an object.
Datums are assumed to be exact, and from them, dimensions like the reference location
dimensions in the conventional drawing system can be established. Datums are used for
geometric dimensioning and frequently imply fixturing location information. The correct use
of Datums can significantly affect the manufacturing cost of a part. Figure 3-3 illustrates the
use of Datums and the corresponding fixturing surfaces. The 3-2-1 principle is a way to
guarantee the work piece is perfectly located in the three-dimensional space. Normally
three locators (points) are used to locate the primary datum surface, two for secondary
surface, and one for tertiary surface. After the work piece is located, clamps are used on the
opposing side of the locators to immobilize the work piece.
Geometric tolerance symbols (ASME Y14.5M-1994 GD&T; ISO 1101 geometric tolerancing; ISO 5458 positional tolerancing; ISO 5459 datums; and others)
Symbolic modifiers are used to clarify implied tolerances (Figure 3-1). There are three
modifiers that are applied directly to the tolerance value: Maximum Material Condition
(MMC), Regardless of Feature Size (RFS), and Least Material Condition (LMC). RFS is the
default, thus if there is no modifier symbol, RFS is the default callout. MMC can be used
to constrain the tolerance of the produced dimension and the maximum designed
dimension. It is used to maintain clearance and fit. It can be defined as the condition of
a part feature where the maximum amount of material is contained. For example,
maximum shaft size and minimum hole size are illustrated with MMC as shown in Figure
3-5. LMC specifies the opposite of the maximum material condition. It is used for
maintaining an interference fit and, in special cases, to restrict the minimum material in order to
eliminate vibration in rotating components. MMC and LMC can be applied only when
both of the following conditions hold:
Two or more features are interrelated with respect to location or form (e.g., two holes).
At least one of the features must refer to size.
Maximum material diameter and least material diameter
When MMC or LMC is used to specify the tolerance of a hole or shaft, it implies that the tolerance
specified is constrained by the maximum or least material condition as well as some other
dimensional feature(s). For MMC, the tolerance may increase when the actual produced feature size
is larger (for a hole) or smaller (for a shaft) than the maximum material size. Because the increase in
the tolerance is compensated by the deviation of the size in production, the hole that finally results
from the combined size error and geometric tolerance error will still be no smaller than the
anticipated smallest hole. Figure 3-6 illustrates the allowed tolerance under the produced hole size.
The allowed tolerance is the actual acceptable tolerance limit; it varies as the size of the produced
hole changes. The specified tolerance is the value stated in the feature control frame.
The theory of sequencing and scheduling is the study of allocating resources over time to perform a
collection of tasks. Such problems occur under widely varying circumstances. Various interpretations
are possible. Tasks and resources can stand for jobs and machines in a manufacturing system,
patients and hospital equipment, classes and teachers, ships and dockyards, programs and computers,
or cities and (traveling) salesmen.
Sequencing and scheduling are concerned with the optimal allocation of resources to activities over
a period of time, which could be finite or infinite. Of obvious practical importance, the field has been
the subject of extensive research since the early 1950s, and an impressive amount of literature has been
created. The terminology ‘flow shop’ is used to describe a serial production system in which all jobs
flow along the same route. A more general case, however, is when some jobs are not
processed on some machines. In other words, these jobs simply pass through the machines on
which they are not processed, without having to spend any time on them. To generalize the
situation, we can assume that all the jobs must flow through all the machines but have zero
processing time at the machines that are not in their routing matrix. The static flow shop-
sequencing problem denotes the problem of determining the best sequence of jobs on each
machine in the flow shop. The class of shops in which all the jobs have the same sequence on all the
machines is called a 'permutation' flow shop. Thus, in this case, the problem becomes that of
sequencing the jobs only on the first machine, due to the addition of the extra constraint of the same
job sequence at each machine. Ironically, this problem is a little harder to address than the more general
case, even though this might seem as a small part (sub problem) of the general case. Various
objectives can be used to determine the quality of the sequence, but most of the research considers
the minimization of make span (i.e., the total completion time of the entire list of jobs) as the
primary objective. Other objectives that can be found in the literature of flow shops are flow time
related (e.g., minimal mean flow time), due-date related (e.g., minimal maximum lateness), and cost
related (e.g., minimal total production cost).
While the Gantt charts are useful for tracking job loading, they do not have the sophistication to
help management determine what job order priorities should be. Sequencing is a process that
determines the priorities job orders should have in the manufacturing process. Sequencing results in
priority rules for job orders.
The basic function of priority rules is to provide direction for developing the sequence in which jobs
should be performed. This assists management in ranking job loading decisions for manufacturing
centers. There are several priority rules which can be applied to job loading. The most widely used
priority rules are:
DD - Due Date of a job: the job having the earliest due date has the highest priority.
FCFS - First Come, First Served: the first job reaching the production center is processed first.
LPT - Longest Processing Time: jobs having the longest processing time have the highest priority.
PCO - Preferred Customer Order: a job from a preferred customer receives the highest priority.
SPT - Shortest Processing Time: the job having the shortest processing time has the highest priority.
EXAMPLE: Using the data contained in the table Job Processing Data, it is necessary to schedule
orders according to the priority rules of Due Date (DD), First Come, First Served (FCFS), Longest
Processing Time (LPT), Preferred Customer Order (PCO), and Shortest Processing Time (SPT).
Job Processing Data
Job   Preferred Customer Status (1 = Highest)   Processing Time (days)   Due Date (days)
A     3                                          7                        9
B     4                                          4                        6
C     2                                          2                        4
D     5                                          8                        10
E     1                                          3                        5
(Source: Jae K. Shim, Joel G. Siegel, Abraham J. Simon, Business and Economics, 2004)
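The sequences implied by each rule can be generated mechanically from the table. The Python sketch below is illustrative; for FCFS it assumes that the listing order A-E is also the order of arrival, since the table gives no arrival times.

    # Minimal sketch: ordering the jobs in the table above under each priority rule.
    jobs = {  # job: (preferred-customer status, processing time in days, due date in days)
        "A": (3, 7, 9),
        "B": (4, 4, 6),
        "C": (2, 2, 4),
        "D": (5, 8, 10),
        "E": (1, 3, 5),
    }

    rules = {
        "DD  ": sorted(jobs, key=lambda j: jobs[j][2]),                # earliest due date first
        "FCFS": list(jobs),                                            # assumed arrival order
        "LPT ": sorted(jobs, key=lambda j: jobs[j][1], reverse=True),  # longest processing time first
        "PCO ": sorted(jobs, key=lambda j: jobs[j][0]),                # preferred customer (1 = highest)
        "SPT ": sorted(jobs, key=lambda j: jobs[j][1]),                # shortest processing time first
    }

    for rule, sequence in rules.items():
        print(rule, "->", " ".join(sequence))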
The critical ratio gives the highest priority to jobs that must be done to maintain a predetermined
shipping schedule. Jobs that are falling behind a shipping schedule receive a ratio of less than 1,
while a job receiving a critical ratio greater than 1 is ahead of schedule and is less critical. A job
receiving a critical ratio score of 1.0 is precisely on schedule.
The critical ratio is calculated by dividing the remaining time until the date due by the remaining
process time using the following formula:
critical ratio = remaining time / remaining process time = (due date - today's date) / days of
remaining process time
EXAMPLE: On day 16, four jobs, A, B, C, and D, are on order for Ferguson's Kitchen Installation
Service:
Jobs on Order
Job   Due Date   Days of Remaining Process Time
A     27         8
B     34         16
C     29         15
D     30         14
Using this data, the critical ratios and priority order are computed.
Job C has a critical ratio less than one, indicating it has fallen behind schedule; therefore, it gets the
highest priority. Job D is exactly on schedule, while jobs B and A have progressively higher critical
ratios, indicating they have some slack time. This gives them correspondingly lower priorities.
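The ratios themselves follow directly from the formula above; a short Python check, using only the data in the Jobs on Order table:

    # Minimal sketch: critical ratios for the four jobs above, computed on day 16.
    today = 16
    jobs = {"A": (27, 8), "B": (34, 16), "C": (29, 15), "D": (30, 14)}  # (due date, remaining process time)

    ratios = {job: (due - today) / remaining for job, (due, remaining) in jobs.items()}

    # The lowest critical ratio gets the highest priority.
    for job, ratio in sorted(ratios.items(), key=lambda kv: kv[1]):
        print(f"Job {job}: critical ratio = {ratio:.2f}")

This confirms the ordering described above: C (about 0.87), D (1.00), B (about 1.13), A (about 1.38).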
Job Processing Centre 1 Processing Centre 2
A 11 3
B 6 8
C 9 4
D 2 9
E 10 6
(Source: Jae K. Shim, Joel G. Siegel, Abraham J. Simon, Business and Economics, 2004)
Now it is necessary to sequence the jobs starting with the smallest processing time. The smallest job
is job D in Processing Center 1. Since it is in Processing Center 1, it
is sequenced first and then eliminated from further consideration.
The second smallest processing time is A in Processing Center 2. It is placed last, since it is at
Processing Center 2, and eliminated from further consideration.
D  _  _  _  A
The next smallest processing time is job C in Processing Center 2. It is placed next to last.
D  _  _  C  A
For the next smallest processing time, there is a tie between job B in Processing Center 1 and job E in
Processing Center 2. B is placed in the next position after job D, since its smallest time is at Processing
Center 1; job E, whose smallest time is at Processing Center 2, fills the remaining position, directly
after B.
D  B  E  C  A
The resulting sequential processing times are:
Table 3.5 Sequential processing times (sequence D-B-E-C-A)
Processing Centre 1:   2   6   10   9   11
Processing Centre 2:   9   8   6    4   3
In Processing Center 2 the five jobs are completed in 41 hours, and there are 11 hours of idle time.
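The stepwise procedure used above is commonly known as Johnson's rule for two-machine flow shops. A minimal Python sketch, using the processing times from the table, reproduces the sequence and checks the completion and idle times quoted above:

    # Minimal sketch of the two-machine sequencing rule (Johnson's rule) for the example above.
    jobs = {"A": (11, 3), "B": (6, 8), "C": (9, 4), "D": (2, 9), "E": (10, 6)}

    front, back, remaining = [], [], dict(jobs)
    while remaining:
        job = min(remaining, key=lambda j: min(remaining[j]))  # smallest time anywhere
        time1, time2 = remaining.pop(job)
        if time1 <= time2:
            front.append(job)     # smallest time at Centre 1 -> schedule as early as possible
        else:
            back.insert(0, job)   # smallest time at Centre 2 -> schedule as late as possible

    sequence = front + back
    print("Sequence:", " ".join(sequence))        # D B E C A

    done1 = done2 = idle2 = 0
    for job in sequence:
        time1, time2 = jobs[job]
        done1 += time1
        start2 = max(done1, done2)
        idle2 += start2 - done2
        done2 = start2 + time2
    print("Completion time:", done2, "hours; idle time at Centre 2:", idle2, "hours")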
SMED
SMED is the term used to represent the Single Minute Exchange of Die or setup time that can be
counted in a single digit of minutes. SMED is often used interchangeably with “quick
changeover”. SMED and quick changeover are the practice of reducing the time it takes to
change a line or machine from running one product to the next. The need for SMED and quick
changeover programs is more popular now than ever due to increased demand for product
variability, reduced product life cycles and the need to significantly reduce inventories.
KANBAN
Set up your Kanban system. (Kanban is the Japanese word for "card," "ticket," or "sign," and is a
tool for managing the flow and production of materials in a Toyota-style "pull" production
system.) You then plug in the andon, which is a visual control device in a production area that
alerts workers to defects, equipment abnormalities, or other problems using signals such as
lights, audible alarms, etc.
KAIZEN
Kaizen is the lean manufacturing term for continuous improvement and was originally used to
describe a key element of the Toyota Production System. In use, Kaizen describes an
environment where companies and individuals proactively work to improve the manufacturing
process.
OVERALL EQUIPMENT EFFECTIVENESS (OEE)
OEE is an abbreviation for the manufacturing metric Overall Equipment Effectiveness. OEE
considers the various subcomponents of the manufacturing process – Availability, Performance
and Quality. After the various factors are considered the result is expressed as a percentage. This
percentage can be viewed as a snapshot of the current production efficiency for a machine, line,
or cell.
OEE= Availability x Performance x Quality
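A worked example helps to see how the three factors combine. The Python sketch below is illustrative only; all of the shift figures are assumed, and the usual definitions of each factor (run time over planned time, ideal output over actual run time, good pieces over total pieces) are used.

    # Minimal sketch (illustrative only) of an OEE calculation for one shift.
    planned_time = 480        # minutes of planned production time
    downtime     = 60         # minutes lost to breakdowns and changeovers
    ideal_cycle  = 0.5        # minutes per piece at the rated speed
    total_pieces = 700
    good_pieces  = 665

    run_time     = planned_time - downtime
    availability = run_time / planned_time                  # 0.875
    performance  = (ideal_cycle * total_pieces) / run_time  # about 0.833
    quality      = good_pieces / total_pieces               # 0.95

    oee = availability * performance * quality
    print(f"OEE = {oee:.1%}")   # roughly 69%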
Distance material is moved
Object itself - Is it easy to grasp? Does it have handles?
Personal Protective Equipment used – Does it fit properly?
Adjustability of work surfaces
Awkward postures
Frequency of lift
By identifying hazards, problems can be resolved, and efforts can be directed where they are
needed most. Potential problems can also be identified and handled prior to becoming worse.
During the assessment step of the ergonomics program, it is important to involve employees.
After all, they are one of the beneficiaries of a safe workplace. Obtaining worker input allows for
a feeling of importance in the ergonomics process, and enhanced worker motivation. Employees
can participate in the hazard analysis step by completing surveys regarding discomfort felt from
performing the job. Once the risk factors and hazards have been identified, methods to reduce
or eliminate these risks can be developed. These methods are known as controls. Development
of controls is the second element of an ergonomics program.
Development of Controls
Controls can be one of three types: engineering, administrative, or personal protective
equipment. Each has its positive and negative points. When choosing an appropriate control for
a task, the activity must be analyzed as discussed previously and the problems causing the injury
risk can be alleviated using one of the three types of controls.
Engineering Controls
Engineering controls reduce or eliminate hazardous conditions by designing the job to
accommodate the employee. The design of the job matches it to the employee, so the job demands
do not pose stress to the worker. Jobs can be redesigned to reduce the physical demands
needed while performing the task. Suggested Engineering controls to be used with a material
handling task include the following:
Reduced weight lifted
Decreased distance traveled
Addition of handles to boxes
Adjustable work surfaces
Lifting/Carrying Aids- Hoists, Carts, Conveyors
Engineering controls, while usually more expensive than other controls, are the preferred
approach to preventing and controlling injuries due to material handling tasks. These permanent
design changes allow for increased worker safety while performing the task.
Administrative Controls
Administrative controls deal with work practices and policies. Changes in job rules and
procedures such as rest breaks, or worker rotation schedules are examples of administrative
controls. Administrative controls can be helpful as temporary fixes until engineering controls can
be established.
poor access or inadequate clearance and excessive reach,
displays that are difficult to read and understand, and
controls that are confusing to operate or require too much force.
Therefore, furniture that is selected should be suitable for the types of tasks performed and be
adaptable to multi-purpose use. Office workstations must be designed carefully to
meet the needs of the staff and to accomplish the goals of the facility.
Design objectives should support humans to achieve the operational objectives for which they
are responsible. There are three goals to consider in human-centered design.
Enhance human abilities
Overcome human limitations
Foster user acceptance.
To achieve these objectives, there are several key elements of ergonomics in the office to consider.
Recommendations
To give departments guidance in selecting office furniture and setting up workstations, the
following guidelines are from the American National Standards Institute and the Environmental
Health and Safety Center. Included are diagrams and a checklist to guide you through the
process.
Keyboard Tray Adjustment Procedure
ERGONOMIC CHAIR CHECKLIST
1. Chair has wheels or castors suitable for the floor surface Yes No
5. Chair height is appropriate for the individual and the work surface height Yes No
6. Chair is adjusted so there is no pressure on the backs of the legs, and feet are flat on Yes No
the floor or on a foot rest
9. Footrests are used if feet do not rest flat on the floor Yes No
MONITOR CHECKLIST
1. Top surface of the keyboard space bar is no higher than 2.5 inches above the work surface Yes No
2. During keyboard use, the elbow forms an angle of 90-100 degrees with the upper arm almost Yes No
vertical, the wrist is relaxed and not bent, and wrist rests are available
3. If used primarily for text entry, keyboard is directly in front of the operator Yes No
4. If used primarily for data entry, keyboard is directly in front of the keying hand Yes No
8. Images on the screen are sharp, easy to read and do not flicker Yes No
3.9 KEYWORDS
Gantt Charts- A Gantt chart is a type of bar chart, developed by Henry Gantt in the 1910s, that
illustrates a project schedule. Gantt charts illustrate the start and finish dates of the terminal
elements and summary elements of a project.
Specification or Tolerance-A specification (often abbreviated as spec) may refer to an explicit set
of requirements to be satisfied by a material, design, product, or service
3.10 SUMMARY
Operational analysis is conducted in order to understand and develop operational processes that
contribute to overall performance and the furthering of company strategy.
Ideally, a thorough operational analysis should seek to examine a number of functional areas
including strategic planning, customer and business results, financial performance, and quality of
innovation. The objective of this process should principally be to reassess existing processes and
determine how objectives might be better met and how costs could be saved. Operational analysis is
an important part of a business's self-assessment. Tools such as SMED, Kaizen, Kanban and OEE
are effective in enhancing the operational performance of a firm.
UNIT 4 DESIGN OF MANUAL WORK, WORKPLACE, EQUIPMENT
& TOOLS
Objectives
After going through this unit, you will be able to:
identify the relationship between equipment design, tool design and performance of
worker.
design a suitable workplace for an organization.
analyze the various situations regarding work holders, fixtures, hand tools, portable power
tools.
discuss the impact of repetitive motion injuries on an individual.
arrange bins and drop delivery to reduce reach and move times.
Structure
4.1 Introduction
4.2 Anthropometry & Design
4.3 Principles of work Design
4.4 Principles of work place design
4.5 Principles of machine and equipment design
4.6 Principles of Tool Design
4.7 Cumulative Trauma Disorders (CTD)
4.8 Motion Economy & Motion Study
4.9 Keywords
4.10 Summary
4.1 INTRODUCTION
The musculoskeletal system is composed of osseous (bone) tissue and muscle tissue. Both are
essential parts of the complex structure that is the body. The skeletal system has a major role in the
total structure of the body, but bones and joints alone cannot produce movement. Together,
skeletal tissue and muscle tissue are important parts of the functioning of the body as a whole.
Because of their structural and functional interrelationships, the muscular and skeletal systems are
often considered as a composite system, the musculo-skeletal system. Organs of the skeletal system
include the numerous skeletal elements (cartilages and bones) consisting of cartilaginous and
osseous connective tissue (CT) and associated CT coverings.
In humans, long and short skeletal elements usually form first as cartilage in the embryo and are then
replaced by bone tissue during the process of endochondral bone formation. Flat and irregular
elements may form as cartilage and remain as cartilage in the adult, or may form as bone during the
process of intramembranous bone formation.
Organs of the skeletal muscular system are the numerous skeletal muscles comprised of striated
skeletal muscle cells (parenchyma tissue) and associated CT coverings.
The primary guideline is to design the workplace to accommodate most individuals with regard to
the structural size of the human body. Anthropometry is the science of measuring the human body
(weight, length).
Selected Body Dimensions
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
The kth percentile is the value such that k% of the data values are below it and (100 - k)% of the data
values are above it. To use the standard normal distribution, approximate by z = (x - μ)/σ, with
μ the mean value and σ the standard deviation.
• 50th percentile = μ: half of the males are shorter than 68.3 in (1.73 m) while the other half are taller.
Z Value Table
kth percentile   10 or 90   5 or 95
z value          ±1.28      ±1.645
Mean value of male height, μ = 68.3 in, Stand. Deviation, σ = 2.71 in
95th percentile male height = 68.3 +1.645 x (2.71) = 72.76 in (≈185 cm)
5th percentile male height = 68.3 - 1.645 x (2.71) = 63.84 in (≈162 cm)
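The same calculation is easy to repeat for any percentile in the z-value table. A short Python sketch, using only the mean and standard deviation given above:

    # Minimal sketch reproducing the percentile calculations above
    # (male stature: mean 68.3 in, standard deviation 2.71 in).
    mu, sigma = 68.3, 2.71
    z_values = {5: -1.645, 50: 0.0, 95: +1.645}   # from the z-value table above

    for k, z in z_values.items():
        height_in = mu + z * sigma
        print(f"{k}th percentile male height = {height_in:.2f} in ({height_in * 2.54:.0f} cm)")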
Doorway or an entry opening into a storage tank should be designed for the maximum
individual: 95th percentile male stature or shoulder width.
However, this is not true in military aircraft or submarines since space is expensive!
Reaches to a brake pedal or control knob are designed for the minimum individual: 5th
percentile female leg or arm length – 95% of all females and practically all males will have
longer reach.
Practical considerations
Industrial designer should consider the legal rules or advices
Special accessibility guidelines (Indian Department of Justice, 1991): entryways into
buildings, assembly area, ramps, elevators, doors, water fountains, lavatories, restaurants,
alarms, telephones, etc.
It is very useful to build a full-scale mockup and ask the end users to evaluate it before mass
production.
Seating arrangement in training room
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Determine the body dimensions critical for the design:- Sitting height, eye height
Define the population being served:- Adult males and females
Select a design principle and the percentage of the population to be accommodated:-Design
for extremes and accommodate 95% of the population -Allow 5th percentile female sitting
behind a 95th percentile male.
Find the appropriate anthropometric values from Table 4.1:
- 5th percentile female eye height, sitting: 26.6 in (67.5 cm)
- 95th percentile male sitting height: 38.1 in (96.7 cm)
A rise height of 38.1 - 26.6 = 11.5 in (29.2 cm) is therefore necessary; if this is too much and the
resulting rise too steep, it can be decreased.
Example of designing work place: Gender Basis
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
4.4 PRINCIPLES OF WORK PLACE DESIGN
Determine work surface height by elbow height
Upper arms hanging down naturally
Elbows are flexed at 90 degree
Forearms are parallel to the ground
If the work surface is too high → shoulder fatigue
If the surface is too low → back fatigue
Adjust the work surface height based on the task being performed
Fine assembly → raise the work surface up to 8 in (20 cm) to bring the details closer to
the optimal line of sight
Light, normal assembly
Rough assembly involving the lifting of heavy parts → lower the work surface up to 8 in
(20 cm) to take advantage of the stronger trunk muscles
Surface Height Based on the Task Being Performed
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Posture of the Spine When Sitting and Standing
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Arms: When the operator's hands are on the keyboard, the upper arm and forearm should form a right
angle; hands should be in line with the forearms; if the hands are angled up from the wrist, try lowering
or tilting the keyboard downward; optional arm rests should be adjustable.
Backrest: Adjustable for occasional variations; shape should match contour of lower back,
providing even pressure and support.
Posture: Sit all the way back into the chair for proper back support; the back and neck should be
comfortably erect; knees should be slightly lower than hips; do not cross your legs or shift your weight
to one side; give joints and muscles a chance to relax; periodically, get up and walk around.
Desk: Thin work surface to allow leg room and posture adjustments; adjustable surface height
preferable; table should be large enough for books, files, telephone while permitting
different positions of screen, keyboard, and mouse pad.
Telephone: Cradling telephone receiver between head and shoulder can cause muscle strain;
headset allows head, neck to remain straight while keeping hands free.
Document Holder: Same height and distance from user as the screen, so eyes can remain
focused as they look from one to the other.
Avoiding Eye Strain: Get glasses that improve focus on the screen; measure the viewing distance before
visiting the eye doctor.
Try to position screen or lamps so that lighting is indirect; do not have light shining
directly at screen or into eyes
Use a glare-reducing screen
Periodically rest eyes by looking into the distance
Properly Adjusted Work Station
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
The work station height should be adjustable so that the work can be performed
efficiently either standing or sitting
• The human body is not designed for long periods of sitting
• Provide a sit/stand stool so that the operator can change postures easily
– Needs height adjustability
– Large base of support
Posture Flexibility
(Source: A guide to ergonomics by C. Berry, 2008)
Anti-Fatigue Mats for A Standing Operator
(Source: A guide to ergonomics by C. Berry, 2008)
Locate all tools and materials within the normal working area
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Normal and Maximum working areas in the horizontal plane for women (for men multiply
by 1.09)
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Normal and maximum working areas in vertical plane for women (for men, multiply by
1.09)
Fix locations for all tools and materials to permit the best sequence
This eliminates or minimizes the short hesitations required to search for and select the
objects needed to do the work (Search and Select: ineffective therbligs)
Use gravity bins and drop delivery to reduce reach and move times
Reach (RE) and move (M) are directly proportional to the distance that the hands
must move
Gravity bins eliminate long reaches to get the supplies
Gravity chutes allow the disposal of completed parts within the normal area
eliminating long moves
Gravity Bins and Drop Delivery to Reduce Reach and Move Times
(Source: A guide to ergonomics by C. Berry, 2008)
Use Systematic Layout Planning (SLP) or other techniques to develop alternative
layouts
Locate all control devices for best operator accessibility and strength capability
• Frequently used controls should be positioned between elbow and shoulder height
• Seated operators can apply maximum force to levers located at elbow level, standing
operators at shoulder height
• Hand wheel and crank diameters depend on the torque to be expended and the position
• The diameters of knobs should be increased as greater torques are needed
• Control response ratio (C/R): is the amount of movement in a control divided by amount of
movement in the response. Low ratio indicates high sensitivity, high ratio means low
sensitivity
• Control resistance: Important for providing feedback to the operator. It can be
displacement with no resistance, force with no displacement or incorporating features of
both
• Size coding is used principally where the controls cannot be seen by the operators
Generalized illustrations of low and high control-response ratios (C/R ratios) for lever and rotary
controls; the C/R ratio as a function of display movement
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Ensure proper compatibility between controls and displays
• Compatibility is defined as the relationship between controls and displays that is consistent
with human expectations
• Affordance/Intuitive: the perceived property results in the desired action
– A door with a handle to pull to open / a door with a plate to push to open
• Mapping: the clear relationship between controls and responses
– Controls on the stoves, clockwise movement to increase
• Feedback: so that the operator knows that the function is accomplished
Use a power grip for tasks requiring force and pinch grips for tasks requiring precision
• In a power grip the handle of the tool, whose axis is perpendicular to the forearm, is held by
the partly flexed fingers and the palm. Opposing pressure is applied by the thumb, which
slightly overlaps the middle finger.
– The force parallel to forearm: sawing
– The force at an angle to the forearm: hammering
– The force acting on a moment arm: using a screwdriver
• The pinch grip is used for control and precision. The item is held between the distal ends of
one or more fingers and the opposing thumb.
Type of Grip
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
Avoid prolonged static muscle loading
• Tools held for extended periods result in fatigue, reduced work capacity, and soreness
Design tools so that they can be used by either hand or by most individuals
• Alternating hands allows reduction of the local muscle fatigue
• 90% of the population is right‐handed, 10% are left‐handed
• Female grip strength typically ranges from 50 to 67% of male strength with a smaller grip
span
• The best solution is to provide a variety of tool sizes
Use the strongest working fingers: the middle finger and the thumb
• Index finger is usually the fastest but not the strongest
• When a relatively heavy load is involved, use the middle finger, or a combination of middle
finger and the index finger
3-in (7.6cm) grip span for two handled tools
(Source: Occupational Ergonomics: Theory and Applications Amit Bhattacharya, James D. McGlothlin, 2012)
• A tradeoff between increased safety and decreased performance with gloves must be
considered
4.7 CUMULATIVE TRAUMA DISORDERS (CTD)
Cumulative trauma disorders (CTDs) are a collection of a variety of problems including repetitive
motion disorders, carpal tunnel syndrome, tendinitis, ganglionitis, tenosynovitis, and bursitis.
Four major work-related factors:
Excessive force,
Awkward or extreme joint motions
High repetition,
Duration of work
Most common symptoms associated with CTD:
– Pain
– Joint movement restriction
– Soft tissue swelling
If the nerves are affected, sensory responses and motor control may be impaired
If left untreated, CTD can result in permanent disability
4.8 MOTION ECONOMY & MOTION STUDY
Motion study involves the analysis of the basic hand, arm, and body movements of workers as they
perform work.
Work design involves the methods and motions used to perform a task.
This design includes
the workplace layout and environment
The tooling and equipment (e.g., work holders, fixtures, hand tools, portable power
tools, and machine tools). Work design is the design of the work system.
Any manual task is composed of work elements, and the work elements can be further
subdivided into basic motion elements. We will define the basic motion elements and show how they
can be used to analyze work. Frank Gilbreth was the first to catalog (list) the basic motion elements.
Therbligs are the basic building blocks of virtually all manual work performed at a single
workplace and consisting primarily of hand motions. – A list of Gilbreth’s 17 therbligs is
presented along with the letter symbol used for each as well as a brief description. With some
modification, these basic motion elements are used today in several work measurement
systems, such as Methods - Time Measurement (MTM) and the Maynard Operation Sequence
Technique (MOST). Methods analysis at the therblig level seeks to eliminate or reduce
ineffective therbligs. Some of the motion element names and definitions have been revised.
Therbligs
Transport empty (TE) – reach for an object
Grasp (G) – grasp an object
Transport loaded (TL) – move an object with hand and arm
Hold (H) – hold an object
Release load (RL) – release control of an object
Use (U) – manipulate a tool
Pre-position (PP) – position object for next operation
Position (P) – position object in defined location
Assemble (A) – join two parts
Disassemble (DA) – separate multiple parts that were previously joined
Search (Sh) – attempt to find an object using eyes or hand
Select (St) – choose among several objects in a group
Plan (Pn) – decide on an action
Inspect (I) – determine quality of object
Unavoidable delay (UD) – waiting due to factors beyond worker control
Avoidable delay (AD) – worker waiting
Rest (R) – resting to overcome fatigue
Each therblig represents time and energy spent by a worker to perform a task. If the task is
repetitive, of relatively short duration, and will be performed many times, it may be appropriate
to analyze the therbligs that make up the work cycle as part of the work design process. The
term micro motion analysis is sometimes used for this type of analysis.
4.9 KEYWORDS
Work Place- The workplace is the physical location where someone works. Such a place can range
from a home office to a large office building or factory.
Equipment and Tool Design-The applied science of equipment design, as for the workplace,
intended to maximize productivity by reducing operator fatigue and discomfort.
Cumulative Trauma Disorders (CTD)- A cumulative trauma disorder is a condition where a part of the
body is injured by repeatedly overusing or causing trauma to that body part
4.10 SUMMARY
Disorders of the musculoskeletal system are a major cause of absence from work and lead,
therefore, to considerable cost for the public health system. Health problems arise if the mechanical
workload is higher than the load-bearing capacity of the components of the musculoskeletal system
(bones, tendons, ligaments, muscles, etc.). Apart from the mechanically-induced strain affecting the
locomotor organs directly, psychological factors such as time pressure, low job decision latitude or
insufficient social support can augment the influence of mechanical strain or may induce
musculoskeletal disorders by increasing muscle tension and affecting motor coordination.
UNIT 5 DESIGN OF WORK ENVIRONMENT
Objectives
After going through this unit, you will be able to:
control the heat in real workplace.
realize the impact of noise in manufacturing industry.
analyze the importance of ventilation in manufacturing environment.
implement OHSAS 18001:2004 in a manufacturing environment.
differentiate between temporary noise induced hearing loss and permanent noise induced
hearing loss.
Structure
5.1 Introduction
5.2 Impact of Temperature
5.3 Role of Ventilation
5.4 Noise and Its Impact
5.5 Lighting
5.6 OHSAS 18001:2004
5.7 Keywords
5.8 Summary
5.1 INTRODUCTION
The physical aspects of a workplace environment can have a direct impact on the productivity,
health and safety, comfort, concentration, job satisfaction and morale of the people within it.
Important factors in the work environment that should be considered include building design and
age, workplace layout, workstation set-up, furniture and equipment design and quality, space,
temperature, ventilation, lighting, noise, vibration, radiation, air quality.
When assessing the workplace environment, consideration should be given to individual human
characteristics such as age, sex, experience, physical stature etc., and how well these human
characteristics match the physical environment. Appropriate design of workplace environments will
ensure that they accommodate a broad variety of human characteristics.
The work environment should satisfy the physical and mental requirements of the people who work
within it. The necessary adjustments to the work area, in terms of the heights and angles of furniture
and equipment, should be made for the comfort and safety of each person. Temperature,
ventilation, noise and lighting all have an impact on workers' health and safety in factories and
require a variety of control mechanisms.
Temperature in the workplace
A worker’s ability to do his/her job is affected by working in hot environments. One of the most
important conditions for productive work is maintaining a comfortable temperature inside the
workplace. Of course, the temperature inside the factory varies according to the season and several
methods can be used to address the problem. There are two main ways in which heat (or cold) gets
into the factory:
Directly: through windows, doors, air bricks etc.;
Indirectly: by conduction through the actual fabric of the building namely the roof, walls, and
floor. These warm up through the day as the sun shines and the heat is transferred to the
internal environment often making it hot and sticky for the workers.
There are several measures that management can take to try to reduce the sun’s heat from entering
the factory. These include:
ensuring that the external walls are smooth in texture and painted in a light colour to help to
reflect the heat.
Improving heat insulation of walls and ceilings (investigate the possibility of dry lining walls
or adding an insulated ceiling below the roof. Although this is an expensive option it should
be considered in the plans for all new buildings and local, cheap materials should be used as
far as possible);
Ensuring that the factory is shaded as far as possible by natural means (trees, bushes,
hedges etc) or with shades on windows, doors etc., (note that any shades should not inhibit
access/egress for safety reasons). In very expensive offices, you can see that the windows
are darkened or have sun-reflecting glass. This is not an option for garment factories
because of expense – a simple, cheap option is to whitewash the top part of windows.
Safety problems                        Health problems
Fatigue and dizziness                  Heat stress/strain (distress)
Sweating palms (become slippery)       Heat cramps
Fogging of safety glasses              Heat exhaustion / heat stroke
Possible burns                         Heat rash (prickly heat)
Lower performance/alertness            Fainting (syncope) / increasing irritability
The safety problems tend to be more obvious than the health issues. For example, there is
always the risk of burns for workers in the ironing section through accidental contact with hot
objects. There also tends to be an increased frequency in accidents as workers lose
concentration, get more fatigued, and become more irritable. Tools/equipment can also slip
through sweaty palms and fingers thereby adding to the safety problem. The health problems
associated with hot working environments tend to be more insidious and affect workers more
slowly.
The body loses heat in several ways:
• By radiation – by increasing blood flow and the temperature of the skin surface. It needs
cooler objects nearby for this method to be effective.
• By convection – exchange of heat between the body surface and the surrounding air. It
needs air movement to be effective.
• By conduction – direct exchange of heat between the body and cooler, solid objects.
Engineering controls include:
• the use of increased general ventilation throughout the factory by opening windows and by
ensuring that air bricks, doors etc. are not blocked;
• the use of "spot cooling" by fans to reduce the temperature in certain
sections of the factory (the correct placement of fans is essential – see Pictures 22 and
23);
• the use of local exhaust ventilation systems in hot spots such as the ironing section to
directly remove the heat as close to the source of the heat as possible – see Picture 24;
• introducing job rotation so that workers are not always doing so-called "hot work";
• providing more workers to reduce the workload so that workers spend shorter periods doing hot work.
It is important to know the humidity inside the factory. If the factory is very hot and humid,
the process of sweating is not effective, and the workers are in danger of overheating.
It is important not to confuse ventilation and air circulation inside the factory. What we tend to see
inside many garment factories is air circulation, namely moving the air around inside the factory
without renewing it with fresh air from outside. In the case of air circulation, fans are placed near
workers (see picture 24) to improve thermal comfort and, in some cases, remove dust. In essence
this means that you are simply circulating stale air plus any contaminants around the factory.
Ventilation refers to replacing stale air (plus any contaminants) with fresh air (or purified air in the
case of air conditioners) at regular intervals. In an average workplace, the air needs to be changed
between 8 and 12 times per hour, and there should be at least 10 cubic meters of air per
worker. Many Indian garment factories rely on the principle of general ventilation by allowing the
free flow of air through the factory from one side to the other – referred to as horizontal airflow.
This can be achieved by opening doors and windows and putting more air bricks in the walls to take
advantage of any prevailing wind. However, it is all too common to find doors and windows etc.,
locked for security reasons or blocked with excess stock or boxes of finished goods awaiting export.
As a result, ventilation is limited.
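To make these figures concrete, the following is a minimal sketch in Python (using hypothetical room dimensions and worker numbers) of how the 8–12 air changes per hour and 10 cubic meters per worker guidelines translate into a required fresh-air flow and a minimum workroom volume:

def required_airflow_m3_per_hour(room_volume_m3, air_changes_per_hour):
    """Fresh air needed per hour to achieve a target number of air changes."""
    return room_volume_m3 * air_changes_per_hour

def minimum_room_volume_m3(workers, m3_per_worker=10):
    """Minimum workroom volume based on an air-space allowance per worker."""
    return workers * m3_per_worker

# Hypothetical sewing hall: 40 m x 20 m floor, 4 m high, 150 workers, 10 air changes/hour.
volume = 40 * 20 * 4                              # 3,200 cubic meters
print(required_airflow_m3_per_hour(volume, 10))   # 32,000 cubic meters of fresh air per hour
print(minimum_room_volume_m3(150))                # at least 1,500 cubic meters for 150 workers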
If you are trying to improve the general ventilation in your factory, here are a few simple suggestions
that can help:
• if you have ventilation systems or free-standing fans in the factory, make sure that they
increase the natural flow of air through the factory and do not try to blow air against any
prevailing wind.
• ensure that hot, stale air that rises to the factory roof can easily be removed and replaced
with fresh air (see fig 5.2).
• make sure that all fans are well maintained and regularly cleaned so that they work
efficiently.
• try to ensure that any "hot" processes such as the ironing section are sited next to the "downwind"
wall so that the heat is extracted directly outside rather than being spread around the
factory.
Ventilation and space for stale air
(Source: Better factories environment, 2013, betterfactories.org)
In cases where there is a buildup of contaminants or heat in specific areas of the factory, local
exhaust ventilation must be used to remove the hazard. This type of ventilation uses suction and
hoods, ducts, tubes etc. to remove the hazard as close to the source as possible and extract it to the
outside environment. It works on a principle like that of a vacuum cleaner but on a much larger
scale.
Local exhaust ventilation tubes suck the dust away from the sewing machine into a waste
reservoir that is emptied daily. Note that the suction tubes are placed as close as possible to the source
of the hazard.
Checklist for Temperature and Ventilation
Yes No Action Required
Are temperatures in the factory maintained at
comfortable working levels?
Are there any hot or cold areas in the factory?
Have any workers complained about these areas?
Is there good natural ventilation (through open
doors, windows, air bricks etc.) in the factory?
What some people find enjoyable and stimulating, others may find noisy and unpleasant. Thus, the
perception of what is sound or noise is personal; however, it is clear that workers can have their
hearing damaged, in some cases permanently, if the sound/noise levels are too high. Most people
define noise as unwanted or unpleasant sound.
Noise can mask or interfere with conversation in the workplace and may contribute to
accidents as warning shouts may not be heard.
Workers exposed to high noise levels often have difficulty in sleeping when they get home
and are constantly fatigued with that feeling of being tired all the time. Some workers take
painkillers on a regular basis to get rid of headaches induced by the noise. Not surprisingly,
when these workers return to work, their job performance will be reduced. High noise levels
in the workplace are thought to be a contributory factor to increased absenteeism.
Workers exposed to high noise levels suffer from what is known as noise induced hearing
loss (NIHL) which can lead to several social problems. These workers often cannot hear or
understand instructions at work; they are left out of conversations as fellow workers, family
members or friends get fed up with having to repeat everything; they have to have the
volume of the TV or radio up much higher than others can tolerate leading to arguments at
home. As a result, workers suffering from NIHL tend to be isolated and alone.
After exposure to high noise levels, hearing often recovers after a period of rest. However, the longer you are exposed to the noise, the longer it takes for your
hearing to return to normal. There comes a point, however, when your hearing does not return
to normal and the condition becomes permanent. This is known as permanent noise-induced
hearing loss. In such cases, you have been exposed to excessive noise for too long and the
sensitive components of the hearing organ have been permanently damaged – the damage cannot be
repaired. When workers first begin to lose their hearing, there are several warning signals that
are significant:
Workers may notice that normal conversation is difficult to hear or have difficulty listening to
someone talking in a crowd or on the telephone. This is often masked to friends or work
colleagues as people suffering from NIHL begin to lip read as people talk to them. In other
words, they adapt themselves to the situation.
The ear can tolerate low tones more easily than high tones. As a result, it is the high tones which
disappear first so that workers suffering from NIHL will hear people with deeper voices more
easily than colleagues with high voices.
When visitors or new workers come to a noisy part of the factory, it is always interesting to note
their reaction if they are not wearing any form of hearing protection. Do they cover their
ears? Do they shout to hold a conversation? Do they leave in a hurry? All these indicators
are significant.
How do you know if the noise level in the factory is too high?
One of the problems is trying to find out if the noise in certain parts of the factory is too high.
One method is to take measurements and compare them with so-called safe levels as
recommended by national regulations. Unfortunately, few factories or the Labour Inspectorate
have sound level meters to take such noise measurements. Another method is to undertake a
survey and ask workers if they find the workplace too noisy – BE CAREFUL. Many of the workers
will reply that "it was noisy at the beginning, but I've got used to it". Remember the term "I've
got used to it” – No they haven’t – the noise level is still the same. All that has happened is that
they have started to lose their hearing. Sound usually consists of many tones of different
volumes (loudness) and pitches (high or low frequency). We find that it is a combination of
volume and pitch that affects our hearing – not solely the volume. High tones irritate much more
than low tones. The volume of sound is measured in decibels (dBA) and the pitch is measured in
hertz (Hz).
Inside a typical garment factory, noise may come from a number of different sources such as the
sewing machines, weaving looms, compressors, radios, background noise, etc. The noise, in the
form of sound waves, is transmitted directly through the air and reflects off walls and ceilings as
well as passing through the factory floor. Obviously, the further away you are from the source of
the noise, the quieter and less harmful it is as the sound waves lose their intensity and die out.
So one method of control is to be as far away as possible from the source of the noise –
unfortunately, many workers cannot do this as they have to operate the noisy machine. If you
want to identify the noise problem in a factory you should measure the noise from each source
and then calculate the overall level using the decibel scale. This is unusual as the scale is a
logarithmic one in which a change of 3 dBA means that the sound has either doubled or halved.
For example, if two machines each create noise levels of 80dBA by themselves, the total noise
level they make together is 83 dBA (not 160 dBA). Similarly, if the noise level has been cut from
90 dBA to 80dBA it means the reduction is the same as if we removed 9 out of 10 noisy
machines from the factory.
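As a rough illustration of this logarithmic addition (a sketch only, not a substitute for proper noise measurement), the combined level of several sources can be computed as follows:

import math

def combined_noise_level_dba(levels_dba):
    """Combine individual sound levels (dBA) on the logarithmic decibel scale."""
    total_relative_intensity = sum(10 ** (level / 10) for level in levels_dba)
    return 10 * math.log10(total_relative_intensity)

# Two machines at 80 dBA each combine to about 83 dBA, not 160 dBA.
print(round(combined_noise_level_dba([80, 80]), 1))   # 83.0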
Workplace noise can be controlled in three ways:
• At the source of the noise.
• Along the path between the source and the worker; and lastly
• At the worker (see below).
Source of Sound
Source: Better factories environment, 2013 (betterfactories.org)
In common with all control strategies for health and safety problems, the most effective method
is to control the hazard at source. However, this often requires considerable expense, and with
profitability being cut to a minimum in the global market, owners and managers are often loath
to spend money in this area. The least effective, but most common and cheapest method of
control is to put the emphasis on workers wearing some form of personal protective equipment
(PPE). Let us look at some of these methods of control in more detail:
• Enclose entire machines or particularly noisy parts of machines with soundproof casing.
Remember that no part of the enclosure should be in contact with the machine, otherwise
the sound waves will be transferred through to the outside. The number of holes in the
enclosure (access points, holes for wires, piping etc.) should be minimized and fitted
with rubber gaskets where possible.
• Reduce the vibration in component parts and casings. Ensure that the machines are
mounted correctly on rubber mats or other damping material and that mounting bolts
are secured tightly.
• Replace metal parts with others made of sound absorbing materials e.g. plastic or heavy-
duty rubber.
• Fit mufflers on exhaust outlets and direct them away from the working area.
The noise generated in the handling of materials can also be reduced in many ways, such as:
• Reduce the dropping height of goods/waste being collected in bins and containers.
Make sure these boxes and containers are rigid and made of sound absorbing material
such as heavy plastic or rubber.
• Ensure that chutes, conveyor belts etc., are made of similar sound absorbing materials.
Controlling noise along the path between the source and the workers:
If it is not possible to control the noise at source, then methods can be used to minimize the
spread of the sound waves around the factory. Sound waves travel through the air rather
like the ripples on water if you throw a pebble into a pond – the waves spread out from the
source. Accordingly, any method that can be used to stop the spread or absorb the sound
waves can effectively reduce the noise problem. Such methods include:
• Use sound absorbing materials where possible on the walls, floors, and ceilings.
• Place sound absorbing screens between the source of the noise and workers.
• Hang sound absorbing panels from the ceilings to “capture” some of the sound waves
and reduce the overall noise level.
• Build sound-proof control areas and rest rooms.
• If possible, increase the distance between a worker and the source of the noise.
Ear plugs are worn in the internal part of the ear and they are made of a variety of materials
including rubber, moldable foam, coated plastic or any other material that will fit tightly in
the ear (see figure 5.5). Ear plugs are the least desirable type of hearing protection from an
efficiency and hygiene perspective. On no account should workers be encouraged to stuff
cotton wool in their ears to act as some form of ear plug – all that happens is that some of
the cotton wool gets left behind when the plug is removed and causes an ear infection. From
a health and safety perspective, ear muffs are more efficient than ear plugs providing they
are worn correctly. They must fit over the whole ear (not press the ear flap against the side
of the head) and seal the ear from the sound waves. Workers who have beards or wear
glasses have difficulty in getting a tight seal around the ear.
Source: Better factories environment, 2013 (betterfactories.org)
Look closely at this picture. The worker on the right is wearing a set of earplugs but not the
one on the left. Conversely, the one on the left is wearing a dust mask but not the worker on
the right. Is there a noise problem or a dust problem or both?
Whatever type of ear protection is used, there are several points to remember:
• The noise problem is still present – it has not been reduced.
• In the hot, humid conditions that exist in many Indian factories, most workers find
the wearing of any type of PPE uncomfortable.
• Workers cannot communicate easily if they are wearing hearing protection which
can be a problem in the case of emergency.
• Ear plugs and muffs must be thoroughly tested before use and regularly cleaned,
repaired or replaced.
• Workers must be given training in the correct use of the PPE.
Checklist for Noise
Yes No Action Required
Does the factory conform to national regulations on noise?
Are noisy parts of machines enclosed?
Are machines serviced and maintained regularly?
Is there a policy to replace older, noisy machines
with quieter ones?
Are machines correctly mounted to avoid
vibration and reduce noise levels?
Are sound absorbing materials used on ceilings,
walls and floors?
Are adequate barriers used to prevent noise
spreading around the workplace?
Are people working in quieter sections of the
factory protected from noise sources?
Are workers in noisy areas rotated so that their
noise exposure is reduced in duration?
Are workers provided with the best form of hearing protection?
Are the earmuffs/plugs etc. regularly cleaned,
maintained or replaced as necessary?
Have workers been given training in the correct
use of ear muffs or ear plugs?
5.5 LIGHTING
From the workers’ perspective, poor lighting at work can lead to eye strain, fatigue, headaches,
stress, and accidents. On the other hand, too much light can also cause health and safety problems
such as “glare” headaches and stress. Both can lead to mistakes at work, poor quality and low
productivity. Various studies suggest that good lighting at the workplace pays dividends in terms of
improved productivity and a reduction in errors. Improvements in lighting do not necessarily mean
that you need more lights and therefore use more electricity – it is often a case of making better use
of existing lights; making sure that all lights are clean and in good condition; and that lights are
positioned correctly for each task. It is also a case of making the best use of natural light.
Most garment factories have a combination of natural and artificial lighting. However, little attention
appears to be paid to the nature of the work – it is as though all work in the factory requires the
same degree of lighting. As we will see, this is not the case. Let us look at some common lighting
problems in the factory:
It has been known for companies, when the order books are low, to introduce "energy saving"
programs to save costs. In the case of lighting, "non-essential" light bulbs may be removed or
reduced in number, flickering fluorescent tubes which need changing are left in place – this
proves to be a false economy as quality and productivity fall. One simple way to improve the
lighting levels in the factory is to paint the walls and ceilings with light, pale, matt colors – the
use of matt paint avoids reflection of light which can lead to problems of glare. The color of
equipment such as sewing machines, workbenches, etc., should normally be matched with that
of the walls and again avoid black, shiny paints. By brightening up the workplace, this helps to
produce a more pleasant place to work which impacts on workers’ well-being and, ultimately,
productivity.
Provide adequate lighting near any potential hazards such as steps, ramps, etc. and outside the factory for security
at night.
Avoid glare:
Although lighting levels may be adequate in the factory as a whole, glare from a direct light
source or reflected off equipment or shiny surfaces can cause discomfort, eye strain and fatigue
– all of which contribute to an increase in errors, and a reduction in quality and productivity.
Glare has been described as “light in the wrong place” and comes in three different kinds:
Disability glare – can dazzle and impede vision, and so may be a cause of accidents. It is the result of
too much light entering the eye directly.
Discomfort glare – is more common in work situations – it can cause discomfort, strain and fatigue,
especially over long periods. It is caused by direct vision of a bright light source and background.
Reflected glare – is bright light reflected by shiny surfaces into the field of vision. There are several
methods that you can use to avoid or reduce glare in the workplace:
The level of light is measured in lux using a light meter. Unfortunately, few factories or the
Labour Inspectorate have any of these meters. The table below gives an indication of some typical
light levels:
Light                                    Intensity
Very bright sunny day                    up to 100,000 lux
Overcast day                             30,000–40,000 lux
Dusk                                     1,000 lux
Shady room in daylight                   100 lux
Machine shops:
  - Rough work and assembly              300 lux
  - Medium bench and machine work        500 lux
  - Fine bench and machine work          1,000 lux
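A simple way to use the table above is to compare a light-meter reading against the recommended level for the task at hand; the sketch below (with the task names taken from the machine-shop rows of the table) illustrates the idea:

# Recommended illumination levels (lux) for machine-shop tasks, taken from the table above.
RECOMMENDED_LUX = {
    "rough work and assembly": 300,
    "medium bench and machine work": 500,
    "fine bench and machine work": 1000,
}

def lighting_adequate(task, measured_lux):
    """Return True if a light-meter reading meets the recommended level for the task."""
    return measured_lux >= RECOMMENDED_LUX[task]

print(lighting_adequate("fine bench and machine work", 650))   # False: add local lighting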
Checklist for Lighting
Is there local lighting for close work to reduce eye strain and fatigue?
Are "flickering" fluorescent tubes replaced as soon as
possible?
Are the walls and ceilings painted in light colors and kept
clean?
Is there adequate emergency lighting in all areas?
Are outside areas satisfactorily lit for work and access
during hours of darkness for security as well as safety?
"The main focus in occupational health is on three different objectives: (i) the maintenance and
promotion of workers’ health and working capacity; (ii) the improvement of working environment
and work to become conducive to safety and health and (iii) development of work organizations and
working cultures in a direction which supports health and safety at work and in doing so also
promotes a positive social climate and smooth operation and may enhance productivity of the
undertakings. The concept of working culture is intended in this context to mean a reflection of the
essential value systems adopted by the undertaking concerned. Such a culture is reflected in practice
in the managerial systems, personnel policy, principles for participation, training policies and quality
management of the undertaking."
Physical hazards are a common source of injuries in many industries. They are perhaps unavoidable
in many industries such as construction and mining, but over time people have developed safety
methods and procedures to manage the risks of physical danger in the workplace. Employment of
children may pose special problems.
Falls are a common cause of occupational injuries and fatalities, especially in construction,
extraction, transportation, healthcare, and building cleaning and maintenance.
An engineering workshop specializing in the fabrication and welding of components must follow the
Personal Protective Equipment (PPE) at work regulations 1992. It is an employer’s duty to provide
‘all equipment (including clothing affording protection against the weather) which is intended to be
worn or held by a person at work and which protects him against one or more risks to his health and safety’. In a
fabrication and welding workshop an employer would be required to provide face and eye
protection, safety footwear, overalls, and other necessary PPE.
5.7 KEYWORDS
Noise- Noise is any annoying, disturbing, or unwanted sound
Manufacturing Industry- Manufacturing is the production of goods for use or sale using labor
and machines, tools, chemical and biological processing, or formulation
Lighting- lighting is one of the most essential elements for good office ergonomics. Having the
proper lighting level for the type of task being performed increases your comfort and accuracy and
reduces eye strain
OHSAS 18001- OHSAS 18001 is a standard for occupational health and safety management
systems. It exists to help all kinds of organizations put in place demonstrably sound occupational
health and safety performance. It is widely seen as the world’s most recognized occupational health
and safety management systems standard.
Temperature-The temperature of a body is a quantity which indicates how hot or cold the body is. It
is measured by detection of heat radiation, or by a material thermometer, which may
be calibrated in any of various temperature scales, Celsius, Fahrenheit, Kelvin, etc.
5.8 SUMMARY
As an employer, it is your responsibility to provide a safe work environment for all employees, free
from any hazards and complying with all state and central government laws. When we think about
lighting in the workplace, the first thing that comes to mind is the obvious physical effect it has on
us. Inappropriate lighting can lead to a host of problems, ranging from eyestrain to serious
musculoskeletal injuries. Workers should not be adversely affected by the light, noise, and temperature inside
the factory. In manufacturing factories, employers and managers should take into consideration
the minimum requirements and the maximum exposure a worker may have to a
particular environment.
UNIT 6 DESIGN OF COGNITIVE WORK
Objectives
After going through this unit, you will be able to:
minimize informational workload.
recognize the importance of visual displays for long, complex messages in noisy areas.
use auditory displays for warnings and short, simple messages.
use color and flashing lights to get attention.
limit absolute judgments to 7 ±2 items.
Structure
6.1 Introduction
6.2 Information Theory
6.3 Human Information Processing Model
6.4 Perception and Signal Detection Theory
6.5 Coding of Information: General Design Principles
6.6 Display of Visual Information
6.7 Display of auditory information
6.8 Environmental Factors
6.9 Dissociating the signal from noise
6.10 Human computer interaction: hardware considerations
6.11 Pointing Devices
6.12 Human computer interaction: software considerations
6.13 Keywords
6.14 Summary
6.1 INTRODUCTION
The design of cognitive work has not been traditionally included as part of methods engineering.
However, with ongoing changes in jobs and the working environment, it is becoming increasingly
important to study not only the manual components of work but also the cognitive aspects of work.
Machines and equipment are becoming increasingly complex and semi-, if not fully, automated. The
operator must be able to perceive and interpret large amounts of information, make critical
decisions, and be able to control these machines quickly and accurately. Furthermore, there has
been a gradual shift of jobs from manufacturing to the service sector. In either case, there typically
will be less emphasis on gross physical activity and a greater emphasis on information processing
and decision making, especially via computers and associated modern technology.
Information, in the everyday sense of the word, is knowledge received regarding a particular fact. In
the technical sense, information is the reduction of uncertainty about that fact. For example, the
fact that the engine (oil) light comes on when a car is started provides very little information (other
than that the light bulb is functioning) because it is expected. On the other hand, when that same
light comes on when you are driving down a road, it conveys considerable information about the
status of the engine because it is unexpected and a very unlikely event. Thus, there is a relationship
between the likelihood of an event and the amount of information it conveys, which can be
quantified through the mathematical definition of information. Note that this concept is irrespective
of the importance of the information; that is, the status of the engine is quite a bit more important
than whether the windshield-washer container is empty.
Information theory measures information in bits, where a bit is the amount of information required
to decide between two equally likely alternatives. The term “bit” came from the first and last part of
the words binary digit used in computer and communication theory to express the on/off state of a
chip or the polarized/reverse polarized position of small pieces of ferromagnetic core used in archaic
computer memory. Mathematically this can be expressed as:
H = log2 n
where H = the amount of information, and n = the number of equally likely alternatives.
With only two alternatives, such as the on/off state of a chip or the toss of an unweighted coin, there
is one bit of information presented. With ten equally likely alternatives, such as the numbers from 0
to 9, 3.322 bits of information can be conveyed (log2 10 = 3.322). An easy way of calculating log2 is
to use the following formula:
log2 n = 1.4427 × ln n
When the alternatives are not equally likely, the information conveyed is determined by:
H = ∑ pi × log2 (1/pi)
where pi = the probability of the ith event, for alternatives i = 1 to n.
As an example, consider a coin weighted so that heads come up 90 percent of the time and tails only
10 percent of time. The amount of information conveyed in a coin toss becomes:
H = 0.9 × log2 (1/0.9) + 0.1 × log2 (1/0.1) = 0.9 × 0.152 + 0.1 × 3.32
= 0.469 bits
Note, that the amount of information (0.469) conveyed by a weighted coin is less than the amount
of information conveyed by an unweighted coin (1.0). The maximum amount of information is
always obtained when the probabilities are equally likely. This is because the more likely an
alternative becomes, the less information is being conveyed (i.e., consider the engine light upon
starting a car). This leads to the concept of redundancy and the reduction of information from the
maximum possible due to unequal probabilities of occurrence. Redundancy can be expressed as:
% redundancy = (1 - H/Hmax) ×100
For the case of the weighted coin, the redundancy is:
% redundancy = (1 - .469/1) × 100 = 53.1%
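The two calculations above can be reproduced with a few lines of Python; this is only a sketch of the formulas already given, using the weighted coin as the example input:

import math

def information_bits(probabilities):
    """H = sum of p * log2(1/p) over all alternatives, in bits."""
    return sum(p * math.log2(1 / p) for p in probabilities if p > 0)

def redundancy_percent(probabilities):
    """Redundancy relative to the equally likely (maximum information) case."""
    h_max = math.log2(len(probabilities))
    return (1 - information_bits(probabilities) / h_max) * 100

print(round(information_bits([0.9, 0.1]), 3))    # 0.469 bits for the weighted coin
print(round(redundancy_percent([0.9, 0.1]), 1))  # 53.1 percent redundancy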
An interesting example relates to the use of the English language. There are 26 letters in the
alphabet (A through Z) with a theoretical informational content for a randomly chosen letter of 4.7
bits (log2 26 = 4.7). Obviously, with the combinations of letters into words, considerably more
information can be presented. However, there is a considerable reduction in the amount of
information that can be presented due to the unequal probabilities of occurrence. For example,
letters s, t, and e are much more common than q, x, and z. It has been estimated that the
redundancy in the English language amounts to 68 percent (Sanders and McCormick, 1993). On the
other hand, redundancy has some important advantages that will be discussed later with respect to
designing displays and presenting information to users.
One final related concept is the bandwidth or channel capacity, the maximum information
processing speed of a given communication channel. In terms of the human operator, the bandwidth
for motor-processing tasks could be as low as 6–7 bits/sec or as high as 50 bits/sec for speech
communication. For purely sensory storage of the ear (i.e., information not reaching the decision-
making stage), the bandwidth approaches 10,000 bits/sec (Sanders and McCormick, 1993). The
latter value is much higher than the actual amount of information that is processed by the brain in
that time because most of the information received by our senses is filtered out before it reaches
the brain.
Decision making, combined with working memory and long-term memory, can be considered as the central
processing unit, while the sensory store is a very transient memory located at the input stage
(Wickens, Gordon, and Liu, 1997).
The detection part of perceptual encoding can be modeled or, in simple tasks, even quantified
through signal detection theory (SDT). The basic concept of SDT is that in any situation, an observer
needs to identify a signal (i.e., whether it is present or absent) from confounding noise. For example,
a quality inspector in an electronics operation must identify and remove defective chip capacitors
from the good capacitors being used in the assembly of printed circuit boards. The defective chip
capacitor is the signal, which could be identified by excessive solder on the capacitor that shorts out
the capacitor. The good capacitors, in this case, would be considered noise. Note that one could just
as easily reverse the decision process and consider good capacitors the signal and defective
capacitors noise. This would probably depend on the relative proportions of each. Given that the
observer must identify whether the signal is present or not and that only two possible states exist
(i.e., the signal is either there or not there), there is a total of four possible outcomes:
Hit—saying there is a signal when the signal is present
Correct rejection—saying there is no signal when no signal is present
False alarm—saying there is a signal when no signal is present
Miss—saying there is no signal when the signal is present
Both the signal and noise can vary over time, as is the case with most industrial processes. For
example, the soldering machine may warm up and, initially, expel a larger drop of solder on the
capacitors, or there may be simply “random” variation in the capacitors with no cause yet
determined. Therefore, both the signal and noise form distributions of varying solder quantity from
low to high, which typically are modeled as overlapping normal distributions (Figure 7–2). Note, the
distributions overlap because excessive solder on the body of the capacitor would cause it to short
out causing a defective product (in this case a signal). However, if there is excessive solder, but
primarily on the leads, it may not short out and thus is still a good capacitor (in this case noise). With
ever-shrinking electronic products, chip capacitors are smaller than pinheads, and the visual
inspection of these is not a trivial task. When a capacitor appears, the inspector needs to decide if
the quantity of solder is excessive or not and whether to reject the capacitor or not. Either through
instructions and/or sufficient practice, the inspector has made a mental standard of judgment, which
is depicted as the vertical line in Figure and termed the response criterion. If the detected quantity of
the solder, which enters the visual system as a high level of sensory stimulation, exceeds the
criterion, the inspector will say there is a signal. On the other hand, if the detected quantity is small,
a smaller level of sensory stimulation is received, landing below the criterion, and the inspector will
say there is no signal. Related to the response criterion is the quantity beta. Numerically beta is the
ratio of the height of the two curves (signal to noise) at the given criterion point.

Conceptual illustration of signal detection theory
(From: Sanders and McCormick, 1993. Reproduced with permission of the McGraw-Hill Companies)

If the criterion shifts to the left, beta decreases with an increase of hits but at the
cost of a corresponding increase of false alarms. This behavior on the part of the observer is termed
risky. If the criterion were at the point where the two curves intersect, beta would be 1.0. On the
other hand, if the criterion shifts to the right, beta increases with a decrease of both hits and false
alarms. This behavior on the part of the observer would be termed conservative. The response
criterion (and beta) can easily change depending on the mood or fatigue of the visual inspector. It
would not be unexpected for the criterion to shift to the right and the miss rate to increase
dramatically late Friday afternoons shortly before quitting times. Note, that there will be a
corresponding decrease in the hit rate because the two probabilities sum to one. Similarly, the
probabilities of a correct rejection and false alarms also sum to one. The change in the response
criterion is termed response bias and could also change with prior knowledge or changes in
expectancy. If it was known that the soldering machine was malfunctioning, the inspector would
most likely shift the criterion to the left, increasing the number of hits. The criterion could also
change due to the costs or benefits associated with the four outcomes. If a particular batch of
capacitors were being sent to NASA for use in the space shuttle, the costs of having a defect would
be very high, and the inspector would set a very low criterion producing many hits but also many
false alarms with corresponding increased costs (e.g., losing good products). On the other hand, if the
capacitors were being used in cheap give-away cell phones, the inspector may set a very high
criterion, allowing many defective capacitors to pass through the checkpoint as misses.
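Since beta is simply the ratio of the two curve heights at the criterion, it can be illustrated numerically. The sketch below assumes, purely for illustration, unit-variance normal distributions with the noise (good capacitors) centered at 0 and the signal (defective capacitors) centered at 1 on an arbitrary solder-quantity scale:

import math

def normal_pdf(x, mean, sd=1.0):
    """Height of a normal (Gaussian) curve at x."""
    return math.exp(-((x - mean) ** 2) / (2 * sd ** 2)) / (sd * math.sqrt(2 * math.pi))

def beta(criterion, noise_mean=0.0, signal_mean=1.0, sd=1.0):
    """Beta = height of the signal curve divided by height of the noise curve at the criterion."""
    return normal_pdf(criterion, signal_mean, sd) / normal_pdf(criterion, noise_mean, sd)

print(round(beta(0.5), 2))   # 1.0  -> criterion at the intersection of the two curves
print(round(beta(0.2), 2))   # 0.74 -> criterion shifted left: "risky", more hits and more false alarms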
5. Are incentives provided to change the response bias and increase hits? ❑ ❑
Memory Considerations
Yes No
4. Are numbers separated from letters in lists or chunks? ❑ ❑
Qualitative—indicating general values or trends (e.g., up, down, hot, cold)
Status—reflecting one of a limited number of conditions (e.g., on/off,
stop/caution/go)
Warnings—indicating emergencies or unsafe conditions (e.g., fire alarm)
Alphanumeric—using letters and numbers (e.g., signs, labels)
Representational—using pictures, symbols, and color to code information (e.g.,
“wastebasket” for deleted files)
Time-phased—using pulsed signals, varying in duration and intersignal interval (e.g.,
Morse code or blinking lights)
Note that one informational display may incorporate several of these types of information
simultaneously. For example, a stop sign is a static warning using both alphanumeric letters and
an octagonal shape and the red color as representations of information.
Display modality
Since there are five different senses (vision, hearing, touch, taste, smell), there could be five
different display modalities for information to be perceived by the human operator. However,
given that vision and hearing are by far the most developed senses and most used for receiving
information, the choice is generally limited to those two. The choice of which of the two to use
depends on a variety of factors, with each sense having certain advantages as well as certain
disadvantages. The detailed comparisons given in Table 6–1 may aid the industrial engineer in
selecting the appropriate modality for the given circumstances.
Taste has been used in a very limited range of circumstances, primarily added to give medicine a
“bad” taste and prevent children from accidentally swallowing it. Similarly, odors have been
used in the ventilation system of mines to warn miners of emergencies or in natural gas to warn
the homeowner of a leaking stove.
With a fixed scale and moving pointer, the usual expectations are that the numbers on the
scale go from left to right and that a clockwise (or left to right) movement of the pointer indicates
increasing values. With a moving scale and fixed pointer, one of these two compatibility principles
will always be violated. Note that the display can be circular or semicircular, a vertical bar or a
horizontal bar, or an open window. The only situation in which the moving scale and fixed pointer
design has an advantage is for very large scales, which cannot be fully shown on the fixed scale
display.
Moving scale: Fair; Poor (may be difficult to identify direction and magnitude); Fair (may be
difficult to identify relation between setting and motion); Fair (may have ambiguous relationship
to manual-control motion)
In that case an open window display can accommodate a very large scale behind the display with
only the relevant portion showing. Note that the fixed scale and moving pointer design can display
very nicely quantitative information as well as general trends in the readings. Also, the same displays
can be generated with computer graphics or electronics without a need for traditional mechanical
scales.
Types of displays for presenting quantitative information
(From: Sanders and McCormick, 1993. Reproduced with permission of the McGraw-Hill Companies)
Because sound is omnidirectional (in contrast to the directional properties of light), auditory signals
are especially useful if workers are at an unknown location and moving about.
Since sound waves can be dispersed or attenuated by the working environment, it is important to
take environmental factors into account. Use signal frequencies below 1,000 Hz when the signals
need to travel long distances (i.e., more than 1,000 ft), because higher frequencies tend to be
absorbed or dispersed more easily. Use frequencies below 500 Hz when signals need to bend around
obstacles or pass through partitions. The lower the signal frequency the more similar sound waves
become to vibrations in solid objects, again with lower absorption characteristics.
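These frequency guidelines can be summarized in a small helper function; the distance and obstacle thresholds come directly from the text, and the 3,000 Hz ceiling comes from the auditory-display checklist later in this unit. The function itself is only an illustrative sketch:

def recommended_max_frequency_hz(distance_ft, obstacles_in_path):
    """Suggest an upper bound on warning-signal frequency from the guidance above."""
    if obstacles_in_path:
        return 500    # low frequencies bend around obstacles and pass through partitions
    if distance_ft > 1000:
        return 1000   # higher frequencies are absorbed or dispersed over long distances
    return 3000       # otherwise stay within the band of best auditory sensitivity

print(recommended_max_frequency_hz(1500, False))   # 1000
print(recommended_max_frequency_hz(200, True))     # 500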
Auditory signals should be as separate as possible from other sounds, whether useful auditory
signals or unneeded noise. This means the desired signal should be as different as possible from
other signals in terms of frequency, intensity, and modulation. If possible, warnings should be placed
on a separate communication channel to increase the sense of dissociability and increase the
attention-demanding qualities of the warning. The above principles for designing displays, both
auditory and visual, are summarized as an evaluative checklist in table 7.3. If purchased equipment
has dials or other displays that don’t correspond to these design guidelines, then there is the
possibility for operator error and potential loss. If at all possible, those problems should be
corrected, or the displays replaced.
The standard computer keyboard used today is based on the typewriter key layout patented by C. L.
Sholes in 1878. Termed a QWERTY keyboard, because of the sequence of the first six leftmost keys in
the third row, it has the distinction of allocating some of the most common English letters to the
weakest fingers
Checklist for Hardware Considerations
General Principles
Yes No
Visual Displays
Yes No
6. For general purpose and trends, is a fixed-scale, moving-pointer display being used? ❑ ❑
d. Are there intermediate markers at 5, 15, 25, etc. and minor markers at each unit? ❑ ❑
e. Does the pointer have a pointed tip just meeting the smallest scale markers? ❑ ❑
f. Is the pointer close to the surface of the scale to avoid parallax? ❑ ❑
… pattern utilized? ❑ ❑
Auditory Displays
Yes No
7. Is the frequency of the signal in the range of 500 to 3,000 Hz for best auditory sensitivity? ❑ ❑
A digitizing tablet is a flat pad placed on the desktop, again linked to the computer. Movement of a
stylus is sensed at the appropriate position on the tablet, which can either be absolute (i.e., the
tablet is a representation of the screen) or relative (i.e., only direct movement across the tablet is
shown). Further complications are tablet size versus accuracy trade-offs and optimum control
response ratios. Also, the user needs to look back at the screen to receive feedback.
Both displacement and force joysticks (currently termed track sticks or track points) can be used to
control the cursor and have a considerable background of research on types of control systems,
types of displays, control response ratios, and tracking performance. The mouse is a hand-held
device with a roller ball in the base to control position and one or more buttons for other inputs. It is
a relative positioning device and requires a clear space next to the keyboard for operation. The
trackball, an upside-down mouse without the mouse pad, is a good alternative for work surfaces
with limited space. More recently, touchpads, a form of digitizing tablet integrated into the
keyboard, have become popular, especially for notebook PCs.
Even smaller handheld computers, termed personal digital assistants, have been developed but
are too new to have had detailed scientific evaluations performed. Being pocket-sized, they offer
much greater portability and flexibility, but at an even greater disadvantage for data entry.
Decrements in accuracy and speed, when entering text via the touch screen, have been found.
Alternate input methods such as handwriting, or voice input may be better.
Most current interactive computing software utilizes the graphical user interface (GUI), identified by
four main elements: windows, icons, menus, and pointers (sometimes collectively termed WIMP).
Windows are the areas of the screen that behave as if they were independent screens. They typically
contain text or graphics and can be moved around or resized. More than one window can be on a
screen at once, allowing users to switch back and forth between various tasks or information
sources. This leads to a potential problem of windows overlapping each other and obscuring vital
information. Consequently, there needs to be a layout policy with windows being tiled, cascaded, or
picture-in-a-picture (PIP). Usually windows have features that increase their usefulness such as
scrollbars, allowing the user to move the contents of the window up and down or from left to right.
This makes the window behave as if it were a real window onto a much larger world, where new
information is brought into view by manipulating the scrollbars. There is usually a title bar attached
to the top of the window, identifying it to the user, and there may be special boxes in the corners of
the window to aid in resizing, and closing. Icons are small or reduced representations of windows or
other entities within the interface. By allowing icons, many windows can be available on the screen
at the same time, ready to be expanded to a useful size by clicking on the icon with a pointer
(typically a mouse). The icon saves space on the screen and serves as a reminder of the dialog it
contains. Other useful entities represented by icons include a wastebasket for deleting unwanted files,
and programs, applications, or files accessible to the user. Icons can take many forms: they can be
realistic representations of the objects they stand for or they can be highly stylized, but with
appropriate reference to the entity (known as compatibility) so that users can easily interpret them.
The pointer is an important component of the WIMP interface since the selection of an appropriate
icon requires a quick and efficient means of directly manipulating it. Currently the mouse is the most
common pointing device, although joysticks and trackballs can serve as useful alternatives. A touch
screen, with the finger serving as a pointer, can serve as very quick alternative and even redundant
backup/safety measure in emergency situations. Different shapes of the cursor are often used to
distinguish different modes of the pointer, such as an arrow for simple pointing, crosshairs for
drawing lines, and a paintbrush for filling in outlines. Pointing cursors are essentially icons or images
and thus should have a hot spot that indicates the active pointing location. For an arrow, the tip is
the obvious hot spot. However, cutesy images (e.g., dogs and cats) should be avoided because they
have no obvious hot spot.
Menus present an ordered list of operations, services, or information that is available to the user.
This implies that the names used for the commands in the menu should be meaningful and
informative. The pointing device is used to indicate the desired option, with possible or reasonable
options highlighted and impossible or unreasonable actions dimmed. Selection usually requires an
additional action by the user, usually clicking a button on a mouse or touching the screen with the
finger or a pointer. When the number of possible menu items increases beyond a reasonable limit
(typically 7 to 10), the items need to be grouped in separate windows with only the title or a label
appearing on a menu bar. When the title is clicked, the underlying items pop up in a separate
window known as a pull-down menu. To facilitate finding the desired item, it is important to group
menu items by functionality or similarity. Within a given window or menu, the items should be
ordered by importance and frequency of use. Opposite functions, such as SAVE and DELETE, should
be clearly kept apart to prevent accidental mis-selection. Other menu-like features include buttons,
isolated picture-in-picture windows within a display that can be selected by the user to invoke
specific actions, toolbars, a collection of buttons or icons, and dialog boxes that pop up to bring
important information to the user’s attention such as possible errors, problems, or emergencies.
Other principles in screen design include simple usability considerations: orderly, clean, clutter-free
appearance, expected information located where it should be consistently from screen to screen for
similar functions or information. Eye-tracking studies indicate that the user’s eyes typically move
first to the upper left center of the display and then move quickly in a clockwise direction. Therefore,
an obvious starting point should be the upper left corner of the screen, in keeping with the standard
left-to-right and top-to-bottom reading pattern found in Western cultures. The composition of the
display should be visually pleasing with balance, symmetry, regularity, predictability, proportion, and
sequence.
1. Does the software use movable areas of the screen termed windows? ❑ ❑
2. Is there a layout policy for the windows (i.e., are they tiled, cascaded, or picture-in- ❑ ❑
picture)?
5. Are there special boxes in the corners of the windows to resize or close them? ❑ ❑
Icon Features
Yes No
1. Are reduced versions of frequently used windows, termed icons, utilized? ❑ ❑
2. Are the icons easily interpretable or realistic representations of the given feature? ❑ ❑
Pointer Features
Yes No
2. Is the pointer or cursor easily identifiable with an obvious active area or hot spot? ❑ ❑
Menu Features
Yes No
3. Is the starting point for the screen action the upper left-hand corner? ❑ ❑
4. Does the screen action proceed left to right and top to bottom? ❑ ❑
5. Is any text brief and concise and does it use both uppercase and lowercase ❑ ❑
fonts?
7. Does the user have control over exiting screens and undoing actions? Is ❑ ❑
feedback provided for any action?
6.13 SUMMARY
This unit presented a conceptual model of the human as an information processor along with the
capacities and limitations of such a system. Specific details were given for properly designing
cognitive work so as not to overload the human with regard to information presented through
auditory and visual displays, to information being stored in various memories, and to information
being processed as part of the final decision-making and response-selection step. Also, since the
computer is the common tool associated with information processing, issues and design features
with regard to the computer workstation were also addressed. Along with manual work activities, the
physical aspects of the workplace and tools, and the working environment, the cognitive element is
the final aspect of the human operator at work, and the analyst is now ready to implement the new
method.
6.14 KEYWORDS
Human Factor Engineering (HFE) - Human Factors Engineering (HFE) is an interdisciplinary approach
to evaluating and improving the safety, efficiency, and robustness of work systems, such as
healthcare delivery
Display of Visual Information- information presented in the form of graphs, bar charts, etc.
Display of auditory information- Auditory display is the use of sound to communicate information
from a computer to the user
Human information processing model-Human information processing theory deals with how people
receive, store, integrate, retrieve, and use information
Human Computer Interaction- Human–computer interaction (HCI) involves the study, planning, and
design of the interaction between people (users) and computers. It is often regarded as the
intersection of computer science, behavioral sciences, design and several other fields of study
UNIT 7 ANTHROPOMETRY & WORK DESIGN
Objectives
After going through this unit, you will be able to:
design the workplace with anthropometric considerations.
recognize the importance of the design limits approach and considerations in design.
use clearance dimensions at the 95th percentile.
use limiting dimensions at the 5th percentile.
use the concept of combined data for design criteria.
Structure
7.1 Introduction
7.2 Using Design Limits
7.3 Avoiding Pitfalls in Applying Anthropometric Data
7.4 Solving A Complex Sequence of Design Problems
7.5 Need for Indian Anthropometry
7.6 Guidelines for Design Use
7.7 Percentile Selection for Design Use
7.8 Use of Average
7.9 Concept of Male-Female Combined Data for Design Use
7.10 Practical Applications
7.11 Keywords
7.12 Summary
7.1 INTRODUCTION
Designers and human factors specialists incorporate scientific data on human physical capabilities
into the design of systems and equipment. Human physical characteristics, unlike those of machines,
cannot be designed. However, failure to consider human physical characteristics when designing
systems or equipment can place unnecessary demands and restrictions upon user personnel.
The term anthropometry literally means the measure of humans. From a practical standpoint, the
field of anthropometry is the science of measurement and the art of application that establishes the
physical geometry, mass properties, and strength capabilities of the human body (Roebuck 1995).
Anthropometric data are fundamental in the fields of work physiology (Åstrand and Rodahl 1986),
occupational biomechanics (Chaffin, Anderson, and Martin 1999), and ergonomics/work design
(Konz and Johnson 2004). Anthropometric data are used in the evaluation and design of
workstations, equipment, tools, clothing, personal protective equipment, and products, as well as in
biomechanical models and bioengineering applications.
It is a fundamental concept of nature that humans come in a variety of sizes and proportions.
Because there is a reasonable amount of useful anthropometric data available, it is usually not
necessary to collect measurements on a specific workforce. The most common application involves
design for a general occupational population.
Definitions
Anthropometry is the scientific measurement and collection of data about human physical
characteristics and the application (engineering anthropometry) of these data in the design and
evaluation of systems, equipment, manufactured products, human- made environments, and
facilities.
Biomechanics describes the mechanical characteristics of biological systems, in this case the human
body, in terms of physical measures and mechanical models. This field is interdisciplinary (mainly
anthropometry, mechanics, physiology, and engineering). Its applications address mechanical
structure, strength, and mobility of humans for engineering purposes.
Use of data- Anthropometric and biomechanics data shall be used in the design of systems,
equipment (including personal protection equipment), clothing, workplaces, passageways, controls,
access openings, and tools. [Source: National Aeronautics and Space Administration (NASA-STD-
3000A), 1989]
The human's interface with other system components needs to be treated as objectively and
systematically as are other interface and hardware component designs. It is not acceptable to guess
about human physical characteristics or to use the designer's own measurements or the
measurements of associates. Application of appropriate anthropometric and biomechanics data is
expected.
Using population extremes: Designers and human factors specialists shall draw upon the extremes of
the larger male population distribution and the extremes of the smaller female population
distributions to represent the upper and lower range values, respectively, to apply to
anthropometric and biomechanics design problems. [Source: NASA-STD-3000A, 1989].
7.2 USING DESIGN LIMITS
Initial rules in this section address the design limits approach. To understand this approach, it is
helpful to consider the overall steps and choices that one makes in applying anthropometric and
biomechanics data. The design limits approach entails selecting the most appropriate percentile
values in population distributions and applying the appropriate associated data in a design solution.
These steps are listed in this introductory material and are explained in detail in the initial three
rules of this subsection. If the reader has applied the design limit approach and understands it, the
reader can skip the rest of this introductory material as well as the explanations associated with the
first three rules. However, the reader should not skip the rules.
The design limits approach is a method of applying population or sample statistics and data about
human physical characteristics to a design so that a desired portion of the user population is
accommodated by the design. The range of users accommodated is a function of limits used in
setting the population portion. To understand the design limits approach, it is helpful to consider
step by step the choices that design personnel make in applying these human physical data.
Select the correct human physical characteristic and its applicable measurement characteristic
(description) for the design problem at hand.
Select the appropriate population, representative sample, or rule information on the selected
human physical characteristic and measurement description to apply to the design problem.
Determine the appropriate statistical point(s), usually percentile points from rule information or
from the sample distribution(s) to accommodate a desired range of the human characteristic
within the distribution of the user population.
Read directly or determine statistically the measurement value(s) that corresponds to the
selected statistical point(s) relevant to the population distribution.
Incorporate the measurement value as a criterion for the design dimension, or in the case of
biomechanics data, for the movement or force solution in the design problem.
Design clearance dimensions that must accommodate or allow passage of the body or parts of the
body shall be based upon the 95th percentile of the male distribution data. [Source: Department
of Defense (MIL-STD-1472D), 1989]
Adjustable Dimensions
Any equipment dimensions that need to be adjusted for the comfort or performance of the
individual user shall be adjustable over the range of the 5th to 95th percentiles. [Source: MIL-
STD-1472D, 1989]
Sizing Determinations
Clothing and certain personal equipment dimensions that need to conform closely to the contour of
the body or body parts shall be designed and sized to accommodate at least the 5th through the
95th percentile range.
If necessary, this range shall be accommodated by creating a number of unique sizes, where
each size accommodates a segment of the population distribution. Each segment can be
bounded by a small range of percentile values. [Source: MIL-STD-1472D, 1989]
Several pitfalls must be avoided in applying anthropometric data, among them: (1) using the 50th
percentile or average as a design criterion, (2) misperception of the typical sized person, (3) generalizing
across human characteristics, and (4) summing of measurement values for like percentile points across
adjacent body parts.
The 50th percentile or mean shall not be used as design criteria as it accommodates only half of the
users. [Source: NASA-STD-3000A, 1989].
When the population distribution is Gaussian (normal), the use of either the 50th percentile or the
average for a clearance would, at best, accommodate half the population.
When the middle 30 percent of a population of 4000 men was measured on 10 dimensions, only one
fourth of them were "average" in a single dimension (height), and less than 1 percent were average
in five dimensions (height, chest circumference, arm length, crotch height, and torso
circumference). Keeping in mind that there is not an "average person," one also must realize that
there is neither a “5th percentile person” nor a “95th percentile person”. Different body part
dimensions are not necessarily highly correlated. An implication is that one cannot choose a person
who is 95 percentile in stature as a test subject for meeting 95 percentile requirements in reach or
other dimensions.
Summation of Segment Dimensions
Summation of like percentile values for body
components shall not be used to represent any human physical characteristic that appears to be a
composite of component characteristics. [Source: NASA-STD-3000A, 1989]
The 95th percentile arm length, for instance, is not the addition of the 95th percentile shoulder-to
elbow length plus the 95th percentile elbow-to-hand length. The actual 95th percentile arm length
will be somewhat less than the erroneous summation. To determine the 95th percentile arm length,
one must use a distribution of arm length rather than component part distributions.
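The point about summation can be demonstrated numerically. The following sketch simulates two imperfectly correlated arm segments (the means, standard deviations, and correlation are assumed values, not survey data) and shows that the 95th percentile of the whole arm is smaller than the sum of the two segment 95th percentiles.

import random
from statistics import quantiles

random.seed(1)

def correlated_pair(mu1, sd1, mu2, sd2, rho):
    """Draw two normally distributed segment lengths with correlation rho."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    return mu1 + sd1 * z1, mu2 + sd2 * (rho * z1 + (1 - rho ** 2) ** 0.5 * z2)

def p95(data):
    return quantiles(data, n=20, method="inclusive")[18]   # 95th percentile

# Assumed, illustrative segment parameters in mm and correlation (not survey data).
upper, fore = zip(*(correlated_pair(330, 18, 460, 24, 0.6) for _ in range(100_000)))
whole_arm = [u + f for u, f in zip(upper, fore)]

print(f"sum of segment 95th percentiles:  {p95(upper) + p95(fore):.0f} mm")
print(f"95th percentile of the whole arm: {p95(whole_arm):.0f} mm (smaller)")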
In this section, rules are presented for approaching complex design problems that require the
consideration of a sequence of relevant design reference locations (such as seat reference points
and eye reference zones), human physical characteristics, statistical points, and measures. The
recommended approach involves identifying the necessary human activities and positions and
establishing reference points and envelopes for the necessary activities. These envelopes impact the
location and design of controls and displays, as well as the placement of work surfaces, equipment,
and seating accommodations. The effects of clothing or carried equipment are then used to expand
the dimensions.
positional relationship of a shoulder reference point or arm rotation point to the seat back,
seat reference point, or other posture reference or design reference points
the appropriate samples and anthropometric measurements from the data provided in this
document or in DOD-HDBK-743A, 1991 [Source: DOD-HDBK-743A, 1991]
Effects of Clothing
Because most anthropometric data sources represent nude body measurements (unless otherwise
indicated), suitable allowances shall be made for light, medium, or heavy clothing and for any
special protective equipment that is worn.
Reference tables give the additive effects of clothing on static body dimensions, including 95th
percentile gloved-hand measures. If special items of protective clothing or equipment are involved, the effects
shall be measured in positions required by the users' tasks. The effects on the extremes of the
population distribution shall be determined. [Source: Department of Defense (MIL-HDBK-759B), 1992;
Johnson, 1984]. Nude dimensions and light-clothing dimensions can be regarded as synonymous for
practical purposes.
Percentile values may be estimated statistically when:
the percentile value is not given in applicable human-machine interface data, and
the population distribution for the applicable human physical characteristic is known to be
Gaussian (normal) and the mean and variance are known. [Source: Israelski, 1977]
Using bivariate distribution data: Bivariate data should be professionally applied and interpreted,
since knowledge of the population distribution characteristics is necessary to project and
extract design limits and to apply them to design problems. [Source: MIL-HDBK-759B, 1992].
The variability of two body measurements and their interrelationship with each other may be
presented in a graph or a table. Bivariate information includes the ranges of two measurements
and the percentages or frequencies of individuals who are characterized by the various possible
combinations of values of the two measurements. Knowledgeable professionals can tell about
the relationships from the appearance and shape of the joint distribution of measures.
Correlation statistics, when the relationship warrants, provide additional insight, and when
appropriate samples are large enough, may provide predictions of population values.
Although common percentile values may not be used to sum data across adjacent body parts,
regression equations derived from the applicable samples can be used in constructing composite
body measures.
Definition: The correlation coefficient or "r" value describes the degree to which two variables
vary together (positive correlation) or vary inversely (negative correlation). The correlation
coefficient, "r", has a range of values from +1.0 (perfect positive correlation) through -1.0
(perfect negative correlation). Multiple correlations involve the predictable relationship of two
or more variables with another criterion variable (such as a composite measurement value). "R"
is the multiple correlation coefficient. It is recommended that only correlations with strong
predictive value be used (that is, where |r| or |R| is at least 0.7). (Note: R² is the square of the
multiple correlation coefficient and equates to the proportion of the variation accounted for in
the prediction. An R of 0.7 would account for about 50 percent of the variation.)
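As a small illustration of these definitions, the following sketch computes Pearson's r for a pair of assumed, made-up body measurements and reports the proportion of variation explained (r squared); an r of about 0.7 would explain roughly half the variation, as noted above.

from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equally long sequences."""
    mx, my = mean(xs), mean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# Assumed, illustrative paired measurements in mm (not survey data).
stature = [1620, 1655, 1690, 1710, 1735, 1760, 1790, 1820]
reach   = [ 770,  800,  795,  830,  845,  850,  880,  900]

r = pearson_r(stature, reach)
print(f"r = {r:.2f}, proportion of variation explained r^2 = {r * r:.0%}")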
Several factors contribute to variability in human physical characteristics: (1) body position, (2) age,
health, and body condition, (3) sex, (4) race and national origin, (5) occupation, and (6) evolutionary
trends. These factors affect future population sampling and
encourage the use of the most recent data on the populations of interest. If designers and
human factors specialists need to draw upon other data or accomplish some special purpose
sampling, the following rules related to data variability may assist.
Body slump: In determining body position and eye position zones for seated or standing
positions, a slump factor which accompanies relaxation should be considered. Seated-eye height
measurements can be reduced by as much as 65 mm (2.56 in) when a person sits in a relaxed
position. Body slump, when standing, reduces stature as much as 19 mm (.75 in) from a perfectly
erect position. These slump factors should be considered in designing adjustable seats, visual
envelopes, and display locations. [Source: Israelski, 1977]
Use of Anthropometric and Biomechanics Data
If designers and human factors specialists need additional data to solve anthropometric design
problems associated with human physical characteristics, they should draw on additional sources
such as those cited in this section.
Task considerations: Designers and human factors specialists shall take the following task conditions
into consideration when using the human physical characteristic data presented:
the nature, frequency, and difficulty of the related tasks to be performed by the operator or
user of the equipment.
the position of the body during performance of operations and maintenance tasks.
the touch, grasp, torque, lift, and carry requirements of the tasks.
increments in the design-critical dimensions imposed by clothing or equipment, packages,
and tools.
Age becomes a factor after age 60, at which time mobility has decreased about 10% from
youth.
Sex differences favor greater range in females at all joints except the knee.
Body build is a significant factor. Joint mobility decreases significantly as body build
ranges from the very slender, through the muscular, to the obese.
Exercise generally increases movement range; however, weight training, jogging, and the like may
tend to shorten certain muscle groups or increase their bulk, so that some movements become restricted.
Fatigue, disease, body position, clothing, and environment are other factors affecting
mobility. [Source: NASA-STD-3000A, 1989; AFSC DH 1-3, 1980; Israelski, 1977].
This part provides introductory definitions related to the angular motion of skeletal joints.
Knowledge of the range of joint motion helps the designer determine the placement and
allowable movement of controls, tools, and equipment.
Trunk Movement
Workplace designs based upon design-driven body positions shall allow enough space to move the
trunk of the body. The design shall be based upon:
the required tasks and human functions
the need for optimal positions for applying forces
the need for comfortable body adjustments and movements. [Source: MIL-HDBK-759B,
1992]
Experts advocate that the anthropometric data used for specific design considerations of specific
user groups should be based on the same population groups.
Anthropometric data obtained from one group may differ from similar data obtained from another.
For solving the specific design problems of a specific user group, anthropometric data should
therefore come from that same population group, with the appropriate percentile selections. The
use of non-Indian anthropometric data in Indian designs, and of other imported ready-made designs,
often results in mismatches with the requirements of Indian users. Accidents and serious mistakes
may occur if design dimensions do not match the body dimensions of the specific groups.
Indian usage habits also differ from those of other populations. Some Indians prefer sitting on the
floor and performing a range of activities there. Non-Indian data sources do not provide references
for these requirements.
India being a multicultural nation with an ethnically diverse population, it would be of direct
relevance to strengthen design practice in India with data on human dimensions collected from
Indian population groups for the specific needs of Indian users.
Data provided here (from experiments) are taken from subjects wearing minimal clothing and without
footwear or headwear. Hence, while using height dimensions for any design application, appropriate
height adjustments for soles and headwear may be an added consideration. Dimensional allowances
and movement restrictions due to the wearing of heavy clothes, etc., need to be considered carefully.
The applicability of anthropometric data (collected from the specific population groups) to the design
of specific articles, e.g. products, equipment, furniture, machine tools, etc., should be investigated.
It would not be an exaggeration to say that there is no person with all his or her body dimensions in
the 95th or 50th or 5th percentiles. All body parts do not follow the same proportions and may even
show different somatotype (the structure or build of a person, especially the extent to which it
exhibits the characteristics of an ectomorph, an endomorph, or a mesomorph) features. A person
with a 50th percentile body height may have a 75th percentile hand length, 25th percentile foot
length and 95th percentile abdominal circumference; a person of 75th percentile height may have a
25th percentile chest circumference and 50th percentile head circumference; a mesomorph-type
general body structure may have a hip portion of endomorphic nature. These mixed body types and
proportions among the body parts constitute the common population.
Dimensions of equipment or work accessories and workspaces should be considered while designing
in order to achieve effective accommodation layout and for enabling easy handling of equipment by
moving within and around the space provided.
As an example, the height of the work surface for a housewife of very short stature, using a cooking
platform, must be fixed according to her height to make her feel comfortable while cooking. It
should take into account the dimensions of the cooking accessories, so that these come within the
range of her free arm reach and movement while cooking.
Here, the standard general work-surface height used by designer-architects may not be appropriate.
If the surface is to be used by others within the same user population, then the proper percentile
values of the relevant human dimensions, as well as the individual work-surface height, should be
considered. Proper allowances should be made for the different tasks to be performed, so that most
of the population can perform their cooking tasks without problems and uneasiness.
For design purposes to fit an intended user from amongst the known population group, different
percentile values of different human body dimensions should be considered for different design
dimensions. Designing an article or a system with a single percentile value for all the relevant human
dimensions would fail to satisfy all the other dimensional features of the design.
Designing an article for large-sized users means that the higher percentile values of all relevant
dimensions should be considered. Because most of the population then has lower values than the
large-sized users, users with lower percentile values will not be able to reach items placed at those
limits and will keep away from things that are meant to stay beyond their reach, thereby ensuring
safety. For designing doors, stature heights higher than the maximum value must be considered,
with appropriately defined allowances for articles that may be carried on the head by
intended users. A feeling of psychological clearance may be an added dimension. The higher
percentile value of the maximum body breadth may be considered for passageways, etc., to provide
free movement. Small-sized users should also be considered when designing things that must be
within easy reach, keeping in mind the contextual use, the application of strength, or any other
consideration involving human effort. This means that the lower percentile values of the relevant
dimension should be considered, so that the maximum number of people, who have higher values
than that, are able to perform physical tasks easily and with good control of the body.
Normally, for the moving parts of a machine that are dangerous and not to be touched, i.e. those
which must be kept out of arm's reach, the higher value of the "leaning forward" arm reach, along
with appropriate allowances for safety distances should be considered, in order to ensure safety.
But, if an "on-off" type handle or switch requires to be used only a few times throughout the whole
working period, then it can be placed at a distance of the 75th percentile or so of the normal
"standing forward" grasp reach.
While working, it should not create any obstacle to normal work. When required, it could be grasped
by leaning, as it comes within the 5th percentile of the "leaning forward" grasp reach limit. If
anything has to be operated smoothly, it should be placed nearer, say within the lower percentile
range (perhaps the 5th) of the arm grasp reach. This means that people having a higher value than
that can reach it easily. Below this value, people can handle it with a little difficulty, but the number of
such people would be very limited. Hence, these may be ignored for general purposes, if there is no
specific requirement.
Selection of the average or mean value of a dimension depends on its contextual use and whether it
demands critically to be fitted into the whole range of the users’ population. The terms mean or
average, and median or the 50th percentile value, may not be identical. But if the sample from which
the data were collected is large enough, the median (50th percentile) value is normally close to
the mean value. Hence, in practice, the average is used as a synonym for the 50th percentile.
The average value should not be applied blindly while designing. As a classical example, while
conceptualizing a common counter height for general-purpose use, the 50th percentile elbow height
may be considered, because the height differences of people lying in both halves of the data
distribution may then be reasonably accommodated. Alternatively, an adjustment range extending
towards the 95th percentile value may be provided, so that the height can be adjusted for specific
needs. A door height based on the 50th percentile stature value might work for some, but tall people
could only pass through by bending; this creates inconvenience for them and such a value should not
be used.
should not be used. If we consider people of average-weight say, 55 kg and design a lift system to
carry 10 people, it means that the carrying capacity is 550 kg. if 10 people of less-weight board it,
there would be no problem but, when one or more with a 75th or 95th percentile value, say of 60 kg
or 75 kg or more than 100 kg do so, then it may not be safe. Hence, the values tending towards
higher levels, even the highest limits, should be considered, along with the allowances for the
articles supposed to be carried by the people on board, as well as other safety allowances for the
articles supposed to be carried by the people on board, as well as other safety allowances so that,
overloading beyond the lift's capacity is ruled out.
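The lift example can be made concrete with a short sketch. All the numbers below (the 95th percentile body mass, the carried-article allowance, and the safety factor) are assumptions for illustration only.

# All values below are assumptions for illustration, not measured data.
occupants         = 10
avg_body_mass_kg  = 55     # mean body mass used in the naive sizing
p95_body_mass_kg  = 85     # assumed 95th percentile body mass
carried_allowance = 10     # kg of carried articles per person
safety_factor     = 1.25   # additional structural margin

naive_capacity  = occupants * avg_body_mass_kg
design_capacity = occupants * (p95_body_mass_kg + carried_allowance) * safety_factor

print(f"average-based capacity: {naive_capacity} kg (risks overload)")
print(f"design capacity:        {design_capacity:.0f} kg")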
Another example of the same case with reference to weight may be considered while determining
the structural strength of a seat, or in the case of an individual or a family, a design for a suspended
garden swing. Then, not only should the highest value be used, but also proper allowances for the
different ways of using the seat should be considered. The user may carry something on his lap; he
may even stand on it and do some work. This could be an added consideration.
Anthropometric data are obtained separately for males and females and are sometimes also
presented separately. We may require these separate data for designing separately for males and
females. Designers quite often take the 5th percentile of the female data as the lowest value and the
95th percentile of the male data as the highest value of the dimensional range of the human body in
general, for various design uses. It is seen, however, that all dimensions do not always follow this
rule. Nowadays, as there is very limited scope for dividing jobs exclusively between males and
females, for general purposes percentile values of any body dimension computed from both males
and females combined might serve well for producing designed articles or for conceptualizing work spaces.
According to requirements, percentile values may be derived irrespective of whether there is
predominantly a male population or female population.
For example, to keep users away from a dangerously moving part of any equipment, its location
relative to the human operator should be such that most of the population cannot reach it easily.
The higher percentile value of the forward arm reach in the "standing in front, leaning" posture, say
the 95th percentile, is found to be 1336 mm for males and 1199 mm for females. To ensure safety,
the higher of the two values must be considered, because we cannot be sure whether the intended
users will be only females, only males, or both. Here, the combined male and female 95th percentile
figure, 1309 mm, may also be considered in general, and the moving part can be fixed at that
distance. If the equipment is unguarded, to avoid all risk, the combined maximum value of 1500 mm
(the male maximum being 1500 mm and the female maximum 1250 mm) should be used.
Another example, where easy accommodation is the prime concern, is designing a seat with arm
rests. The breadth of the seat should accommodate the relaxed thigh-to-thigh distance. When the
higher values, say the 95th percentiles for males and females, are found to be 449 mm and 529 mm
respectively, either the combined 95th percentile value of 479 mm (for general purposes) or the
higher of the two values may be considered.
For general use we may select: (a) the lowest and highest values of any dimension taken from
separate male and female data sources, or (b) the whole data set collected from both males and
females, computed as a single population with combined values, irrespective of whether individual
measurements came from male or female sources.
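Option (b) above can be illustrated with a short Python sketch that pools two assumed samples and reads the 5th and 95th percentiles from the combined data; the values are made up purely for illustration.

from statistics import quantiles

# Assumed, illustrative samples of one body dimension in mm (not survey data).
male   = [1310, 1325, 1336, 1290, 1350, 1305, 1342, 1318, 1360, 1298]
female = [1180, 1199, 1165, 1210, 1190, 1175, 1205, 1160, 1220, 1185]

combined = sorted(male + female)
cuts = quantiles(combined, n=20, method="inclusive")   # 19 percentile cut points
print(f"combined 5th percentile:  {cuts[0]:.0f} mm")
print(f"combined 95th percentile: {cuts[18]:.0f} mm")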
As stated earlier, anthropometric data are useful in work physiology, occupational biomechanics,
and ergonomic/work design applications. Of these, the most common practical use is in the
ergonomic design of workspaces and tools. Here are some examples:
• The optimal power zone for lifting is approximately between standing knuckle height and
elbow height, as close to the body as possible. Always use this zone for strategic lifts and
releases of loads, as well as for carrying loads but minimize the need to carry loads—use
carts, conveyors, and workspace redesign.
• Strive to design work that is lower than shoulder height (preferably elbow height), whether
standing or sitting. (Special requirements for vision, dexterity, frequency, and weight must
also be considered.)
• The upper border of the viewable portion of computer monitors should be placed at or
below eye height, whether standing or sitting (Konz and Johnson 2004).
• Computer input devices (keyboard and mouse) should be slightly below elbow height,
whether standing or sitting (Konz and Johnson 2004). Use split keyboards to promote
neutral wrist posture (Konz and Johnson 2004). Learn keyboard shortcuts to minimize
excessive mouse use. Use voice commands—speech recognition software is increasingly
effective for many users and applications.
• For seated computer workspace, the lower edge of the desk or table should leave some
space for thigh clearance (Konz and Johnson 2004).
• For seating, the height of the chair seat pan should be adjusted so the shoe soles can rest
flat on the floor (or on a foot rest), while the thighs are comfortably supported by the length
of the seat pan (Konz and Johnson 2004). Use knowledge of the popliteal (rear surface of the
knee) height, including the shoe sole allowance.
• The chair seat pan should support most of the thigh length (while the lower back is well
supported by the seat back), while leaving some popliteal (area behind the knee) clearance (Konz
and Johnson 2004). In other words, the forward portion of the seat pan should not press
against the calf muscles or back side of the knees.
• For horizontal reach distances, keep controls, tools, and materials within the forward reach
(thumb tip) distance. Use the anthropometric principle of designing for the extreme by
designing the reach distances for the 5th percentile female, thus accommodating 95 percent
of females and virtually 100 percent of males.
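The reach guideline in the last bullet can be checked numerically. The following sketch assumes illustrative (not measured) forward-reach distributions for females and males, places the design reach at the 5th percentile female value, and reports the share of each group accommodated.

from statistics import NormalDist

# Assumed, illustrative forward-reach distributions in mm (not survey data).
female_reach = NormalDist(mu=710, sigma=35)
male_reach   = NormalDist(mu=780, sigma=38)

design_reach = female_reach.inv_cdf(0.05)        # design for the 5th percentile female

share_f = 1 - female_reach.cdf(design_reach)     # share of females who can reach
share_m = 1 - male_reach.cdf(design_reach)       # share of males who can reach
print(f"design reach distance: {design_reach:.0f} mm")
print(f"females accommodated:  {share_f:.0%}")
print(f"males accommodated:    {share_m:.0%}")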
7.11 SUMMARY
In the science of anthropometrics, measurements of the population's dimensions are obtained
based on the population's size and strength capabilities and differences. From these measurements,
a set of data is collected that reflects the studied population in terms of size and form. This
population can then be described in terms of a frequency distribution including terms of the mean,
the median, standard deviation, and percentiles. The frequency distribution for each measurement
of the population dimension is expressed in percentiles. The xth percentile indicates that x percent
of the population has that value or less for a given measurement; the remaining (100 - x) percent of
the population has a higher value. The median or average value for a particular dimension is the
50th percentile.
In ergonomic design, we do not design for the average person, or the 50th percentile; we design for
the extremes, typically the 95th or 5th percentile. In other words, at least 95% of the population can
use the work area safely and efficiently, and only the remaining 5% may need special
accommodation. Conventionally, the 95th percentile has been chosen to determine clearance
heights or lengths; that means 95% of the population will be able to pass through a door, while only
5% may need special accommodation. In addition, the 5th percentile female has been chosen to
determine the functional reach distance; that means 95% of females will be able to perform this
reach, and only 5% may need special accommodation.
7.12 KEYWORDS
Body Slump- The reduction in standing or seated height (and eye height) that occurs when a person
relaxes from a fully erect posture
Biomechanics- Biomechanics is the study of the structure and function of biological systems such
as humans, animals, plants, organs, and cells by means of the methods of mechanics.
Correlation- In statistics, dependence refers to any statistical relationship between two random
variables or two sets of data. Correlation refers to any of a broad class of statistical relationships
involving dependence.
Design limits-The design limits approach entails selecting the most appropriate percentile values in
population distributions and applying the appropriate associated data in a design solution.
UNIT 8 MUSCULAR SYSTEMS & WORK
Objectives
After going through this unit, you will be able to:
identify the occupational musculoskeletal disorders.
recognize the risk factors in development of musculoskeletal disorders.
combine different risk categories as risk qualities.
guide on various risk factors to employee and employer.
identify the factors for prevention of musculoskeletal disorders.
Structure
8.1 Introduction
8.2 Characteristics of health problems
8.3 Basic risk factors for the development of musculoskeletal disorders
8.4 Factors contributing to the development of musculoskeletal disorders
8.5 Factors to be considered in prevention
8.6 Guidance on main risk factors
8.7 Basic rules for preventive actions in practice
8.8 Summary
8.9 Keywords
8.1 INTRODUCTION
The term musculoskeletal disorders denote health problems of the locomotor apparatus, i.e. of
muscles, tendons, the skeleton, cartilage, ligaments, and nerves. Musculoskeletal disorders include
all forms of ill-health ranging from light, transitory disorders to irreversible, disabling injuries. This
booklet focuses on musculoskeletal disorders, which are induced or aggravated by work and the
circumstances of its performance. Such work-related musculoskeletal disorders are supposed to be
caused or intensified by work, though often activities such as housework or sports may also be
involved.
The severity of these disorders may vary from occasional aches or pains to exactly diagnosed
specific diseases. The occurrence of pain may be interpreted as the result of a reversible acute
overloading, or it may be an early symptom of the beginning of a serious disease.
Disorders of the musculoskeletal system represent a main cause for absence from occupational
work. Musculoskeletal disorders lead to considerable costs for the public health system. Specific
disorders of the musculoskeletal system may relate to different body regions and occupational work.
For example, disorders in the lower back are often correlated to lifting and carrying of loads or to the
application of vibration. Upper-limb disorders (at fingers, hands, wrists, arms, elbows, shoulders,
neck) may result from repetitive or long-lasting static force exertion or may be intensified by such
activities.
Health problems occur if the mechanical workload is higher than the load-bearing capacity of the
components of the musculoskeletal system. Injuries of muscles and tendons (e.g. strains, ruptures),
ligaments (e.g. strains, ruptures), and bones (e.g. fractures, unnoticed micro-fractures, degenerative
changes) are typical consequences.
In addition, irritations at the insertion points of muscles and tendons and of tendon sheaths, as well
as functional restrictions and early degeneration of bones and cartilages (e.g. menisci, vertebrae,
inter-vertebral discs, and articulations) may occur.
There are two fundamental types of injuries, one is acute and painful, the other chronic and
lingering. The first type is caused by a strong and short-term heavy load, leading to a sudden failure
in structure and function (e.g. tearing of a muscle due to a heavy lift, or a bone fracture due to a
plunge, or blocking of a vertebral joint due to a vehement movement). The second results from a
permanent overload, leading to continuously increasing pain and dysfunction (e.g. wear and tear of
ligaments, tendovaginitis (an acute or chronic inflammation of the tendon sheath, occurring in the
region of the hand, the wrist joint, the forearm (radial and ulnar), the foot, the ankle joint, and the
Achilles tendon), muscle spasm and hardening). Chronic injuries resulting from long-term loading
may be disregarded and ignored by the worker because the injury may seemingly heal quickly and
may not result in an actual significant impairment.
The number of such injuries is substantial. In industrialized countries, about one-third of all health-
related absences from work are due to musculoskeletal disorders. Back injuries (e.g. lower back
pain, ischiatic pain (the ischiatic, or sciatic, nerve arises from the sacral plexus and passes about
halfway down the thigh, where it divides into the common peroneal and tibial nerves), disc
degeneration, and herniation (protrusion through an abnormal bodily opening)) have the highest
proportion (approximately 60%). The second position is taken by injuries of the neck and the upper
extremities (e.g. pain syndromes of the neck, shoulders, and arms, "tennis elbow", tendinitis and
tenovaginitis (tendinitis is a common joint ailment among professional athletes and workers
performing repetitive motions), carpal tunnel syndrome, and syndromes related to cumulative
traumata, the so-called cumulative trauma disorders (CTDs) or repetitive strain injuries (RSIs)),
followed by injuries of the knees (for example, degeneration of the menisci, arthrosis (an arthrosis is
a joint, an area where two bones are attached for the purpose of motion of body parts, usually
formed of fibrous connective tissue and cartilage)) and of the hips (e.g. arthrosis). It is generally
accepted that working conditions and workload are important factors for the development and
continuance of these disorders.
The risk for the musculoskeletal system depends to a great extent on the posture of the operator.
Especially, twisting or bending the trunk can result in an increased risk for the development of
diseases at the lower back. Postural demands play an important role, particularly, when working in
confined spaces.
Besides such types of occupational loading resulting from usual work-site conditions,
musculoskeletal disorders can also be caused by unique, unforeseen, and unplanned situations, e.g.
by accidents. The origin of disorders due to accidents is characterized by a sudden overstrain of the
organs of locomotion.
Total Mechanical Loading
The total load affecting the musculoskeletal system depends on the level of the different load
factors mentioned before, such as:
the level and direction of forces
the duration of exposure
the number of times an exertion is performed per unit of time
postural demands
Risk qualities
According to the factors mentioned before, different risk categories can be derived using different
combinations or qualities thereof, such as:
8.4 FACTORS CONTRIBUTING TO THE DEVELOPMENT OF MUSCULOSKELETAL DISORDERS
In the following, musculoskeletal load is characterized with respect to the main influences, such
as the level of force, repetition and duration of execution, postural and muscular effort as well
as environmental and psychosocial factors.
Exertion of high-intensity forces may result in acute overloading of the loaded tissues. High-intensity
forces are active within the body tissues particularly during lifting or carrying heavy objects.
Furthermore, pushing, pulling, holding, or supporting an object or a living being is a matter of
high-intensity forces.
Handling loads over long periods of time may lead to musculoskeletal failures if the work is
continued for a considerable part of the working day and is performed for several months or years.
An example is performing manual materials- handling activities over many years, which may result
in degenerative diseases, especially of the lumbar spine. A cumulative dose can be regarded as an
adequate measure for the quantification of such types of loadings. Relevant factors for the
description of the dose are duration, frequency, and load level of the performed activities.
Musculoskeletal disorders may also result from frequently repeated manipulation of objects, even
if the weight of the objects handled or the forces produced are low. Such jobs (e.g. assembling
small work pieces for a long time, long-time typing, and supermarket checkout work) may induce
disadvantages for the musculature, even if the forces applied to the handled objects are low. Under
such conditions, the same parts and fibers of a muscle are activated for long periods of time or with
a high frequency and may be subject to overload. Early fatigue, pain, and possible injuries are the
consequences.
In a well-designed workstation, work can be performed most of the time in an upright posture with
the shoulders un-raised and the arms close to the trunk. Working with a heavily bent, extended or
twisted trunk can result in an overload of spinal structures and increased activity of entire muscles.
If the trunk is simultaneously bent and twisted the risk of spinal injury is considerably increased. If
movements or postures with the hands above shoulder height, below knee level or outstretched
are performed over prolonged periods or recurrently, working conditions should be changed.
Working in a kneeling, crouching, or squatting position augments the risk of overloading
musculoskeletal elements. Also, long-time sitting in a fixed posture is accompanied by long-lasting
muscular activity which may lead to an overload within muscular structures. Such working positions
should be avoided and the time for working in such positions should be kept to a minimum if such
work is not completely avoidable.
Static muscular load is found under conditions where muscles are tensed over long periods of time
in order to keep a certain body posture (e.g. during work with the hands overhead in drilling holes
into the ceiling, holding the arms abducted in hair dressing, holding the arms in typing position
above the keyboard, working in a confined space). A characteristic of static muscular load is that a
muscle or a group of muscles is contracted without the movement of the corresponding joints. If
the muscle has no opportunity to relax during such a task, muscular fatigue may occur even at low
force levels, the function of the muscles may be impaired, and pain may result. In addition, static load
leads to a deficiency in blood circulation in muscles. Under normal conditions, the permanent
change between contraction and relaxation acts as a circulation-supporting pump. Continuous
contraction restricts the flow of blood from and to the contracted muscle. Swelling of legs, for
example, is an indicator of such a lack in blood circulation.
Muscular inactivity represents an additional factor for the development of musculoskeletal
disorders. Muscles need activation to maintain their functional capacity, and the same is true of
tendons and bones. If activation is lacking, a de-conditioning will develop, which leads to functional
and structural deficits. As a result, a muscle is no longer able to stabilize joints and ligamental
structures adequately. Joint instabilities and failures in coordination, connected with pain,
movement abnormalities and overloading of joints, may be the consequences.
Monotonous repetitive manipulations with or without an object over long periods of time may lead
to musculoskeletal failures. Repetitive work occurs when the same body parts are repeatedly
activated and there is no possibility of at least a short period of relaxation, or a variation in
movement is not possible. Relevant determining factors are the duration of the working cycles,
their frequency, and the load level of the performed activity. Examples of repetitive work are
keyboard use while typing, data entry, clicking or dragging with a computer mouse, meat cutting, etc.
Unspecific complaints due to repetitive movements of the upper extremities are often summarized
in the term "repetitive strain injury - RSI".
Strain on the locomotor system may also occur due to the application of vibration. Vibration may
result from hand-held tools (e.g. rock drill) and, therefore, exert vibration strain on the hand-arm
system. Hand-arm vibration may result in the dysfunction of nerves, reduced blood circulation,
especially in the fingers (white finger syndrome) and degenerative disorders of bones and joints of
the arms. Another risk concerns whole body vibration generated by vibrating vehicles and platforms
such as earth-moving machines, low-lift platform trucks or tractors and trucks driving off-road. The
vibration is transferred to the driver via the seat. Whole-body vibration can cause degenerative
disorders, especially in the lumbar spine. The effect of vibration may be intensified if the vehicle is
driven in a twisted body posture. A vibration-attenuating driving seat may help to reduce the effect
of vibration.
Physical environmental factors such as unsuitable climatic conditions can interact with mechanical
load and aggravate the risk of musculoskeletal disorders. In particular, the risk of vibration- induced
disorders of the hands is considerably enhanced if low temperature coincides with the use of hand-
held vibrating tools. Another example of environmental factors influencing the musculoskeletal
strain is the lighting conditions: If lighting and visual conditions are deficient, muscles are strained
more intensively, particularly in the shoulder and neck region.
Besides the mechanically induced strain affecting the locomotor organs directly, additional factors
can contribute to the beginning or aggravation of musculoskeletal disorders. Psychosocial factors
can intensify the influence of mechanical strain or may induce musculoskeletal disorders by
themselves due to increasing muscle tension and by affecting motor coordination. Furthermore,
psychosocial influences such as time pressure, low job decision latitude or insufficient social support
can augment the influence of physical strain.
A summary of the main factors contributing to the risk of developing work-related musculoskeletal
disorders is provided in the table below.
Risk factor | Possible consequences | Typical examples | Preventive measures
Exertion of high-intensity forces | Acute overloading of the tissues | Lifting, carrying, pushing, pulling heavy objects | Avoid manual handling of heavy objects
Handling heavy loads over long periods of time | Degenerative diseases, especially of the lumbar spine | Manual materials handling | Reduce mass of objects or number of handlings per day
Frequently repeated manipulation of objects | Fatigue and overload of muscular structures | Assembly work, long-time typing, check-out work | Reduce repetition frequency
Working in unfavorable posture | Overload of skeletal and muscular elements | Working with heavily bent or twisted trunk, or hands and arms above shoulders | Working with an upright trunk and the arms close to the body
Static muscular load | Long-lasting muscular activity and possible overload | Working overhead, working in a confined space | Repeated change between activation and relaxation of muscles
Muscular inactivity | Loss of functional capacity of muscles, tendons and bones | Long-term sitting with low muscular demands | Repeated standing up, stretching of muscles, remedial gymnastics, sports activities
Monotonous repetitive manipulations | Unspecific complaints in the upper extremities (RSI) | Repeated activation of the same muscles without relaxation | Repeated interruption of activity and pauses, alternating tasks
Application of vibration | Dysfunction of nerves, reduced blood flow, degenerative disorders | Use of vibrating hand-tools, sitting on vibrating vehicles | Use of vibration-attenuating tools and seats
Physical environmental factors | Interaction with mechanical load and aggravation of risks | Use of hand-held tools at low temperatures | Use gloves and heated tools at low temperatures
Psychosocial factors | Augmentation of physical strain, increase in absence from work | High time pressure, low job decision latitude, low social support | Job rotation, job enrichment, reduction of negative social factors
8.5 FACTORS TO BE CONSIDERED IN PREVENTION
With regard to the maintenance and promotion of health, a weighed balance between activity and rest
is necessary. Rest pauses are a prerequisite for recovery from load-induced strain and for preventing
the accumulation of fatigue. Movement should be preferred to static holding; the aim should be a
combination of active periods with loading and inactive periods of relaxation. The individual
"favorable load" can vary from subject to subject depending on functional abilities and individual
resources. Overload as well as inactivity should be avoided. An appropriate load acts as training for the
muscles, leading to adaptation and thus to an increase in the capacity of muscles, tendons, and bones.
This is essential for health and well-being.
CAVEAT: This general view, however, needs refinement in special cases, since parts of the
musculoskeletal system may not adapt to loads in the same way. For example, repetitive lifting of
heavy loads probably does increase muscle capacity, but probably does not increase the capacity of
the spinal discs to withstand mechanical loading. Consequently, strength training could mislead
individuals to believe they could safely lift greater loads and thus risk back problems. Jobs should,
therefore, be so designed that most people are able to carry them out, rather than only a few
strong individuals.
continuously at a moderate pace and not during short time-periods with high time-pressure. The
worker must be informed about those possibilities and should be motivated to use them.
Risk frequently results from exposure to mechanical loading. The main influencing factors are high
forces resulting from lifting and pushing or pulling heavy objects, high repetition frequency, and long
duration of force execution, unfavorable posture, uninterrupted muscle force exertion or working on
or with vibrating machines. In some cases, the degree of handling precision, rather than the actual
force exerted, constitutes an additional hazardous factor.
Transferring persons in health professions, in old-age care and hospital care.
Holding and moving of heavy loads requires high muscular force; this may lead to acute overload
and/or fatigue of muscles. Examples: repeated manipulation of heavy bricks in construction
work, loading of coffee sacks, cement bags or other loads in ships, containers, or lorries.
During holding and moving of heavy loads, high forces occur in the skeletal system, too. Risk of
acute overloading and damage may result. Loadings being incurred over a long period of time
may cause or promote degenerative disorders, especially in the low-back area (e.g. when
handling loads with a bent back). For the individual risk of manual materials-handling activities,
the functional capacity of the working person plays an important role.
Provide aids (hoists, or similar devices).
Mark heavy loads.
Mark non-symmetrical load distribution within the object. Mark containers or barrels with
movable content (fluids, granules, etc.).
Suggest and carry out training on "handling".
During such work high forces also occur in the skeletal system. This may lead to an acute
overloading and injury of the skeletal structures. Force exertion, where the force acts
distantly from the body, bears a high risk of damage to the lumbar spine tissues. For tasks
with long-lasting or frequently repeated high force exertion, there is risk of degenerative
diseases especially of the lumbar spine. This is true if force exertion is carried out in
unfavorable body postures.
Advice to the employer:
Provide conditions for secure standing.
Provide wheeled vehicles, trolleys, dollies, or similar devices.
Avoid pushing or pulling in confined rooms because of constrained postures.
Avoid obstacles and uneven ground.
During unfavorable body postures, high forces occur also in the skeletal system. This may lead to
acute overloading and damage of skeletal structures. For long-lasting activities with an inclined
trunk, degenerative disorders, especially in the lumbar region, can arise if such work is executed
over a period of many years.
Maintaining unfavorable body postures for long periods of time relates to long-term activation
of certain muscles which may lead to muscular fatigue and considerable reduction in blood
circulation. Such a partial decrease in the functional ability of the musculature leads to a reduced
ability to react to sudden impacts and may therefore result in increased accident risk.
Approach the working area and body close enough to enable carrying out the task within
reach; use aids such as scaffolds and ladders, if suitable.
Change posture often to activate different muscles alternately while carrying out tasks;
consider alternating between standing and sitting postures.
Advice to the employer:
Offer adjustable equipment: chairs, tables, scaffolds, etc.
Set time-limits when constrained postures are unavoidable and/or alternate tasks of
different nature.
Avoid giving tasks that require a kneeling, lying, crouching, or squatting position.
During work, the working person often has little influence on the working pace, speed, task
sequence, and work and break schedule.
Commonly, the working person cannot abandon the workplace without being replaced by
another person.
Examples are:
assembly line
cash registration
loading of packing machines
Avoid continuous loading of the same muscles for longer periods of time.
Strive for changes in motion to avoid identical muscular activation patterns. For
strongly monotonous work, changes in the execution of movements may be limited.
Examples are:
maintaining a static posture
during bricklaying at floor level; concrete reinforcement work; picking of fruits and
vegetables at floor level; writing; typing; work with computer mouse
In the skeletal system long-lasting loading (e.g. due to long-lasting work in an inclined
posture) can lead to deficient nutrition of the spinal discs.
Supply handles or grips which can be used with right as well as left hand.
Place handles/grips to enable use in a neutral position of wrist and arm.
Vibration:
Hand-arm vibration encountered through hand-held tools may lead to degenerative
disorders or to blood circulation problems in the hand (especially the fingers - white
finger syndrome).
Whole-body vibration in vehicles may lead to degenerative disorders, in particular, of
the lumbar and thoracic spine.
Climate:
High temperatures during the manipulation of heavy loads may lead to blood pressure
problems and to an increase in body temperature. At low temperatures, a decrease in
dexterity may occur.
Lighting
Insufficient lighting or dazzling may induce constrained postures. Furthermore, it may increase
the danger of stumbling or falling.
Vibration:
The effect of hand-arm vibration can be reduced by using tools with low vibration, reducing
the time of usage of vibrating equipment, wearing gloves, and avoiding coinciding influence
of low temperatures.
The effect of whole-body vibration can be reduced by using vibration-absorbing seats and
reducing the time during which vibration is applied to the body.
Climate
Wearing of appropriate garments, regular change of stay in rooms with high and low
temperatures, limited stay in rooms with high or low temperature
Lighting
Supplying sufficient and un-dazzling lighting equipment
A risk for disorders of the musculoskeletal system appears if the load and the functional capacity of
the worker are not in balance. With regard to maintenance and promotion of health, the following
points must be considered:
There is a need for a weighed balance between physical activity and recovery.
Movement should be preferred to static holding. The aim should be a combination of active
periods with higher load and periods of relaxation.
Overload should be avoided. Effective measures for preventing overload are reducing the
required forces and repetitions.
Too low load should be avoided. An appropriate load for the organs of locomotion is
essential in order to keep up their functional ability.
The individual "favorable load" can vary from person to person depending on functional
abilities and individual resources.
The primary aim of ergonomics is the adaptation of working conditions to the capacity of the
worker. High human capabilities of the employees should not be misused as a pretext for
maintaining poorly designed conditions of work or work environment. Thus, it is important to take
into account influencing factors such as age, gender, the level of training, and the state of knowledge
in an occupation. The working conditions should be arranged in such a way that there is no risk from
physical load for anyone at the workplace.
Fundamental points influencing the physical load on an employee at the worksite are:
requirements of work with respect to body positions and postures of the employee
design of working area
configuration of body supports
light and visual requirements
arrangement of controls and displays
movement sequences of operations
design of work-rest regimen
type of energetic loads with respect to force, repetition, and duration of work
magnitude of mental loads, which may be reduced by increasing the latitude and control of work or by job enrichment.
A secondary approach is to develop the capacity of the humans for the work by training and vocational
adjustment. The possibility of developing human abilities while executing work should not be a
pretext for keeping a poorly designed condition of work or the work environment. The selection of
workers according to individual capacity should be limited to exceptional situations.
Successful prevention of work-related health risks requires a scheduled and stepwise procedure:
analysis of the working conditions
assessment of the professional risk factors
consideration/provision of measures for diminishing the risk factors by ergonomic design of
the workplace (prevention in the field of working conditions)
introduction of measures for the diminution of the risk factors by influencing the behavior of
employees (prevention in the field of behavior)
coordination of the prevention measures with all subjects involved
discussion of alternative prevention approaches
specific and scheduled application of the prevention approaches
Control and assessment of the results.
8.8 SUMMARY
Disorders of the musculoskeletal system are a major cause of absence from work and lead,
therefore, to considerable cost for the public health system. Health problems arise if the mechanical
workload is higher than the load-bearing capacity of the components of the musculoskeletal system
(bones, tendons, ligaments, muscles, etc.). Apart from the mechanically induced strain affecting
the locomotor organs directly, psychosocial factors such as time pressure, low job decision latitude
or insufficient social support can augment the influence of mechanical strain or may induce
musculoskeletal disorders by increasing muscle tension and affecting motor coordination.
A reduction of the mechanical loading on the musculoskeletal system during the performance of
occupational work is an important measure for the prevention of musculoskeletal disorders. The
main risk factors are high forces resulting from lifting and pushing or pulling heavy objects, high
repetition frequency or long duration of force execution, unfavorable posture, static muscle forces
or working on or with vibrating machines. Effective measures for the reduction of forces acting
within or on the skeletal and muscular structures include adopting favorable postures, reducing load
weight, limiting exposure time and reducing the number of repetitions.
8.9 KEYWORDS
Musculoskeletal Disorders- Musculoskeletal disorders (MSDs) can affect the body's muscles, joints,
tendons, ligaments, and nerves
Mechanical loading- The contribution of mechanical forces--i.e., pressure, friction, and shear--to the
development of pressure ulcers.
Monotonous repetitive tasks- Monotonous, repetitive work tasks are tasks performed in the same
fashion over and over, which can lead to pain in the joints
Postures- A neutral spine or good posture refers to the "three natural curves [that] are present in a
healthy spine." From the anterior/posterior view, the 33 vertebrae in the spinal column should appear
completely vertical.
UNIT 9 THERMAL ENVIRONMENT
Objectives
After going through this unit, you will be able to:
design the workplace using a thermal balance approach.
recognize the importance of core temperature and skin temperature at the workplace.
consider the conduction and insulation properties of the work environment.
differentiate the comfort standards in theory and practice.
design heating systems, considering the essential components.
Structure
9.1 Introduction
9.2 Physiological measurements
9.3 Thermal Balance
9.4 Thermal Indices
9.5 Heating Systems
9.6 Summary
9.7 Keywords
9.1 INTRODUCTION
The main objective of controlling the thermal environment in relation to humans is to match the
environment to human activities and responses so as to optimize health, comfort, safety, and performance.
Several factors interact within the thermal environment. Ultimately, it is the individual's
physiological and psychological responses which indicate whether a particular combination of
these factors produce too much heat gain or loss and results in unacceptable physiological,
psychological, or subjective states.
The values providing criteria of unacceptable strain are based upon research that has shown that the
physical and psychological performance of an increasing proportion of the population will be less
effective as these criteria are exceeded in extreme conditions.
Human beings are homeotherms (organisms that maintain their body temperature at a constant
level, usually above that of the environment, by their metabolic activity) and are able to maintain a
constant internal temperature within limits of about ±2°C despite much larger variations in
ambient temperature. In a neutral environment at rest, deep body temperature may be kept
within a much narrower band of control (±0.3°C). The main physiological adjustments involve
changes in heat production, especially by shivering or voluntary muscular activity during physical
exertion, and alterations in heat loss by vasomotor changes which regulate heat flow to the skin
and increased evaporative heat loss by sweating. Evaporation of sweat from the body surface
provides man with one of the most effective methods of heat loss in the animal kingdom. Even so,
temperature extremes can only be tolerated for a limited period depending on the degree of
protection provided by shelter, the availability of clothing insulation in the cold and fluid
replacement in the heat. By behavioral responses, human beings can avoid the effects of such
extremes without recourse to excessive sweating or shivering. Repeated exposure to heat (and to
a lesser extent to cold) can result in acclimatization, which involves reversible physiological
changes that improve the ability to withstand the effects of temperature stress. A permanent
adaptation to climate can occur by the natural selection of beneficial changes in body form and
function ascribable to inherited characteristics.
Conventionally, core temperature is measured by a thermometer placed in the mouth (BS 691:
1987). Errors arise if there is mouth breathing or talking during measurement, or if hot or cold
drinks have been taken just previously, or if the tissues of the mouth are affected by cold or
hot external environments. The rectal temperature is a slowly equilibrating but more reliable
measurement of deep body temperature and on average about 0.5°C higher than mouth
temperature. Cold blood from chilled legs and warm blood from active leg muscles will affect
the rectal temperature. The temperature of the urine is a reliable measure of core temperature
providing it is possible to void a volume of 100 ml or more. For continuous measurement of
core temperature, thermoelectric devices may be placed in the ear or the temperature in the
intestine may be monitored by telemetry using a temperature-sensitive transmitter in the form
of a pill that can be swallowed.
The internal temperature of warm-blooded animals including man does not stay strictly
constant during the day, even when heat generation from food intake and physical activity is kept
constant. In humans it may be 0.5-1.0°C higher in the evening than in the early
morning due to an inherent circadian temperature rhythm. Another natural internal
temperature variation occurs in women at the time of ovulation, when core temperature rises
by 0.1-0.4°C until the end of the luteal (post-ovulation) phase of the menstrual cycle.
Skin Temperature
Across the shell of the body, from the skin surface to the superficial layers of muscle, there is
a temperature gradient which varies according to the external temperature, the region of the
body surface, and the rate of heat conductance from the core to the shell. When an
individual is thermally comfortable, the skin of the toes may be at 25°C, that of the upper
arms and legs at 31°C, the forehead temperature near 34°C while the core is maintained at
37°C. Average values for skin temperature can be obtained by applying thermistors or
thermocouples to the skin and using weighting factors for the different representative areas.
Regional variations can be visualized and recorded more comprehensively by infra-red
thermography.
Other Measurements
Several other physiological parameters apart from core and shell temperatures are of value in
interpreting the components of thermal strain. These include peripheral blood flow to measure
vasomotor changes, sweat loss, shivering responses, cardiovascular responses such as heart rate,
blood pressure and cardiac output, and metabolic heat production. The choice of measurements will
depend largely on the nature of the thermal stress, the requirements and limits set by those
responsible for the health and safety of employees, and the degree of acceptance of the methods by
the subjects. Assessment of the physiological strain in a person subjected to thermal stress is usually
related to two measurements of physiological function - the core temperature and the heart rate.
Tolerance limits for work in adverse temperature conditions are commonly based on acceptable
'safe' levels of these two functions - in the heat, a limit of 38.0°C core temperature and 180 beats per
minute heart rate in normal healthy adults, and in the cold, 36.0°C core temperature. Close
monitoring and greater restriction of these limits may need to be applied to older workers and unfit
personnel.
Details of these physiological measurements, including sweat loss, with an appraisal of their
technical limitations and the discomfort and risks they involve, can be found in ISO 9886 (1992).
Respiratory heat loss (RES) arises because expired air is warmer and has a higher absolute
humidity than inspired air. For a person expending energy at 400 W.m-2 in an air temperature of
minus 10°C, the RES will be about 44 W (25 W.m-2). For normal indoor activities
(seated/standing) in 20°C ambient temperature, the heat loss by respiration is small (2 to 5
W.m-2) and hence is sometimes neglected.
Metabolic Heat Production (M)
The human body may be regarded as a chemical engine, with foods of different energy content as
the fuel. At rest, some of the chemical energy of food is transformed into mechanical work, e.g. in
the heart beat and respiratory movements. This accounts for less than 10% of the energy
produced at rest, the remainder being used in maintaining ionic gradients in the tissues and in
chemical reactions in the cells, tissues, and body fluids. All this energy is ultimately lost from
the body in the form of heat and the balance of intake and loss maintained during daily
physical activity. In general, energy intake from food balances energy expenditure, except in
those cases where body weight is changing rapidly. In the absence of marked weight changes,
measurement of food consumption may be used in assessing habitual activity or energy
expenditure, though in practice, energy balance is only achieved over a period of more than
one week.
Energy released in the body by metabolism can be derived from measurements of oxygen
consumption using indirect calorimetry (see BS EN 28996: 1994; Determination of metabolic
heat production). The value of metabolic heat production in the basal state with complete
physical and mental rest is about 45 W.m-2 (i.e. per m2 of body surface area) for an adult male
of 30 years and 41 W.m-2 for a female of the same age. Maximum values are obtained during
severe muscular work and may be as high as 900 W.m-2 for brief periods. Such a high rate can
seldom be maintained; performance at 400-500 W.m-2 represents very heavy exercise but is an overall
rate that may be continued for about one hour. Metabolic heat is largely determined by
muscle activity during physical work but may be increased at rest in the cold by involuntary
muscle contractions during shivering.
In the heat balance equation given previously, M - W is the actual heat gain by the body during
work, or M + W when negative work is performed. In positive work, some of the metabolic
energy appears as external work so that the actual heat production in the body is less than the
metabolic energy produced. With negative work, e.g. 'braking' while walking downstairs, the
active muscle is stretched instead of shortening so that work is done by the external
environment on the muscles and appears as heat energy. Thus, the total heat liberated in the
body during negative work is greater than the metabolic energy production.
Conduction (K) and Insulation (I)
Heat is conducted between the body and static solids or fluids with which it is in contact. The
rate at which heat is transferred by conduction depends on the temperature difference
between the body and the surrounding medium, the conductance (k) and the area of contact.
This can be expressed as
K = k (t1 - t2)
where K is the heat loss per unit surface area of the body (W.m-2), t1 and t2 are the
temperatures of the body and environment respectively (°C), and k is a constant, the conductance
(W.m-2.°C-1). In considering conductance at the body surface it is usually more convenient to
refer to insulation (I), the reciprocal of k, where I is a measure of resistance to heat flow
(m2.°C.W-1).
For the human body there are three different components of I: It is the insulation of the
tissues affecting the flow of heat from the core at a temperature tc to the skin at temperature
tsk, and Icl and Ia are the insulation of clothing and air affecting the heat flow from the skin to air
at temperature ta.
An arbitrary unit of insulation, the clo, is used for assessing the insulation value of clothing. By
definition 1.0 clo is the insulation provided by clothing sufficient to allow a person to be
comfortable when sitting in still air at a temperature of 21°C. 1.0 clo is equivalent to an Icl of
0.155 m2.°C.W-1. Examples of typical clothing insulation values are given in Table 9.1.
Table 9.1: Basic insulation values, Icl, of a range of clothing ensembles (adapted from Fanger, 1970).
For full listings of ensembles and individual garments see BS ISO 9920: 1995.

Clothing ensemble                        Icl (clo)
Nude                                     0
Shorts only                              0.1
Light summer clothing                    0.5
Typical indoor clothing                  1.0
Heavy business-type suit                 1.5
Business clothes, overcoat plus hat      2.0
Polar weather suit                       3 to 4
When a person is fully vasoconstricted, the tissue insulation It is about 0.6 clo. When fully
vasodilated at rest in the heat, It falls to 0.15 clo, and when exercising hard in the heat it may fall to about 0.075 clo.
These figures show that increasing tissue insulation by vasoconstriction can play only a small
part relative to clothing in protecting an individual against cold, but decreased tissue insulation
significantly helps the loss of heat in a hot environment. The amount of subcutaneous fat is an
important variable determining cooling rate by tissue insulation and it is especially effective in
cold water immersion. The thermal conductance of a 10 mm thickness of freshly excised
human fat tissue is reported to be 16.7 W.m-2.°C-1 (Beckman, Reeves and Goldman, 1966).
The thickness and distribution of the subcutaneous layer of fat differs from person to person,
but for a mean thickness of 10 mm the insulation value would be equivalent to 0.96 clo.
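The clo conversion and the conduction relation above lend themselves to a quick calculation. The short Python sketch below (function names are illustrative, not from any standard library) converts a clothing insulation value from clo to SI units and applies K = k(t1 - t2) with k taken as the reciprocal of the insulation.

CLO_TO_SI = 0.155  # 1.0 clo = 0.155 m2.degC/W, as stated in the text


def insulation_from_clo(clo: float) -> float:
    """Convert a clothing insulation value in clo to m2.degC/W."""
    return clo * CLO_TO_SI


def conductive_heat_flow(t_body: float, t_env: float, insulation: float) -> float:
    """Heat flow K (W/m2) through an insulating layer: K = k (t1 - t2), k = 1/I."""
    conductance = 1.0 / insulation
    return conductance * (t_body - t_env)


if __name__ == "__main__":
    i_cl = insulation_from_clo(1.0)                # typical indoor clothing (Table 9.1)
    print(conductive_heat_flow(33.0, 21.0, i_cl))  # roughly 77 W/m2 from skin to air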
Convection (C)
Normally, the surface temperature of a person is higher than that of the surrounding air so
that heated air close to the body will move upwards by natural convection as colder air takes
its place. The expression for heat exchange by convection is similar to that for conduction and
is given by:
C = hc (t1 - t2)
where C is the convective loss per unit area, hc is the convective heat transfer coefficient, and
t1 and t2 are the temperatures of the body surface and the air respectively. The value of hc
depends on the nature of the surrounding fluid and how it is flowing. Natural (free) convection
applies in most cases when the relative air velocity is < 0.1 m.s-1.
The transfer coefficient then depends on the temperature difference between clothing (tcl)
and air (ta) (in °C) and is given (in units of W.m-2.°C-1) by
hc = 2.38 (tcl - ta)^0.25
Relative air velocity is increased when the arms or legs are moved through the environment, as
for example in walking. The convective heat transfer coefficient is increased further when air
movement induced by a fan or draught causes forced convection. For forced convection over a
range of air speeds up to 4 m.s-1, the best practical value for the mean convective heat transfer
coefficient is given by
hc = 8.3 (v)^0.5
where v = air velocity in m.s-1.
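As a rough illustration of the two convective regimes just described, the following Python sketch evaluates the natural- and forced-convection coefficients quoted above and the resulting convective exchange; the function names and the example temperatures are illustrative only.

def hc_natural(t_clothing: float, t_air: float) -> float:
    """Natural-convection coefficient hc = 2.38 (tcl - ta)^0.25, W/(m2.degC)."""
    return 2.38 * abs(t_clothing - t_air) ** 0.25


def hc_forced(air_velocity: float) -> float:
    """Forced-convection coefficient hc = 8.3 v^0.5 (valid up to about 4 m/s), W/(m2.degC)."""
    return 8.3 * air_velocity ** 0.5


def convective_loss(hc: float, t_surface: float, t_air: float) -> float:
    """Convective heat exchange C = hc (t1 - t2), W/m2."""
    return hc * (t_surface - t_air)


if __name__ == "__main__":
    print(convective_loss(hc_natural(30.0, 20.0), 30.0, 20.0))  # still air
    print(convective_loss(hc_forced(1.0), 30.0, 20.0))          # 1 m/s draught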
Radiation (R)
Radiant heat emission from a surface depends on the absolute temperature T (in kelvin, K, i.e. °C
+ 273) of the surface to the fourth power, i.e. it is proportional to T^4. The radiation transfer, R (in
W.m-2), between similar objects 1 and 2 is then given by the expression
R = σε (T1^4 - T2^4)
where σ is the Stefan-Boltzmann constant (5.67 x 10-8 W.m-2.K-4) and ε is the emissivity of
the objects.
An approximation is often permissible where the rate of heat transfer between surfaces is
related to their temperature difference; the first power of this difference may then be used
in the same form as that for heat transfer by conduction and convection:
R = hr (t1 - t2)
where R is the radiant heat transfer per unit area, hr the radiant heat transfer coefficient, and t1
and t2 the temperatures of the two surfaces. hr depends on the nature of the two surfaces,
their temperature difference and the geometrical relationship between them.
For many indoor situations the surrounding surfaces are at a fairly uniform temperature and
the radiant environment may be described by the mean radiant temperature. The radiant heat
exchange between the body surface area (clothed) and surrounding surfaces in W.m-2 is given
by
R = ε feff fcl σ [(tcl + 273)^4 - (tr + 273)^4]
where ε is the emissivity of the outer surface of the clothed body; feff is the effective radiation
area factor, i.e. the ratio of the effective radiation area of the clothed body to the surface area of
the clothed body; fcl is the clothing area factor, i.e. the ratio of the surface area of the clothed
body to the surface area of the nude body; tcl is the clothing surface temperature (°C); and tr is
the mean radiant temperature (°C).
The value of feff is found by experiment to be 0.696 for seated persons and 0.725 for standing
persons. The emissivity of human skin is close to 1.0 and most types of clothing have an
emissivity of about 0.95. These values are influenced by color for short-wave radiation such as
solar radiation.
The mean radiant temperature, tr, is defined as the temperature of uniform surrounding surfaces
which would result in the same heat exchange by radiation from a person as in the actual
environment. The mean radiant temperature is estimated from the temperatures of the
surrounding surfaces, weighted according to their relative influence on a person by the angle
factor between the person and the radiating surface; tr is therefore dependent on both a
person's posture and their location in a room.
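A hedged sketch of the radiant exchange expression above is given below in Python. The emissivity (0.95) and feff (0.696, seated) are the values quoted in the text, while the clothing area factor fcl = 1.1 is an assumed typical value for indoor clothing rather than a figure from the text.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m2.K4)


def radiant_exchange(t_clothing_c: float, t_radiant_c: float,
                     emissivity: float = 0.95,     # typical clothing emissivity (text)
                     f_eff: float = 0.696,         # effective radiation area factor, seated (text)
                     f_cl: float = 1.1) -> float:  # clothing area factor: assumed typical value
    """Radiant exchange R between the clothed body and its surroundings, W/m2."""
    t_cl = t_clothing_c + 273.0
    t_r = t_radiant_c + 273.0
    return emissivity * f_eff * f_cl * SIGMA * (t_cl ** 4 - t_r ** 4)


if __name__ == "__main__":
    # Clothed surface at 28 degC radiating to surroundings with tr = 20 degC
    print(radiant_exchange(28.0, 20.0))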
Evaporation (E)
At rest in a comfortable ambient temperature, an individual loses weight by evaporation of
water diffusing through the skin (insensible cutaneous water loss) and from the respiratory
passages (RES). Total insensible water loss in these conditions is approximately 30 g.h-1.
Water diffusion through the skin will normally result in a heat loss equal to approximately 10
W.m-2.
The latent heat of vaporization of water is 2453 kJ.kg-1 at 20°C, so a sweat rate of 1
litre per hour will dissipate about 680 W. This value of heat loss is only obtained if all the
sweat is evaporated from the body surface; sweat that drips from the body does not provide
effective cooling.
Evaporation is expressed in terms of the latent heat taken up by the environment as the result
of evaporative loss and the vapor pressure difference which constitutes the driving force for
diffusion
E = he (psk - pa)
where E is the rate of heat loss by evaporation per unit area of body surface (W.m-2), he is the
mean evaporation coefficient, and psk and pa are the partial pressures of water vapor at the skin
surface and in the ambient air (kPa).
The direct determination of the mean evaporation coefficient (he) is based on measurement of
the rate of evaporation from a subject whose skin is completely wet with sweat. Since the
production of sweat is not even over the body surface, this requires that the total sweat rate must
exceed the evaporative loss by a considerable margin - a state that is difficult to maintain for any
length of time.
Air movement (v) and body posture are also important in making the measurement, and in
surveying the results of various researchers, Kerslake (1972) recommended that for practical
purposes he (in units of W.m-2.kPa-1) be represented by
he = 124 (v)^0.5
If the actual rate of evaporation E1 is less than the maximum rate possible, Emax, at the
prevailing he, psk and pa, the ratio E1/Emax can be used as a measure of skin wettedness. The
skin surface may be considered as a mosaic of wet and dry areas. With a wettedness value of 0.5,
the rate of evaporation achieved would be equivalent to half the skin surface being covered
with a film of water, the other half being dry. For insensible cutaneous water loss the value is
about 0.06, and at the maximum value of 1.0 the skin is fully wet.
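The evaporative relations above can be combined into a small calculation, sketched below in Python. Kerslake's coefficient and the E1/Emax wettedness ratio follow the text; the skin and air vapor pressures in the example are illustrative assumptions.

def evaporation_coefficient(air_velocity: float) -> float:
    """Mean evaporation coefficient he = 124 v^0.5, W/(m2.kPa) (Kerslake, 1972)."""
    return 124.0 * air_velocity ** 0.5


def max_evaporative_loss(air_velocity: float, p_skin: float, p_air: float) -> float:
    """Emax = he (psk - pa): evaporative loss for fully wet skin, W/m2."""
    return evaporation_coefficient(air_velocity) * (p_skin - p_air)


def skin_wettedness(e_actual: float, e_max: float) -> float:
    """Wettedness = E1 / Emax (about 0.06 insensible, 1.0 fully wet)."""
    return e_actual / e_max


if __name__ == "__main__":
    # Illustrative values: 0.5 m/s air movement, psk ~5.6 kPa (35 degC skin), pa ~1.7 kPa
    e_max = max_evaporative_loss(0.5, p_skin=5.6, p_air=1.7)
    print(e_max, skin_wettedness(150.0, e_max))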
Heat Storage (S)
The specific heat of the human body is 3.5 kJ.kg-1.°C-1. If a 65 kg individual has a change in mean
body temperature of 1°C over a period of 1 h, the rate of heat storage is about 230 kJ.h-1, or 64 W. In
the equation for thermal balance, S can be either positive or negative, but in determining
storage the difficulty is to assess the change in mean body temperature. The change in deep
body temperature alone is not acceptable because of the different weightings
contributed by the core and shell.
Various formulae have been suggested to combine measurements of skin and core temperature to
give a mean body temperature, e.g. 0.90 tcore + 0.10 tskin in hot conditions.
The volume of the warm core during vasoconstriction in cold surroundings is effectively
reduced thereby altering the weighting coefficients.
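As a simple illustration of the storage calculation above, the Python sketch below applies the quoted specific heat of 3.5 kJ.kg-1 and the hot-condition core/skin weighting; the function names are illustrative only.

SPECIFIC_HEAT_BODY = 3.5  # kJ/(kg.degC), as quoted in the text


def mean_body_temperature(t_core: float, t_skin: float, w_core: float = 0.90) -> float:
    """Weighted mean body temperature (0.90 core / 0.10 skin in hot conditions)."""
    return w_core * t_core + (1.0 - w_core) * t_skin


def heat_storage_rate(mass_kg: float, delta_t_mean: float, hours: float) -> float:
    """Rate of heat storage in watts for a given change in mean body temperature."""
    energy_kj = mass_kg * SPECIFIC_HEAT_BODY * delta_t_mean
    return energy_kj * 1000.0 / (hours * 3600.0)


if __name__ == "__main__":
    # 65 kg person, mean body temperature rising 1 degC in 1 h:
    # about 63 W (the text quotes roughly 230 kJ/h, or 64 W)
    print(heat_storage_rate(65.0, 1.0, 1.0))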
Interpretation of Core Temperature
Limiting values of core temperature are regarded as 'safe' in adverse temperature conditions;
in the heat, 38.0°C is usually stated as the body temperature at which work
should cease. In practice, there are usually two principal situations which lead to raised core
temperature.
Extreme Heat Stress
Extreme heat stress arises from hot environmental conditions when little physical work is performed
(environmental heat stress). Core temperature (tc) might eventually reach 38°C when mean skin
temperature (tsk) has risen to perhaps 35°C or higher. Under these conditions, maximum sweating
(Emax) is achieved as the result of the combined central nervous drive from high tc and high tsk.
9.4 THERMAL INDICES
A useful tool for describing, designing and assessing thermal environments is the thermal index. The
principle is that the factors that influence human response to thermal environments are integrated to
provide a single index value. The aim is that the single index value varies as human response varies
and can be used to predict the effects of the environment.
The principal factors that influence human response to thermal environments are:
air temperature,
radiant temperature,
air velocity,
humidity and
the clothing and activity of the individual.
A comprehensive thermal index will integrate these factors to provide a single index value.
Numerous thermal indices have been proposed for assessing heat stress, cold stress, and thermal
comfort. The indices can be divided into three types, rational, empirical, and direct.
Using the above, the following body heat equation can be proposed
M ± K ± C ± R - E = S
If the net heat storage (S) is zero, then the body can be said to be in heat balance and hence
internal body temperature can be maintained. The analysis requires the values represented in
this equation to be calculated from knowledge of the physical environment, clothing, activity,
etc. Rational thermal indices use heat transfer equations (and sometimes mathematical
representations of the human thermoregulatory system) to ‘predict’ human response to thermal
environments.
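A minimal sketch of this bookkeeping is shown below in Python: the signed components of the body heat equation are summed to give the net storage S, and a value near zero is read as heat balance. The sign convention and the example numbers are illustrative assumptions, not values from the text.

def net_heat_storage(metabolic: float, conduction: float, convection: float,
                     radiation: float, evaporation: float) -> float:
    """Net storage S (W/m2) from M +/- K +/- C +/- R - E = S.

    Gains to the body are entered as positive K, C, R values and losses as
    negative; evaporation E is always a loss.
    """
    return metabolic + conduction + convection + radiation - evaporation


if __name__ == "__main__":
    s = net_heat_storage(metabolic=120.0, conduction=-5.0, convection=-40.0,
                         radiation=-35.0, evaporation=40.0)
    print("approximate heat balance" if abs(s) < 5.0 else f"net storage {s:+.1f} W/m2")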
A comprehensive mathematical and physical appraisal of the heat balance equation represents
the approach taken by Fanger (1970), which is the basis of ISO Standard BS EN ISO 7730: 1995,
'Moderate thermal environments - Determination of the PMV and PPD indices and specification
of the conditions for thermal comfort'. The purpose of this Standard is to present a method for
predicting the thermal sensation, Predicted Mean Vote (PMV), and the degree of discomfort
(thermal dissatisfaction), Predicted Percentage Dissatisfied (PPD), of people exposed to
moderate thermal environments and to specify acceptable thermal environmental conditions for
comfort.
BS EN ISO 7730: 1995 provides a method of assessing moderate thermal environments using the
PMV thermal comfort index (Fanger, 1970). The PMV is the Predicted Mean Vote of a large
group of persons, had they been exposed to the thermal conditions under assessment, on the
+3 (hot) to -3 (cold) through 0 (neutral) scale.
The PMV is calculated from:
the air temperature
mean radiant temperature
humidity and air velocity of the environment and
estimates of metabolic rate and clothing insulation.
The PMV equation involves the heat balance equation for the human body and additional
conditions for thermal comfort. The PPD index is calculated from the PMV and provides the
Predicted Percentage of thermally dissatisfied persons. The Annex of the Standard gives
recommendations that the PMV should lie between -0.5 and +0.5, giving a PPD of less than 10%.
Tables and a computer program are provided in BS EN ISO 7730 to allow ease of calculation and
efficient use of the standard. This rational method for assessing moderate environments allows
identification of the relative contribution different components of the thermal environment
make to thermal comfort (or discomfort) and hence can be used in environmental design. It is
important to remember that this Standard standardizes the method and not the limits. The
recommendations made for thermal comfort conditions are produced in an annex which is for
information and not part of the standard.
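For illustration, the relation between PMV and PPD can be evaluated directly; the short Python sketch below implements the PPD curve as it is usually quoted from ISO 7730, PPD = 100 - 95 exp(-(0.03353 PMV^4 + 0.2179 PMV^2)), and shows that a PMV within ±0.5 corresponds to a PPD of roughly 10% or less.

import math


def ppd_from_pmv(pmv: float) -> float:
    """Predicted Percentage Dissatisfied (%) for a given Predicted Mean Vote."""
    return 100.0 - 95.0 * math.exp(-(0.03353 * pmv ** 4 + 0.2179 * pmv ** 2))


if __name__ == "__main__":
    for pmv in (-1.0, -0.5, 0.0, 0.5, 1.0):
        print(f"PMV {pmv:+.1f} -> PPD {ppd_from_pmv(pmv):.1f}%")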
Heat balance is not a sufficient condition for thermal comfort. In warm environments sweating
(or skin wettedness), and in cold environments mean skin temperature, must be within limits for
thermal comfort. Rational predictions of the body's physiological state can be used with
empirical equations which relate mean skin temperature, sweat rate and skin wettedness to
comfort. Recommendations for limits to air movement, temperature gradients, etc. are given in
BS EN ISO 7730.
Empirical Indices
Empirical thermal indices are based upon data collected from human subjects who have been
exposed to a range of environmental conditions. Examples are the Effective Temperature (ET)
and Corrected Effective Temperature (CET) Scales. These scales were derived from subjective
studies on US marines; environments providing the same sensation were allocated equal ET/CET
values. These scales consider dry bulb/globe temperature, wet bulb temperature and air movement; two
levels of clothing are considered (e.g. Chrenko, 1974).
For this type of index, the index must be 'fitted' to values which experience predicts will provide
‘comfort’.
Direct Indices
Direct indices are measurements taken on a simple instrument which responds to similar
environmental components to those to which humans respond. For example, a wet, black globe
with a thermometer placed at its centre will respond to air temperature, radiant temperature,
air velocity and humidity. The temperature of the globe will therefore provide a simple thermal
index which with experience of use can provide a method of assessment of hot environments.
Other instruments of this type include the temperature of a heated ellipse and the integrated
value of wet bulb temperature, air temperature and black globe temperature (WBGT).
An engineering approach employing what is known as the dry resultant temperature (CIBSE,
1987) is to use simply the equilibrium temperature of a 100 mm globe thermometer placed in
the environment. The temperature of the globe approximates to an average of the air
temperature and mean radiant temperature. The index needs to be corrected for air movement
greater than 0.1 m.s-1, and assumes that relative humidity lies in the range 40-60%.
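The following Python sketch illustrates the dry resultant temperature idea, assuming the commonly quoted CIBSE form tres = (tr + ta sqrt(10v)) / (1 + sqrt(10v)); at v = 0.1 m.s-1 this reduces to the simple average of air and mean radiant temperature mentioned above. The exact correction used in practice should be taken from CIBSE (1987).

import math


def dry_resultant_temperature(t_air: float, t_radiant: float,
                              air_velocity: float = 0.1) -> float:
    """Approximate dry resultant temperature (degC)."""
    w = math.sqrt(10.0 * air_velocity)
    return (t_radiant + t_air * w) / (1.0 + w)


if __name__ == "__main__":
    print(dry_resultant_temperature(22.0, 19.0))        # still air: mean of ta and tr
    print(dry_resultant_temperature(22.0, 19.0, 0.4))   # corrected for a draught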
Selection of Appropriate Thermal Indices
A first step in the selection of a thermal index is to determine whether a heat stress index,
comfort index or cold stress index is required. There are numerous thermal indices, and
most will provide a value which will be related to human response (if used in the appropriate
environment). An important point is that experience with the use of an index should be
gained in a particular industry. A practical approach is to gain experience with a simple direct
index; this can then be used for day-to-day monitoring. If more detailed analysis is required, a
rational index can be used (again experience should be gained in a particular industry) and if
necessary subjective and objective measurements can be taken.
Thermal Comfort/discomfort
Thermal discomfort can be divided into whole-body discomfort and local thermal discomfort
(ie of part of the body). Thermal comfort is subjective and has been defined by ASHRAE as
'that condition of mind which expresses satisfaction with the thermal environment'. Comfort
indices relate measures of the physical environment to the subjective feelings of sensation
comfort or discomfort.
It is generally accepted that there are three conditions for whole-body thermal comfort.
These are that the body should be in heat balance, and that the sweat rate (or skin
wettedness) and the mean skin temperature are within limits for comfort. Rational indices
such as the predicted mean vote (PMV- BS EN ISO 7730) use the criteria to allow predictions
of thermal sensation. In many instances a simple direct index, such as the temperature of a
black globe, will be sufficient. The dry resultant temperature is related to this.
BS EN ISO 7730 considers issues relating to local thermal discomfort, such as vertical
temperature gradients and radiation asymmetry. Recommended values are given in the
Annex of the Standard to limit these effects. Discomfort caused by draught is also examined.
Draught is defined as an unwanted local cooling of the body caused by air movement. A
method for predicting the percentage of people affected by draught is provided in terms of
air temperature, air velocity and turbulence intensity (i.e. a measure of the variation of air
movement with time). The model applies over a specified range of thermal conditions and
for people performing light, mainly sedentary activity, with a thermal sensation for the
whole body close to neutral. Guidance is provided on how to determine acceptable
thermal conditions for comfort based on the methods provided in the Standard.
Subjective Judgment Scales
BS ISO 10551: 1995 'Assessment of the influence of the thermal environment using
subjective judgment scales' provides a set of specifications on direct expert assessment of
subjective thermal comfort/discomfort expressed by persons subjected to thermal stress.
The methods supplement physical and physiological methods of assessing thermal loads.
Subjective scales are useful in the measurement of subjective responses of persons exposed
to thermal environments. They are particularly useful in moderate environments and can be
used independently or to complement the use of objective methods (eg. thermal indices).
This Standard presents the principles and methodology behind the construction and use of
subjective scales and provides examples of scales which can be used to assess thermal
environments. Examples of scales are presented in several languages.
Scales are divided into the five types illustrated by example questions:
Perceptual - How do you feel now?
Affective- How do you find it?
Thermal preference - How would you prefer to be?
Personal acceptance - Is the environment acceptable/unacceptable?
Personal tolerance - Is the environment tolerable?
The principle of the Standard is to provide background information to allow ergonomists and
others to construct and use subjective scales as part of the assessment of thermal
environments. Examples of the construction, application and analysis of subjective scales are
provided in the Annex to the standard.
Thermal comfort has been the subject of much international research over many years. A
great deal is known about principles and practice and some of the knowledge has been
incorporated into international standards. These include BS EN ISO 7730, 'Moderate thermal
environments - calculation of the PMV and PPD thermal comfort indices', and the American
Society of Heating, Refrigerating and Air Conditioning Engineers (ASHRAE) Standard 55-1992,
'Thermal environmental conditions for human occupancy'. Such has been the acceptance
and use of the standards that the observer could conclude that all questions concerning
thermal comfort standards have been answered, and that neither laboratory nor field
research is required.
However, many studies have demonstrated that knowledge is not complete and that
problems have not been solved. People in buildings suffer thermal discomfort, and this is not
a minor problem for the occupants of the buildings or for those interested in productivity
and the economic consequences of having an unsatisfied workforce. Is the problem that
standards have not been correctly updated, or that they are not correctly used and there is a
presentation and training issue? Maybe all of these apply. Field studies provide practical
data, but when designing buildings to provide thermal comfort, can we improve upon
current standards? Are the standards applicable universally, or do individual factors, regions
and culture, for example, greatly influence comfort requirements? Are people more
adaptable and/or less predictable than the standards suggest, depending on circumstances?
Are we consuming more capital and running-cost resources in buildings to provide
recommended levels of thermal comfort than is really needed? If current standards are
being brought into question, what can be used instead? Guidance of some kind is required
for building designers, owners and operators.
Control
Much attention has been concentrated in recent years on the ‘quality’ of internal
environments, particularly offices (eg WHO, 1984; HSE, 1995). Buildings themselves and
active control systems can influence conditions detrimentally (e.g. Youle, 1986). Those
factors which are likely to influence the thermal conditions occurring within a space or
building are described here. This is by no means a comprehensive treatment but represents
typical problem areas. Any one, or combination, of these factors may require attention to
improve thermally unsatisfactory conditions.
It is likely that a range of personnel will be required for such investigations, in particular
those responsible for the design, operation and maintenance of mechanical services
plant (i.e. building services engineers).
Building Fabric
The fabric of a building can influence thermal conditions in several ways:
Poor thermal insulation will result in low surface temperatures in winter, and
high values in summer, with direct effect on radiant conditions. Variations
and asymmetry of temperature will also be influenced.
Single glazing will take up a low internal surface temperature when it is cold
externally causing potential discomfort and radiation asymmetry; The effects
are reduced (but not eliminated) with double glazing.
Cold down draughts from glazing (due to cold window surfaces) can be
counteracted by the siting of heat emitters under glazing, or by installing
double or triple glazing.
Direct solar gain through glazing is a major source of discomfort. This can be
reduced/controlled by for example:
- modification of glazing type (e.g. 'solar control glass' or applied
tinted film)
- use of external solar shading devices (by far the most effective method).
It should be noted that the glazing itself may absorb heat and therefore rise in
temperature, causing an increase in mean radiant temperature. This is particularly
relevant with 'solar control' glasses. Also, use of such glass often means that internal
natural lighting levels are reduced, leading to more use of artificial lighting and
associated heat gain into the space, which could contribute to thermal discomfort.
Unwanted air movement (and local low air temperatures) in winter time can
arise from poor window and fabric seals, external doors, etc. Draught-
proofing techniques can be employed to reduce this effect.
The siting and nature of internal partitions can affect local conditions. If the
interior of a space has high thermal mass (thermal capacity), then temperature
fluctuations are reduced.
A range of factors needs to be checked to ensure that a heating system is designed and
functioning appropriately. Advice from suitably qualified engineers may be required. Examples
are:
Overall output from central boiler plant or local output from heat emitters in individual
spaces needs to match the building and its requirements.
The position of heat emitters can assist in counteracting discomfort, e.g. siting radiators
under windows counteracts cold window surfaces and cold down draughts.
The heat output from emitters should be appropriately controlled. Control systems may
vary from a simple thermostat in one room to multiple sensing systems and computer
control throughout the building.
Check the value of the supply air temperature. Note that increased air movement may
mean higher air temperature requirements for given comfort conditions.
Look for air temperature gradients, particularly arising from air distribution patterns. If the
supply temperature is too high, buoyancy effects may lead to large temperature gradients,
and poor supply air distribution.
Air volumes: if ventilation is providing heating, then air volume flow (and air temperature)
must be sufficient to counteract heat losses in the space.
Ensure local adjustments in one area do not adversely affect adjacent areas or overall
plant operation.
Low values of relative humidity may result during winter heating. Humidification may be
provided in central plant. If so, check that it is operating and being controlled correctly.
'Full air conditioning systems' can be very complex and sophisticated and it is likely that
specialist expertise is required in their assessment. The types of point to be questioned or
require checking are:
What is the principle of operation? There are many types e.g.:
Fixed air volume, variable temperature.
Variable air volume (often referred to as VAV), fixed temperature; Fan assisted terminals.
Twin duct; Chilled ceilings, chilled beams; Displacement ventilation; floor air supply.
These issues fall into the domain of the building services engineer - but the basic
principle of operation and control needs to be established.
If conditions are not satisfactory, check for under- or over- capacity in both main
plant and local emitters. This is particularly relevant with cooling systems, which
usually have little capacity in hand.
Also check, or have checked, the operation of local valves to heater and chiller
batteries; these may jam, let by, or operate incorrectly, e.g. in reverse to that
expected.
Assess the temperature and velocity of air leaving grilles. Are the values likely to
lead to local discomfort, and is there sufficient air distribution?
Adjustments made for summer conditions (eg. enhancing air movement) may
lead to discomfort in winter (and vice versa). Also, the air distribution pattern is
likely to differ for cooling and heating modes.
Again, many questions will need to be asked to establish the principle of operation of control
systems, and whether they are performing as intended. Much of this will necessitate
consultation with other experts.
What is the mode of control? Space sensors, return air sensors? How many sensors are
involved? Do they operate independently, or do they average detected readings?
Are sensors suitably positioned to control the occupied space? Are they responding
primarily to air or surface temperatures?
Are sensors set at appropriate control values, eg. return air sensor control set point
should be higher than the room temperature required. What are the set-point values?
The type of control provided may influence 'perceived' comfort eg. adjustment of a
thermostat may induce a perceived improvement of conditions even if the thermostat is
disconnected and has no effect on control.
Check the functioning and calibration of sensors, particularly relative humidity and duct air sensors.
Overall plant control: start-up times, temperature values, etc. are often controlled by such
systems or by a localized optimum start and stop controller. Check set-point values and
operational logic.
Plant Maintenance
Plant must be checked and maintained on a regular basis to ensure correct operation. Daily
inspection is recommended for large plant. Air handling systems (including ductwork) require
periodic inspection and cleaning.
Plant should be fully documented so that the nature of operation, and system and control
logic can be ascertained.
Maintenance and condition monitoring records should be kept and be available for
inspection. Building management systems are of value here, although sometimes too
much information becomes available.
There may be difficulty in obtaining all the explanations/answers required concerning the
building and its services. In this case the services and expertise of outside consultants may
be required to assess the systems.
9.6 SUMMARY
Manual handling injuries are a major occupational health problem. The risk factors associated
with manual handling in hot and cold environments were identified as a gap in knowledge under
the Health and Safety Executive's priority programme for musculoskeletal disorders (MSDs). At
present, the guidance offers no specific advice on manual handling in non-neutral
thermal environments other than to say that extremes of temperature and humidity should be
avoided. The thermal environments considered are defined as manual handling events that occur within
sub-ranges of a 0°C to 40°C range. Here a cold environment is defined as between 0°C and 10°C (44%
- 60% relative humidity) and a hot environment is defined as between 29°C and 39°C (25% - 72%
relative humidity).
9.7 KEYWORDS
Thermal Balance - the condition in which the body's heat gains and losses are equal (net heat
storage is zero), so that internal body temperature can be maintained
Thermal Indices - because the factors influencing comfort vary in importance, many attempts have
been made to devise indices which combine some or all of these variables into one value which
can be used to evaluate how comfortable people feel
Predicted Mean Vote - the PMV index predicts the mean response of a large group of people
according to the ASHRAE thermal sensation scale, where
+3 hot
+2 warm
+1 slightly warm
0 neutral
-1 slightly cool
-2 cool
-3 cold
The PMV is computed as PMV = (0.303 e^(-0.036 M) + 0.028) L, where
M = metabolic rate
L = thermal load - defined as the difference between the internal heat production and the heat
loss to the actual environment - for a person at comfort skin temperature and evaporative heat
loss by sweating at the actual activity level.
Conduction- insulation- In heat transfer, conduction (or heat conduction) is the transfer
of heat energy by microscopic diffusion and collisions of particles or quasi-particles within a
body due to a temperature gradient.
UNIT 10 STANDARD DATA AND FORMULAS
Objectives
After going through this unit, you will be able to:
Construct the formula from empirical data
Analyze the data from tabular form
Calculate the cutting time, completion time for different operations
Use the standard data in day-to-day work and in the workplace environment
Use the data in nomograms (nomograph, alignment chart, is a graphical calculating device, a
two-dimensional diagram designed to allow the approximate graphical computation of a
function) and plots for simplicity
Structure
10.1 Introduction
10.2 Standard Time Data Development
10.3 Tabular Data
10.4 Using Nomograms and Plots
10.5 Formula Construction from empirical data
10.6 Plot data and compute variable expressions
10.7 Analytical formulas
10.8 Standard Data usage
10.9 Summary
10.10 Keywords
10.1 INTRODUCTION
Standard time data (or elemental standard data) are developed for groups of motions that are
commonly performed together, such as drilling a hole or painting a square foot of surface area.
Standard time data can be developed using time studies or predetermined leveled times. After
development, the analyst can use the standard time data instead of developing an estimate for the
group of motions each time they occur.
Typically, the use of standard time data improves accuracy because the standard deviations for
groups of motions tend to be smaller than those for individual basic motions. In addition, their use
speeds standard development by reducing the number of calculations required. Estimate
development using standard time data is much like using predetermined leveled times except that
groups of motions are estimated as a single element instead of individual body motions.
Standard time data are elemental times obtained from time studies that have been stored for later
use. The principle of applying standard data was established many years ago by Frederick W. Taylor,
who proposed that each elemental time be properly indexed so that it could be used to establish
future time standards. When we speak of standard data today, we refer to all the tabulated element
standards, plots, nomograms, and tables that allow the measurement of a specific job without the
use of a timing device.
Standard data can have several levels of refinement: (i) Motion (ii) Element (iii) Task
The more refined the standard data element, the broader its range of usage. Thus, motion standard
data have the greater application, but it takes longer to develop such a standard than either element
or task standard data. Element standard data are widely applicable and allow the faster
development of a standard than motion data.
A time study formula is an alternate and, typically, simpler presentation of standard data, especially
for variable elements. Formula construction involves the design of an algebraic expression that
establishes a time standard in advance of production by substituting known values peculiar to the
job for the variable elements.
To develop standard time data, analysts must distinguish constant elements from variable elements.
A constant element is one whose time remains approximately the same, cycle after cycle.
A variable element is one whose time varies within a specified range of work.
Thus, the element “start machine” would be a constant, while the element “drill 3/8-inch diameter
hole” would vary with the depth of the hole, the feed, and the speed of the drill.
Standard data are indexed and filed as they are developed. Also, setup elements are kept separate
from elements incorporated into each piece time, and constant elements are separated from
variable elements. Typical standard data for machine operation would be tabulated as follows:
Setup
Constants
Variables
Each piece
Constants
Variables
Standard data are compiled from different elements in time studies of a given process over a period.
In tabulating standard data, the analyst must be careful to define the endpoints clearly. Otherwise,
there may be a time overlap in the recorded data.
For example, in the element “out stock to stop” on the bar feed No. 3 Warner & Swasey turret lathe,
the element could include reaching for the feed lever, grasping the lever, feeding the bar stock
through the collet to a stock stop located in the hex turret, closing the collet, and reaching for the
turret handle.
Then again, this element may involve only the feeding of bar stock through the collet to a stock stop.
Since standard data elements are compiled from a great number of studies taken by different time
study observers, the limits or end points of each element should be carefully defined.
Figure 10-1 illustrates a form for summarizing data taken from an individual time study to develop
standard data on die-casting machines.
Element a is "pick up small casting," element b is "place in leaf jig," c is "close cover of jig," d is
"position jig," e is "advance spindle," and so on. These elements are timed in groups as follows:
3a + 3b + 3c + 3d + 3e = A + B + C + D + E
3a + 3b + 3c + 3d + 3e = T = 0.039 min
Therefore
A + d + e = 0.113
Likewise, d + e + a = 0.061
d = 0.067 - (0.022 + 0.03) = 0.015 min
e = 0.073 - (0.015 + 0.03) = 0.028 min
DIE-CASTING MACHINE
Operation ………  No. of parts in tote pan ………  Method of placing ………  Total wt of finished
parts and gates in tote pan ………
No. of parts per shot ………  Liquid metal/plastic metal ………  Chill ……  Skim ……  Drain ……
Capacity in lbs ………  Describe greasing ………
Describe loosening of part ………
Describe location ………
Elements                              Time       End Points
Get metal in holding pot              ……….      All waiting time while metal is being poured into pot
Chill metal                           ……….      From time operator starts adding cold metal to liquid
                                                metal in pot until operator stops adding cold metal to
                                                liquid metal in pot
Skim metal                            ……….      From time operator starts skimming until all scum has
                                                been removed
Get ladleful of metal                 ……….      From time ladle starts to dip down into metal until
                                                ladleful of metal reaches edge of machine or until ladle
                                                starts to tip for draining
Drain metal                           ……….      From time ladle starts to tip for draining until ladleful
                                                reaches edge of machine
Pour ladleful of metal into machine   ……….      From time ladleful of metal reaches edge of machine
                                                until foot starts to trip press
Trip press                            ……….      From time foot starts moving toward pedal until press
                                                starts downward
For example, when developing standard data times for machine elements, the analyst may need to
tabulate horsepower requirements for various materials in relation to depth of cut, cutting speeds,
and feeds.
To avoid overloading existing equipment, the analyst should have information on the workload being
assigned to each machine for the conditions under which the material is being removed.
For example, in the machining of high-alloy steel forgings on a lathe capable of a developed
horsepower of 10, it would not be feasible to take a 3/8-inch depth of cut while operating at a feed
of 0.011 inch per revolution and a speed of 200 surface feet per minute. Tabular data, either from
the machine tool manufacturer or from empirical studies, indicate the horsepower required under
such conditions (in this case 10.6 hp). Consequently, the work would need to be planned for a feed of 0.009 inch at a
speed of 200 surface feet; this would only require a horsepower rating of 8.7. Such tabular data are
best stored, retrieved, and accumulated into a final standard time using commercially available
spreadsheet programs (e.g., Microsoft Excel).
Horsepower Requirements of Turning High-Alloy Steel Forgings for Cuts 3/8-inch and ½-inch
Deep at Varying Speeds and Feeds

Surface   3/8-in depth cut (feeds, in/rev)                ½-in depth cut (feeds, in/rev)
feet      0.009  0.011  0.015  0.018  0.020  0.022        0.009  0.011  0.015  0.018  0.020  0.022
150       6.5    8.0    10.9   13.0   14.5   16.0         8.7    10.6   14.5   17.3   19.3   21.3
175       8.0    9.3    12.7   15.2   16.9   18.6         10.1   12.4   16.9   20.2   22.5   24.8
200       8.7    10.6   14.5   17.4   19.3   21.3         11.6   14.1   19.3   23.1   25.7   28.4
225       9.8    11.9   16.3   19.6   21.7   23.9         13.0   15.9   21.7   26.1   28.9   31.9
250       10.9   13.2   18.1   21.8   24.1   26.6         14.5   17.7   24.1   29.0   32.1   35.4
275       12.0   14.6   19.9   23.9   26.5   29.3         15.9   19.4   26.5   31.8   35.3   39.0
300       13.0   16.0   21.8   26.1   29.0   31.9         17.4   21.2   29.0   34.7   38.6   42.5
400       17.4   21.4   29.1   34.8   38.7   42.5         23.2   28.2   38.7   46.3   51.5   56.7
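In a spreadsheet (or a few lines of code) the feasibility check described above becomes a simple lookup. The Python sketch below transcribes part of the 3/8-inch-depth columns of the table and compares the tabulated horsepower with the capacity of the lathe; the function and variable names are illustrative only.

HP_3_8_DEPTH = {  # {surface feet per minute: {feed (in/rev): horsepower required}}
    150: {0.009: 6.5, 0.011: 8.0, 0.015: 10.9},
    200: {0.009: 8.7, 0.011: 10.6, 0.015: 14.5},
    250: {0.009: 10.9, 0.011: 13.2, 0.015: 18.1},
}


def cut_is_feasible(speed_sfm: int, feed_ipr: float, hp_available: float) -> bool:
    """True if the tabulated horsepower requirement is within machine capacity."""
    return HP_3_8_DEPTH[speed_sfm][feed_ipr] <= hp_available


if __name__ == "__main__":
    print(cut_is_feasible(200, 0.011, 10.0))  # False: 10.6 hp needed, only 10 available
    print(cut_is_feasible(200, 0.009, 10.0))  # True: 8.7 hp needed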
Because of space limitations, tabularizing values for variable elements is not always convenient. By
plotting a curve or a system of curves in the form of an alignment chart, the analyst can express
considerable standard data graphically on one page.
For example, if the problem is to determine the production in pieces per hour to turn 5 linear inches
of a 4-inch diameter shaft of medium carbon steel on a machine utilizing 0.015-inch feed per
revolution and having a cutting time of 55 percent of the cycle time, the answer could be readily
determined graphically.
A plot of forming time, in hours per hundred pieces, for a certain gage of stock over a range
of sizes expressed in square inches illustrates this graphical approach
(Source: Aeronautical Engineer's Data, 2011).
Each of the 12 points in this plot represents a separate time study. The plotted points indicate a
straight-line relationship, which can be expressed as a formula of the form y = a + bx.
Forming time for different stocks
(Source: Aeronautical Engineer's Data, 2011)
The first and most basic step in formula construction is identifying the critical variables.
This process includes separating those that are independent from those that are dependent and also
determining the range of each variable. For example, a formula might be developed for curing
bonded rubber parts between 2 and 8 ounces in weight. The independent variable is the weight of
the rubber, while the dependent variable is the time to cure. The range of the independent variable
would be 2 to 8 ounces, while the dependent variable of time would have to be quantified from
studies.
After the initial identification is finished, the next step is collecting data for the formula.
This step involves gathering previous studies with standardized work elements that are applicable to
the desired formula, as well as taking new studies, to obtain a sufficiently large sample to cover the
range of work for the formula.
Note: It is important that like elements in the different studies have consistent endpoints.
The number of studies needed to construct a formula is influenced by the range of work for which
the formula is to be used, the relative consistency of like constant elements in the various studies,
and the number of factors that influence the time required to perform the variable elements.
At least 10 studies should be available before a formula is constructed. If fewer than 10 are used, the
accuracy of the formula may be impaired through poor model fits.
The more studies used, the more data will be available, and the more normal will be the conditions
reflected.
In summary, the analyst should take note of the maxim “garbage in – garbage out.” The formula will
only be as accurate as the data used to construct it.
Next, the data are posted to a spreadsheet for analysis of the constants and variables. The constants
are identified and combined, and the variables analysed so as to have the factors influencing time
expressed in an algebraic form. By plotting a curve of time versus the independent variable, the
analyst may deduce potential algebraic relationships.
For example, plotted data may take several forms: a straight line, a nonlinear increasing trend, a
nonlinear decreasing trend, or no obviously regular geometric form. If the plot is a straight line, the
relationship takes the form
y = a + bx
with the constants a and b determined from least-squares regression analysis. If the plot shows a
nonlinear increasing trend, then power relationships of the form x^2, x^3, x^n, or e^x should be
attempted. For nonlinear decreasing trends, negative power or negative exponential terms should be
attempted; for asymptotic trends, log relationships or negative exponentials are appropriate.
Note that adding additional terms to the model will always produce a better model with a higher
percentage of the variance in the data explained. However, the model may not be statistically
significantly better, that is, statistically there is no difference in the quality of the predicted value
between the two models.
Furthermore, the simpler the formula, the better it can be understood and applied. The range of
each variable should be specifically identified. The limitations of the formula must be noted by
describing its applicable range in detail. There is a formalized procedure for computing the best
model, termed the general linear test.
It computes the decrease in unexplained variance between the simpler model, termed the reduced
model, and the more complex model, termed the full model.
The decrease in variance is tested statistically and, only if the decrease is significant, the more
complex model is used.
In the element "strike arc and weld," the following data were obtained from 10 detailed studies:
Data for variance testing

Study Number    Size of weld (in)    Minutes per inch of weld
1               1/8                  0.12
2               3/16                 0.13
3               1/4                  0.15
4               3/8                  0.24
5               1/2                  0.37
6               5/8                  0.59
7               11/16                0.80
8               3/4                  0.93
9               7/8                  1.14
10              1                    1.52
Plotting the data resulted in the smooth curve shown in Figure 10.4. A simple linear regression of
the dependent variable "minutes" against the independent variable "weld size" yields:
Y = -0.245 + 1.57x    (1)
Since Figure 10.4 indicates a definite non-linear trend in the data, adding a quadratic component to
the model seems reasonable. Regression now yields a quadratic model (equation 2) with
r² = 0.993 and an error sum of squares SSE = 0.012. The increase in r² would seem to indicate a
definite improvement in the fit of the model. This improvement can be tested statistically with the
general linear test:
F = {[SSE(R) - SSE(F)] / (dfR - dfF)} / [SSE(F) / dfF]

where SSE(R) = sum of squares error for the reduced (i.e. simpler) model, SSE(F) = sum of squares
error for the full (i.e. more complex) model, and dfR and dfF are the corresponding degrees of
freedom. For the linear (reduced) versus quadratic (full) models,

F = [(0.1354 - 0.012) / 1] / (0.012 / 7) = 71.98
Since 71.98 is considerably larger than F (1, 7) = 5.59, the full model is a significantly better model.
The process can be repeated by adding another term with a higher power (e.g. x^3), which yields a
cubic model (equation 3) with r² = 0.994 and SSE = 0.00873. However, in this case, the general
linear test does not yield a statistically significant improvement:

F = [(0.012 - 0.00873) / 1] / (0.00873 / 6) = 2.25

The F-value of 2.25 is smaller than the critical F(1, 6) = 5.99, so the cubic term is not retained.
Dropping the linear term from the quadratic model instead gives

Y = 0.0624 + 1.45x²    (4)

which, with r² = 0.993 and SSE = 0.0133, yields the best and simplest model. Comparing this model
(equation 4) with the second model (equation 2) gives

F = [(0.0133 - 0.012) / 1] / (0.012 / 7) = 0.758

The F-value is not significant, so the extra linear term in x does not yield a better model.
The best-fitting quadratic model can be checked by substituting a 1-inch weld to yield
0.0624 + 1.45(1)² = 1.51 minutes, which checks quite closely with the time study value of 1.52 minutes.
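The whole procedure - fitting the reduced and full models and applying the general linear test - can be reproduced from the tabulated weld data in a few lines. The Python sketch below (assuming numpy is available) uses ordinary least-squares polynomial fits; small differences from the worked values above may arise from rounding in the text.

import numpy as np

x = np.array([1/8, 3/16, 1/4, 3/8, 1/2, 5/8, 11/16, 3/4, 7/8, 1.0])         # weld size, inches
y = np.array([0.12, 0.13, 0.15, 0.24, 0.37, 0.59, 0.80, 0.93, 1.14, 1.52])  # minutes per inch


def sse_for_degree(degree: int) -> float:
    """Least-squares polynomial fit of the given degree; return its error sum of squares."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    return float(np.sum(residuals ** 2))


def general_linear_test(sse_reduced: float, df_reduced: int,
                        sse_full: float, df_full: int) -> float:
    """F = [(SSE_R - SSE_F)/(df_R - df_F)] / (SSE_F / df_F)."""
    return ((sse_reduced - sse_full) / (df_reduced - df_full)) / (sse_full / df_full)


if __name__ == "__main__":
    sse_linear = sse_for_degree(1)      # reduced model, df = 10 - 2 = 8
    sse_quadratic = sse_for_degree(2)   # full model, df = 10 - 3 = 7
    print(general_linear_test(sse_linear, 8, sse_quadratic, 7))  # about 72, cf. 71.98 above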
The easiest and fastest way to check the formula is to use it against existing time studies. Any
marked difference (roughly 5 percent or more) between the formula value and the study value casts
doubt on the formula's validity; in that case the analyst should accumulate additional data by taking
more stopwatch and/or standard data studies. The final step in the formula development process is
to write the formula report. The
analyst should consolidate all data, calculations, derivations, and applications of the formula and
present this information in a complete report prior to putting the formula into use.
Standard times can be calculated using analytical formulas found in technical handbooks or from
information provided by machine tool manufacturers. By finding the appropriate feeds and speeds
for different types and thicknesses of materials, analysts can calculate cutting times for different
machining operations.
(1) Drill Press Work (2) Lathe Work (3) Milling Machine Work
Drill Press Work. A drill is a fluted end-cutting tool used to originate or enlarge a hole in solid
material. In drilling operations on a flat surface, the axis of the drill is at 90 degrees to the surface
being drilled.
Since the commercial standard for the included angle of drill points is 118 degrees, the lead of the
drill may be readily found from the following expression:

l = r / tan A

where:
l = lead of drill
r = radius of drill
A = half the included angle of the drill point (59 degrees).
For a 1-inch drill, l = 0.5 / tan 59° = 0.5 / 1.6643 = 0.3 inch.
After determining the total length that the drill must move, we divide this distance by the feed of
the drill in inches per minute to find the drill cutting time in minutes.
Distance L indicates how far the drill must travel when drilling through holes and when drilling
blind holes (the lead of the drill is shown by the distance l).
Drill speed is expressed in feet per minute (fpm), and feed in thousandths of an inch per revolution.
To change the feed into inches per minute when the feed per revolution and the speed in feet per
minute are known, the following equation can be used:

Fm = 3.82 f sf / d

where:
Fm = feed in inches per minute
f = feed in inches per revolution
sf = surface speed in feet per minute
d = diameter of the drill in inches.
For example, to determine the feed in inches per minute of a 1-inch drill running at a surface speed
of 100 feet per minute and a feed of 0.013 inch per revolution, we have

Fm = 3.82 (0.013)(100) / 1 = 4.97 inches per minute
To determine how long it would take for this 1-inch drill, running at the same speed and feed, to drill
through 2 inches of a malleable iron casting, we use the equation:

T = L / Fm = [2 (thickness) + 0.3 (lead)] / 4.97 = 0.46 minutes
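The drill-press calculation just worked through is easily generalized. The Python sketch below (illustrative names, no allowances included) computes the lead of the drill, the feed in inches per minute, and the cutting time for a through hole, reproducing the 0.46-minute result.

import math


def drill_lead(diameter: float, included_angle_deg: float = 118.0) -> float:
    """Lead of drill l = r / tan(A), A being half the included point angle."""
    return (diameter / 2.0) / math.tan(math.radians(included_angle_deg / 2.0))


def feed_per_minute(feed_ipr: float, surface_fpm: float, diameter: float) -> float:
    """Fm = 3.82 f sf / d, inches per minute."""
    return 3.82 * feed_ipr * surface_fpm / diameter


def drill_cutting_time(depth: float, diameter: float, feed_ipr: float,
                       surface_fpm: float) -> float:
    """Cutting time (min) for a through hole: lead is added to the material thickness."""
    total_travel = depth + drill_lead(diameter)
    return total_travel / feed_per_minute(feed_ipr, surface_fpm, diameter)


if __name__ == "__main__":
    # 1-inch drill, 100 fpm, 0.013 in/rev, drilling through 2 inches of casting
    print(drill_cutting_time(2.0, 1.0, 0.013, 100.0))  # about 0.46 min before allowances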
The cutting time thus calculated does not include an allowance, which must be added to
determine the standard time.
The allowance should include time for variations in material thickness and for tolerance in
setting the stops, both of which affect the cycle cutting time.
Personal and unavoidable delay allowances should also be added to arrive at an equitable
standard time.
For example, the recommended spindle speed for a given job might be 1,550 rpm, but the
machine may be capable of running only 1,200 rpm. In that case, 1,200 rpm should be used
as the basis for computing standard times.
Lathe Work
Lathe Work
(Source: Aeronautical Engineer's Data, 2011)
All of these lathes are used primarily with stationary tools or with tools that translate over the
surface to remove material from the revolving work, which includes
• Forgings
• Castings, or
• Bar Stock
Factors that alter speeds and feeds:
• the condition and design of the machine tool
• the material being cut
• the condition and design of the cutting tool
• the coolant used for cutting
• the method of holding work, and
• The method of mounting the cutting tool.
As in drill press work, feeds are expressed in thousandths of an inch per revolution, and speeds in
surface feet per minute.
To determine the cutting time for L inches of cut, the length of cut in inches is divided by the feed in
inches per minute, or:

T = L / Fm

where T = cutting time in minutes, L = total length of cut, and Fm = feed in inches per minute, with

Fm = 3.82 f sf / d

where f = feed in inches per revolution, sf = surface speed in feet per minute, and d = diameter of the
work in inches.
Milling refers to the removal of material with a rotating multiple-toothed cutter. While the
cutter rotates, the work is fed past the cutter. This differs from a drill press for which the work is
usually stationary.
Milling Machine work
(Source: Aeronautical Engineer's Data, 2011)
In milling work, as in drill press and lathe work, the speed of the cutter is expressed in surface feet
per minute. Feeds, or table travel, are usually expressed in thousandths of an inch per tooth.
To determine the cutter speed in revolutions per minute from the surface feet per minute and the
diameter of the cutter, use the following expression:

Nr = 3.82 sf / d

where:
Nr = cutter speed in revolutions per minute
sf = cutter speed in surface feet per minute
d = outside diameter of the cutter in inches.
To determine the feed of the work in inches per minute into the cutter, use the expression:

Fm = f nt Nr

where:
Fm = feed of the work into the cutter in inches per minute
f = feed of the cutter in inches per tooth
nt = number of cutter teeth
Nr = cutter speed in revolutions per minute.
The number of cutter teeth suitable for a particular application may be expressed as:

nt = Fm / (Ft Nr)

where:
Ft = chip thickness (feed per tooth).
To compute the cutting time on milling operations, the analyst must take into consideration the lead
of the milling cutter when figuring the total length of cut under power feed.
This can be determined by triangulation, as illustrated in Figure 10.9, which shows the slab-milling of
a pad. In this case, the lead BC must be added to the length of the work (8 inches) to arrive at the total
length of cut. Knowing the diameter of the cutter, you can take AC as the cutter radius, and you can
then calculate the side AB of the right triangle ABC by subtracting the depth of cut BE from the cutter
radius AE, as follows:

BC = √(AC² - AB²)
In the preceding example, suppose we assume that the cutter diameter is 4 inches and that it has 22
teeth. The feed per tooth is 0.008 inch, and the cutting speed is 60 feet per minute. We can
compute the cutting time by using the equation:

T = L / Fm

where:
BC = √(4 - 3.06) = 0.975
Therefore,
L = 8.975
Fm = f nt Nr
where
Nr = 3.82 sf / d = 3.82 (60) / 4 = 57.3 rpm
Then
Fm = (0.008)(22)(57.3) = 10.1 inches per minute
and
T = L / Fm = 8.975 / 10.1 = 0.888 minutes
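The milling calculation can be packaged in the same way. In the Python sketch below the depth of cut is assumed to be 0.25 inch, which is consistent with the AB² ≈ 3.06 figure used above; names are illustrative and no allowances are included.

import math


def cutter_rpm(surface_fpm: float, cutter_diameter: float) -> float:
    """Nr = 3.82 sf / d."""
    return 3.82 * surface_fpm / cutter_diameter


def milling_cut_time(work_length: float, depth_of_cut: float, cutter_diameter: float,
                     feed_per_tooth: float, n_teeth: int, surface_fpm: float) -> float:
    """Cutting time (min): the cutter lead BC is added to the length of the work."""
    radius = cutter_diameter / 2.0
    lead = math.sqrt(radius ** 2 - (radius - depth_of_cut) ** 2)  # BC = sqrt(AC^2 - AB^2)
    table_feed = feed_per_tooth * n_teeth * cutter_rpm(surface_fpm, cutter_diameter)  # Fm
    return (work_length + lead) / table_feed


if __name__ == "__main__":
    # 8-inch pad, 4-inch cutter with 22 teeth, 0.008 in/tooth, 60 fpm, assumed 0.25-in depth
    print(milling_cut_time(8.0, 0.25, 4.0, 0.008, 22, 60.0))  # about 0.89 min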
In some instances, for which standard data are broken down to cover a given machine and class of
operation, it may be desirable to combine constants with variables and tabularize the summary. This
quick-reference data expresses the time allowed to perform a given operation completely. Table 10.3
illustrates standard data for a given facility and operation class for which elements have been
combined.
Standard Data for Blanking and Piercing Strip Stock Hand Feed with Piece Automatically
Removed on Toledo 76 Punch Press
L (distance in inches)    T (time in hours per hundred hits)
1                         0.075
2                         0.082
3                         0.088
4                         0.095
5                         0.103
6                         0.110
7                         0.117
8                         0.123
9                         0.130
10                        0.137
10.9 SUMMARY
If organizations are to remain competitive, it is imperative that they develop a consistent plan to
limit operating cost. A part of that plan should include an effective measurement tool. Standard
data provides such a tool, particularly in manufacturing environments where controlling labor cost is
vital to profitability.
10.10 KEYWORDS
Standard data- Standard time data (or elemental standard data) are developed for groups of
motions that are commonly performed together, such as drilling a hole or painting a square foot of
surface area
Plot data- A plot is a graphical technique for representing a data set, usually as a graph showing the
relationship between two or more variables
Standard time - the time required by a qualified operator, working at a normal pace and following a
standard method, to perform a given task, including allowances for personal needs, fatigue and
unavoidable delays
UNIT 11 OCCUPATIONAL NOISE ENVIRONMENT
Objectives
After completion of this unit, you should be able to:
measure occupational noise.
design noise prevention and control strategies.
assess the impact of noise on health outcomes.
calculate the attributable fraction and the resulting disease burden.
estimate the uncertainty in the disease burden for occupational noise.
Structure
11.1 Introduction
11.2 The Risk Factor and its health outcome
11.3 Health outcomes to include in the burden of disease assessment
11.4 Exposure Indicator
11.5 Estimating relative risks for health outcomes, by exposure level
11.6 Estimating the attributable fraction and the disease burden
11.7 Uncertainty in exposure estimates
11.8 Policy implications
11.9 Summary
11.10 Keywords
11.1 INTRODUCTION
Physically, there is no difference between sound and noise. Sound is a sensory perception and noise
corresponds to undesired sound. By extension, noise is any unwarranted disturbance within a useful
frequency band (NIOSH, 1991). Noise is present in every human activity, and when assessing its
impact on human well-being it is usually classified either as occupational noise (i.e. noise in the
workplace), or as environmental noise, which includes noise in all other settings, whether at the
community, residential, or domestic level (e.g. traffic, playgrounds, sports, music). This guide
concerns only occupational noise; the health effects of environmental noise are covered in a
separate publication (de Hollander et al., 2004).
Occupations at highest risk for noise-induced hearing loss (NIHL) include those in manufacturing,
transportation, mining, construction, agriculture, and the military. The situation is improving in
developed countries, as more widespread appreciation of the hazard has led to the introduction of
protective measures. Data for developing countries are scarce, but available evidence suggests that
average noise levels are well above the occupational level recommended in many developed nations
(Suter, 2000; WHO/FIOH, 2001). The average noise levels in developing countries may be increasing
because industrialization is not always accompanied by protection.
There are therefore several reasons to assess the burden of disease from occupational noise at
country or sub-national levels. Occupational noise is a widespread risk factor, with a strong evidence
base linking it to an important health outcome (hearing loss). It is also distinct from environmental
noise, in that it is by definition associated with the workplace and is therefore the responsibility of
employers as well as individuals. An assessment of the burden of disease associated with
occupational noise can help guide policy and focus research on this problem. This is particularly
important in light of the fact that policy and practical measures can be used to reduce exposure to
occupational noise (WHO/FIOH, 2001).
Sound Pressure Level
The sound pressure level (L) is a measure of the air vibrations that make up sound. Because the
human ear can detect a wide range of sound pressure levels (from 20 µPa to 200 Pa), they are
measured on a logarithmic scale with units of decibels (dB) to indicate the loudness of a sound.
Sound Level
The human ear is not equally sensitive to sounds at different frequencies. To account for the
perceived loudness of a sound, a spectral sensitivity factor is used to weight the sound pressure level
at different frequencies (A-filter). These A-weighted sound pressure levels are expressed in units of
dB(A).
Equivalent Sound Level
When sound levels fluctuate in time, which is often the case for occupational noise, the equivalent
sound level is determined over a specific time. In this guide, the A-weighted sound level is averaged
over a period of time (T) and is designated by LAeq,T. A common exposure period, T, in occupational
studies and regulations is 8 h, and the parameter is designated by the symbol LAeq,8h.
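Because the decibel scale is logarithmic, an equivalent level cannot be obtained by averaging dB values directly; the levels are converted to relative energies, averaged, and converted back. A minimal sketch, assuming a series of A-weighted levels sampled over equal time intervals (function and variable names are illustrative, not from the source):

import math

def sound_pressure_level(pressure_pa, reference_pa=20e-6):
    """Sound pressure level in dB relative to the 20 microPa hearing threshold."""
    return 20.0 * math.log10(pressure_pa / reference_pa)

def equivalent_level(levels_dba):
    """LAeq over the sampling period: energy-average of A-weighted levels (equal intervals)."""
    mean_energy = sum(10 ** (l / 10.0) for l in levels_dba) / len(levels_dba)
    return 10.0 * math.log10(mean_energy)

# Example: a worker spends half the shift at 85 dB(A) and half at 95 dB(A);
# the equivalent level is about 92 dB(A), not the arithmetic mean of 90.
print(round(equivalent_level([85] * 4 + [95] * 4), 1))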
The review of the literature indicates that noise has a series of health effects, in addition to hearing
impairment (Table 11.1). Some of these, such as sleep deprivation, are important in the context of
environmental noise, but are less likely to be associated with noise in the workplace. Other
consequences of workplace noise, such as annoyance, hypertension, disturbance of psychosocial
well-being, and psychiatric disorders have also been described (de Hollander et al., 2004).
For occupational noise, the best characterized health outcome is hearing impairment. The first
effects of exposure to excess noise are typically an increase in the threshold of hearing (threshold
shift), as assessed by audiometry. This is defined as a change in hearing thresholds of an average
10 dB or more at 2000, 3000 and 4000 Hz in either ear (poorer hearing) (NIOSH, 1998). NIHL is
measured by comparing the threshold of hearing at a specified frequency with a specified standard
of normal hearing, and is reported in units of decibel hearing loss (dBHL).
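The NIOSH definition above reduces to a simple numeric test on a pair of audiograms. A minimal sketch, assuming baseline and follow-up thresholds (in dBHL) are available for one ear at the three frequencies of interest (names are illustrative, not from the source):

def standard_threshold_shift(baseline_db, followup_db):
    """True if the average shift at 2000, 3000 and 4000 Hz is 10 dB or more (one ear)."""
    freqs = (2000, 3000, 4000)
    shifts = [followup_db[f] - baseline_db[f] for f in freqs]
    return sum(shifts) / len(shifts) >= 10.0

# Example: shifts of 5, 15 and 15 dB at 2, 3 and 4 kHz average 11.7 dB -> shift flagged
baseline = {2000: 10, 3000: 10, 4000: 15}
followup = {2000: 15, 3000: 25, 4000: 30}
print(standard_threshold_shift(baseline, followup))  # True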
Threshold shift is the precursor of NIHL, the main outcome of occupational noise. It corresponds to a
permanent increase in the threshold of hearing that may be accompanied by tinnitus. Because
hearing impairment is usually gradual, the affected worker will not notice changes in hearing ability
until a large threshold shift has occurred. Noise-induced hearing impairment occurs predominantly
at higher frequencies (3000−6000 Hz), with the largest effect at 4000 Hz. It is irreversible and
increases in severity with continued exposure.
The consequences of NIHL include:
social isolation.
impaired communication with coworkers and family.
decreased ability to monitor the work environment (warning signals, equipment sounds).
increased injuries from impaired communication and isolation.
anxiety, irritability, decreased self-esteem.
lost productivity.
expenses for workers' compensation and hearing aids.
Evidence is usually assessed on the grounds of biological plausibility, strength and consistency of
association, independence of confounding variables and reversibility (Hill, 1965). From a review of
the literature, de Hollander et al. (2004) concluded that psychosocial well-being, psychiatric
disorders, and effects on performance are plausible outcomes, but are only weakly supported by
epidemiological evidence. Other plausible outcomes include biochemical effects, immune system
effects, and birth-weight effects, but again there is limited evidence to support these outcomes.
Table 11.1 Health outcomes of noise and strength of evidence
Health outcome                   Evidence      Threshold, LAeq (dB(A))
Performance                      limited
Biochemical effects              limited
Immune effects                   limited
Birth weight                     limited
Annoyance                        sufficient    <55
Hearing loss (adults)            sufficient    75
Hearing loss (unborn children)   sufficient    <85
(Source: adapted from HCN (1999) and de Hollander et al. (2004))
Evidence describes the strength of evidence for a causal relationship between noise exposure and
the specified health endpoint.
There is stronger evidence of noise-based annoyance, defined as "a feeling of resentment,
displeasure, discomfort, dissatisfaction or offence which occurs when noise interferes with
someone's thoughts, feelings or daily activities" (Passchier-Vermeer, 1993). Noise annoyance is
always assessed at the level of populations, using questionnaires. There is consistent evidence for
annoyance in populations exposed for more than one year to sound levels of 37 dB(A), and severe
annoyance at about 42 dB(A). Studies have been carried out in Western Europe, Australia, and the
USA, but there are no comparable studies in developing countries. There is little doubt that
annoyance from noise adversely affects human well-being.
A recent meta-analysis reviewed the effects of occupational and environmental noise on a variety of
cardiovascular risks, including hypertension, use of anti-hypertension drugs, consultation with a
general practitioner or specialist, use of cardiovascular medicines, angina pectoris, myocardial
infarction and prevalence of ischemic heart disease (van Kempen et al., 2002). The analysis showed
an association with hypertension, but only limited evidence for an association with the other health
outcomes. Reasons for the limited evidence included methodological weaknesses, such as poor
(retrospective) exposure assessment, poorly controlled confounding variables, and selection bias
(such as the "healthy worker" effect, where the studied populations exclude the least healthy
individuals, who may already be absent from work through disability).
The meta-analysis showed inconsistencies among individual studies, and summary relative risks
were statistically significant in only a limited number of cases. Overall, the causal link is plausible,
and the meta-analysis provides support for further investigation of cardiovascular effects in the
future. However, the evidence base was not considered to be strong enough for inclusion in the
burden of disease assessment. Consequently, cardiovascular effects were not included in the Global
Burden of Disease study, and methods for estimating the cardiovascular effects of noise were not
defined (Concha-Barrientos et al., 2004). This guide does not therefore provide information for
assessing the cardiovascular effects of noise at national or local levels.
In contrast, it is generally accepted that the link between occupational noise and hearing loss is
biologically obvious (i.e. there is a clear mechanistic pathway between the physical properties of
noise and damage to the hearing system). The link is also supported by epidemiological studies that
compared the prevalence of hearing loss in different categories of occupations, or in particularly
noisy occupations (e.g. Arndt et al., 1996; Waitzman & Smith, 1998; Hessel, 2000; Palmer, Pannett &
Griffin, 2001). The studies showed a strong association between occupational noise and NIHL, an
effect that increased with the duration and magnitude of the noise exposure. For example, the risk
for "blue-collar" construction workers was 2 to >3.5-fold greater than that for "white-collar" workers
in other industries (Waitzman & Smith, 1998) (Table 11.2). Although other factors may also
contribute to hearing loss, such as exposure to vibrations, ototoxic drugs and some chemicals, the
association with occupational noise remains robust after accounting for these influences. There is
also epidemiological evidence for an effect of high levels of occupational exposure on hearing loss in
unborn children (e.g. Lalande, Hetu & Lambert, 1986), but there was not considered to be enough
information to calculate associated impacts for the Global Burden of Disease study.
Table 11.2 Prevalence ratios for hearing impairment, by country, source and population group (a)
Population group        Prevalence ratio    95% CI (b)
Plumbers                1.                  1.19−1.85
Painters                1.20                0.96−1.49
Plasterers              1.                  1.05−1.59
Blue-collar workers     1.00
Overall                 1.                  1.29−1.82
Canada, Hessel (2000). Definition of hearing impairment: greater than 105 dBHL at 2, 3 or 4 kHz
(corresponds to >35 dBHL).
Population group        Prevalence ratio    95% CI (b)
Plumbers                2.91                NA (c)
Boilermakers            3.88                NA
Electricians            1.46                NA
White-collar workers    1.00                NA
Female                  2.90                NA
(a) The data are taken from all available studies. (b) CI = confidence interval. (c) NA = not available
in the original study.
Depending on the aim of the study, it may be preferable to assess disease burden in terms of
attributable disease incidence, or overall disease burden, using summary measures of
population health such as DALYs (Murray, Salomon & Mathers, 2000). This will allow the health
burden to be compared for different geographical areas, and with the health burden from other
risk factors. A goal of burden of disease assessments is to maximize the compatibility of
frameworks for assessing the burden of disease for risk factors. Using the same framework
promotes this goal by ensuring that the same method is used to measure the incidence and
severity of disability associated with each disease.
Applying these criteria, it is clear that NIHL should be included in any national assessment, as it
is strongly supported by epidemiological evidence, and is one of the health outcomes often
assessed in national health statistics and as part of WHO burden of disease assessments. It is
generally most straightforward to exclude outcomes such as annoyance, as they are not a
formally defined health outcome per se. Should annoyance cause other health outcomes, such
as hypertension and associated cardiovascular disease, then other outcomes could be
considered. If there is a strong local reason for including such outcomes, then it is possible
either to assess comparative disability weights independently, to take them from other studies
(e.g. de Hollander et al., 2004), or to extrapolate them from similar health outcomes. You should
be aware that an independent assessment of the severity of such outcomes introduces
additional uncertainty when the results are compared with other risk factors or geographical
areas.
This guide follows the previous global assessment of occupational noise, in that only the effects
of occupational noise on NIHL are assessed. Several definitions of hearing impairment are
available in the literature. In the occupational setting, hearing impairment is generally defined
as “a binaural pure-tone average for the frequencies of 1000, 2000, 3000 and 4000 Hz of greater
than 25 dBHL” (NIOSH, 1998; Sriwattanatamma&Breysse, 2000). While this definition is widely
used, it does not correspond to the WHO definition of disabling hearing loss (i.e. with an
associated disability weight and corresponding to a quantifiable burden of disease). This level of
hearing impairment is defined as “permanent unaided hearing threshold level for the better ear
of 41 dBHL or greater for the four frequencies 500, 1000, 2000 and 4000 Hz" (Table 11.3). In
this guide, we describe the steps necessary to calculate a prevalence of hearing loss that
corresponds to the WHO definition, as it is preferable for burden of disease assessments. A
straightforward procedure for converting between the different levels of impairment is described
below. This conversion procedure is supported by large epidemiological studies and should
therefore introduce only a small additional uncertainty into the estimation.
Grade 0, no impairment: <25 dB (better ear). No, or very slight, hearing problems; able to hear
whispers.
Grade 1, slight impairment: 26−40 dB (better ear). Able to hear and repeat words spoken in normal
voice at 1 m.
Grade 2, moderate impairment: 41−60 dB (better ear). Able to hear and repeat words using raised
voice at 1 m.
Grade 3, severe impairment: 61−80 dB (better ear). Able to hear some words when shouted into
better ear.
– minimum noise exposure: <85 dB(A)
– moderately high noise exposure: 85−90 dB(A)
– high noise exposure: >90 dB(A)
Occupational noise exposure can be measured in several ways:
– area surveys: noise levels are measured at different sites across an area, such as sites throughout
a factory.
– dosimetry: a person's cumulative exposure to noise over a period of time is measured with a
dosimeter.
– engineering surveys: noise is measured using a range of instruments.
Ideally, representative data will be available on the average levels of occupational noise for all major
occupations within the country, either from the published scientific literature or from other sources
of data. If such data are not available, epidemiological surveys can be carried out to determine the
distribution of noise exposure by occupation. In practice, such data often will not be available, and
the distribution will have to be estimated from existing sources of information. To do so,
assumptions will need to be made, which will increase the uncertainty of the estimation, and this
should be made explicit in the results.
A reasonable estimate of the exposure distribution can be obtained by extrapolating from existing
data for studies undertaken elsewhere, provided that the data are from similar occupational
environments. Studies have shown that the most important determinant of exposure level is worker
occupation. Industry-specific studies in the USA showed that 44% of carpenters and 48% of
plumbers reported they had a perceived hearing loss, and 90% of coal miners have hearing
impairment by age 52 years. Also, it is estimated that 70% of male metal/nonmetal miners will have
hearing impairment by age 60 years (NIOSH, 1991). Within an occupation, several workplace-specific
factors will also influence the level of exposure. These factors include the type of facility and
process; the raw materials, machinery and tools used; whether there are engineering and work
practice controls; and whether personal protective devices are used and properly maintained.
These factors are likely to vary between countries (e.g. personal protective devices may be more
commonly used in developed countries than in developing countries). Such factors should be taken
into consideration when estimating the distribution of exposure for a workforce, and extrapolations
should be made from data for comparable occupations in comparable countries.
The Global Burden of Disease study estimated exposure distributions using an occupational category
approach, modified to reflect the different noise exposures for occupations in different economic
subsectors. This approach can be applied at the national level, using country data where available,
or by extrapolating from data for other studies if local data are not available.
The first step is to assess the proportion of workers in each occupational category that is exposed to
at least moderately high occupational noise levels (>85 dB(A)).
It may be necessary to adjust these proportions, depending on the characteristics of the country in
which the assessment is undertaken. In developing countries, because hearing conservation
programs are rare, the global assessment assumed that only 5% of the production workers would be
exposed at the 85–90 dB(A) level, and 95% would be exposed at the >90 dB(A) level. Also, 95% of the
agricultural workers exposed at or above 85 dB(A) are assigned to the 85–90 dB(A) level, because
mechanization is not widespread in countries in WHO developing subregions.
The second step consists of defining the proportions of workers in each economic subsector, by
occupational category. These data may be available from national labour offices, or from statistics
reported to the ILO. The third step simply consists of multiplying the previous tables together (i.e.
for each economic subsector, the proportion of workers in each occupation is multiplied by the
proportion of workers in the occupation exposed to moderately high, or high, noise levels). Next,
the proportion of the working population in each economic subsector is determined by gender. In
the fifth step, these values are multiplied by the proportion of workers in the occupational category
exposed to the specific noise level. The series of calculations is performed for all economic
subsectors, and the results summed to give the proportion of the total working population that is
exposed at each noise level.
The next step accounts for the fact that not all the population is involved in formal work, by defining
the proportion of the working-age population that is currently employed. This should be done
separately for males and females. Accuracy can be further improved by specifying levels of
employment for different age groups within the working-age population. Finally, the overall
population exposure is given by multiplying the proportion of the working population exposed at
each exposure level, by the proportion of the total population in work.
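The chain of multiplications described above can be made concrete with a small sketch. The numbers and category names below are purely illustrative placeholders, not values from the Global Burden of Disease study; they only show how the per-step proportions combine into an overall population exposure fraction for one subsector.

# Illustrative inputs (hypothetical values, one economic subsector, one sex):
share_of_occupation_in_subsector = 0.40   # step 2: e.g. production workers in manufacturing
share_exposed_over_85dBA = 0.30           # step 1: fraction of that occupation exposed >= 85 dB(A)
share_of_workforce_in_subsector = 0.20    # step 4: manufacturing share of the working population
employment_rate = 0.75                    # final step: employed fraction of the working-age population

# Steps 3 and 5: fraction of the working population in this subsector exposed at the level
exposed_workers = (share_of_occupation_in_subsector
                   * share_exposed_over_85dBA
                   * share_of_workforce_in_subsector)

# Final step: scale by the employed fraction of the working-age population.
# Summing this quantity over all subsectors gives the overall exposed proportion.
population_exposure = exposed_workers * employment_rate
print(f"{population_exposure:.3%}")   # 1.800% of the working-age population, in this sketch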
Table 11.4 summarizes these steps and the sources of data necessary to complete them, and gives
example calculations for the proportion of the male working-age population in the USA that is
exposed to moderately high noise levels. To give a complete assessment of the exposure
distribution, the calculations would be repeated for exposure to high noise levels, and for females as
well as males.
Estimates for the prevalence of noise exposure, determined using the described method, are shown
in Table 11.5. The figures assume there are equal employment rates in all age groups of the
working-age population.
11.5 ESTIMATING RELATIVE RISKS FOR HEALTH OUTCOMES, BY EXPOSURE LEVEL
It may be possible to obtain relative risks by exposure level from the literature, or from
epidemiological surveys in your own population, or populations with similar socioeconomic and
working conditions. However, as for evidence of causality, there is little reason to believe that the
relative risks of hearing loss should differ between countries, so that in most cases it will be more
straightforward, and probably more accurate, to use relative risks based on all previous studies.
As with other disease burdens, the major challenge in estimating relative risks for NIHL is in
converting different measures of hearing loss into a single standardized definition for assessing
exposure, as is done in the method presented here. As outlined in Section 11.3, the criteria used by
WHO to define disabling hearing impairment are different from the criteria used by most of the
studies in the occupational field, so an adjustment of the published relative risk values is usually
necessary to calculate burden of disease in DALYs. Again, the conversion procedure described in
Step 2 below should be equally applicable in all countries.
A procedure to estimate the increase in risk associated with different exposure levels has been
defined in the Global Burden of Disease study (Concha-Barrientos et al., 2004). In brief, the main
steps are:
Estimate the excess risk for different levels of exposure, and for different ages. The data can be
obtained from a large study carried out in the USA (Prince et al., 1997). The study uses an average of
1000, 2000, 3000 and 4000 Hz, and a hearing loss >25 dBHL to define hearing impairment. Excess
risk is defined as the percentage of workers with a hearing impairment in the population exposed to
occupational noise, minus the percentage of people in an unexposed population who have a hearing
impairment that is the natural result of aging. Most studies follow the NIOSH practice of measuring
the outcome as "material hearing impairment" (i.e. at the level of 25 dB).
Adjust the hearing levels. A correction factor can be used to adjust the excess risks measured using
the NIOSH definition of the threshold, to the level at which WHO defines an associated disability
weighting for burden of disease calculations (>41 dB; Concha-Barrientos et al., 2004). In this guide,
we use a correction factor of 0.446, which is the ratio of the number of excess cases at >40 dB
divided by the number of excess cases at >25 dB (NIOSH, 1991).
Estimate relative risk by age. The relative risk values by age can be estimated using the formula:
relative risk = 1 + (excess risk / expected risk). The expected risk in the Global Burden of Disease
study is based on a study of the prevalence of hearing loss as a function of age in the adult
population of Great Britain (Davis, 1989).
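Putting the last two steps together: the published excess risk is first scaled by the 0.446 correction factor to the WHO impairment threshold, and the result is converted to a relative risk against the age-expected background prevalence. A minimal sketch with illustrative numbers (the excess and expected risks below are placeholders, not figures from Prince et al. or Davis):

NIOSH_TO_WHO_CORRECTION = 0.446  # ratio of excess cases at >40 dBHL to excess cases at >25 dBHL

def relative_risk(excess_risk_25db, expected_risk):
    """Relative risk of disabling hearing loss for one age/exposure group."""
    # Step 2: adjust the excess risk from the NIOSH (>25 dBHL) to the WHO (>41 dBHL) threshold
    excess_risk_41db = excess_risk_25db * NIOSH_TO_WHO_CORRECTION
    # Step 3: relative risk = 1 + (excess risk / expected background risk)
    return 1.0 + excess_risk_41db / expected_risk

# Illustrative: 15% excess risk at >25 dBHL against a 3% expected background prevalence
print(round(relative_risk(0.15, 0.03), 2))  # 3.23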
The final relative risks of hearing loss at various exposure levels defined by this procedure are given
in Table 11.6. Unless there is strong evidence that the relative risks are different in your country of
interest, it is advisable to use these values.
Table 11.6 Relative risks for hearing loss by sex, age group and level of occupational exposure
11.6 ESTIMATING THE ATTRIBUTABLE FRACTION AND THE DISEASE BURDEN
Estimating the attributable fraction and the disease burden from occupational noise requires:
– the proportion of people exposed to the defined noise levels.
– the relative risk of developing NIHL for each exposure level.
– the total disease burden (incidence or number of DALYs) from NIHL within the country, obtained
from other sources (e.g. Prüss-Üstün et al., 2003).
The attributable fraction (AF) is then calculated as:
AF = (Σ Pi RRi − 1) / (Σ Pi RRi)
where:
Pi = proportion of the population in each exposure category i (i.e. Punexposed, Plow exposure,
Phigh exposure).
RRi = relative risk at exposure category i, compared to the reference level (= 1.0 for an unexposed
population).
For example, the fraction of NIHL in the USA male population 15−29 years old that is attributable to
occupational noise is given by:
AF = [(91% × 1) + (5.7% × 1.96) + (3.3% × 7.96) − 1] / [(91% × 1) + (5.7% × 1.96) + (3.3% × 7.96)]
   = 22%
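As a check on the arithmetic, the same attributable-fraction calculation can be written directly from the formula. The exposure proportions and relative risks are those quoted in the worked example above (91% unexposed, 5.7% at moderately high and 3.3% at high exposure).

def attributable_fraction(proportions, relative_risks):
    """AF = (sum(Pi * RRi) - 1) / sum(Pi * RRi)."""
    weighted = sum(p * rr for p, rr in zip(proportions, relative_risks))
    return (weighted - 1.0) / weighted

p = [0.91, 0.057, 0.033]   # unexposed, moderately high, high exposure
rr = [1.0, 1.96, 7.96]     # relative risks for the same categories
print(f"{attributable_fraction(p, rr):.0%}")  # 22%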
11.7 UNCERTAINTY
There are two principal sources of uncertainty in the disease burden estimates for occupational noise.
Errors in exposure estimates may arise because most studies of the association between noise
and hearing impairment are retrospective measurements of the hearing sensitivities of
individuals, correlated with their noise exposure over an extended period (typically, many years).
Noise exposure often varies over time, so that it may be difficult to measure the precise level
that the subject has experienced, particularly if they have been subject to intermittent
exposures. Uncertainty is also introduced by variation in the subject (e.g. previous audiometric
experience, attention, motivation, upper respiratory problems, and drugs). However, well-
designed epidemiological studies (of the type used to define the relative risks in this guide)
should account for the most important confounding factors (e.g. age and sex), and ensure that
the relative risks are reasonably accurate. The large populations of the studies used to calculate
the relative risks in this guide, and the consistency of results between studies, suggest that
the data closely approximate the risks of noise exposure.
Some additional uncertainty in the method used in this guide comes from adjusting hearing loss
measurements made at different thresholds (e.g. 25 dBHL and 41 dBHL). The uncertainty should be
relatively small, as the adjustment is based on a large sample size, but it could be reduced further if
more studies measured hearing loss at both 25 dBHL and 41 dBHL.
11.8 POLICY IMPLICATIONS
Estimates of the incidence of occupationally-related NIHL in your country or study population, or of
the number of attributable DALYs, will provide quantitative information on the importance of the
problem in the study area, and can help to motivate interventions to reduce these risks and
associated health impacts. NIHL is, at present, incurable and irreversible. It is preventable, however,
and it is essential that preventive programs be implemented. The following recommendations on
effective hazard prevention and control mechanisms are based on Goelzer (2001).
Hearing conservation programs should not be isolated efforts, but should be integrated into the
overall hazard prevention and control program for the workplace.
Hazard prevention and control programs require measures directed at:
the work process (including tools and machinery): for example, install quieter
equipment, promote good maintenance.
the workers: for example, set up work practices and other administrative controls on
noise exposures, and provide audiometric testing, hearing protection, and workers'
education programs.
Control measures should be realistically designed to meet the needs of each situation, and the
different options should be considered in view of factors such as effectiveness, cost, technical
feasibility, and socio-cultural aspects. Control interventions should follow the following hierarchy:
control the noise source → control the noise propagation → control noise at the worker level.
The priority is to reduce noise through technical measures. When engineering controls are not
applicable or are insufficient, exposure to noise can be reduced through measures such as
introducing hearing protection for workers. The protective equipment must be properly selected,
worn and maintained. Administrative controls can also be used. These are changes in the work
schedule, or in the order of operations and tasks. For example, the time spent in a noisy
environment can be limited (in addition to wearing hearing protection), and noisy operations can be
performed outside the normal shift, or during a shift with very few workers (wearing hearing
protection), or at a distant location. Some better-known measures for reducing noise, such as noise
enclosures and personal protective equipment, may be too expensive, impractical, inefficient or
unacceptable to workers, particularly in hot jobs or climates. Approaches to prevention should be
broadened, with proper consideration of other control options, particularly options for source
control, such as substituting materials and modifying processes, as well as for good work practices.
Finally, it should be recognized that in developing countries a large proportion of the population
works in the informal sector. A major challenge is to extend occupational hazard prevention and
control programs to this section of the population.
11.9 SUMMARY
Estimates of the proportions of populations exposed in each sub region were based on the
distribution of the Economically Active Population into nine economic subsectors. The estimates
took into account the proportion of workers in each economic subsector with exposure to the risk
factor, and the workers were partitioned into high and low exposure levels. Turnover accounted for
previous exposures. The primary data sources for estimating exposures included the World Bank
(World Bank, 2001), the International Labor Organization (ILO, 1995, 2001, 2002), and literature on
the prevalence and level of exposure.
The exposure variable used in this analysis is a direct measure of the risk factor (occupational
exposure to noise is the causative agent of NIHL). As global data on the frequency of occurrence,
duration and intensity of noise exposure do not exist, it was necessary to model this exposure for
workers employed in various occupational categories. The theoretical minimum is based on
expected background levels of noise and is consistent with national and international standards.
Most experts agree that levels below 80 dB(A) result in minimal risk of developing hearing loss.
11.10 KEYWORDS
Sound level- Sound pressure or acoustic pressure is the local pressure deviation from the ambient
(average, or equilibrium) atmospheric pressure, caused by a sound wave. A sound level
meter or sound meter is an instrument which measures sound pressure level, commonly used
in noise pollution studies for the quantification of different kinds of noise
Noise prevention- Avoiding hazardous noise in the workplace whenever possible, and using
hearing protectors in those situations where dangerous noise exposures cannot be avoided
Occupational Noise- Occupational noise is a workplace hazard traditionally linked to heavy industries
such as ship-building and associated primarily with noise-induced hearing loss (NIHL)
Hearing impairment- hearing impairment, or hearing loss is a partial or total inability to hear.
UNIT 12 OCCUPATIONAL VIBRATIONS
Objectives
After going through this unit,you will be able to:
measure equipment vibrations at the workplace.
calibrate vibration-measuring equipment accurately.
design occupational vibration limits.
identify the risks associated with occupational vibrations.
recognize the importance of personal protective equipment in avoiding vibration exposure.
Structure
12.1 Introduction
12.2 Vibration, whole-body
12.3 Vibration measurement
12.4 Vibration Limits
12.5 Identification of risks areas
12.6 Vibration prevention and control
12.7 Vibration control in the working environment
12.8 Vibration measurement system
12.9 Summary
12.10 Keywords
12.1 INTRODUCTION
Vibration is a physical phenomenon that may affect the human body and can have a detrimental
effect on occupational safety and health.
The effects of direct vibration on the human body can be serious. Vibration can typically cause
blurred vision, loss of balance, loss of concentration etc. In some cases, certain frequencies and
levels of vibration can permanently damage internal body organs. In this respect "white fingers
syndrome" is a particular concern.
Human vibration is normally defined as the vibration experienced by the human body because of
direct contact with vibrating surfaces. These surfaces include the floor of buildings, the seats of
vehicles and the handles of power-driven tools.
Vibration may cause discomfort, a reduction in work output or even physical damage. To quantify
and assess safe levels of human exposure to vibration, measurements and analysis of vibrations are
therefore necessary.
Human vibration can be categorized by response into two major areas of evaluation, i.e. whole-body
and hand-arm.
Hand-arm vibration can be transmitted from vibrating tools, vibrating machinery or vibrating
workpieces to the hands and arms of operators. These situations occur, for example, in the
manufacturing, mining and construction industries when handling pneumatic and electrical tools, and
in forestry work when handling chain saws. These vibrations are transmitted through the hand and
arm to the shoulder and, depending on the work situation, can be transmitted to one or both arms
simultaneously. Hand-transmitted vibrations generally occur in the frequency range of 8−100 Hz.
Whole-body vibration is transmitted to the body as a whole through the supporting surface, namely
the feet of a standing man, the buttocks of a seated man or the supporting area of a reclining man.
This form of vibration transmission is common in vehicles, in vibrating building structures and when
working in the vicinity of machinery. In principle, it applies to vibration transmitted from solid
surfaces to the human body in the frequency range of 1−80 Hz. Vibration in the frequency range
below 1 Hz creates special problems,
associated with symptoms such as kinetosis (motion sickness), which are of a character different
from the effects of higher frequency vibrations. The inception of such symptoms depends on
complicated individual factors not simply related to the intensity, frequency, or duration of the
provocative action.
Frequency of Vibration
Frequency of vibration is the number of times a complete motion cycle (oscillating motion) takes
place during a period of one second, and is measured in cycles per second, termed hertz (Hz).
'X' axis
This is an orthogonal axis in the forward-facing direction of a standing person.
'Y' axis
This is an orthogonal axis in the transverse direction (at right angles to the 'X' axis) of a standing
person.
'Z' axis
This is an orthogonal axis in the vertical direction of a standing person.
Vibration measurement shall be undertaken to quantify the level of vibration to which workers are
exposed.
The measurement of vibration in general shall normally be taken on a structural surface supporting a
human body or on some surface other than the point of entry to the human subject.
The method of measuring the whole-body and hand-arm vibration is explained below:
Vibration shall be measured as close as possible to the point or area through which it is
transmitted to the body.
If such transmission has to pass through a cushion or depends on other factors, the
vibration attenuation of these factors shall be taken into account in the vibration
measurement.
Measuring Equipment
Vibration measuring equipment shall typically consist of the following: a transducer or
pick -up sensor, an amplifying device (electrical, mechanical, or optical), an amplitude or
level indicator or recorder, and a signal analyzer.
Where appropriate, networks or filters may be included to limit the frequency range of
the equipment, and to apply the recommended frequency-weighting to the input signal.
An RMS-rectifying device may also be included for convenience such that RMS values
may be read off and quantified directly.
The measuring and analyzing equipment used shall comply with the relevant international and
national standards. Vibration transducers, for example, shall be in compliance with International
Electro-technical Commission (IEC) Publication 184, and auxiliary equipment (amplifiers,
frequency selective equipment and carrier systems) in compliance with the relevant IEC publication.
Vibration frequency analyzers or signal analyzers (of either type based on instrumentation
hardware , or computer software digital signal processor) with one third octave filter sets or
narrow band FFT (fast Fourier transform) bandwidth , shall be used for vibration frequency
analysis in the frequency range 1 to 100 Hz minimum.
Measuring and analyzing equipment shall be used in accordance with the manufacturer's
instructions.
Measuring and analyzing equipment shall be tested at suitable intervals. Qualified personnel
responsible for the upkeep of the measuring instruments shall draw up a calibration and testing
schedule to be kept with the instrument.
The person responsible for maintenance and testing of measuring and analyzing equipment
shall be specially trained, and shall be responsible for ensuring that the equipment is kept in
good condition.
Recording of Data
When vibration is measured at places of work, comprehensive data shall be collected
regarding:
the characteristics of the source of vibration investigated, and the type of work being
performed.
the characteristics of the path or way vibration is transmitted to human body (Hand-arm or
Whole-body vibration).
the point at which (including the description of any intermediary elements such as sheets)
and the pick-up device with which measurement were made, and the values obtained.
the numbers of workers exposed to vibration.
The data collected shall be suitably recorded. It is recommended that specific test record form shall
be used for this purpose.
Recommended vibration limits as stipulated in this guideline shall be used with the
intention of protecting the safety and health of workers in relation to:
- vibration affecting the hands and arms (vibration tools); and
- whole-body vibration transmitted through the supporting surface.
Vibration limits shall be laid down depending on the work to be done, and to avoid human
fatigue.
The limits shall be reviewed from time to time in the light of new scientific knowledge, technical
progress and possibilities of prevention.
Common power tools include chipping hammers, power grinders, hammer drills, and chain
saws found in widespread use in the mining, construction, manufacturing, and forestry
industries.
Vibration may be transmitted into the body from a vibrating tool or hand-held workpiece via
one or both arms simultaneously. At lower vibration levels, the consequential effects are typically
discomfort and reduced working efficiency.
At higher vibration levels and longer exposure periods, diseases affecting the blood vessels,
joints and circulation often result.
Severe exposure typically leads to a progressive circulatory disorder in the part of the body
suffering the highest level of vibration, usually the fingers or hand where hand-held tools are
concerned. This disorder is known as Raynaud's disease, commonly referred to as 'dead
hand' or vibration-induced white finger.
- These values shall be used as a basis for the control of hand-arm vibration exposure.
Because of individual susceptibility, these values however should not be regarded as
defining a boundary between safe and dangerous levels.
Whole-body Vibration
- Vibration levels encountered in whole-body motion are transmitted to the body, usually
through the supporting surface (feet, buttocks, back, etc.).
Exposure to whole-body vibration can either cause permanent physical damage or disturb the
nervous system.
Daily exposure to this vibration over a period (typically several years) can result in
serious physical damage, for example, ischemic lumbago. This is a condition affecting the
lower spinal region and can also affect the exposed person's circulatory and/or
urological system.
People suffering from the effect of long-term exposure to whole-body vibration would have
usually been exposed to this damaging vibration related to some task at work.
- Over exposure can also disturb the central nervous system. Symptoms of this
disturbance usually appear during, or shortly after exposure in the form of fatigue,
insomnia, headache and "shakiness ".
- Many people have experienced these nervous symptoms after they have completed a
long car trip or boat trip. The symptoms however usually disappear after a period of
rest.
The recommended "Threshold Limit Values'' which refer to component acceleration levels and
duration of exposure for whole body vibration:
The vibration levels indicated by the curves in figures - and - are given in terms of RMS
acceleration level which indicates equal fatigue- decreased proficiency.
Exceeding the exposure specified by the curves will, in most situations cause noticeable
fatigue and decreased job proficiency in most tasks.
The degree of task interference depends on the subject and the complexity of the task.
These curves however indicate the general range for onset of such interference and the
time dependency observed.
The criteria are presented as recommended guidelines or trend curves and are not
intended to be firm boundaries classifying quantitative biological or physiological limits.
The above defined criteria are intended only for situations involving health for normal
people considered fit for normal living routines and the stress of an average working
day.
Vibration risk areas shall be identified and measurements undertaken where:
an Occupational Safety and Health audit or inspection discloses that such risks may exist;
and
the workers maintain that they are subjected to an uncomfortable or disturbing level of
vibration.
Source of Vibration
The sources of vibration shall be identified by suitable measurements and investigation. Where the
vibration levels vary widely because of a change in working conditions (as when a machine tool runs
unloaded and then starts working), account shall be taken of the least favorable condition. It is
nevertheless suggested that separate series of measurements be undertaken.
Assessment of Exposure
Measurements shall be made at locations normally occupied by the workers in the area under
observation.
Measurements shall be made of the vibration transmitted to the whole-body and of the
vibration transmitted to a particular part of the body.
The aim of vibration prevention programs is to eliminate safety and health risks arising from the
effect of vibration, or to reduce these vibrations to the lowest feasible levels by all appropriate
means.
The vibration to which workers are exposed, and the time during which they are exposed, shall not
exceed the recommended threshold limits as stipulated in this document. (See section on vibration
limits).
Control
All appropriate measures shall be taken at the source to prevent vibration generation,
transmission, and amplification of vibration when machinery and equipment is designed or
selected. Vibration emissions shall be a factor to be taken into account when machinery and
equipment is to be procured.
Attempts shall be made to ascertain at which locations, if any, vibration will exceed the
recommended threshold limits.
These locations shall be identified, marked out, and suitably indicated.
Engineering measures shall be taken to control vibration with a view to reducing vibration level
below the recommended limits.
When the above prove impossible, provisions shall be made by a reorganization of work, the use
of personal protective equipment or any other suitable means to reduce the exposure below the
recommended safe exposure limits.
The health of workers likely to be exposed to vibration, at levels exceeding the recommended
threshold limits, including the workers whose exposure is limited by personal protective
equipment or by administrative arrangements which reduce exposure time, shall be strictly
supervised.
The monitoring of the working environment shall be systematic and repeated as often as
necessary to ensure that vibration risks are kept under control. The monitoring shall be carried
out by a competent person.
Implementation
Every organization shall implement a general prevention program that takes due consideration of
its own specific features. In this respect the employer should define and assign responsibilities in
the prevention program.
The organization may assign the Safety and Health Officer or other qualified persons
responsibilities and duties in connection with vibration prevention to include, but not limited to:
- the design of new plants and equipment, or studies of new processes.
- the purchase of machinery or equipment.
- contracts entered into with contractors.
- the information and training given to workers; and
- The purchase of personal protective equipment, and the provision of instruction in
regard to its use
Vibration control shall preferably be achieved by collective measures with the assistance of the
qualified persons. Improvements that are recommended shall be made forthwith by the
employer. The personnel responsible for vibration measurements in the working
environment shall:
Have received appropriate training in the measurement and control of vibration; and be
equipped with suitable measurement instruments
Manufacturers shall provide full information about levels of vibration emitted, as well as on
means of controlling them. When ordering equipment, the purchaser shall specify maximum
limits for the vibration emitted.
Testing for new equipment to assess vibration shall be performed, and the levels of vibration
shall be below the safe vibration threshold limits.
It shall be preferable to purchase equipment that is quieter or produces less vibration at the
outset, rather than undertaking remedial measures to address excessive vibration.
Design and Installation
Vibration control shall begin with the design and planning of new plants, installations and
processes. Control measures shall be based on relevant information, and in particular on:
a knowledge of the vibration characteristics of the equipment and processes to be used
the isolation of operations or plant giving rise to high vibration that is difficult to control
As far as possible, preference shall be given to materials and structures having a high
isolating factor or attenuation factor, or combination of both.
Once suitable equipment has been chosen, its installation shall be studied with due regard to:
the kind of vibration likely to be emitted.
the number and type of machines and other equipment; and
the number of workers employed on the work premises
Measurement shall be made as soon as the machine and equipment are installed, to establish
the resulting vibration levels.
In the work premises, control of vibration at the source shall be used as the first level of the control
strategy. In order to further reduce the vibration level, or if the initial strategy does not sufficiently
reduce the vibration to levels below the limits stipulated in this document, the actions proposed
hereafter can be taken.
Vibration shall also be attenuated, as appropriate in particular cases, ensuring safe distances of
workers from the vibration source; or by isolating the workers exposed to the vibration, either
by collective measures or by personal protective equipment
The various control methods may be combined to achieve a suitable reduction in vibration levels.
For the control of vibration at source, a distinction shall be drawn between the following three main
types of vibration according to their source:
Vibration attributable to the vibration of a solid or liquid (mechanical forces).
Vibration attributable to turbulence occurring in a gaseous medium (aerodynamic forces);
and
Vibration attributable to electro-dynamic or magneto-dynamic forces, or to electrical arc or
corona discharge (electrical forces).
Good work practices shall be used and shall include instructing workers to employ a minimum
hand grip force consistent with safe operation of the power tool or processes, to keep their body
and hands warm and dry, and to use anti-vibration tools and gloves which are effective for
damping vibration at high frequencies.
Methods of controlling sources of vibration shall include but not limited to:
reducing the intensity of vibration by dynamic balancing of machinery components, reducing
the driving force acting on vibrating part, reducing machinery speed, and rectifying any
mechanical faults (misalignment, etc.);
reducing the vibration emission efficiency of the vibrating parts by increasing their damping
capacity and improving the manner in which machinery components are attached.
reducing turbulence and the rate of flow of fluids at inlets, in ducts or pipes, and at outlet.
changing from cylindrical toothed gears to helical gears, and from metal gears to gears of
other materials where feasible.
design of the shape and speed of tools with due regard to the characteristic of the material
worked.
adequate design of air tubing and ducting systems (compressed air, ventilation air) and gas
or liquid tubing systems to prevent the propagation of vibration and resonance excitation;
and
the use of vibration isolation devices or systems (such as spring vibration isolators, elastomeric pads,
viscous-damping systems), and flexible connections within the equipment.
The maintenance of equipment shall receive special attention to prevent any abnormal increase
in vibration emitted by the source.
the use of mechanical vibration isolators (typically steel spring, elastomeric or pneumatic
isolating elements) for the mounting of machinery, pipework, etc.;
the use of resilient materials between building elements (control rooms, floor joints, etc.);
the siting of vibrating machinery so that it does not come into contact with other parts of
the installation and the workrooms; and
ensuring that rotating or reciprocating machinery does not operate at speeds in close
proximity to structural resonant frequencies.
The anti-vibration qualities of the materials used for the construction of premises, equipment and
enclosures should be carefully considered.
Protective Equipment and Reduction of Exposure Time: In cases when vibration levels cannot be
brought below the danger limit by suitable design or installation, the following measures shall be
taken:
Personal protective equipment and limitation of exposure time should bring workers' exposure
within permissible limits.
Anti-vibration equipment shall on no account be regarded as an adequate substitute for
prevention.
The above measures are to be regarded as interim measures to keep risks within
acceptable limits, until such time as vibration control can be made more effective through
engineering measures.
Every effort shall be made to ensure that workers use the personal protective equipment
provided.
When vibration levels cannot be brought within permissible limits, the duration in which the
workers are exposed to the vibration must be reduced.
The following option shall be considered with a view to reducing the duration of vibration
exposure.
rotation of jobs
reorganization of job function such that part of the work can be done without exposure
to the risks
Provision of breaks
12.8 VIBRATION MEASUREMENT SYSTEM
A typical human-vibration measurement system is illustrated in the diagram below:
Transducer (accelerometer, picks up the vibration signal) → Amplifying device → Signal analyser →
Acceleration in RMS values
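The last stage of that chain, expressing the measured acceleration as an RMS value, is a simple calculation over the sampled signal. A minimal sketch, assuming a list of acceleration samples in m/s² taken at a fixed sampling rate (names are illustrative, not from the source):

import math

def rms_acceleration(samples_ms2):
    """Root-mean-square of an acceleration signal (m/s^2), as reported by a vibration meter."""
    return math.sqrt(sum(a * a for a in samples_ms2) / len(samples_ms2))

# Example: a pure 10 Hz sinusoid with 1.0 m/s^2 peak acceleration has an RMS value
# of peak / sqrt(2), about 0.707 m/s^2. One second sampled at 1 kHz:
samples = [math.sin(2 * math.pi * 10 * t / 1000) for t in range(1000)]
print(round(rms_acceleration(samples), 3))  # ~0.707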
12.9 SUMMARY
Many workers do not think that their exposure to vibration could be a health hazard. Vibration
exposure is more than just a nuisance. Constant exposure to vibration has been known to cause
serious health problems such as back pain, carpal tunnel syndrome, and vascular disorders. Vibration
related injury is especially prevalent in occupations that require outdoor work, such as forestry,
farming, transportation, shipping, and construction. There are two classifications for vibration
exposure: whole-body vibration and hand and arm vibration. These two types of vibration have
different sources, affect different areas of the body, and produce different symptoms. Whole-body
vibration is vibration transmitted to the entire body via the seat or the feet, or both, often through
driving or riding in motor vehicles (including fork trucks and off-road vehicles) or through standing
on vibrating floors (e.g., near power presses in a stamping plant or near shakeout equipment in a
foundry). Hand and arm vibration, on the other hand, is limited to the hands and arms and usually
results from the use of power hand tools (e.g., screwdrivers, nut runners, grinders, jackhammers,
and chippers) and from vehicle controls. Whole-body vibration levels can often be reduced by using
vibration isolation and by installing suspension systems between the operator and the vibrating
source.
Hand and arm vibration may be more difficult to control, but the proper selection and maintenance
of tools can dramatically decrease vibration exposure. Vibration levels associated with power hand
tools depend on tool properties, including size, weight, method of propulsion, handle location, and
the tool drive mechanism. Primary prevention through eliminating excessive vibration and shocks
can be accomplished through better ergonomic tool designs.
12.10 KEYWORDS
Occupational Vibrations- Constant exposure to vibration has been known to cause serious health
problems such as back pain, carpal tunnel syndrome, and vascular disorders
Vibration- an instance of vibratory motion; oscillation; quiver; tremor
Vibration Measurement- Measurements should be made to produce the data needed to draw
meaningful conclusions from the system under test. These data can be used to minimize or eliminate
the vibration and thus the resultant noise. There are also examples where the noise is not the
controlling parameter, but rather the quality of the product produced by the system
Whole Body Vibration- Whole-body vibration is vibration transmitted to the entire body via the seat
or the feet, or both, often through driving or riding in motor vehicles (including fork trucks and off-
road vehicles) or through standing on vibrating floors (e.g., near power presses in a stamping plant
or near shakeout equipment in a foundry)
UNIT 13 CREATING HIGH PERFORMANCE WORK SYSTEM
Objectives
After going through this unit,you will be able to:
discuss the underlying principles of high-performance work systems.
identify the components that make up a high-performance work system.
describe how the components fit together and support strategy.
recommend processes for implementing high-performance.
explain how the principles of high-performance work systems apply to small and medium
sized organizations.
Structure
13.1 Introduction
13.2 Principle of shared information and knowledge development
13.3 The principle of performance –reward linkage and egalitarianism
13.4 Anatomy of high- performance work systems
13.5 Fit it all together
13.6 Implementing the System
13.7 Navigating the transition to high-performance work systems
13.8 Outcomes of high-performance work systems
13.9 Summary
13.10 Keywords
13.1 INTRODUCTION
High-performance work systems balance these sometimes competing demands; they create work
environments that blend these concerns to simultaneously get the most from employees, contribute
to their needs, and meet the short-term and long-term goals of the organization.
Figure: Developing a High-Performance Work System, showing linkages to strategy; system design
(work flow, HRM practices, support technology); principles of high involvement; the implementation
process; and outcomes for the organization and employees.
(Source: George and Scott, Managing Human Resources, 14th Edition, 2006)
The notion of high-performance work systems was originally developed by David Nadler to capture
an organization’s “architecture” that integrates technical and social aspects of work. Edward Lawler
and his associates at the Center for Effective Organization at the University of Southern California
have worked with Fortune 1000 corporations to identify the primary principles that support high-
performance work systems. There are four simple, but powerful principles as follows:
Shared information
Knowledge development
Performance–reward linkage
Egalitarianism
In many ways, these principles have become the building blocks for managers who want to create
high-performance work systems
The principle of shared information is critical for the success of empowerment and involvement
initiatives in organizations. In the past, employees traditionally were not given—and did not ask
for—information about the organization. People were hired to perform narrowly defined jobs with
clearly specified duties, and not much else was asked of them. One of the underlying ideas of high-
performance work systems is that workers are intimately acquainted with the nature of their own
work and are therefore in the best position to recognize problems and devise solutions to them.
Today organizations are relying on the expertise and initiative of employees to react quickly to
incipient problems and opportunities. Without timely and accurate information about the business,
employees can do little more than simply carry out orders and perform their roles in a relatively
perfunctory way. They are unlikely to understand the overall direction of the business or contribute
to organizational success.
On the other hand, when employees are given timely information about business performance,
plans, and strategies, they are more likely to make good suggestions for improving the business and
to cooperate in major organizational changes. They are also likely to feel more committed to new
courses of action if they have input in decision making. The principle of shared information typifies a
shift in organizations away from the mentality of command and control toward one more focused on
employee commitment. It represents a fundamental shift in the relationship between employer and
employee. If executives do a good job of communicating with employees and create a culture of
information sharing, employees are perhaps more likely to be willing (and able) to work toward the
goals for the organization. They will “know more, do more, and contribute more.” At FedEx Canada,
at every single station across Canada, company officers and managing directors meet with
employees at 5:30 a.m. and 10:00 p.m. to review the business data and answer questions.
Knowledge development is the twin sister of information sharing. As Richard Teerlink, former CEO of
Harley-Davidson, noted, “The only thing you get when you empower dummies is bad decisions
faster.” Throughout this text, we have noted that the number of jobs requiring little knowledge and
skill is declining while the number of jobs requiring greater knowledge and skill is growing rapidly.
As organizations attempt to compete through people, they must invest in employee development.
This includes both selecting the best and the brightest candidates available in the labor market and
providing all employees opportunities to continually hone their talents.
High-performance work systems depend on the shift from touch labor to knowledge work.
Employees today need a broad range of technical, problem solving, and interpersonal skills to work
either individually or in teams on cutting edge projects. Because of the speed of change, knowledge
and skill requirements must also change rapidly. In the contemporary work environment, employees
must learn continuously. Stopgap training programs may not be enough. Companies such as
DaimlerChrysler and Roche have found that employees in high-performance work systems need to
learn in “real time,” on the job, using innovative new approaches to solve novel problems. Likewise,
at Ocean Spray’s Henderson, Nevada, plant, making employees aware of the plant’s progress has
been a major focus. A real-time scoreboard on the Henderson plant floor provides workers with
streaming updates of the plant’s vital stats, including average cost per case, case volumes filled,
filling speeds, and injuries to date. When people are better informed, they do better work. “We
operate in real time and we need real-time information to be able to know what we have achieved
and what we are working towards,” says an Ocean Spray manager. (See Case Study 1 at the end of
the chapter for more on Ocean Spray’s HPWS initiative.)
Connecting rewards to organizational performance also ensures fairness and tends to focus
employees on the organization. Equally important, performance-based rewards ensure that
employees share in the gains that result from any performance improvement. For instance, Lincoln
Electric has long been recognized for its efforts in linking employee pay and performance.
People want a sense that they are members, not just workers, in an organization. Status and power
differences tend to separate people and magnify whatever disparities exist between them. The “us
versus them” battles that have traditionally raged between managers, employees, and labor unions
are increasingly being replaced by more cooperative approaches to managing work. More egalitarian
work environments eliminate status and power differences and, in the process, increase
collaboration and teamwork. When this happens, productivity can improve if people who once
worked in isolation from (or in opposition to) one another begin to work together. Nucor Steel has
an enviable reputation not only for establishing an egalitarian work environment but also for the
employee loyalty and productivity that stem from that environment. Upper levels of management
do not enjoy better insurance programs, vacation schedules, or holidays. In fact, certain benefits
such as Nucor’s profit-sharing plan, scholarship program, employee stock purchase plan,
extraordinary bonus plan, and service awards program are not available to Nucor’s officers at all.
Senior executives do not enjoy traditional perquisites such as company cars, corporate jets,
executive dining rooms, or executive parking places. On the other hand, every Nucor employee is
eligible for incentive pay and is listed alphabetically on the company’s annual report.
High-performance work systems frequently begin with the way work is designed. Total quality
management (TQM) and reengineering have driven many organizations to redesign their work flows.
Instead of separating jobs into discrete units, most experts now advise managers to focus on the key
business processes that drive customer value—and then create teams that are responsible for those
processes. Federal Express, for example, redesigned its delivery process to give truck drivers
responsibility for scheduling their own routes and for making necessary changes quickly. Because the
drivers have detailed knowledge of customers and routes, Federal Express managers empowered
them to inform existing customers of new products and services. In so doing, drivers now fill a type
of sales representative role for the company. In addition, FedEx drivers also work together as a team
to identify bottlenecks and solve problems that slow delivery. To facilitate this, advanced
communications equipment was installed in the delivery trucks to help teams of drivers balance
routes among those with larger or lighter loads.
By redesigning the work flow around key business processes, companies such as Federal Express and
Colgate-Palmolive have been able to establish a work environment that facilitates teamwork, takes
advantage of employee skills and knowledge, empowers employees to make decisions, and provides
them with more meaningful work.
Figure: Anatomy of High-Performance Work Systems
(Source: George and Scott, Managing Human Resources, 14th Edition, 2006)
In a number of cases, leadership is shared among team members. Kodak, for example, rotates team
leaders at various stages in team development. Alternatively, different individuals can assume
functional leadership roles when their particular expertise is needed most.
But information technologies need not always be so high-tech. The richest communication
occurs face to face. The important point is that high-performance work systems cannot succeed
without timely and accurate communications. (Recall the principle of shared information.)
Typically the information needs to be about business plans and goals, unit and corporate
operating results, incipient problems and opportunities, and competitive threats.
Each of these practices highlights the individual pieces of a high-performance work system. This
philosophy is reflected in the mission statement of Saturn Motors, a model organization for HPWS.
Saturn’s mission is to “Market vehicles developed and manufactured in the United States that are
world leaders . . . through the integration of people, technology, and business systems.” The figure
below summarizes the internal and external linkages needed to fit high-performance work systems
together.
Figure: Achieving Strategic Fit
(Source: George and Scott, Managing Human Resources, 14th Edition, 2006)
Establishing External Fit
To achieve external fit, high-performance work systems must support the organization’s goals
and strategies. This begins with an analysis and discussion of competitive challenges,
organizational values, and the concerns of employees and results in a statement of the
strategies being pursued by the organization. Xerox, for example, uses a planning process
known as “Managing for Results,” which begins with a statement of corporate values and
priorities. These values and priorities are the foundation for establishing three-to-five-year goals
for the organization.
Each business unit establishes annual objectives based on these goals, and the process cascades
down through every level of management. Ultimately, each employee within Xerox has a clear
“line of sight” to the values and goals of the organization so he or she can see how individual
effort makes a difference. Efforts such as this to achieve vertical fit help focus the design of high-
performance work systems on strategic priorities. Objectives such as cost containment, quality
enhancement, customer service, and speed to market directly influence what is expected of
employees and the skills they need to be successful. Terms such as involvement, flexibility,
efficiency, problem solving, and teamwork are not just buzzwords. They are translated directly
from the strategic requirements of today’s organizations. High-performance work systems are
designed to link employee initiatives to those strategies.
Many of these recommendations are applicable to almost any change initiative, but they are
especially important for broad-based change efforts that characterize high-performance work
systems. Some of the most critical issues are discussed next.
One of the best ways to communicate business needs is to show employees where the business
is today—its current performance and capabilities. Then show them where the organization
needs to be in the future. The gap between today and the future represents a starting point for
discussion. When executives at TRW wanted to make a case for change to high-performance
work systems, they used employee attitude surveys and data on turnover costs. The data
provided enough ammunition to get conversation going about needed changes and sparked
some suggestions about how they could be implemented. Highlights in Exhibit HPWS 13.1 show
what happened when BMW bought British Land Rover and began making changes without first
talking through the business concerns.
Ironically, in this case, BMW unwittingly dismantled an effective high-performance work system.
Now that Ford owns the company, will things work differently?
As a result of these changes, productivity soared by 25 percent, quality action teams netted
savings worth millions of dollars, and the quality of products climbed. Operating in a very
competitive environment, Land Rover produced and sold one-third more vehicles. On the basis
of these changes, the company was certified as an “Investors in People—U.K.” designee. This
national standard recognizes organizations that place the involvement and development of
people at the heart of their business strategy.
So far, so good; then BMW bought the company. In spite of massive evidence documenting the
effectiveness of the new management methods and changed culture, BMW began to dictate
changes within a matter of months. Unfortunately, the changes undid the cultural
transformation.
Land Rover never fully recovered under the new management. After losing more than $6 billion,
BMW sold off the company. Land Rover was later purchased by Ford Motor Company, which put it
under one roof with Volvo, Jaguar, and Aston Martin to create the Premier Auto Group. Ford has
continued to manufacture the Land Rover in England while
improving its quality, but this hasn’t been enough to turn Land Rover around.
In 2003, Land Rover finished near the bottom of the J.D. Power Initial Quality Study, 36th out of
37 brands, and the division has been showing losses. Land Rover’s 8000-strong workforce in
Solihull, England, has been put on notice by group chairman Mark Fields that it needs to alter its
culture and working practices to match those embraced by Ford. In the Ford production system,
teams of workers are supposed to take charge of and improve quality in their areas. Fields wants
the Solihull plant to operate like a nearby Jaguar plant in Halewood, England, which formerly
built Ford Escorts. The Halewood plant had been notorious for militancy, work stoppages,
absenteeism, and quality problems. Halewood later became Ford’s top factory after a decision
was made to transform it into a Jaguar factory, and a sweeping series of cultural, productivity, and
working-practice changes were put into place.
“We’ve taken a very positive set of first steps but there’s a lot of pavement in front of us,” said
Fields about Land Rover.
performance work system. They resented the loss of status and control that accompanied the
use of empowered teams.
If Solectron managers had participated in discussions about operational and financial aspects of
the business, they might not have felt so threatened by the change. Open exchange and
communication at an early stage pay off later as the system unfolds. Ongoing dialogue at all
levels helps reaffirm commitment, answer questions that come up, and identify areas for
improvement throughout implementation. Recall that one of the principles of high-performance
work systems is sharing information. This principle is instrumental to success both during
implementation and once the system is in place.
Involving the Union
The autocratic styles of management and confrontational approaches to labor negotiations are
being challenged by more enlightened approaches that promote cooperation and collaboration.
Given the sometimes radical changes involved in implementing high-performance work systems,
it makes good sense to involve union members early and to keep them as close partners in the
design and implementation process.
To establish an alliance, managers and labour representatives should try to create “win–win”
situations, in which all parties gain from the implementation of high-performance work systems.
In such cases, organizations such as Shell and Weyerhaeuser have found that “interest-based”
(integrative) negotiation rather than positional bargaining leads to better relationships and
outcomes with union representatives.
Trust is a fragile component of an alliance and is reflected in the degree to which parties are
comfortable sharing information and decision making. Manitoba Telecom Services has involved
union members in decisions about work practices, and because of this, company managers have
been able to build mutual trust and respect with the union. This relationship has matured to a
point at which union and company managers now design, select, and implement new
technologies together. By working hard to develop trust up front, in either a union or a
nonunion setting, it is more likely that each party will understand how high-performance work
systems will benefit everyone; the organization will be more competitive, employees will have a
higher quality of work life, and unions will have a stronger role in representing employees.
Most labor–management alliances are made legitimate through some tangible symbol of
commitment. This might include a policy document that spells out union involvement, letters of
understanding, clauses in a collective bargaining agreement, or the establishment of joint
forums with explicit mandates. MacMillan Bloedel, a Canadian wood product company now
owned by Weyerhaeuser, formed a joint operations committee of senior management and labor
representatives to routinely discuss a wide range of operational issues related to high-
performance work systems. These types of formal commitments, with investments of tangible
resources, serve two purposes: (1) They are an outward sign of management commitment, and
(2) they institutionalize the relationship so that it keeps going even if key project champions
leave.
In addition to union leadership, it is critical to have the support of other key constituents.
Leaders must ensure that understanding and support are solid at all levels, not just among those
in the executive suite. To achieve this commitment, some organizations have decentralized the
labor relations function, giving responsibility to local line managers and human resources
generalists, to make certain that they are accountable and are committed to nurturing a high-
performance work environment. Nortel Networks, for example, formally transferred
accountability for labor relations to its plant managers through its collective bargaining
agreement with the union. Line managers became members of the Employee Relations Council,
which is responsible for local bargaining as well as for grievance hearings that would formerly
have been mediated by HR. Apart from the commitment that these changes create, perhaps the
most important reason for giving line managers responsibility for employee relations is that it
helps them establish a direct working relationship with the union.
Once processes, agreements, and ground rules are established, they are vital to the integrity of
the relationship. As Ruth Wright, manager of the Council for Senior Human Resource Executives,
puts it, “Procedure is the ‘rug’ on which alliances stand. Pull it out by making a unilateral
management determination or otherwise changing the rules of the game, and the initiative will
falter. Procedure keeps the parties focused, and it is an effective means of ensuring that
democracy and fairness prevail.” In most cases, a “home-grown” process works better than one
that is adopted from elsewhere. Each organization has unique circumstances, and parties are
more likely to commit to procedures they create and own.
implementation gets under way. One reason is that pieces of the system are changed incrementally
rather than as a total program. Xerox Corporation found that when it implemented teams without
also changing the compensation system to support teamwork, it got caught in a bad transition. The
teams showed poorer performance than did employees working in settings that supported
individual contributions. Company executives concluded that they needed to change the entire
system at once, because piecemeal changes were detrimental. The other mistake organizations
often make is to focus on either top-down change driven by executives or bottom-up change
cultivated by the employees. Firms such as Champion International, now a part of International
Paper, and ASDA, a low-cost British retailer, are among the many companies that have found that
the best results occur when managers and employees work together. The top-down approach
communicates manager support and clarity, while the bottom-up approach ensures employee
acceptance and commitment.
Are employees getting the information they need to make empowered decisions?
Are training programs developing the knowledge and skills employees need?
Are employees being rewarded for good performance and useful suggestions?
Are employees treated fairly so that power differences are minimal?
Second, the evaluation process should focus on the goals of high-performance work systems. To
determine whether the program is succeeding, managers should look at such issues as the
following:
Are desired behaviors being exhibited on the job?
Are quality, productivity, flexibility, and customer service objectives being met?
Are quality-of-life goals being achieved for employees?
Is the organization more competitive than in the past?
Organizations achieve a wide variety of outcomes from high-performance work systems and
effective human resources management. We have categorized these outcomes in terms of either
employee concerns such as quality-of-work-life issues and job security or competitive challenges
such as performance, productivity, and profitability. Throughout the text we have emphasized that
the best organizations find ways to achieve a balance between these two sets of outcomes and
pursue activities that improve both.
There are a myriad of potential benefits to employees from high-performance work systems. In high-
performing workplaces, employees have the latitude to decide how to achieve their goals. In a
learning environment, people can take risks, generate new ideas, and make mistakes, which in turn
lead to new products, services, and markets. Because employees are more involved in their work,
they are likely to be more satisfied and find that their needs for growth are more fully met. Because
they are more informed and empowered, they are likely to feel that they have a fuller role to play in
the organization and that their opinions and expertise are valued more. This, of course, underpins
greater commitment. With higher skills and greater potential for contribution, they are likely to have
more job security as well as be more marketable to other organizations.
If employees with advanced education are to achieve their potential, they must be allowed to utilize
their skills and abilities in ways that contribute to organizational success while fulfilling personal job
growth and work satisfaction needs. High-performance work systems serve to mesh organizational
objectives with employee contributions. Conversely, when employees are underutilized,
organizations operate at less than full performance, while employees develop poor work attitudes
and habits.
Several organizational outcomes also result from using high-performance work systems. These
include higher productivity, lower costs, better responsiveness to customers, greater flexibility,
and higher profitability. Highlights in Exhibit 7.2 provide a sample of the success stories that
American companies have shared about their use of high-performance work systems.
• The Tennessee Eastman Division of the Eastman Chemical Company experienced an increase
in productivity of nearly 70 percent, and 75 percent of its customers ranked it as the top
chemical company in customer satisfaction.
• A study by John Paul MacDuffie of 62 automobile plants showed that those implementing
high-performance work systems had 47 percent better quality and 43 percent better
productivity.
• A study by Jeff Arthur of 30 steel minimills showed a 34 percent increase in
productivity, 63 percent less scrap, and 57 percent less turnover.
• A study by Mark Huselid of 962 firms in multiple industries showed that high-performance
work systems resulted in an annual increase in profits of more than $3,800 per employee.
(Source: Martha A. Gephart and Mark E. Van Buren, “The Power of High Performance Work Systems,”
Training & Development 50, no. 10 (October 1996): 21–36.)
Organizations can create a sustainable competitive advantage through people if they focus on four
criteria. They must develop competencies in their employees that have the following qualities:
Valuable: High-performance work systems increase value by establishing ways to increase
efficiency, decrease costs, improve processes, and provide something unique to customers.
Rare: High-performance work systems help organizations develop and harness skills, knowledge,
and abilities that are not equally available to all organizations.
Difficult to imitate: High-performance work systems are designed around team processes
and capabilities that cannot be transported, duplicated, or copied by rival firms.
Organized: High-performance work systems combine the talents of employees and rapidly
deploy them in new assignments with maximum flexibility.
These criteria clearly show how high-performance work systems, and human resources management
in general, are instrumental in achieving competitive advantage through people. However, for all
their potential, implementing high-performance work systems is not an easy task. The systems are
complex and require a good deal of close partnering among executives, line managers, HR
professionals, union representatives, and employees. Ironically, this very complexity leads to
competitive advantage. Because high-performance work systems are difficult to implement,
successful organizations are difficult to copy. The ability to integrate business and employee
concerns is indeed rare, and doing it in a way that adds value to customers is especially noteworthy.
Organizations such as Wal-Mart, Microsoft, and Southwest Airlines have been able to do it, and as a
result they enjoy a competitive advantage.
13.9 SUMMARY
High-performance work systems are specific combinations of HR practices, work structures, and
processes that maximize employee knowledge, skill, commitment, and flexibility. They are based on
contemporary principles of high-involvement organizations. These principles include shared
information, knowledge development, performance–reward linkages, and egalitarianism.
High-performance work systems are composed of several interrelated components. Typically, the
system begins with designing empowered work teams to carry out key business processes. Team
members are selected and trained in technical, problem-solving, and interpersonal skills. To align the
interests of employees with those of the organization, reward systems are connected to
performance and often have group and organizational incentives. Skill-based pay is regularly used to
increase flexibility and salaried pay plans are used to enhance an egalitarian environment.
Leadership tends to be shared among team members, and information technology is used to ensure
that employees have the information they need to make timely and productive decisions.
The pieces of the system are important only in terms of how they help the entire system function.
When all the pieces support and complement one another, high-performance work systems achieve
internal fit. When the system is aligned with the competitive priorities of the organization, it
achieves external fit as well. Implementing high-performance work systems represents a
multidimensional change initiative. High-performance work systems are much more likely to go
smoothly if a business case is first made. Top-management support is critical, and so too is the
support of union representatives and other important constituents. HR representatives are often
helpful in establishing a transition structure to help the implementation progress through its various
stages. Once the system is in place, it should be evaluated in terms of its processes, outcomes, and
ongoing fit with strategic objectives of the organization. When implemented effectively, high-
performance work systems benefit both the employees and the organization. Employees have more
involvement in the organization, experience growth and satisfaction, and become more valuable as
contributors. The organization also benefits from high productivity, quality, flexibility, and customer
satisfaction. These features together can provide an organization with a sustainable competitive
advantage.
Case Study-1
HPWS Transforms Nevada Plant into One in a Million
In the world of beverage plants, this milestone at Ocean Spray’s Henderson, Nevada, plant is
virtually unheard of—1 million operating hours without a lost-time accident. “This is an
accomplishment that very few in our industry ever achieve,” says Mike Stamatakos, vice-
president of operations for Ocean Spray. “A plant’s safety record is a reflection of how well it
is run. This milestone is an indication that Henderson does most—if not everything—well.”
The fact that Ocean Spray Henderson is one of the safest beverage plants around is no
accident. The plant’s impressive operations milestone is the result of a two-and-a-half-year
effort to improve safety awareness, uptime, and overall operations.
When the plant was built in 1994 to serve Ocean Spray customers west of the Rockies,
Ocean Spray had a vision to create a high-performance work system. The goal was to have
an educated and involved workforce that would raise the bar in terms of plant performance
and operations. As part of that effort, in 2001 Henderson managers began a dedicated
environmental health and safety (EHS) program. An early step in the process was bringing in
an occupational therapist to perform a job safety analysis on the plant. The EHS program
ranges from formal computer-based training—required of every employee—to fun
promotions designed to get employees engaged with the safety message. The Ocean Spray
Henderson plant staff is divided into four teams and each is measured on just how well it
performs. A bulletin board posts each team’s days without a recordable accident. A real-
time scoreboard on the plant floor provides workers a streaming update of the plant’s vital
performance statistics. The idea is that an informed worker is a stronger team member. The
plant operates on a just-in-time delivery and shipment schedule that helps keep things
running on time and within budget. Reaching the 1-million-hour milestone was a 2.5-year
journey. “It’s not just a case of the people in the front office talking the talk. It is the people
on the floor and everyone in the facility walking the walk,” says Jim Colmey, a safety
specialist at the plant.
QUESTIONS
1. What are the key aspects of Ocean Spray’s high-performance work system?
2. Do you think the system achieves both internal and external fit?
3. What other HR practices might the company consider implementing?
Source: Condensed from Andrea Foote, “One in a Million: Ocean Spray Henderson Has Parlayed Hard
Work and Dedication into a Remarkable Operations Milestone,” Beverage World 122, no. 8 (August 15, 2003): 22–29.
13.10 KEYWORDS
High-Performance Work System (HPWS) – a combination of HR practices, work structures, and
processes that maximizes employee knowledge, skill, commitment, and flexibility.
Labor structure – the way labor is distributed across the facility.
Egalitarianism – a trend of thought that favors equality for particular categories of, or for all, living
entities.
UNIT 14 WORK STATION STUDY AND ERGONOMICS
Objectives
After going through this unit, you will be able to:
design the workplace according to nature of work.
prepare a checklist for job design assessment.
estimate the normal working requirements with respect to area and tools.
recognize the symptoms of occupational overuse syndrome.
use the tools and diagrams at appropriate place.
apply ergonomics in each corner of your workplace.
Structure
14.1 Introduction
14.2 Method Study
14.3 Charts and Diagram used in Method study
14.4 Occupational Overuse Syndrome
14.5 Job Design
14.6 Work Measurement
14.7 Summary
14.8 Keywords
14.1 INTRODUCTION
Productivity has now become an everyday watchword. It is crucial to the welfare of an industrial firm
as well as to the economic progress of the country. High productivity means doing the work in the
shortest possible time, with the least expenditure on inputs, without sacrificing quality, and with
minimum wastage of resources.
Work-study forms the basis for work system design. The purpose of work design is to identify the
most effective means of achieving necessary functions. This work-study aims at improving the
existing and proposed ways of doing work and establishing standard times for work performance.
Work-study comprises two techniques, i.e., method study and work measurement.
“Method study is the systematic recording and critical examination of existing and proposed ways of
doing work, as a means of developing and applying easier and more effective methods and reducing
costs.”
“Work measurement is the application of techniques designed to establish the time for a qualified
worker to carry out a specified job at a defined level of performance.”
There is a close link between method study and work measurement. Method study is concerned
with the reduction of the work content and establishing the one best way of doing the job whereas
work measurement is concerned with investigation and reduction of any ineffective time associated
with the job and establishing time standards for an operation carried out as per the standard
method.
14.2 METHOD STUDY
Method study is the technique of systematic recording and critical examination of existing and
proposed ways of doing work and developing an easier and economical method.
Objectives of Method Study
Improvement of manufacturing processes and procedures.
Improvement of working conditions.
Improvement of plant layout and work place layout.
Reducing the human effort and fatigue.
Reducing material handling
Improvement of plant and equipment design.
Improvement in the utility of material, machines and manpower.
Standardization of method.
Improvement in safety standard.
Economic factors
Human factors
Technical factors
Economic Factors
The money saved as a result of a method study should be substantial; only then will the study
be worthwhile. Based on economic factors, the following jobs are generally selected:
Operations having bottlenecks (which hold up other production activities).
Operations done repetitively.
Operations having a great amount of manual work.
Operations where materials are moved for a long distance.
Human Factors
The method study will be successful only with the co-operation of all the people concerned, viz.
workers, supervisors, trade unions, etc. Workers may resist method study due to:
The fear of unemployment
The fear of reduction in wages
The fear of increased work load
If they do not accept the method study, the study should be postponed.
Technical Factors
To improve the method of work, all the technical details about the job should be available.
Every machine tool has its own capacity, beyond which it cannot be improved. For
example, a work study man may feel that the speed of the machine tool can be increased and an
HSS tool used, but the capacity of the machine may not permit the increased speed. In this
case, the suggestion of the work study man cannot be implemented. Technical factors of this
kind should be considered.
Record
All the details about the existing method are recorded. This is done by directly observing the
work. Symbols are used to represent the activities like operation, inspection, transport, storage
and delay. Different charts and diagrams are used in recording. They are:
Operation process chart: all the operations and inspections are recorded.
Flow process chart
- Man type: all the activities of the man are recorded.
- Material type: all the activities of the material are recorded.
- Equipment type: all the activities of the equipment or machine are recorded.
Two-handed process chart (right hand–left hand chart): the motions of both hands of the worker
are recorded independently.
Multiple activity chart: the activities of a group of workers doing a single job, or the
activities of a single worker operating several machines, are recorded.
Flow diagram: this is drawn to a suitable scale; the path of flow of material in the shop is
recorded.
String diagram: the movements of workers are recorded using a string on a diagram
drawn to scale.
Examine
Critical examination is done by questioning technique. This step comes after the method is
recorded by suitable charts and diagrams. The individual activity is examined by putting several
questions. The following factors are questioned:
Purpose – To eliminate the activity, if possible
Place – To combine or re-arrange the activities
Sequence -do-
Person -do-
Means – To simplify the activity
- Why does that person do it?
- Who else could do it?
- Who should do it?
Means – How is it done?
- Why is it done that way?
- How else could it be done?
- How should it be done?
By doing this questioning:
- Unwanted activities can be eliminated
- Number of activities can be combined or re-arranged
- Method can be simplified.
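
The questioning pattern above lends itself to a small reusable checklist. The sketch below is only an illustrative aid (it is not part of the original text): the activity examined is hypothetical, and the Purpose, Place, and Sequence question wordings follow the standard method-study pattern implied by the Person and Means questions shown above.

    # Primary and secondary questions of the critical-examination (Examine) step.
    # Person and Means wordings follow the text; the others follow the same pattern.
    QUESTIONS = {
        "Purpose":  ["What is done?", "Why is it done?",
                     "What else could be done?", "What should be done?"],
        "Place":    ["Where is it done?", "Why is it done there?",
                     "Where else could it be done?", "Where should it be done?"],
        "Sequence": ["When is it done?", "Why is it done then?",
                     "When else could it be done?", "When should it be done?"],
        "Person":   ["Who does it?", "Why does that person do it?",
                     "Who else could do it?", "Who should do it?"],
        "Means":    ["How is it done?", "Why is it done that way?",
                     "How else could it be done?", "How should it be done?"],
    }

    def examine(activity):
        """Print the full question set to be asked about one recorded activity."""
        print(f"Critical examination of: {activity}")
        for factor, questions in QUESTIONS.items():
            print(f"  {factor}:")
            for question in questions:
                print(f"    - {question}")

    # Hypothetical activity taken from a flow process chart:
    examine("Move part to inspection bench by trolley")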
Define
Once a complete study of a job has been made and a new method is developed, it is necessary
to obtain the approval of the management before installing it. The work study man should
prepare a report giving details of the existing and proposed methods. He should give his reasons
for the changes suggested. The report should show
The cost of installing the new method including
- Cost of new tools and equipment
- Cost of re-layout of the shop
- Cost of training the workers in the new method
- Cost of improving the working conditions
Written standard practice: Before installing the new method, an operator’s instruction sheet
called written standard practice is prepared. It serves the following purposes:
It records the improved method for future reference in as much detail as may be
necessary.
It is used to explain the new method to the management, foremen, and
operators.
It gives the details of changes required in the layout of machine and work places.
It is used as an aid to training or retraining operators.
It forms the basis for time studies.
The written standard practice will contain the following information:
Tools and equipment to be used in the new method.
General operating conditions.
Description of the new method in detail.
Diagram of the workplace layout and sketches of special tools, jigs or fixtures
required.
Install
This step is the most difficult stage in method study. The active support of both
management and the trade unions is required, and the work study man needs skill in getting along
with other people and winning their trust. The install stage consists of:
Gaining acceptance of the change by supervisor.
Getting approval of management.
Gaining the acceptance of change by workers and trade unions.
Giving training to operators in the new method.
Being in close contact with the progress of the job until it is satisfactorily executed.
Maintain
The work study man must see that the new method introduced is followed. The workers may, after
some time, slip back to the old methods; this should not be allowed. The new method may
have defects, and there may be difficulties as well. These should be rectified in time by the work
study man. Periodic reviews are made. The reactions and suggestions from workers and supervisors are
noted. These may lead to further improvement. The differences between the new written
standard practice and the actual practice are found out. Reasons for variations are analyzed.
Changes due to valid reasons are accepted. The instructions are suitably modified.
Operation
A large circle indicates operation. An operation takes place when there is a change in
physical or chemical characteristics of an object. An assembly or disassembly is also an
operation.
When information is given or received, or when planning or calculating takes place, it is
also called an operation.
Example 1.1
Reducing the diameter of an object in a lathe; hardening the surface of an object by
heat treatment.
Inspection
A square indicates inspection. Inspection is checking an object for its quality, quantity or
identifications.
Example 1.2
Checking the diameter of a rod; counting the number of products produced.
Transport
An arrow indicates transport. This refers to the movement of an object or operator or
equipment from one place to another. When the movement takes place during an
operation, it is not called transport.
Example 1.3
Moving the material by a trolley
Operator going to the stores to get some tool.
Permanent storage
An equilateral triangle standing on its vertex represents storage. Storage takes place
when an object is stored and protected against unauthorized removal.
Example 1.5
Raw material in the store room
Combined activity
When two activities take place at the same time, or are done by the same operator or at the
same place, the two activity symbols are combined.
Example 1.6
Reading and recording a pressure gauge, here a circle inside a square represents the
combined activity of operation and inspection.
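
The recording symbols above map naturally onto a simple data representation of a flow process chart. The following sketch is only an illustration (the chart entries are hypothetical and not taken from the text; the usual delay symbol is included alongside the symbols described above): it stores each activity with its symbol type and then counts how many operations, inspections, transports, delays, and storages the recorded method contains.

    from collections import Counter

    # Activity types used when recording a flow process chart.
    ACTIVITY_TYPES = ("Operation", "Inspection", "Transport", "Delay", "Storage")

    # A hypothetical material-type flow process chart: (activity type, description).
    chart = [
        ("Storage",    "Raw material in the store room"),
        ("Transport",  "Move bar stock to the lathe by trolley"),
        ("Operation",  "Reduce the diameter of the object on the lathe"),
        ("Inspection", "Check the diameter of the rod"),
        ("Transport",  "Move the part to the heat-treatment furnace"),
        ("Operation",  "Harden the surface by heat treatment"),
        ("Storage",    "Finished part in the store room"),
    ]

    def summarize(chart):
        """Count how many times each activity type occurs in the chart."""
        counts = Counter(activity for activity, _ in chart)
        return {activity: counts.get(activity, 0) for activity in ACTIVITY_TYPES}

    if __name__ == "__main__":
        for activity, count in summarize(chart).items():
            print(f"{activity:10s}: {count}")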
14.4 OCCUPATIONAL OVERUSE SYNDROME
The condition Occupational Overuse Syndrome (OOS) is a collective term for a range of conditions,
including injury, characterized by discomfort or persistent pain in muscles, tendons and other soft
tissues.
The Symptoms
It is necessary to distinguish the symptoms of OOS from the normal pains of living, such as
muscle soreness after unaccustomed exercise or activity. OOS pains must also be distinguished
from the pain of arthritis or some other condition. The early symptoms of OOS include:
• Muscle discomfort
• Fatigue
• Aches and pains
• Soreness
• Hot and cold feelings
• Muscle tightness
• Numbness and tingling
• Stiffness
• Muscle weakness.
The Causes
OOS often develops over a period of time. It is usually caused or aggravated by some types of work. The
same conditions can be produced by activities away from the workplace. The work that may
produce OOS often involves repetitive movement, sustained or constrained postures and/or
forceful movements. The development of OOS may include other factors such as stress and
working conditions. Some conditions that fall within the scope of OOS are well defined and
understood medically, but many are not, and the reasons for their cause and development are
yet to be determined. There are several theories about the causes of OOS. One of these is as
follows and gives a useful picture which leads on to prevention strategies:
Muscles and tendons are nourished by blood which travels through blood vessels inside the
muscle. A tense muscle squeezes on these blood vessels, making them smaller and slowing
the flow of blood. The muscle can store a little oxygen to cope with momentary tension, but
when this is used up the muscle must switch to a very inefficient form of energy production.
This uses the stored energy very quickly, tires the muscle, and leads to a build-up of acid
waste products, which make the muscle hurt. As these wastes build up in the muscle, it
becomes mechanically stiff and this makes it still harder for the muscle to work. The muscle
and tendons can withstand fatigue and are able to recover if they are given a variety of
tasks, and regular rest breaks. It may be the absence of variety and rest breaks that strains
the muscles and tendons beyond their capacity for short-term recovery.
Absence of variety and rest breaks may strain muscles and tendons beyond their capacity for
short-term recovery.
(Source: Occupational Safety and Health Service of the Department of Labor, New Zealand)
Occupational Overuse Syndrome has been known for centuries and affects people in a wide range
of occupations.
(Source: Occupational Safety and Health Service of the Department of Labor, New Zealand)
• Hairdressers
• Typists
• Mail sorters
• Supermarket workers
• Carpenters
History shows that OOS has occurred in a variety of occupations. An Italian physician described
OOS in 18th-century scribes and clerks, while 19th-century terms such as
“Upholsterer’s Hand” and “Fisherwoman’s Finger” are further examples of OOS.
Prevention
There are five main areas in which we can prevent Occupational Overuse Syndrome. These are:
• The design of equipment and tasks
• The organization of work
• The work environment
• Training and education
• The development of policies
Prevention is always better than cure, but it is particularly important when dealing with OOS.
This is because of its widespread nature, the difficulty of treatment and its potentially
debilitating consequences. A sample checklist (see Appendix B) is one of a series of seven
covering policy development for OOS, work organization, workplace design, keyboard
workstation design and working technique. It illustrates how the above aspects of work may be
assessed using such checklists.
14.5 JOB DESIGN
Where possible, job rotation, automation, and task modification should be considered to reduce the
effects of sustained postures and repetitive movements. Good job design will incorporate a range of
factors, including consultation with the worker, so that there is a match between the individual and
the job.
A good physical match between the person and the workplace is illustrated by:
• The head inclines only slightly forward.
• The arms fall naturally on to the work surface.
• The back is properly supported.
• There is good knee and leg room.
Good workstation design allows the operator’s joints to be comfortable and free from strain
(Source: Occupational Safety and Health Service of the Department of Labor, New Zealand)
14.6 WORK MEASUREMENT
Work measurement is a technique to establish the time required for a qualified worker to carry out a
specified job at a defined level of performance.
Objectives of work measurement
To reduce or eliminate non-productive time.
To fix the standard time for doing a job.
To develop standard data for future reference.
To improve methods.
Uses of work measurements
To compare the efficiency of alternate methods. When two or more methods are
available for doing the same job, the time for each method is found out by work
measurement. The method which takes minimum time is selected.
It helps for the estimation of cost. Knowing the time standards, it is possible to work out
the cost of the product. This helps to quote rates for tenders.
It helps to determine the requirement of men and machines. When we know the time to
produce one piece and the quantity to be produced, it is easy to calculate the total
requirement of men and machines (a numerical sketch follows this list).
It helps in better production control. Time standards help accurate scheduling. So, the
production control can be done efficiently.
It helps to control the cost of production. With the help of time standards, the cost of
production can be worked out. This cost is used as a basis for control.
It helps to fix the delivery date to the customer. By knowing the standard time, we will
be able to calculate the time required for manufacturing the required quantity of
products.
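
As a rough numerical sketch of the use noted above (determining the requirement of men and machines), the example below computes how many machines are needed to meet a given demand from a known standard time per piece. All figures (standard time, demand, shift length, utilization) are assumed for illustration and do not come from the text.

    import math

    def machines_required(standard_time_min, pieces_required, available_min, utilization=0.85):
        """Estimate how many machines (or operators) are needed to meet demand.

        standard_time_min : standard time to produce one piece, in minutes
        pieces_required   : quantity to be produced in the planning period
        available_min     : working minutes available per machine in that period
        utilization       : assumed fraction of the available time that is productive
        """
        workload_min = standard_time_min * pieces_required
        effective_min_per_machine = available_min * utilization
        return math.ceil(workload_min / effective_min_per_machine)

    # Illustrative (assumed) figures: 4.5 min standard time per piece, 10,000 pieces
    # per month, one 8-hour shift for 25 working days, 85 percent utilization.
    available = 8 * 60 * 25
    print(machines_required(4.5, 10_000, available))  # -> 5 machines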
Production study.
Work sampling or Ratio delay study.
Synthesis from standard data.
Analytical estimating.
Predetermined motion time system.
Objectives of Ergonomics
The objective of the study of ergonomics is to optimize the integration of man and machine so as to
increase work rate and accuracy. It involves:
The design of a work place befitting the needs and requirements of the worker.
The design of equipment, machinery, and controls in such a manner as to minimize
mental and physical strain on the worker, thereby increasing efficiency.
The design of a conducive environment for executing the task most effectively.
Work study and ergonomics are complementary; both try to fit the job to the worker,
but ergonomics additionally takes care of the factors governing physical and mental strain.
Applications
In practice, ergonomics has been applied to several areas as discussed below
Working environments
The work places
Other areas
Working environments
The environment aspect includes considerations regarding light, climatic conditions (i.e.,
temperature, humidity and fresh air circulation), noise, bad odor, smokes, fumes, etc.,
which affect the health and efficiency of a worker.
Daylight should be reinforced with artificial lights, depending upon the nature of work.
Dust and fume collectors should preferably be attached to the equipment giving rise
to them.
Glares and reflections coming from glazed and polished surfaces should be avoided.
Excessive contrast, owing to color or badly located windows, etc., should be avoided.
Noise no doubt distracts attention, but if it is low and continuous, workers become
habituated to it. When the noise is high-pitched, intermittent, or
sudden, it is more harmful and needs to be dampened by isolating the source of the noise
and through the use of sound-absorbing materials.
The work places
Tools and materials should preferably be in the order in which they will be used.
The supply of materials or parts, if similar work is to be done by each hand, should be
duplicated; that is, materials or parts to be assembled by the right hand should be kept on the
right-hand side and those to be assembled by the left hand should be kept on the
left-hand side.
Gravity should be employed, wherever possible, to make raw materials reach the
operator and to deliver material at its destination (e.g., dropping material through a
chute).
Height of the chair and work bench should be arranged in a way that permits a
comfortable work posture (a worked sketch follows the figure below). To ensure this:
- Height of the chair should be such that the top of the work table is about 50 mm
below the elbow level of the operator.
- Height of the table should be such that the worker can work in both standing and
sitting positions.
- Flat foot rests should be provided for sitting workers.
- Figure 14.5 shows the situation with respect to bench heights and seat heights.
- The height and back of the chair should be adjustable.
- Display panels should be at right angles to the line of sight of the operator.
An instrument with a pointer should be employed for check readings, whereas for
quantitative readings a digital type of instrument should be preferred.
It should be possible to pick up hand tools with the least disturbance of the rhythm and
symmetry of movements.
Foot pedals should be used, wherever possible, for clamping, de-clamping, and for
disposal of finished work.
- It should be possible to operate handles, levers, and foot pedals without
changing body position.
Work place must be properly illuminated and free from glare to avoid eye strain.
Work place should be free from the presence of disagreeable elements like heat,
smoke, dust, noise, excess humidity, vibrations etc.
Figure 14.5: Bench and Seat Heights
(Source: Process, Planning and Cost Estimation, 2nd Edition, 2008)
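
As a worked illustration of the 50 mm guideline in the list above (and of the bench and seat heights shown in Figure 14.5), the sketch below derives a suggested work-table height from an operator's sitting elbow height. The elbow-height figure used is an assumed example value, not a dimension given in the text.

    def table_height_mm(sitting_elbow_height_mm, clearance_below_elbow_mm=50):
        """Suggested work-table height: about 50 mm below the operator's elbow level.

        sitting_elbow_height_mm  : elbow height above the floor while seated (assumed input)
        clearance_below_elbow_mm : the roughly 50 mm gap recommended in the text
        """
        return sitting_elbow_height_mm - clearance_below_elbow_mm

    # Illustrative (assumed) sitting elbow height of 700 mm above the floor:
    print(table_height_mm(700))  # -> 650 mm suggested table-top height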
Suggested Work Place Layout
The figure shows a work place layout with different areas and typical dimensions. It shows the
left hand covering the maximum working area and the right hand covering the normal
working area.
and commonly employed tools (CET) (like screw driver, pliers, etc.) lie in the normal working
area A-2. ORT
Other areas
Other areas include studies related to fatigue, losses caused due to fatigue, rest pauses, amount
of energy consumed, shift work and age considerations.
SAMPLE CHECKLIST
Table: Risk factors
A “No” answer can indicate an increased risk of OOS, but all factors should be considered.
(Yes = tends to decrease the risk of OOS; No = tends to increase the risk of OOS)
A Task Specification:
… performance? *
B Task Nature:
C Task Organization:
3 If any recent changes have been made to work/tasks, was the risk
of OOS taken into consideration? *
D Amount/Rate of Work:
1 Does the method of payment avoid systems which may increase the
risk of OOS? *
E Organizational Practices:
A2 A performance specification removes uncertainty and gives everyone concrete goals to aim
for. For example: a 1–3 page document will be done by 4.30 pm if presented by noon; otherwise,
it will be ready by noon the next day.
A3 and A4 Positive feedback on good performance always improves morale, regardless of the
person’s position. Sometimes upward feedback needs to be formalized.
B5 Where a job allows for and/or requires decision-making, creativity, and initiative, and leads to
further learning, workers are likely to be more involved.
B6 Stress is an important feature in the development of OOS.
B7 A person with two supervisors can end up having to meet conflicting deadlines.
C2 The micro-pause technique consists of using a 5–10 second complete relaxation for every
three minutes of work. In line with ergonomic theory, productivity increases may be expected
when the micro-pause technique is carried out properly. Micro-pauses are ineffective unless the
person relaxes fully during them.
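
Note C2's schedule (a 5-10 second relaxation for every three minutes of work) costs only a small fraction of the total working time; the short sketch below simply makes that arithmetic explicit.

    def micro_pause_share(pause_seconds, work_minutes=3):
        """Fraction of total time spent in micro-pauses for a given pause/work cycle."""
        work_seconds = work_minutes * 60
        return pause_seconds / (work_seconds + pause_seconds)

    for pause in (5, 10):
        print(f"{pause:2d} s pause per 3 min of work -> {micro_pause_share(pause):.1%} of total time")
    # -> roughly 2.7 % and 5.3 % of the total time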
C3 Changes which are often associated with the development of OOS are speeding up the work,
the introduction of heavier workloads, overtime or a bonus system of payment, the arrival of a
new supervisor, or being assigned to new duties.
D1 Bonus systems and the “job and finish” payment method are both likely to increase the risk
of OOS because they may encourage people to work beyond their natural capacity.
D2 Overtime increases the amount of work and decreases the time for recovery.
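
Because each "No" answer in the sample checklist tends to indicate increased OOS risk, a completed checklist can be tallied very simply. The sketch below is purely hypothetical: the item wordings and answers are placeholders, and the checklist itself assigns no numeric score.

    # Hypothetical checklist answers: True represents "Yes", False represents "No".
    answers = {
        "C3: Was the risk of OOS considered when recent changes were made?": True,
        "D1: Does the method of payment avoid systems that may increase OOS risk?": False,
        "D2: Is overtime limited so that recovery time is preserved?": False,
    }

    def items_flagged(answers):
        """Return the items answered 'No', i.e. the factors tending to increase OOS risk."""
        return [item for item, answered_yes in answers.items() if not answered_yes]

    flagged = items_flagged(answers)
    print(f"{len(flagged)} of {len(answers)} items flagged for follow-up:")
    for item in flagged:
        print(" -", item)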
14.7 SUMMARY
Ergonomics is the study of work in relation to the environment in which it is performed (the
workplace) and those who perform it (workers). Ergonomics is a broad science encompassing the
wide variety of working conditions that can affect worker comfort and health, including factors such
as lighting, noise, temperature, vibration, workstation design, tool design, machine design, chair
design and footwear, and job design, including factors such as shift work, breaks, and meal
schedules. The information in this module will be limited to basic ergonomic principles for sitting
and standing work, tools, heavy physical work, and job design.
14.8 KEYWORDS
Method Study – the process of subjecting work to systematic, critical scrutiny to make it more
effective and/or more efficient. It is one of the keys to achieving productivity improvement.
Flow Process Chart – provides a visual representation of the steps in a process. Flow charts are
also referred to as process maps or flow diagrams.
Process Symbols – symbols that indicate the actions that take place or the locations where
processes occur.
Ergonomics- Ergonomics (or human factors) is the scientific discipline concerned with the
understanding of the interactions among humans and other elements of a system, and the
profession that applies theoretical principles, data and methods to design in order to optimize
human wellbeing and overall system performance.