Unit – II: Assistive Technology Assessments
Syllabus: Overview of assessment issues, overview of general assessments, assistive
technology assessments, assessment components.
2.1. OVERVIEW OF GENERAL ASSESSMENT ISSUES
Most assessments are conducted to identify strengths and struggles, determine program
eligibility, document progress, select interventions, and/or conduct research. In reality, AT
assessments may be conducted for all these reasons. The assessment concepts include
reliability, validity, and frame of reference.
Reliability is defined as the consistency with which an assessment instrument (e.g., a test, rating scale, or observation checklist) measures a particular construct. During AT assessments, it
is important that the evaluation instruments used yield consistent results so that the results
obtained today would be the same if the instruments were administered tomorrow or the next
day. When unreliable instruments are used, important decisions can be made based on
obtained data, only to end with the discovery that the data were wrong and the decisions
flawed.
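One common way to quantify the consistency described above is test-retest reliability: the correlation between scores from two administrations of the same instrument. The sketch below is purely illustrative (the function name and all scores are invented for demonstration), assuming a simple Pearson correlation as the reliability index:

```python
# Illustrative sketch: test-retest reliability as the Pearson correlation
# between two administrations of the same rating scale.
# All scores below are invented for demonstration.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

day1 = [12, 18, 9, 15, 11, 20]   # scores obtained today
day2 = [13, 17, 10, 14, 12, 19]  # scores from a repeat administration

r = pearson(day1, day2)
print(f"test-retest reliability r = {r:.2f}")  # values near 1.0 indicate consistency
```

A coefficient near 1.0 suggests the instrument yields consistent results across administrations; a low coefficient would signal exactly the problem described above, where decisions rest on unstable data.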
Validity refers to the degree to which an instrument measures what it intends to measure. When teachers are
asked to complete rating scales on a student’s reading abilities, the AT evaluation team
expects that the results of the scale will be indicative of the student’s reading skills. If the
rating scale’s results indicate that the student is a poor reader but, when asked to read, the
student turns out to be an excellent reader, the scale’s results are not valid.
Every evaluation has a frame of reference. Norm-referenced tests compare student
performance to that of the student’s peers (i.e., the test’s standardization or normative
sample). Criterion-referenced instruments evaluate performance in terms of mastery of
specific skills. Non-referenced tests do neither; instead, they provide information about
performance intrinsic to the individual. Examples of non-referenced measures are reading
miscue analyses and error analyses of math problems.
Three key assessment concepts are of particular relevance to AT assessments. First, the
assessments must be ecological. Second, AT assessments should be practical. Finally, the
assessments must be ongoing.
Ecological Assessment
Ecological assessment has as its core philosophy the idea that behavior of any type does not
occur in a vacuum or in any single location. Thus, ecological assessments consider the
person’s multiple environments in which behaviors occur. Ecological assessment is
particularly applicable to AT assessments because devices are used in a variety of settings
and involve a number of significant people (e.g., professionals, peers, family members). As a
result, effective assessments consider the various contexts where the device will be used and
the people with whom the user will interact.
Consider a seventh-grade student with learning disabilities who is being evaluated for AT
adaptations to print access (reading) problems. In what contexts or settings does the student
have to read text? Typical middle school students attend science class in one room, social
studies in another, English class in another, and so forth. So the assessment has to consider
each classroom environment and each content-area teacher. It is critical for two reasons that
each teacher be a part of the assessment process. First, each teacher has important
information about the demands of the class and his or her expectations (how much material is
read, how information is presented during class time) and the student’s abilities and behaviors
that have been demonstrated in the classroom. Second, the teacher’s input, as well as that
from parents, peers, and other professionals, demonstrates a level of involvement in the
assessment process that will more likely lead to device use after the technology is
determined. Teachers who feel that they are involved in the assessment process, whose input
was considered, and who participated in on-site trial use of the device are more likely to “buy
into” the device’s use and participate in training activities concerning its use.
There are a variety of ways in which data can be gathered from teachers, including the use of
teacher interviews and rating scales. Teachers provide information about a variety of tasks
that are assigned to students or expectations of student participation and also can identify,
from the teacher’s perspective and experience, student strengths and struggles in reading and
other academic areas. By having each of the student’s teachers complete the scale, an
ecological perspective can be gained about the student’s characteristics and how they match
the demands of the academic environment.
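Pooling each teacher's ratings makes the ecological picture concrete. The sketch below is a hypothetical illustration, assuming a simple 1-to-5 rating scale; the class names, skill areas, and ratings are all invented:

```python
# Hypothetical sketch: pooling rating-scale results from several teachers
# to build an ecological picture of a student's skills.
# Class names, skill areas, and ratings (1 = weak, 5 = strong) are invented.

ratings = {
    "science":        {"reading": 2, "listening": 4, "organization": 3},
    "social_studies": {"reading": 1, "listening": 4, "organization": 2},
    "english":        {"reading": 2, "listening": 5, "organization": 3},
}

for skill in ["reading", "listening", "organization"]:
    scores = [r[skill] for r in ratings.values()]
    mean = sum(scores) / len(scores)
    spread = max(scores) - min(scores)  # a large spread suggests setting-specific demands
    print(f"{skill:12s} mean={mean:.1f} range={spread}")
```

A consistently low mean (here, reading) flags a struggle that spans settings, while a large range points to demands peculiar to one classroom, which is precisely the kind of contrast an ecological assessment is meant to reveal.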
Assistive technology team members may wish to create and use their own rating scales. This
task is rather easy and can be accomplished by a simple review of various developmental
checklists and scope-and-sequence charts. A sample rating scale that looks at spoken
language skills is provided in Table, which was designed after examining language
development texts to identify pertinent skills for examination. If desired, members of the
assessment team can meet with teachers and use the rating scales during an interview as a
way to gather data from the teacher. Interviews have the advantage over typical rating scale
completion (where the scale is completed by the rater alone) in that the interviewer can probe
responses to gather additional information. Regardless of the manner in which data are
gathered, the key ingredient is that information is being obtained from a number of people in
a variety of settings. This is the importance of ecological assessment.
Practical Assessment
With respect to AT assessments the term practical is used in its dictionary sense: “pertaining
to or concerned with practice or action”. Thus, ideas presented under ecological assessment
are continued by actually using the device in the settings (the classroom, the workplace)
where the student’s actions will take place. Practical assessments allow the user to gain
experience using the device in natural environments and receive training on the device
simultaneously. Practical assessment also allows for training on the device to be conducted
with various people (teachers, co-workers, supervisors) in the user’s multiple environments,
which is another AT service. After a device has been selected and matched to the user,
assessment continues in multiple contexts where the device will be used.
Assistive technology assessment team members may wish to create their own checklists that
most appropriately match the contexts of the individual. Either way, gathering data across
multiple environments, while training people within those environments, can be time well
spent during the AT assessment.
A second feature of practical assessment is seeking information and participation from
specialists. Think of AT assessment teams in terms of “cafeteria plans.” A cafeteria plan is a
useful term that describes the availability of a number of options. The insurance industry uses
the term to describe plans that have a variety of options (disability riders, unemployment
compensations) that policy holders consider as they purchase the policy, in the same way that
one goes through the line at the cafeteria and selects items from among those laid out by
servers.
Table lists professionals who should be “on call” for membership on the AT assessment
team. As is depicted in the table, the user should always be a member of the team. A member
of the user’s family, if available, should also be a member of the team. An AT specialist, if
available, should be a member of every team. If the user is a student, at least one teacher
should be a team member. If the student is receiving special education services while also
attending classes in general education, then a special educator and a general educator should
be team members. We would also suggest that building principals serve on AT assessment
teams, if for no other reason than to demonstrate support of the team’s actions and provide
leadership in AT implementation in the school. Speech/language pathologists should always
be key team members when augmentative and alternative communication devices are being
considered. Assistive technology considerations involving seating and positioning issues
necessitate team inclusion of physical and occupational therapists. When decisions are being
made concerning vocational rehabilitation, rehabilitation counselors should be members of
the team. We would argue that counselors also should be involved if the user is a student who
is transitioning from special education services into rehabilitation services, whether those
services will be provided in postsecondary or workplace settings. Other members of the AT
team should be selected as needed and as appropriate.
Ongoing Assessment
A major purpose of assessment is to document the presence of a disability and determine
placement eligibility. Such assessments usually occur once, and the information is secured in
the student’s cumulative folder for reference and archival purposes. Assistive technology
assessments, on the other hand, never end. Assessments continue in one form or another
indefinitely, because the use of devices is monitored and evaluated continuously with follow-
ups and follow-alongs to ensure that the decisions of the assessment team were accurate and
that the device is being used effectively and as recommended. In addition, ongoing
assessments ensure that AT decisions that were initially helpful do not become obsolete or
inappropriate.
Throughout the course of this text, we discuss the benefits of AT in helping a person
compensate for functional limitations. For the technology to be effective, it must be matched
to the individual; meshed with other devices, services, and individuals; and evaluated
constantly to make sure that the technology adaptation is valid; that is, the adaptation is
benefiting the individual as hypothesized. Think of AT use as a hypothetical experiment, a
testing of a hypothesis, so to speak. In this line of thinking, the AT assessment team
hypothesizes that a particular device will benefit the individual in compensating for
disability-related or functional limitations. The hypothesis is made after a thorough
evaluation of the individual and the contexts within which he or she learns, works, and plays.
Sometimes the hypothesis proves correct (the device is helpful), but sometimes the
hypothesis proves false (the device does not result in expected performance gains). The
hypothesis must be tested continuously over time; this ongoing testing is a critical yet too
often overlooked evaluation component.
2.2. ASSISTIVE TECHNOLOGY ASSESSMENTS
Assistive technology assessments should incorporate a multidimensional assessment model
that recognizes the dynamic interplay of various factors across contexts and over time
(Table). As already implied, selecting AT devices requires careful analysis of the interplay
among (a) the user’s specific strengths, struggles, special abilities, prior experience/
knowledge, and interests; (b) the specific tasks to be performed (e.g., compensating for a
reading, writing, or mobility problem); (c) specific device qualities (e.g., reliability,
operational ease, technical support, cost); and (d) the specific contexts of interaction (across
settings—school, home, work; and over time—over a semester or a lifetime).
Tasks
As part of living and breathing on planet Earth, people perform a myriad of tasks and
functions. These pertain to independent functioning, academics, play, work, and so forth.
Each task has requisite skills (smaller tasks) that lead to the completion of the larger task.
Part of an ecological assessment is identifying what tasks are to be performed, in what
settings or contexts the tasks are performed, and who are significant players (co-workers, job
coaches, and teachers) in those settings. Documentation of these tasks provides a contextual
mapping of the person’s life and activities.
Context
Context is an important consideration when conducting AT assessments. The specific
contexts of interaction (across school, home, and work settings and over time, whether a
semester or a lifetime) are examined as the assessment team examines how an AT device fits
into a person’s daily routine. For example, if a boy uses a wheelchair, all the various places
he goes, along with his mobility needs, are considered. How are his classrooms configured?
What does his home look like? What accessibility issues must be considered? When these
and other questions are considered, the person–technology match comes into a clearer focus.
Individual
The individual who will use the device is central to the assessment. The person’s specific
strengths and functional limitations in a variety of areas (sensory, affect, cognitive, motor),
prior experiences, and interests all must be examined by the assessment team. An
examination of the individual’s cumulative records will yield considerable information in all
of these areas. For example, juniors in high school who have been receiving special education
services since third grade have been tested and retested dozens of times over the course of
their academic lives, and there is a wealth of anecdotal data that has been collected in their
school folders over the years. Before conducting additional assessments, these records should
be thoroughly reviewed and data should be extracted. If there is a lack of data on sensory and
motor skills and the need for additional data gathering is identified, then additional testing
should take place.
Testing can include gathering quantitative and qualitative data. Quantitative data consist of
norm-referenced test scores that compare a person’s assessment performance to that of a peer
group; intelligence test scores are perfect examples. Criterion-referenced tests also yield
quantitative data in that they identify areas of mastery; for example, students demonstrate
mastery of double-digit addition when they achieve a preset criterion (e.g., 9 of 10 items
correct) on a test. Checklists of observed behavior also can yield quantitative data, depending
on how the checklist is scored.
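The criterion-referenced scoring rule just described is simple to express in code. This is a minimal sketch, assuming the 9-of-10 criterion from the example; the function name and item responses are invented:

```python
# Sketch of a criterion-referenced scoring rule: mastery is declared when
# the examinee reaches a preset criterion, e.g., 9 of 10 items correct.

def mastered(responses, criterion=9):
    """responses: one boolean per test item; True means the item was correct."""
    return sum(responses) >= criterion

quiz = [True] * 9 + [False]  # 9 of 10 double-digit addition items correct
print(mastered(quiz))                       # True: criterion met
print(mastered([True] * 7 + [False] * 3))   # False: below criterion
```

Note that the result says nothing about how the student compares to peers; unlike a norm-referenced score, it reports only whether the specific skill has been mastered.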
Qualitative data can be accumulated through observations, interviews, document review, and
rating scales that yield descriptive information. Qualitative information can also be obtained
by watching examinees and recording their actions (e.g., body tension, facial strain,
grimaces) during testing. Additional information can be gleaned through error analyses of
missed test items.
In summary, effective assessments yield quantitative and qualitative information that,
together, is useful in making adaptation decisions. Neither source of information is sufficient
in and of itself. It should be remembered that the more data that are gathered, the more
reliable will be the decisions that are made based on available data.
Device
Assistive technology teams have a wide selection of devices from which to choose. That is a
good thing, because it allows team members to begin the assessment process with a corpus of
potential technologies that can be considered during the assessment process. Unfortunately,
not all AT devices are created equal, and it is necessary for the devices themselves to be
evaluated. Assistive technology team members should create their own checklists for each
type of device they prescribe. Through personal experiences, literature reviews, or interviews
with users, AT assessment team members can complete evaluations on a variety of devices
and consider those evaluations when making decisions.
Care should be taken not to become enamored with a specific device before the evaluation
takes place. More than one parent or practitioner has come away from a product
demonstration workshop with the feeling, “Now, that is what my child [student] needs!”
Sadly, not all devices that appear spectacular during demonstrations meet expectations when
purchased. Each device must be evaluated according to specific criteria. Usually those criteria
relate to reliability (i.e., consistency of use), validity (i.e., does the device do what it is
intended to do?), technical support (e.g., does the toll-free tech support number have a human
being on the other end, and is the local sales representative capable of providing on-the-spot
technical assistance?), and cost (what is the cost-benefit ratio?). Regarding cost, we have
often heard two sides disagree over a particular device because of the device’s price tag. In
all cases, the bottom line should be whether the device meets the needs of the user. If two
devices are equally reliable, yield valid results, and the vendors offer equal technical support,
we would follow the lead of generic prescription drugs and select the less expensive device.
But that should be the decision of the assessment team based on the information at hand.
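The tie-breaking logic suggested above (prefer quality first, then lower cost among otherwise equal devices) can be sketched as a simple ranking. Everything below, including the device names, ratings, and prices, is invented for illustration, and such a ranking would only ever be a starting point for the team's deliberation:

```python
# Hypothetical decision sketch: when devices are tied on reliability, validity,
# and technical-support ratings, prefer the lower-cost option.
# Device names, ratings (1-5), and prices are invented.

devices = [
    {"name": "ReaderPro", "reliability": 5, "validity": 5, "support": 4, "cost": 1200},
    {"name": "TextVoice", "reliability": 5, "validity": 5, "support": 4, "cost": 950},
    {"name": "PageTalk",  "reliability": 3, "validity": 4, "support": 5, "cost": 600},
]

def rank_key(d):
    # Higher combined quality first; among ties, lower cost first.
    return (-(d["reliability"] + d["validity"] + d["support"]), d["cost"])

best = min(devices, key=rank_key)
print(best["name"])  # the team would still review this suggestion
```

Here the two top-rated devices tie on quality, so the less expensive one surfaces first, mirroring the generic-drug analogy in the text while leaving the final call to the assessment team.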
Assistive Technology Consideration
The Individuals with Disabilities Education Act Amendments of 1997 (IDEA 1997) mandated,
for the first time, that the Individualized Education Program (IEP) team consider whether a
child receiving special education requires AT devices and services. The Wisconsin Assistive
Technology Initiative (WATI) (2004) noted that “when considering a child’s need for
assistive technology, there are only four general types of conclusions that can be reached:
1. The first is that current interventions are working and nothing new is needed,
including assistive technology. This might be true if the child’s progress in the
curriculum seems to be commensurate with his abilities.
2. The second possibility is that assistive technology is already being used either
permanently or as part of a trial to determine applicability, so that we know that it
does work. In that case the team should write the specific assistive technology into the
IEP to insure that it continues to be available for the child.
3. The third possibility is that the team may conclude that new assistive technology
should be tried. In that case, the team will need to describe in the IEP the type of
assistive technology to be tried, including the features they think may help, such as
“having the computer speak the text as the student writes.”
4. Finally, the last possibility is that the Team will find that they simply do not know
enough to make a decision. In this case, they will need to gather more information.
That could be a simple process of calling someone for help, or going to get some
print, disk, or online resources to help them better consider what AT might be useful.
It could also be an indication that they need to schedule (or refer for) an evaluation or
assessment of the child’s need for assistive technology.”
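The four conclusions above amount to a small decision table. The sketch below is a hypothetical aid for recording a team's conclusion, not part of WATI itself; the outcome labels and action strings paraphrase the list:

```python
# Hypothetical decision aid mapping the four WATI consideration outcomes
# to the team action each one calls for. Labels and wording are paraphrased.

ACTIONS = {
    "working":      "No new AT needed; current interventions are effective.",
    "at_in_use":    "Write the specific AT into the IEP so it stays available.",
    "trial_new_at": "Describe in the IEP the AT features to be tried.",
    "insufficient": "Gather more information or refer for an AT evaluation.",
}

def next_step(outcome):
    """Return the recommended team action for a consideration outcome."""
    return ACTIONS[outcome]

print(next_step("trial_new_at"))
```

Encoding the outcomes this way simply makes explicit that consideration always ends in one of four actionable states, including the honest admission that more information is needed.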
There are several ways for IEP teams to satisfy the consideration provision. The WATI
Assistive Technology Considerations Guide (Figure) is one example of an excellent
evaluation tool that can be used by school districts. Devised in 2004 by the Wisconsin
Assistive Technology Initiative, the Guide provides a workable framework for AT
considerations.
2.3. ASSESSMENT COMPONENTS
Once it has been determined that a student can benefit from AT, it is time to actually conduct
the assessment by looking at a variety of issues. What are the contexts that have to be
considered? What are the user’s strengths and struggles? What prior experiences does the
person have with technology? What technology should be considered, and is it a quality
device? What is the match between the technology and potential user? The Wisconsin
Assistive Technology Initiative (WATI, 1998) recommends a nine-step approach for
conducting AT assessments (Figure).
Considerations of Various Contexts
It is important to look at the various contexts within which the device will be used. To do so,
one can identify the various contexts within which the individual works and plays and then
examine requisite skills needed to accomplish expected tasks. The evaluator identifies the
frequency (i.e., daily, weekly, or monthly/less frequently) with which each task is
accomplished in the various settings examined.
Considerations of Strengths and Struggles
It is important to determine the individual’s strengths and struggles across a variety of
academic and cognitive tasks (e.g., listening, memory, organization, physical/motor). A
person who knows the individual’s behaviors could rate the individual’s abilities in the areas
of interest. It would be beneficial for more than one rater to complete the scale in order to
gain an ecological perspective.
Information from the scale can be used to identify potential areas of difficulty that may be
circumvented by AT. Also, the examination can help identify areas of strength on which a
device can capitalize to bypass a specific difficulty. This is particularly important when
setting demands and requisite abilities are identified for the student and the strengths and
struggles must be considered.
Considerations of Technology Characteristics
A factor that is too often overlooked in an AT evaluation is the device itself. People can
become smitten with what a particular device is supposed to do but fail to look at such
qualities as the technology’s dependability or the vendor’s technical support record. It is
important to examine the specific device being used in the assessment in such areas as
reliability, efficacy of purpose, compatibility, screen presentation, operational ease, and
technical support. The technology that is not trustworthy is best left in the showroom.
Considerations of Technology Experience
It may be helpful to identify the individual’s prior experience with AT devices. An
assessment team member has a conversation with the individual being evaluated. Based on
the discussion, the examiner rates the individual’s expertise in using specific devices. This
could be accomplished by identifying various devices that have potential to compensate for
difficulties in the areas of spoken language, reading, memory, mobility, organization, and so
on.
Considerations of the Person–Technology Match
Every AT assessment should examine the interplay between the individual and the device
while performing specific tasks and/or compensating for specific difficulties. The Individual–
Technology Evaluation Scale provides information about the person’s interaction with the
AT device that is being evaluated. A series of questions are asked relating to compensatory
effectiveness, interest, ease of use, comfort, operational ease/proficiency, and behavioral
responses. The Individual–Technology Evaluation Scale and the device-specific worksheets
can be considered the “nuts and bolts” of the AT assessment, because they help the specialist
and user determine if the technology “works” as projected.
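The dimensions listed for the Individual–Technology Evaluation Scale lend themselves to a simple summary. This sketch is illustrative only: the dimension names come from the text, but the 1-to-5 ratings, the variable names, and the "flag anything rated 3 or below" rule are invented assumptions:

```python
# Illustrative summary of a person-technology match across the dimensions
# the scale covers. Ratings (1 = poor match, 5 = excellent) are invented,
# as is the rule flagging dimensions rated 3 or below for follow-up.

match_ratings = {
    "compensatory_effectiveness": 4,
    "interest": 5,
    "ease_of_use": 3,
    "comfort": 4,
    "operational_proficiency": 3,
    "behavioral_response": 4,
}

mean = sum(match_ratings.values()) / len(match_ratings)
flags = [dim for dim, score in match_ratings.items() if score <= 3]

print(f"overall match: {mean:.2f} / 5")
print("areas to revisit:", flags)
```

A summary like this helps the specialist and user see at a glance whether the technology "works" as projected, and which dimensions (here, ease of use and operational proficiency) warrant further training or reconsideration during ongoing assessment.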