Administering Tests
A teacher's test administration procedures can have a great impact on a student's test performance. Test
administration involves more than simply handing out and collecting the test. A well-designed test can be a
useful learning tool for students, whereas a poorly designed test can create a frustrating experience for
them. When we administer tests systematically, we also prepare our students for final examinations.
TEST ADMINISTRATION
The traditional approach to assessment of student learning is formal testing. Still the most widely used of all
methods of assessment, testing has been the center of discussion and debate among educators for years. The
topic of testing includes a large body of information, some of which will be discussed in the upcoming
section. Basically, testing consists of four primary steps: constructing, administering, scoring, and
analyzing the test. Each of these steps can take a variety of forms and elicit a variety of useful
outcomes, such as:
Ideas for lesson plans.
Knowledge of individual students.
Ideas for approaching different students/classes.
Scores for admission.
Indication of teacher effectiveness.
CONSTRUCTING A TEST
There are eight basic steps in constructing a test:
1. Defining the purpose: Before considering content and procedure, the teacher must first determine who is
taking the test, why the test is being taken, and how the scores will be used. Furthermore, the teacher should
have a rationale for giving a test at a particular point in the course: Does the test cover a particular part of
the unit content? Or should material currently being studied be saved and tested at a later time when the
entire section is completed?
2. Listing the topics: Once the purpose and parameters have been established, specific topics are listed and
examined for their relative importance in the section. This is called representative sampling. For example, if
the study of crustaceans comprised approximately 10% of all class work in the section to be tested
(including class time, homework, and other assignments), then that topic should comprise approximately
10% of the test. This can be done either by calculating the number of questions per topic or by weighting
different sections to match class coverage.
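The proportional allocation described above can be sketched in a short script. The topic names and counts below are illustrative examples (only the 10% crustaceans figure comes from the text):

```python
# Sketch of representative sampling: allocate test questions in proportion
# to each topic's share of class work. Topic names and shares other than
# "crustaceans" are hypothetical.
coverage = {
    "crustaceans": 0.10,   # ~10% of class time, homework, and assignments
    "mollusks": 0.25,
    "fish anatomy": 0.40,
    "marine ecology": 0.25,
}
total_questions = 40

# Round each topic's share of the total to a whole number of questions.
allocation = {topic: round(share * total_questions)
              for topic, share in coverage.items()}

print(allocation)  # crustaceans -> 4, i.e. ~10% of a 40-question test
```

With other percentages, rounding may leave the allocations one question over or under the total; the teacher adjusts the last question or two by hand (or uses weighting instead, as described above).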
3. Listing types of questions: Different types of material call for different types of test questions. While
multiple choice questions might adequately test a student's knowledge of mathematics, essays reveal more
about a student's understanding of literature or philosophy. Thus, in deciding what types of test questions to
use (short answer, essay, true/false, matching, multiple choice, etc.) the following advantages and
disadvantages should be kept in mind:
Type: Short Answer
    Advantages: Can test many facts in a short time; fairly easy to score; excellent format for math; tests recall.
    Disadvantages: Difficult to measure complex learning; often ambiguous.

Type: Essay
    Advantages: Can test complex learning; can evaluate thinking process and creativity.
    Disadvantages: Difficult to score objectively; uses a great deal of testing time; subjective.

Type: True/False
    Advantages: Tests the most facts in the shortest time; easy to score; objective.
    Disadvantages: Difficult to measure complex learning; tests recognition only; difficult to write reliable items; subject to guessing.

Type: Matching
    Advantages: Excellent for testing associations and recognition of facts; although terse, can test complex learning (especially concepts); objective.
    Disadvantages: Difficult to write good items; subject to process of elimination.

Type: Multiple Choice
    Advantages: Can evaluate learning at all levels of complexity; can be highly reliable; objective; tests a fairly large knowledge base in a short time; easy to score.
    Disadvantages: Difficult to write; somewhat subject to guessing.
In choosing types of questions to be used on a test, it is also important to consider the following points:
Classroom conditions can automatically eliminate certain types of questions. Since answers to multiple
choice questions can be easily copied in an overcrowded classroom, they might not be an accurate
measure of student learning. Likewise, if blackboards are the only media available for presenting the
test, long questions and textual references might be impossible to include on the test.
Considerations regarding administration and scoring often dictate the type of questions to be included
on a test. Numbers of students, time constraints, and other factors might necessitate the use of
questions which can be administered and scored quickly and easily.
The types of knowledge being tested should be considered in the assessment process. A simplified
checklist could be used by the teacher to determine if students have been assessed in all relevant areas.
This could take the form of a graph such as the one which follows:
TOPICS TO BE TESTED                       FACTS   SKILLS   CONCEPTS   APPLICATION
Verbs: Conjugation of "to be"               x
Pronunciation: Short "a"                             x
Use of Modals: Should, Must, Ought to                          x
Free Expression                                                            x
4. Writing items: Once purpose, topics, and types of questions have been determined, the teacher is
ready to begin writing the specific parts, or items, of the test. Initially, more items should be written than
will be included on the test. When writing items, the following guidelines should be followed:
Cover important material. No item should be included on a test unless it covers a fact, concept, skill, or
applied principle that is relevant to the information covered in class (see 3. Listing types of questions
above).
Items should be independent. The answer to one item should not be found in another item; correctly
answering one item should not be dependent on correctly answering a previous item. (This guideline
might not apply in some cases. For example, a math test might begin by testing simple skills and then
test their integration. In all cases, the teacher should be aware of what is being tested at each level and
use this strategy sparingly).
Write simply and clearly. Use only terms and examples students will understand and eliminate all
nonfunctional words.
Be sure students know how to respond. The item should define the task clearly enough that students
who understand the material will know what type of answer is required and how to record their
answers. For example, on essay questions, the teacher may specify the length and scope of the answer
required.
Include questions of varying difficulty. Tests should include at least one question that all students can
answer and one that few, if any, can answer. Tests should be designed to go from the easiest to most
difficult items so as not to immediately discourage the weaker students.
Be flexible. No one type of item is best for all situations or all types of material. Whenever feasible,
any test should contain several types of items.
5. Reviewing items: Regardless of how skilled the teacher is, not all his/her first efforts will be perfect or
even acceptable. It is therefore important to review all items, revising the good and eliminating the bad.
Finally, all items should be evaluated in terms of purpose, standardization, validity, practicality, efficiency,
and fairness (see 8. Evaluating a Test below).
6. Writing directions: Clear and concise directions should be written for each section. Whenever possible,
an example of a correctly answered test item should be provided as a model. If there is any question as to
the clarity of the directions, the teacher should "try them out" on someone else before giving the exam.
7. Devising a scoring key: While the test items are fresh in his/her mind, the teacher should make a scoring
key - a list of correct responses, acceptable variations, and weights assigned to each response (see Scoring
below). In order to assure representative sampling, all items should be assigned values at this time. For
example, if "factoring" comprised 50% of class material to be tested and only 25% of the total number of
test questions, each question should be assigned double value.
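The weighting rule in the example above (a topic with 50% of the material but only 25% of the questions gets double value per question) can be expressed as material share divided by question share. "Factoring" follows the chapter's own example; "graphing" is a hypothetical second topic added to fill out the totals:

```python
# Weight per question = topic's share of class material / topic's share of
# test questions. "factoring" follows the chapter's example; "graphing"
# is a hypothetical second topic completing the totals.
material_share = {"factoring": 0.50, "graphing": 0.50}
question_share = {"factoring": 0.25, "graphing": 0.75}

weights = {topic: material_share[topic] / question_share[topic]
           for topic in material_share}

print(weights["factoring"])  # 2.0 -> each factoring question counts double
```

A weight above 1 means each question on that topic counts more than its share of the question count; a weight below 1 means it counts less, restoring representative sampling in the final score.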
8. Evaluating A Test: All methods of assessing student learning should achieve the same thing: the clear,
consistent and systematic measurement of a behavior or something that is learned. Once a test has been
constructed, it should be reviewed to ensure that it meets six specific criteria: clarity, consistency, validity,
practicality, efficiency, and fairness. The following is a checklist of questions that should be asked after the
test (or any assessment activity) has been prepared and before it is administered:
A CLEARLY DEFINED PURPOSE
    Who is being assessed?
    What material is the test (or activity) measuring?
    What kinds of knowledge or skills is the test (or activity) measuring?
    Do the tasks or test items relate to the objectives?
STANDARDIZATION OF CONTENT
    Are content, administration, and scoring consistent in all groups?
VALIDITY
    Is this test (or activity) a representative sampling of the material presented in this section?
    Does this test (or activity) faithfully reflect the level of difficulty of material covered in the class?
PRACTICALITY AND EFFICIENCY
    Will the students have enough time to finish the test (or activity)?
    Are there sufficient materials available to present the test or complete the activity effectively?
    What problems might arise due to structural or material difficulties or shortages?
FAIRNESS
    Did the teacher adequately prepare students for this activity/test?
    Were they given advance notice?
    Did they understand the testing procedure?
    How will the scores affect the students' lives?
ACTIVITY BOX
1. Make a statement of fact. Now write it as a test item in the form of multiple choice, matching,
true/false, and short answer. If you were to include this item on a test, which format would you choose?
2. Write directions for the format you chose in activity one and read them to someone else. Are they clear?
Concise? Understandable?
3. Take a test that you have designed. Before you administer it, use the checklist above to evaluate it.
Steps for systematic administration of a test
A teacher must organize the steps for systematic administration of a test in sequence. The steps can be
arranged as follows:
Seating arrangement of hall
Arrangement of question papers and answer sheets
Preparation before the test
Test administration
Collection of answer sheets
Return of test
1. Seating Arrangement:
Students should be randomly assigned seats for each examination. Random seat assignments prevent
prearranged plans for sharing information and help break up groups of friends. Here are some suggestions
for assigning seats:
Conduct the test in a lecture hall spacious enough to accommodate all students with comfortable chairs.
The lecture room must be well-ventilated.
Preassign seats (before the day of the test) according to roll numbers and use numbered test books
distributed in a sequential pattern. Ask students to write their roll numbers on their question paper and
answer sheet. Pass question papers out sequentially and assign seats from front to back. If, at the end of
the test, a student's answer sheet number does not tally with the seat number, there is reason to suspect
that the student was not in his or her assigned seat.
If feasible, prepare answer booklets for your college like those used in university examinations. This will
prepare students for final examinations.
The simplest method is to control the flow of students at the door and, as they enter, have support staff
direct them to seats, varying the assignments from front to back and side to side. This method requires a
teacher to point to a specific seat and ask the student to sit there. Graduate students or other faculty can
help for the few minutes at the beginning of class that seating may take. This method may require an
announcement that students are not to enter the test room until the doors are opened by the faculty on
duty. Controlling entry to the room is a key factor in test administration security.
In lecture rooms where seats are numbered, place each seat number on a 3 x 5 card, shuffle the cards,
and hand one to each student as they enter the room. Ask students to sit in the seat indicated on their
card. Support staff should be stationed in the room to watch for students who may attempt to switch
cards. Ask students to place their cards on their desks so that they are visible while test books are being
passed out. Have students grid or print their seat numbers on their answer sheets or test books. If the
seats are not numbered, the teacher can number them using a pre-printed set of 3 x 5 cards; in smaller
rooms, two sets of playing cards can be used (one for the desks and one for entering students). An easier
alternative is to write roll numbers on the seats with chalk.
Create a random seating chart using roll numbers. Call off roll numbers and direct students to their seats
as they enter the room. Alternatively, the seating arrangement can be posted at the door before entry to
guide students to their respective seats.
In rooms where it is feasible, seat students in every other seat, directly behind the person in the row
ahead of them. If you have the luxury of a large room and a small class, seat students in every other seat
of every other row. This will not only cut down on information sharing; it also gives students room to
relax, so they can look around and stretch without being accused of cheating.
Test books should be numbered when they are distributed to students. Try to distribute them
sequentially. Always have students grid or write their seat number and/or test book number on their
answer sheet. A separate attendance record, signed by students, should be maintained with roll numbers
and answer booklet numbers.
Limit materials that can be brought into the classroom or ask that book bags be placed at the front of the
room. Do not allow students to wear caps or hats during the test.
Change a student's assigned seat immediately if the teacher suspects cheating.
In very large lecture classes, CCTV cameras may be installed to keep an eye on students' activities.
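The card-shuffling and roll-number methods above all amount to randomly pairing students with seats. A minimal sketch (the function name and seat labels are illustrative, not from the text):

```python
import random

def random_seating(roll_numbers, seats, seed=None):
    """Randomly assign each roll number a seat, like shuffling 3 x 5
    seat cards and handing one to each student at the door."""
    if len(seats) < len(roll_numbers):
        raise ValueError("not enough seats for all students")
    rng = random.Random(seed)
    shuffled = list(seats)
    rng.shuffle(shuffled)  # shuffle the "seat cards"
    # Pair each student with the next shuffled seat card.
    return dict(zip(roll_numbers, shuffled))

# Example chart, suitable for posting at the door before entry.
chart = random_seating([101, 102, 103, 104], ["A1", "A2", "B1", "B2", "C1"])
print(chart)
```

Generating the chart in advance (rather than at the door) also lets the teacher keep a copy for checking that answer sheet numbers tally with seat numbers after the test.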
2. Arrangement of Question Papers and Answer Sheets
o Have extra copies of the test on hand, in case you have miscounted or in the event of some other
problem.
o Previous years' question papers should be compiled in a file in the library for students to review.
o Give students practice exams prior to the real test. Practice tests prepare students for final
examinations.
o Explain, in advance of the test day, the exam format and rules, and explain how to attempt various
questions with time management.
o Give students tips on how to study for and take the exam; this is not a test of their test-taking ability,
but rather of their knowledge, so help them learn how to take tests.
o Have extra office hours and a review session before the test.
o Arrive at the exam site early to communicate the importance of the event.
o Create multiple forms of a test by scrambling the items. Form codes should be inconspicuously
printed on the front of the test book. Do not print the form code on each page of the test.
o Test books should be counted a minimum of four times during the testing process. Using a four-step
counting procedure guarantees that you have control over your test materials and if a problem arises,
allows you to pinpoint where it occurred.
o Count the question papers immediately after receiving them from the printer so that you know
exactly how many you initially have in your possession. Store the question papers in a locked secure
area with limited access. Even if you have requested that your question papers be numbered, you
should go through them one at a time to ensure that no numbers were skipped or extra books
included in your order.
o Count the question papers prior to passing them out in class so that you know if any copies were
removed from storage. For multiple sections of a course each instructor should count the number of
question papers they receive for their room.
o After the question papers have been handed out, count the number of people testing and the number
of unused question papers. These two numbers should equal your total. If there is a discrepancy, then
someone in class has been given or has taken an extra question paper by mistake. In order to recover
missing materials, you may want to announce that you have passed out too many test books and
would appreciate the return of the extras.
o At the end of the test, preferably before students have left the room, count to see that all materials
have been returned.
o Answer sheets should be treated in the same manner as test books. Extra answer sheets can be used
by students to share answers during a test or to provide answers to someone taking a test at a later
time. If you or your department keeps a stock of answer sheets on hand, they should be kept in a
locked storage area; otherwise they can be filed in the library. Answer sheets can also be numbered to
match question papers.
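The four-step counting procedure above reduces to two arithmetic checks, sketched below (the function name and the example counts are illustrative):

```python
def reconcile_counts(total_printed, students_testing, unused_after_handout,
                     returned_at_end):
    """Check the counts taken during a test session. Returns a list of
    discrepancy messages; an empty list means all papers are accounted for."""
    problems = []
    # Handout check: papers in students' hands + papers left over
    # should equal the papers brought to the room.
    if students_testing + unused_after_handout != total_printed:
        problems.append("handout discrepancy: an extra or missing paper")
    # Collection check: papers returned + papers never handed out
    # should also equal the papers brought to the room.
    if returned_at_end + unused_after_handout != total_printed:
        problems.append("collection discrepancy: a paper was not returned")
    return problems

print(reconcile_counts(30, 27, 3, 27))  # [] -> all papers accounted for
```

Running the checks immediately, preferably before students leave the room, makes it possible to pinpoint at which step a paper went missing.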
papers, be sure to do it at another time. It is important both for the evaluation of your students and for the
improvement of your tests.
If you are recording marks, record them in pencil in your internal assessment register before
returning papers. If there are errors or adjustments in scoring, marks are easier to change when
recorded in pencil.
Provision for re-checking and re-evaluation should be made for the purpose of transparency of
results.
Evaluation should be done carefully and honestly.
Entire evaluation work should remain strictly confidential.
1. To Test the Students
It goes without saying that checking to see whether your students are learning is incredibly important. No
matter what is taught, tests are a way for teachers to determine which students are having trouble and which
are acquiring the skills and knowledge they should be acquiring. The results of the test can help the teacher
know whether some things should be reviewed, or whether it is okay to keep moving forward. Where grades
are concerned, they give students something to strive for and work towards. Students can ask themselves:
How am I doing? Can I do better? What other goals can I achieve?
2. To Test the Teacher
Since teachers are directly responsible for providing the lessons themselves, tests also provide insight
into how well they do their job. After all, it is not only about the program, the material, and the students.
Teachers have to work their magic to manage all these elements, and any others that exist, to ensure
learning takes place. Basically, tests show teachers how well they are teaching their students. Tests can help
them ask themselves: Are the strategies I've chosen the best? What teaching methods or approaches are
most effective? Are any changes or modifications needed to help my students? What have they learned?
Can the students use the new knowledge? Can the students demonstrate and use the new skills accurately?
3. To Test Your School/Institute
Schools and/or institutes are responsible for choosing or even creating the programs that are used in
their institutions. Tests give them accurate feedback on how well designed these programs are and how
they can be improved. Also, they would be able to see if something needs to be added or removed from the
program.
4. To Keep Students Motivated
Most would agree that the harder you work, the better you do. With this in mind, students tend to work
harder if someone is checking up on their work. Tests help keep students on their toes and ensure, to some
extent, they don't let their work slide. If they know that they'll have to take a test on the material, they might
be more likely to give it that extra effort.
ACTIVITY BOX
1. Think of a test you took as a student on which you did very well. What factors contributed to
your successful performance (classroom conditions, nature of the test, personal interest in
subject matter, etc.)?
2. Think of a test you took as a student on which you did very poorly. Can you remember why?
3. Construct a quiz consisting of five multiple choice questions. How would you explain this method
of testing to a class which has never seen it before?
Conclusion
Test administration guidelines exist in order to reduce cheating and ensure fair and reliable test
scores on standardized K-12 exams. Utilizing test administration procedures is beneficial to both
examinees and test administrators alike, and following test administration guidelines increases
consistency and test security. There are protocols before, during, and after testing that should be
considered, and resources are available to states and districts to support smooth test administration
practices.
2. International Test Commission (2001) International Guidelines for Test Use, International Journal of
Testing, 1:2, 93-114, DOI: 10.1207/S15327574IJT0102_1
Developed by the International Test Commission (ITC), the International Guidelines for Test Use are a set
of guidelines that provide an international view on areas of consensus on what constitutes "good practice" in
test use, without being prescriptive about how these guidelines should be implemented by national
professional psychological associations and other organizations associated with testing. In addition to key
competencies (including knowledge, skills, and other personal and contextual factors) needed for
responsible test use, the Guidelines also address issues of professional and ethical standards in testing; rights
of the test taker and other parties involved in the testing process; choice and evaluation of alternative tests;
test administration, scoring, interpretation, report writing and feedback. Appendixes cover guidelines for an
outline policy on testing; guidelines for developing contracts between parties involved in the testing
process; and points to consider when making arrangements for testing people with disabilities or
impairments.
3. Minguell, M. E., Usart, M., & Cervera, M. G. (2019). Assessing Teacher Digital Competence: the
Construction of an Instrument for Measuring the Knowledge of Pre-Service Teachers. Journal of
New Approaches in Educational Research, 8(1), 73–78. https://doi.org/10.7821/naer.2019.1.370
A multidimensional competency like teacher digital competence (TDC) can make it even more difficult to
assess competencies, which is a challenge in and of itself. TDC is believed to have a variety of dimensions
connected to its parts. Because of this complexity, it is necessary to standardize TDC training and evaluation
using a set of validated benchmark indicators. An instrument for TDC assessment was created and
developed over the course of two phases. In the first phase, the COMDID-A self-assessment tool was
created, and in the second, the COMDID-C knowledge assessment tool for TDC was created. We describe
how the COMDID-C instrument was built in this article. We used two samples for this initial stage: an
expert validation sample and a pilot test sample. We conducted a preliminary analysis of the validity of the test's
content, construction, and reliability due to the test's complexity. Our findings show that the test is properly
constructed and serves the intended purpose. The test will then be given to a larger sample size, allowing the
instrument to undergo external validation.
The need to continue to offer professional assessment services in pandemic situations has given rise to the
remote use of tests designed for face-to-face administration. This practice of tele-assessment modifies the
original conditions in which the test was constructed, standardized, and validated, and therefore should be
accompanied by an analysis of the potential risks. This paper describes the threats associated with remote
test use, and suggests a number of recommendations to mitigate them. When the remote administration of
tests that have been constructed to be used in person is considered, the professional should be aware of the
risks and benefits associated with this practice, and once these have been evaluated, he or she should act
accordingly.
5. Yang, C., Luo, L., Vadillo, M. A., Yu, R., & Shanks, D. R. (2021). Testing (quizzing) boosts classroom
learning: A systematic and meta-analytic review. Psychological Bulletin, 147(4), 399–
435. https://doi.org/10.1037/bul0000309
Over the last century hundreds of studies have demonstrated that testing is an effective intervention to
enhance long-term retention of studied knowledge and facilitate mastery of new information, compared with
restudying and many other learning strategies (e.g., concept mapping), a phenomenon termed the testing
effect. How robust is this effect in applied settings beyond the laboratory? The current review integrated
48,478 students’ data, extracted from 222 independent studies, to investigate the magnitude, boundary
conditions, and psychological underpinnings of test-enhanced learning in the classroom. The results show
that overall testing (quizzing) raises student academic achievement to a medium extent (g = 0.499). The
magnitude of the effect is modulated by a variety of factors, including learning strategy in the control
condition, test format consistency, material matching, provision of corrective feedback, number of test
repetitions, test administration location and timepoint, treatment duration, and experimental design. The
documented findings support 3 theories to account for the classroom testing effect: additional exposure,
transfer-appropriate processing, and motivation. In addition to their implications for theory development,
these results have practical significance for enhancing teaching practice and guiding education policy and
highlight important directions for future research. (PsycInfo Database Record (c) 2022 APA, all rights
reserved)
6. Shepard, L. A. (2019). Classroom Assessment to Support Teaching and Learning. The ANNALS of the
American Academy of Political and Social Science, 683(1), 183–200.
https://doi.org/10.1177/0002716219843818
Classroom assessment includes both formative assessment, used to adapt instruction and help students to
improve, and summative assessment, used to assign grades. These two forms of assessment must be
coherently linked through a well-articulated model of learning. Sociocultural theory is an encompassing
grand theory that integrates motivation and cognitive development, and it enables the design
of equitable learning environments. Learning progressions are examples of fine-grained models of learning,
representing goals, intermediate stages, and instructional means for reaching those goals. A model for
creating a productive classroom learning culture is proposed. Rather than seeking coherence with
standardized tests, which undermines the learning orientation of formative assessment, I propose seeking
coherence with ambitious teaching practices. The proposed model also offers ways to minimize the negative
effects of grading on learning. Support for teachers to learn these new assessment practices is most likely to
be successful in the context of professional development for new curriculum and standards.
7. Trumbo, M. C. S., McDaniel, M. A., Hodge, G. K., Jones, A. P., Matzen, L. E., Kittinger, L. I., Kittinger,
R. S., & Clark, V. P. (2021). Is the testing effect ready to be put to work? Evidence from the laboratory to
the classroom. Translational Issues in Psychological Science, 7(3), 332–
355. https://doi.org/10.1037/tps0000292
The testing effect refers to the benefits to retention that result from structuring learning activities in the form
of a test. As educators consider implementing test-enhanced learning paradigms in real classroom
environments, we think it is critical to consider how an array of factors affecting test-enhanced learning in
laboratory studies bear on test-enhanced learning in real-world classroom environments. This review
discusses the degree to which test feedback, test format (of formative tests), number of tests, level of the test
questions, timing of tests (relative to initial learning), and retention duration have import for testing effects
in ecologically valid contexts (e.g., classroom studies). Attention is also devoted to characteristics of much
laboratory testing-effect research that may limit translation to classroom environments, such as the
complexity of the material being learned, the value of the testing effect relative to other generative learning
activities in classrooms, an educational orientation that favors criterial tests focused on transfer of learning,
and online instructional modalities. We consider how student-centric variables present in the classroom
(e.g., cognitive abilities, motivation) may have bearing on the effects of testing-effect techniques
implemented in the classroom. We conclude that the testing effect is a robust phenomenon that benefits a
wide variety of learners in a broad array of learning domains. Still, studies are needed to compare the benefit
of testing to other learning strategies, to further characterize how individual differences relate to testing
benefits, and to examine whether testing benefits learners at advanced levels. (PsycInfo Database Record (c)
2022 APA, all rights reserved)
8. Dawson, M. R., & Lignugaris/Kraft, B. (2017). Meaningful Practice: Generalizing Foundation Teaching
Skills From TLE TeachLivETM to the Classroom. Teacher Education and Special Education, 40(1), 26–50.
https://doi.org/10.1177/0888406416664184
Novice teachers need to develop foundation teaching skills to effectively address student behavior and
academics in the classroom. The TLE TeachLivE™ simulation laboratory (TLE) is a virtual classroom used
to supplement traditional didactic instruction and field experiences in teacher preparation programs. In this
study, repeated practice and structured feedback were provided to preservice special educators in TLE to
improve their delivery of specific praise, praise around, and error correction. Their weekly performance was
observed in TLE during simplified teaching scenarios in intervention and during more complex teaching
scenarios following intervention. In addition, their generalization of target skills to their own classrooms
was measured weekly. Overall, teachers improved delivery of the target skills in the virtual classroom. They
generalized performance to real classroom settings with varying levels of proficiency. Implications for
teacher preparation are discussed, including the impact of aligning simulated practice opportunities and
authentic teaching environments.
9. Scheeler, M.C., Bruno, K., Grubb, E. et al. Generalizing Teaching Techniques from University to K-12
Classrooms: Teaching Preservice Teachers to Use What They Learn. J Behav Educ 18, 189–210 (2009).
https://doi.org/10.1007/s10864-009-9088-3
Preservice teachers learn evidence-based practices in university classrooms but often fail to use them later
on in their own K-12 classrooms. The problem may be a missing link in teacher preparation, i.e., failure to
teach preservice teachers to generalize newly acquired techniques. Two experiments using multiple baseline
designs across participants assessed effectiveness of a model to promote generalization and maintenance of
a specific teaching skill. In Experiment 1, preservice teachers’ maintenance of behavior deteriorated from
practicum to student teaching when intervention consisted of training to criteria alone. When a
programming for generalization component (program common stimuli) was added to the intervention,
teachers in Experiment 2 generalized and maintained behavior across settings (student teaching to own
classrooms) at a higher average than occurred during intervention.
10. Lisa M. Abrams, Joseph J. Pedulla & George F. Madaus (2003) Views from the Classroom: Teachers'
Opinions of Statewide Testing Programs, Theory Into Practice, 42:1, 18-29,
DOI: 10.1207/s15430421tip4201_4
This article discusses teachers' views on state-mandated testing programs. An overview of the literature is
presented, as well as results from a nationwide survey of teachers. Findings from both suggest that high-
stakes state-mandated testing programs can lead to instruction that contradicts teachers' views of sound
educational practice. In particular, teachers frequently report that the pressure to raise test scores encourages
them to emphasize instructional and assessment strategies that mirror the content and format of the state
test, and to devote large amounts of classroom time to test preparation activities. The article concludes that
serious reconsideration must be given to the use of high-stakes consequences in current statewide testing
programs.
11. Catherine Horn (2003) High-Stakes Testing and Students: Stopping or Perpetuating a Cycle of
Failure?, Theory Into Practice, 42:1, 30-41, DOI: 10.1207/s15430421tip4201_5
As state-mandated standardized testing becomes an increasingly popular tool by which to make student-level
high-stakes decisions such as promotion or graduation from high school, it is critical to look at such
applications and their effects on students. Findings in this article suggest that non-White, non-Asian
students, as well as students with special needs and English Language Learners, are among the groups most
deeply affected by high-stakes testing. Test scores give us important information, but they do not give us all
the information necessary to make critical decisions. Given their limited nature and the potentially adverse
impacts they can have, using state-mandated large-scale testing for student-level high-stakes purposes is
inadvisable.
12. Harlen W, Crick RD, Broadfoot P, Daugherty R, Gardner J, James M & Stobart G (2002) A Systematic
Review of the Impact of Summative Assessment and Tests on Students’ Motivation for Learning. EPPI-Centre,
University of London. https://eppi.ioe.ac.uk/cms/LinkClick.aspx?fileticket=Pbyl1CdsDJU%3D&tabid=108&mid=1003
The current widespread use of summative assessment and tests is supported by a range of arguments. The
points made include that not only do tests indicate standards to be aimed for and enable these standards to
be monitored, but that they also raise standards. Proponents claim that tests cause students, as well as
teachers and schools, to put more effort into their work on account of the rewards and penalties that can be
applied on the basis of the results of tests. In opposition to these arguments is the claim that increase in
scores is mainly the consequence of familiarization with the tests and of teaching directed specifically
towards answering the questions, rather than developing the skills and knowledge intended in the
curriculum. It is argued that tests motivate only some students and increase the gap between higher and
lower achieving students; moreover, tests motivate even the highest achieving students towards performance
goals rather than to learning goals, as required for continuing learning.
13. Carini, R.M., Kuh, G.D. & Klein, S.P. Student Engagement and Student Learning: Testing the
Linkages. Res High Educ 47, 1–32 (2006). https://doi.org/10.1007/s11162-005-8150-9
This study examines (1) the extent to which student engagement is associated with experimental and
traditional measures of academic performance, (2) whether the relationships between engagement and
academic performance are conditional, and (3) whether institutions differ in their ability to convert
student engagement into academic performance. The sample consisted of 1,058 students at 14 four-year
colleges and universities who completed several instruments during 2002. Many measures of student
engagement were linked positively with desirable learning outcomes such as critical thinking and grades,
although most of the relationships were weak. The results suggest that the lowest-ability students benefit
more from engagement than their classmates do, that first-year students and seniors convert different forms
of engagement into academic achievement, and that certain institutions more effectively convert student
engagement into higher performance on critical thinking tests.
14. Polesel, J., Dulfer, N., & Turnbull, M. (2012). The experience of education: The impacts of high stakes
testing on school students and their families. Sydney, Australia: The Whitlam Institute.
The evidence presented in this paper draws on research studies that have investigated the impact of a
variety of testing regimes, many of them quite different in their approach. What emerges consistently
across these studies are serious concerns regarding the impact of high-stakes testing on student health
and well-being, learning, teaching and curriculum. It is acknowledged that much of this is international
research, particularly focussed on the USA and the UK. However, the consistency of these findings raises
legitimate questions and deep concern regarding the Australian experience. For this reason, it is
important to investigate the extent to which these findings of the largely negative impact of testing can
be extrapolated to the NAPLAN program recently implemented in Australia.
15. Edwards, S. H. (2003). Improving student performance by evaluating how well students test their own
programs. Journal on Educational Resources in Computing (JERIC), 3(3), 1-es.
Students need to learn more software testing skills. This paper presents an approach to teaching software
testing in a way that will encourage students to practice testing skills in many classes and give them
concrete feedback on their testing performance, without requiring a new course, any new faculty resources,
or a significant number of lecture hours in each course where testing will be practiced. The strategy is to
give students basic exposure to test-driven development, and then provide an automated tool that will assess
student submissions on-demand and provide feedback for improvement. This approach has been
demonstrated in an undergraduate programming languages course using a prototype tool. The results have
been positive, with students expressing appreciation for the practical benefits of test-driven development on
programming assignments. Experimental analysis of student programs shows a 28% reduction in defects per
thousand lines of code.
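The test-driven cycle advocated in this last entry — write executable checks first, then just enough code to make them pass — can be illustrated with a minimal sketch. The function and checks below are hypothetical examples for illustration only; they are not drawn from the paper's tool or assignments.

```python
# A minimal sketch of the test-driven development (TDD) cycle:
# the checks are (conceptually) written first, and the function
# is then implemented to make them pass. All names here are
# illustrative, not taken from the study.

def mean_score(scores):
    """Return the arithmetic mean of a non-empty list of test scores."""
    if not scores:
        raise ValueError("scores must be non-empty")
    return sum(scores) / len(scores)

# "Tests first": executable checks a student would write before coding.
assert mean_score([80, 90, 100]) == 90
assert mean_score([75]) == 75
try:
    mean_score([])
except ValueError:
    pass  # expected: empty input is rejected
else:
    raise AssertionError("empty input should raise ValueError")
```

An automated grading tool of the kind the paper describes can then evaluate not only whether the student's code passes the tests, but how thoroughly the student's own tests exercise the code.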
BIBLIOGRAPHY & REFERENCES:
Arthur, J., & Davies, I. (2012). The Routledge Education Studies Textbook. Routledge.
Sodhi, K. J. (2022). Comprehensive Textbook of Nursing Education. Jaypee Brothers Medical Publishers (P)
Ltd.
Smith, M. J., Carpenter, R. D., & Fitzpatrick, J. J. (2015b). Encyclopedia of Nursing Education. Springer
Publishing Company.
Wiggins, G. P. (1993). Assessing student performance: Exploring the purpose and limits of testing. Jossey-
Bass/Wiley.
EdS, W. D. (2021, June 1). Test Administration Guidelines — Before/During/After Testing — Caveon.
https://blog.caveon.com/test-administration-guidelines
https://dokumen.tips/documents/administrating-testscoringgrading-vs-marks.html
https://www.proftesting.com/test_topics/pdfs/steps_10.pdf