
Effective Assessment of Computer Science Capstone Projects and Student Outcomes
https://doi.org/10.3991/ijep.v10i2.11855

Fatima K. Abu Salem


American University of Beirut, Beirut, Lebanon

Issam W. Damaj (corresponding author)


Beirut Arab University, Debbieh, Lebanon
[email protected]

Lama A. Hamandi
American University of Beirut, Beirut, Lebanon

Rached N. Zantout
Rafic Hariri University, Mechref, Lebanon

Abstract—A capstone project is a culminating experience that entails creativity, critical thinking, and advanced problem-solving skills. To that end, capstone projects enable students to prove their abilities, demonstrate their attained skills, and carry out a significant project in their field. In Computer Science Bachelor programs, there is a strong mapping between the learning outcomes of capstone projects and all student learning outcomes. This paper presents an assessment framework for capstone courses that allows for sound evaluations of student performance and project quality, in addition to assessing the student outcomes of the program. The developed framework comprises criteria, indicators, extensive analytic rubrics, and an aggregate statistical formulation. The presented course and framework are supported by the results, analysis, and evaluation of a single-institution pilot study that explores the effectiveness of the proposed tool.

Keywords—Assessment, Capstone Projects, Computer Science, Accreditation, ABET.

1 Introduction

Capstone projects are a rich resource for assessing the level of attainment of student outcomes and of most of the knowledge acquired by students during their study. This assessment is done towards the end of students’ studies, giving a true indication of the level of attainment of learning outcomes. However, capstone project assessment is very challenging. It is different from assessment in a regular course. Capstone projects do not feature traditional modes of assessment, such as lab assignments, home assignments, periodic quizzes, and term exams. In contrast, students usually undertake learning tasks that are more research oriented than application oriented. As a result, evaluators tend to employ different evaluation strategies, reflecting the wide spectrum of their expertise. Another challenge is that a capstone project covers all student outcomes.
It is therefore important to develop a capstone project assessment tool that addresses the above challenges. One should base this tool on performance criteria that adhere to the capstone project, particularly its learning outcomes. The tool should also be flexible enough to be used for assessing projects of a different nature. Moreover, it should be easy enough to be used by evaluators with different expertise. Indeed, the tool should be clear, so that students can prepare themselves accordingly.
In this article, we develop a framework to assess the learning outcomes of capstone projects (CPs), and hence the mapping that binds them to Student Outcomes (SOs) at the program level. Our focal point of interest is Computer Science and related Computing programs [1]. Our tool follows the ABET accreditation criteria, from which we adopt a large part of the terminology. In line with the works of [2] and [3], our tool consists of a suite of criteria and their indicators, supported by thorough rubrics, and concluding with a summative statistical aggregation.
Our manuscript is organized as follows. In Section 2, we present related work. In Section 3, we develop the measurement scheme and address aspects of the assessment criteria as well as the indicators. Section 4 presents a discussion of the measurement rubrics, together with the statistical formulation for evaluating both student performance and the attainment of SOs. In Section 5, we delve into the pilot study and present its analysis and evaluation. Section 6 presents the conclusion and offers insight for future work.

2 Related Work

Several investigations in the literature present effective Capstone Project (CP) course setups, structures, and assessment frameworks in computer science and engineering programs [1-12]. In [1], the author describes a course structure that includes a carefully designed prerequisite course on project management and scheduled milestones. Furthermore, assessment is done using a holistic rubric that enables the measurement of a set of Course Learning Outcomes (CLOs) that are derived from ABET Student Outcomes (SOs). Yousafzai et al. [2] and Damaj et al. [3] present a unified framework that allows for sound evaluations of student performance and CP qualities, in addition to assessing SOs. Along the lines of those works, the authors develop a framework consisting of a suite of criteria and their indicators, supported by thorough analytic rubrics, and concluding with a summative statistical formulation. The authors conduct a pilot study examining several capstone projects. The results reveal that after rater calibration, only a minor difference in average scores exists. In fact, the authors examine the weighted average assessment scores pre and post calibration and report the difference to be 3.2%.
Olarte et al. [4] present a study that compares student and staff perceptions of key aspects of CPs, such as characteristics of projects, competencies of students, involvement of advisors, and perceived learning of students. The study employs a holistic Likert scale of 1-4 to aid the evaluation. Three questionnaires were designed for the student, the advisor, and the committee. The questionnaires are divided into four blocks, one for each of the key elements. The blocks examine project characteristics, student competency, advisor involvement, and the level of learning perceived by students. The primary finding of this study is that the students' expectations differ greatly from those of staff.
Furthermore, a supervision typology in Computer Science Engineering CPs is presented in [5]. The study develops and validates an instrument, and then utilizes it to determine different styles of supervision. A questionnaire is developed to survey capstone project advisors at a university over the past two years. A total of 109 surveys are successfully collected. A combination of multivariate statistical methods, such as factorial analysis, is employed. This study distinguishes seven main supervision factors: technology, arrangements, keep alive, execution, meetings, management, and reports.
Assessment rubrics are presented for software-related undergraduate capstone projects in [6]. In addition, it is recommended that capstone projects should always undergo a continuous improvement process. A survey was carried out at different Pakistani universities. The survey results highlighted several challenges, such as poor process visibility, differences in support and information documents, limited guidelines for project assessment, lack of adequate requirements on software process and documentation, and limited incentives for supervisors. The proposed rubrics specify the key assessment criteria to be assessed using quality levels.
Instead of having a single capstone project course, a series of courses that include capstone projects is presented by Bihari et al. [7]. Among those is a course in Software Engineering in which control was inverted so that the industrial sponsor has more control and management duties over the project than the faculty member supervising the students. The course was scaled up successfully by developing unique assessment and evaluation tools to monitor, measure, and fairly assess a wide spectrum of projects. Students are evaluated based on several presentations done at various points in the quarter, in-class weekly reports, industrial sponsor feedback, a project workbook, a poster, and an individual report. The rubrics are designed to be immune to variability in projects, industry sponsors, and technology. Each deliverable is evaluated on a combination of the choice of the method as well as the execution of the method.
Moreover, the meetings-flow (MF) method is evaluated in [8] in terms of its effect on teams undergoing their capstone projects. Previous studies have shown that MF is beneficial for monitoring student work and product quality. In this study, it is empirically shown that MF enhances a team’s communication and coordination and creates a balance between the contributions of all team members. However, MF was observed to have a small influence on team cohesion.
In addition, an instrument is developed by Pérez et al. [9] to determine the different styles of supervision for advisors of capstone projects. Six supervision styles were identified based on seven factors of supervision. The identified supervision styles and factors can be used to guide advisors on how to supervise students, where to assess the competencies of students, and how to give meaningful feedback to the students.

3 The Measurement Scheme

3.1 Background
According to [2, 3], the CP’s learning outcomes can be associated with or replaced completely by the SOs at the program level. According to ABET, student learning outcomes capture what students are expected to know and be able to do by graduation time. There are six student learning outcomes associated with the Computer Science program, as set out by ABET [13]. The CP is the most critical juncture in a student’s undergraduate journey, where they are exposed to a significantly practical experience. The CP is the very first encounter in their educational timeline that propels students towards an area of their own interest. With contemporary worldwide societal challenges, it also becomes imperative to train students to use their knowledge in the service of the public good. As such, the CP provides an optimal venue where students can aim to develop technologies and analytics for addressing societal challenges.
As discussed earlier, our assessment approach is dedicated to Computer Science programs and related Computing fields. In our approach, we consider a CP course taking place in one semester only. The CP course aims to enhance students’ skills with practical experience, giving them the opportunity to integrate knowledge accumulated in different courses. The prerequisites for the CP are three junior- and senior-level courses following a three-year undergraduate CS program under the liberal arts model:

 An introductory software engineering course introducing the fundamentals
 An introductory operating systems course introducing the fundamentals of OS function, design, and implementation
 An introductory database systems course with an overview of the nature and purposes of database systems and an introduction to data modeling

At the end of this course, students must deliver a product with a major software component. The product must demonstrate aspects of design, analysis, implementation, testing, and evaluation. In our approach, we distribute those aspects over sixteen weeks with the following milestones:

 Definition of the problem and its objectives
 Management of the project and teamwork
 Literature overview
 Alternatives for design and software and/or project design methodology
 Specifications: software/project requirements
 Modeling and analysis
 Early-release prototyping
 Mid-semester software/project documentation
 Mid-semester checkpoint presentation
 Implementation
 Testing and verification
 Critical appraisal: the analytical process for the systematic evaluation and validation of the software product
 Documentation
 Final product demonstration: final software/project delivery and oral presentation

The above milestones are carefully scheduled by the project team and overseen by the supervisor over the course of the semester. Close follow-ups by the team and monitoring by the supervisor are necessary for the thorough completion of requirements. Indeed, enabling the evaluation of the project aspects, including the proposed milestones, is the aim of the target assessment criteria and the overall framework.

3.2 The assessment criteria


Although CPs are the joint effort of a team of students, any developed assessment framework must allow for sound evaluations of contributions per student. Assessment of student contributions carefully considers the contents and their quality, timeliness of achievements, and professionalism. A variety of assessment tools can be used for CP and student evaluations. Moreover, assessment tools are deployed to measure indicators within specific criteria. For increased reliability of measurements, tools are carefully selected to enable the following multiple sources of measurements:

 Project proposal
 Supervisor consultations
 Mid-semester reports on project progress
 An oral final exam per student
 A report delivered upon project completion
 Project presentation examination by a committee

The project examination committee is mainly formed of professors from within the department to which the students belong. In some instances, external examiners are invited from outside the department and the university; however, all examiners are university professors in areas related to the project topic. In this paper, we propose the following CP assessment criteria; some names are inspired by those presented in [2]:

a) Content (60%)
b) Impact of the CP on the social good (5%)
c) Integrity and ethical and legal implications (5%)
d) Project management and teamwork skills (10%)
e) Written communication (10%)
f) Presentation and oral communication (10%)


The six developed criteria, A through F, are carefully selected to cover all intended CP aspects within their indicators and to map to all SOs at the program level. The most significant part of CP assessment is under Criterion A, which measures the level of achievement of the project contents. The contents comprise reviewing the literature, the design and implementation techniques, use of technology, analysis and evaluation, and the identification of future work. Furthermore, a weight of 60% of the CP evaluation is allocated to Criterion A (see Table 1). Criterion B assesses the extent to which the proposed CP addresses a challenging problem for the social good. Criterion C ensures that students are clearly aware of the ethical and legal implications surrounding human subject data. Moreover, Criteria C through F capture a bouquet of CP requirements that comprises the understanding of legal implications and adherence to professional ethics, project management aspects, and documentation and presentation skills. Table 1 presents the list of criteria, indicators, and the allocated weights.

Table 1. The framework hierarchy: criteria, performance indicators, and the allocated weights; Criteria D, E, and F are inspired by the work presented in [2].

A. Content (Total 60%: Supervisor 35%, Examination Committee 25%)
1. Summary, comparisons and evaluations of various concepts, research findings and current theories and models in core content areas of computer sciences and computing literature in general. (10%)
2. Identification of principles and techniques that are relevant to the project and ability to apply them within a specific problem domain. (10%)
3. Interpretability, creativity, and originality of the adopted methodology and the developed solution. (10%)
4. Alternative research solutions, if applicable, and benchmarking with competitors. (10%)
5. Identification, mastering, and use of hardware/software tools. (10%)
6. Robustness and interpretability of results. (5%)
7. Identification of further improvements and future work. (5%)

B. Social Impact (5%, Supervisor and Examination Committee Members)
1. Addressing a problem that stems from a social need, and thus has a social impact. (2%)
2. Adaptability and use by people without an engineering/computing/technology background. (2%)
3. Evaluation of computing solutions that consider global and/or societal factors relevant to the region. (1%)

C. Integrity and ethical and legal implications, and regulations (5%, Supervisor and Examination Committee Members)
1. Demonstration of scientific and professional ethics, especially in the interaction with team members and advisor. (2%)
2. Adherence to legal principles, rules, and code of conduct. (2%)
3. Attainment of the criteria and abiding by the regulations that govern the project and/or the data used in the project.¹ (1%)

D. Project Management and Teamwork Skills (10%, Supervisor)
1. Ability to work individually, or as part of the team where appropriate, to formulate, analyze, design, and implement a significant project. (3%)
2. Contribution to the team project/work. (3%)
3. Taking responsibility. (4%)

E. Written Communication (10%, Supervisor and Examination Committee Members)
1. Organization and logic. (4%)
2. Writing style (word choice, grammar, and sentence structure). (4%)
3. Use of references. (2%)

F. Presentation and Oral Communication (10%, Supervisor and Examination Committee Members)
1. Mechanics. (2%)
2. Organization. (2%)
3. Delivery. (2%)
4. Relating to audience. (2%)
5. Response to questions. (2%)

¹ Whenever applicable, adherence to IRB guidelines involving human research subjects.

3.3 Bridging capstone projects and outcomes at the program level


The framework’s set of carefully developed performance indicators enables a variety of measurements of CP outcomes at the course level. To benefit from CP measurements in the program review process, the framework is built upon the adoption of the complete ABET 2019 set of SOs as both the program and the course outcomes. The adoption of the complete set of SOs as CLOs guarantees the coverage of the required outcomes within a CP course. Without a doubt, such a unified arrangement of CLOs and SOs facilitates closing the continuous improvement loop of ABET’s Criterion 4. In Table 2, we present the mapping between the assessment criteria and the ABET SOs. Indeed, the relationships between CP indicators and SOs are many-to-one, where several indicators’ scores are aggregated to measure the attainment of a single SO. The following are our CP CLOs, which are the same as the newly developed ABET SOs for Computer Science programs:

 “Analyze a complex computing problem and to apply principles of computing and other relevant disciplines to identify creative and original solutions.
 Design, implement, and evaluate a computing-based solution to meet a given set of computing requirements in the context of the program’s discipline.
 Communicate effectively in a variety of professional contexts: example presentations and software documentation.
 Recognize professional responsibilities and make informed judgments in computing practice based on legal and ethical principles.
 Function effectively as a member or leader of a team engaged in activities appropriate to the program’s discipline.
 Apply computer science theory and software development fundamentals to produce computing-based solutions.”


Table 2. Criteria mapping to ABET SOs.

SO  CP Indicators
1   A2, A6
2   A1, A3, A4, A5, A7
3   E1, E2, E3, F1, F2, F3, F4, F5
4   C1, C2, C3
5   D1, D2, D3
6   A2, A3, B1, B2, B3

Adopting the ABET SOs as the intended CLOs of the capstone course enables multiple mutual benefits. First, the adoption of SOs as CLOs unifies their assessment without the need to map them to each other and, accordingly, the need to develop an additional statistical aggregation. Second, such an adoption guarantees the literal coverage of all SOs within the course’s intended outcomes. Accordingly, the framework maintains a three-level hierarchy of evaluation metrics, namely, criteria, indicators, and their rubrics.

3.4 Hierarchy of evaluation metrics: criteria, indicators, and rubrics


The suggested framework is a three-level hierarchy. The top-most level is the set of criteria that covers all intended CP aspects. A rich set of indicators stems from the criteria to specify the intended measurements. The bottom level of the framework includes an extensive set of analytic rubrics for each indicator. The rubric descriptors specify the quality scale of achievements and the intended requirements. The developed rubrics and the statistical formulations are presented in Section 4.

4 Measurement Rubrics

To further develop the proposed framework, analytic scoring rubrics are carefully created based on the specific requirements of the intended CP context. We base our analysis around twenty-four indicators that map onto the set of revised ABET SOs (see Table 2). Furthermore, the adopted scale of rubrics consists of four attainment levels: a beginning level (B), a developing level (D), a competent level (C), and an accomplished level (A).
To verify the suitability of the created rubrics, we consulted with four Professors of Computer Science, besides comparing with rubrics from [2, 3, 14, 15]. The aim of the developed rubrics includes adopting solid descriptors and a variety of performance levels. These four levels in turn correspond to the percentage ranges [40-69, 70-79, 80-89, 90-100], respectively. The selected ranges of percentages are for a scale that considers 70% as the starting point of the D level. With this level of granularity, we can assess the deliverables of the CP at the criterion level, the indicator level, or a combination of the two. We adopt the following weighted average to aggregate all indicators:

$S = \sum_{i=1}^{n} w_i s_i$

where $S$ is the combined score, $s_i$ is the score percentage of the $i$-th indicator, and $w_i$ is the weight of the $i$-th performance indicator such that $0 \le w_i \le 1$ and $\sum_{i=1}^{n} w_i = 1$, where $n$ is the number of performance indicators, i.e., in our case, $n = 24$. The weights are described in Table 1.
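For concreteness, the aggregation can be sketched in a few lines of Python. This is our illustration, not part of the published framework; the weights follow Table 1, and the level cut-offs follow the percentage ranges stated above.

```python
# Indicator weights from Table 1, expressed as fractions summing to 1.0.
WEIGHTS = {
    "A1": 0.10, "A2": 0.10, "A3": 0.10, "A4": 0.10, "A5": 0.10,
    "A6": 0.05, "A7": 0.05,
    "B1": 0.02, "B2": 0.02, "B3": 0.01,
    "C1": 0.02, "C2": 0.02, "C3": 0.01,
    "D1": 0.03, "D2": 0.03, "D3": 0.04,
    "E1": 0.04, "E2": 0.04, "E3": 0.02,
    "F1": 0.02, "F2": 0.02, "F3": 0.02, "F4": 0.02, "F5": 0.02,
}

def combined_score(scores):
    """Weighted average S = sum_i w_i * s_i over all 24 indicators.

    `scores` maps each indicator code to a percentage in [0, 100].
    """
    return sum(WEIGHTS[ind] * scores[ind] for ind in WEIGHTS)

def scale_point(percentage):
    """Map a percentage to a scale point per the ranges stated above.

    Percentages below 40 are also treated as Beginning here (our
    assumption; the stated ranges start at 40).
    """
    if percentage >= 90:
        return "A"  # Accomplished
    if percentage >= 80:
        return "C"  # Competent
    if percentage >= 70:
        return "D"  # Developing
    return "B"      # Beginning
```

For example, a student scoring 85% on every indicator obtains a combined score of 85.0, which maps to the Competent level.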
The capstone project assessment criteria cover aspects related to global and regional social impacts; understanding of integrity and ethical issues, legal implications, and regulations; management; and effective communication skills. For some of the created indicators, we are inspired by the rubrics developed in [2, 14, 15]. Although our framework is primarily created for assessing computer science CPs, minor modifications enable its use in similar disciplines.
In Tables 3 through 8, we delve into the details surrounding Criteria A through F. Each row in these tables represents a relevant mapping to an ABET outcome. The columns in each table describe the attainment per scale point (Beginning, Developing, Competent, or Accomplished). We present an overview of the content of these tables as follows. Criterion A in Table 3 carries a significant percentage of the overall score, as it addresses several focal points related to the overall quality of the project. The stated rubrics cover the various stages of the project from beginning to end. The indicators dwell on evidence of a thorough literature survey, of a robust understanding of computing principles and techniques, and of a sound methodology that is interpretable and yet creative and original. The indicators investigate whether there has been enough exploration of alternative research solutions, and to what extent the team has conducted benchmarks against competitor solutions.
Criterion A is also concerned with the extent to which appropriate hardware and/or software tools have been exploited, and finally, with the level to which results are interpretable and future work and improvements are identified. Beginner levels are those mostly lacking in all these indicators, whereas competent levels meet those criteria and beyond.
Criterion B in Table 4 probes the social impact that the proposed CP aims to achieve. The indicators investigate the extent to which the project addresses issues of social impact, examples of which include poverty, education, or crisis management, to name a few, and the extent to which the team exhibits awareness of the high-risk and/or low-resourced settings surrounding our society. Further indicators of this criterion investigate the level to which the project can be adapted for use by people without the relevant technical background, and the extent to which the team evaluates computing solutions that consider global and regional societal factors. Competent students are expected to demonstrate exemplary awareness of societal challenges and to offer solutions of high impact that are of utility and can be adopted by lay people from all walks of life.
Criterion C in Table 5 assesses the understanding and application of integrity; ethical and legal implications; and regulations. To that end, competent students exhibit consideration of and compliance with professionalism and integrity, especially with team members and the advisor. Moreover, a team member is to abide by the regulations that govern the project and its data, and show proper adherence to handling guidelines.
Criterion D in Table 6 is aimed at assessing the management skills within the teamwork, as well as the level to which the student has individually contributed to the project and taken responsibility for sub-tasks. Also, the indicators tackle the time management skills required for achieving major milestones in a timely fashion. The indicators stipulated under Criterion D require the assessment of the project supervisor exclusively, as external examiners have no way to monitor those aspects of the project. Competency at this criterion requires that a student demonstrate active participation in the project as well as a strong initiative leading up to monumental ideas in a timely fashion.
Written communication is addressed by Criterion E in Table 7. The rubrics begin by addressing the organization and the logical order and coherence of ideas. A competent student exhibits a solid logical rationale and smooth transitioning among ideas, as well as a highly relevant body of information. The rubrics then examine the writing style, involving the choice of words as well as grammatical proficiency and the readability of the written document. A competent student has a compelling writing style that captivates the reader throughout. The rubrics finish with examining the use of references and the level to which the writer provides citations in the text, is accurate in referring to the citations, and chooses relevant and impactful literature references.
Table 8 presents the last criterion, assessing oral and presentation communication skills, as evident from the student’s own slides as well as their delivery of the presentation. Particularly, we pay attention to the mechanics manifested in the slides, the extent to which they are effectively written, and the sequencing and pace of topics in the presentation. We also examine the actual delivery, including voice and tone, as well as body language. The engagement with the audience and the level to which the responses to questions are appropriate are also addressed. A student is competent at this criterion if they present extremely creative and well-written slides in an engaging manner, show confident presence on stage and excellent engagement with the audience, and can navigate through and adapt the presentation considering real-time responses from the audience.
A weighted aggregation of indicator scores produces the overall percentage grade per student. In addition, simple averaging using the indicator mapping presented in Table 2 enables the calculation of attainment scores per indicator, criterion, and student outcome, as sketched below. In Section 5, the benefits, challenges, results of deployment, and validation of the proposed framework are investigated.
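A minimal sketch of this averaging, using the indicator-to-SO mapping of Table 2 (our illustration; it assumes SO attainment is the simple average of the mapped indicator scores, as described above):

```python
# Indicator-to-SO mapping from Table 2.
SO_MAP = {
    1: ["A2", "A6"],
    2: ["A1", "A3", "A4", "A5", "A7"],
    3: ["E1", "E2", "E3", "F1", "F2", "F3", "F4", "F5"],
    4: ["C1", "C2", "C3"],
    5: ["D1", "D2", "D3"],
    6: ["A2", "A3", "B1", "B2", "B3"],
}

def so_attainment(scores):
    """Simple average of the indicator scores mapped to each SO."""
    return {
        so: sum(scores[ind] for ind in inds) / len(inds)
        for so, inds in SO_MAP.items()
    }
```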


Table 3. Criterion A (Content); rubric is partly inspired by the tool presented in [2].

A1. Summary, comparisons and evaluations of various concepts, research findings and current theories and models in core content areas of computer sciences and computing literature in general. [SO2]
Beginning: Literature review is poor in its time-span coverage, quality of publishing venues, as well as breadth and depth of topics. Relevance of topics discussed is poor. Attempt to support assertions with evidence is poor. Content is excessively marred with errors.
Developing: Literature review is satisfactory in its time-span coverage, quality of publishing venues, as well as breadth and depth of topics. Relevance of topics discussed is satisfactory. Attempt to support assertions with evidence is satisfactory. Content is frequently marred with errors.
Competent: Literature review is good in its time-span coverage, quality of publishing venues, as well as breadth and depth of topics. Relevance of topics discussed is good. Attempt to support assertions with evidence is good. Content is occasionally marred with errors.
Accomplished: Literature review is excellent in its time-span coverage, quality of publishing venues, as well as breadth and depth of topics. Relevance of topics discussed is excellent. Attempt to support assertions with evidence is excellent. Content is occasionally marred with errors.

A2. Identification of principles and techniques that are relevant to the project and ability to apply them within specific problem domain. [SO1, SO6]
Beginning: Demonstrates a basic understanding of principles; fails to apply them within the specific problem domain.
Developing: Includes basic principles and techniques relevant to the project but misses some others. Fails to develop a complete theoretical or design framework for the project.
Competent: Provides a good computational and technological framework for the project; applies principles and techniques correctly to the problem domain.
Accomplished: Renders the project completely grounded in computational principles and technologies; applies them to the problem correctly and clearly establishes their relevance.

A3. Interpretability, creativity, and originality of the adopted methodology and the developed solution. [SO2, SO6]
Beginning: The interpretability of the methodology is poor. The creativity of solutions is poor. The novelty of the proposed work is completely lacking and there is no impact observed to the work proposed.
Developing: The interpretability of the methodology is satisfactory. The creativity of solutions is satisfactory. The proposed work has some novelty and there is some mild impact observed to the work proposed.
Competent: The interpretability of the methodology is good. The creativity of solutions is good. The proposed work has evident novelty and there is adequate impact observed to the work proposed.
Accomplished: The interpretability of the methodology is excellent. The creativity of solutions is excellent. The proposed work has substantial novelty and there is an extremely promising impact observed to the work proposed that can propel it into multiple directions.

A4. Alternative research solutions, if applicable, and benchmarking with competitors. [SO2]
Beginning: Presents only one alternative research solution or gives clearly infeasible alternatives. Omits reference to competitors entirely.
Developing: Experiences shortcomings in exploring and identifying alternative research solutions. Attempts to refer to competitors but omits benchmarking the results.
Competent: Identifies alternative solutions to some fair degree and attempts to benchmark against competitor solutions.
Accomplished: Achieves the final design after reviewing reasonable alternatives. Presents comprehensible/conclusive benchmarks.

A5. Identification, mastering, and use of hardware/software tools. [SO2]
Beginning: Suffers from serious deficiencies in understanding the correct selection and/or the mastering and use of hardware and software tools.
Developing: Demonstrates minimal application, mastering, and/or use of appropriate hardware and software tools.
Competent: Masters hardware and software tools and uses them with effectiveness to develop designs. Further improvement could be made.
Accomplished: Masters hardware and software tools and uses them highly effectively to develop and analyze designs. Final product is highly professional.

A6. Robustness and interpretability of results. [SO1]
Beginning: Almost all the experiments and tests are inconclusive; results are incomplete and hard to interpret.
Developing: Testing of the design is somewhat fair; results are inconclusive but not usable for further investigation. Attempts to interpret results are made but not to a satisfactory level.
Competent: Testing is adequate; analysis and results are acceptable, complete, and sufficiently interpretable to an expert.
Accomplished: Testing is thorough; analysis and results are robust, usable, and highly interpretable even to a non-expert.

A7. Identification of further improvements and future work. [SO2]
Beginning: No direction for further improvement is provided.
Developing: One or two ideas for future expansion are listed but may not be practical.
Competent: Several ideas, of which one or two are practical and adequate, for further improvements are explained.
Accomplished: Several novel directions for important expansions of the current ideas are thoroughly explained.

Table 4. Criterion B (Impact of the CP on the Social Good); rubric is partly inspired by the tool presented in [2].

B1. Addressing a problem that stems from a social need, and thus has a social impact. [SO6]
Beginning: No addressing of needs on issues of social impact, examples of which can include poverty, education, or crisis management, to name a few. No awareness of the high risks and/or low resourced settings surrounding our society.
Developing: Limited addressing of needs on issues of social impact, examples of which can include poverty, education, or crisis management. Limited awareness of the high risks and/or low resourced settings surrounding our society.
Competent: Addresses needs on a range of issues of social impact, examples of which can include poverty, education, or crisis management. Exhibits awareness of the high risks and/or low resourced settings surrounding our society.
Accomplished: Addresses in an impactful manner needs on issues of great social relevance, examples of which can include poverty, education, or crisis management. Exhibits conscientious awareness of the high risks and/or low resourced settings surrounding our society.

B2. Adaptability and use by people without an engineering, computing, or technology background. [SO6]
Beginning: Impossible to adapt for use by people without an engineering, computing, or technology background.
Developing: Limited potential for adaptation for use by people without an engineering, computing, or technology background.
Competent: Sufficiently adapts for use by people without an engineering, computing, or technology background.
Accomplished: Highly adapts for use by people without an engineering, computing, or technology background.

B3. Evaluation of computing solutions that consider global and/or societal factors relevant to the region. [SO6]
Beginning: No evaluation of computing solutions that consider global and/or societal factors relevant to the region.
Developing: Limitedly evaluates computing solutions that consider global and/or societal factors relevant to the region.
Competent: Evaluates computing solutions that consider global and/or societal factors relevant to the region.
Accomplished: Efficiently evaluates computing solutions that consider global and/or societal factors relevant to the region.


Table 5. Criterion C (Integrity and Ethical and Legal Implications); rubric is partly inspired by the tool presented in [2].

C1. Demonstration of scientific and professional ethics, especially in the interaction with team members and advisor. [SO4]
Beginning: Lack of demonstration of scientific and professional ethics, especially with team members and advisor.
Developing: Exhibits incomplete understanding but still complies with principles of scientific, professional, and/or academic integrity, especially with team members and advisor.
Competent: Exhibits understanding and compliance with principles of scientific, professional, and/or academic integrity, especially with team members and advisor.
Accomplished: Clearly documented understanding of compliance with all relevant ethical guidelines; clearly establishes authorship of the project work.

C2. Adherence to legal principles, rules, and code of conduct. [SO4]
Beginning: Lack of understanding of legal principles and implications. Poor adherence to code of conduct.
Developing: Exhibits incomplete understanding but still complies with legal principles and/or implications. Shows reasonable adherence to code of conduct.
Competent: Exhibits understanding and complies with legal principles and/or implications. Shows proper adherence to code of conduct.
Accomplished: Clear documentation of compliance with all relevant legal guidelines and implications. Demonstration of exemplary adherence to code of conduct.

C3. Attainment of the criteria and abiding by the regulations that govern the project and/or the data used in the project. [SO4]
Beginning: Lack of attainment of the criteria and poor abiding by the regulations that govern the project and/or the data used in the project. Whenever applicable, violation of IRB guidelines governing human research subjects.
Developing: Incomplete attainment of the criteria but some manifestation of abiding by the regulations that govern the project and/or the data used in the project. Whenever applicable, some attempt and recall of IRB guidelines governing human research subjects.
Competent: Exhibits attainment of the criteria and abiding by the regulations that govern the project and/or the data used in the project. Whenever applicable, shows adherence to IRB guidelines governing human research subjects.
Accomplished: Strict and explicit reference in the project towards the attainment of the criteria and abiding by the regulations that govern the project and/or the data used in the project. Whenever applicable, diligently assures the reader of the strict adherence to IRB guidelines governing human research subjects.


Table 6. Criterion D (Project Management and Teamwork Skills); rubric is partly inspired by the tool presented in [2].

D1. Ability to work individually, or as part of the team where appropriate, to formulate, analyze, design, and implement a significant project. [SO5]
Beginning: Unable to contribute effectively to the team effort. Individual contributions to the project fall below minimally accepted standards. By and large, always falling behind schedule. Deliverables are faulty on numerous occasions.
Developing: Is only marginally operating within the team. Contributions to the project are not significant despite exceeding minimal requirements. Somehow able to follow a timeline yet falls behind when taken by other commitments. Deliverables contain some faults.
Competent: Is working effectively as a team member. Contributions to the project are satisfactory. Can follow a timeline yet falls behind when taken by other commitments. Deliverables do not contain any faults but leave room for ample improvement.
Accomplished: Is instrumental in leading the team. Contributions to the project are significant. Strictly abides by a timeline. Deliverables are well-formulated, designed, and implemented.

D2. Contribution to the team project/work. [SO5]
Beginning: Individual contributions to the team are not relevant or useful, and do not address the team’s needs.
Developing: Only when prompted, embarks on contributing information to the team. Tries to provide some ideas but suggestions are not sufficiently developed to meet the team’s needs.
Competent: Can provide some basic and useful information to assist in the project and occasionally makes some useful suggestions to the team that meet its needs.
Accomplished: Is able to provide extremely relevant information to assist in the project. Systematically offers well developed and clearly expressed ideas that fall at the heart of what the project needs.

D3. Taking responsibility. [SO5]
Beginning: Takes no responsibility whatsoever and shows no initiative at all, relying on the other team members to do the work. By and large, misses meetings and, when present, demonstrates marginal participation.
Developing: Can perform assigned tasks but regularly needs reminders and prompts. Delegates the challenging parts of the project to others. Does not have a constructive presence during meetings.
Competent: Can perform all assigned tasks. Attends all meetings and is generally engaged in the discussions that take place then.
Accomplished: Is able to perform all assigned tasks highly effectively. Takes initiative in setting up meetings and is the lead participant in the discussions that take place then.


Table 7. Criterion E (Written Communication); rubric is partly inspired by the tool presented in [2].

E1. Organization and logic. [SO3]
Beginning: The information in the text has no logical order, is lacking in many important details, and is difficult to understand.
Developing: The text exhibits a weak organization. The presentation of ideas lacks coherence and shows no smooth transition between them.
Competent: The text exhibits a reasonable organization. The information in the text has somewhat of a logical backing and an attempt to provide a project rationale is made.
Accomplished: The written report exhibits strong clarity and a solid logical rationale. Transitioning among ideas is smooth. The information presented is at large very relevant and thorough, all resulting in a highly informative piece of text.

E2. Writing style (word choice, grammar, and sentence structure). [SO3]
Beginning: Choices of words and expressions are often misleading. Text suffers from numerous errors in grammar that compromise the clarity of the document.
Developing: Choice of words and level of grammatical proficiency are generally adequate. Yet, the document is still difficult to read.
Competent: The writing style and the general flow of the text are satisfactory.
Accomplished: The writing style is compelling and is able to captivate the reader till the end.

E3. Use of references. [SO3]
Beginning: Most references included are inaccurate and are not relevant. Almost inexistent attempt to provide citations in the text.
Developing: Most references provided are clearly indicated but have little impact in the literature. A conservative attempt to provide citations in the text is made.
Competent: Prior work is properly cited in most places where needed (e.g., when referring to theories, assumptions, and findings). Minor exceptions exist. References are accurate in referring to author names, journals or proceedings, volume numbers, page numbers, and year of publication. References have a modest impact in the literature.
Accomplished: Prior work is properly cited in most places where needed (e.g., when referring to theories, assumptions, and findings), with no exceptions whatsoever. References are accurate in referring to author names, journals or proceedings, volume numbers, page numbers, and year of publication. References have a great impact in the literature.


Table 8. Criterion F (Presentation and Oral Communication); rubric is partly inspired by the tool presented in [2].

F1. Mechanics. [SO3]
Beginning: Content of slides seems to be completely irrelevant, reflecting a lack of understanding of how the presentation should be crafted. Numerous mistakes appear in the presentation’s text. Speaker largely unsure how to flow from one slide to the next.
Developing: Slides are boring and largely ineffective, despite being largely error free.
Competent: Slides are generally good and convey key messages reasonably well.
Accomplished: Slides are extremely creative and expose the main aspects behind the project. Audience remains interested throughout the presentation.

F2. Organization. [SO3]
Beginning: Sequencing and pace of topics in the presentation seem to be in complete disarray, making it difficult to derive any clear conclusions.
Developing: Presentation oscillates between sometimes following an organized track and some other times not so. In general, though, one is still able to derive plausible conclusions.
Competent: Ideas are well organized and help the audience move along; the key points are presented; leads up to convincing conclusions.
Accomplished: The presentation is clear and slowly builds up in a focused manner. Details in the presentation are entirely relevant and help the audience derive a deep understanding of the topic.

F3. Delivery. [SO3]
Beginning: Speaks in an extremely low voice and mumbles. Too many filler words that distract the audience.
Developing: Speaks in a low voice but is only occasionally inaudible. Uses some distracting filler words but mostly articulates in a modest manner.
Competent: Speaks in a clear voice and delivers a generally effective presentation.
Accomplished: Speaks naturally and in a confident manner. Goes beyond merely conveying the message to enhancing it with the help of an effective tone, pitch, and body language.

F4. Relating to audience. [SO3]
Beginning: Reads most of the presentation from the slides or notes. Fails to maintain eye contact or maintain a catching body language. Completely oblivious to audience reactions.
Developing: Shy attempts to maintain eye contact and to move around in a catching manner. A modest attempt to improvise aside from the notes written in the presentation. Somehow aware of the presence of the audience (at the very least, those sitting closely). Very brief and somehow dismissive responses to audience questions.
Competent: Generally attempts to maintain eye contact and to move around in a catching manner. Is able to deliver the presentation as if conversing with the audience. Satisfactory interaction with the audience during Q and A.
Accomplished: Maintains excellent engagement with the audience throughout. Is able to modify the presentation on the spot as needed based on audience engagement, questions, and comments.

F5. Response to questions. [SO3]
Beginning: Generally unprepared to answer questions. Misunderstands most of the questions and fails to provide appropriate responses.
Developing: Partially prepared to answer questions. Understands most of the questions but demonstrates difficulty in providing correct or well-informed responses.
Competent: Demonstrates a clear understanding of the questions and is well prepared to answer them. Provides mostly correct and well-informed responses.
Accomplished: Fully prepared for questions, to the extent that the speaker can anticipate questions and respond with ample information beyond what’s needed. Demonstrates a deep understanding of the project’s intricacies and controversial topics.

5 Analysis and Evaluation

This section presents the results and evaluation of the proposed framework. In addition, the benefits of the proposed assessment framework and its challenges are discussed. The framework uses an assessment structure, formulation, and scoring similar to those used in [16]. In addition, several CPs covering an extensive selection of computer science problems are used to evaluate the assessment tool. The tool is also evaluated with several evaluators who have different experience and levels of expertise in different areas.

5.1 Case-study setup


The presented study includes seven projects with a total of 25 students and a typical project team size of 3-4 students. Each project has a single supervisor and is examined by a committee of four professors. The examination committee evaluates the project report, presentation, and the developed prototype during a demonstration. The supervisor examines all deliverables. The target passing score per student is 70% for the overall course grade; the same percentage is adopted as the target score of SOs.
A pilot study of a single program is applied to evaluate the proposed framework. The evaluated CPs are from an institution of higher education that adopts the American model of higher education. Data collection started with refining the documents and forms already in use at the institution. This led to balanced artifacts. The analytic version of the tool was then used to collect data. Training sessions were conducted for the evaluators to ensure they understood the rubrics. Then, data collection was conducted after calibration.

5.2 Analysis
As shown in Tables 9 and 10, most of the indicator and SO scores did not differ before and after calibration. Even for the indicators that had different scores before and after calibration, the 4-point rubric scale mapping did not change. This indicates that the rubric is clear enough to be used by evaluators before and after calibration without the fear of changing the 4-point score of any of the indicators. Based on the rater calibration, several improvements are identified. The improvements include the following:

 Add the identifier “ACM” for core areas in indicator A1 to specify the target core areas. ACM core areas are detailed in [17].
 Add "during the design phase" to indicator A2 to identify the specific stage of development.
 Remove "impact" from indicator A3 to avoid overlapping measurements with indicator B1.
 Specify the code of conduct in C2 as the "Code of conduct of the Institution".
 Replace "good" under F1 (Competent level) with "well-designed" to better match the meaning intended by the rubric designers.
The measurements made for the proposed indicators identified additional opportunities for improvement at the SO level (see Table 10), namely for SO4 and SO6. The identified improvement concerns the students’ abilities to recognize professional responsibilities and make informed judgments in computing practice based on legal and ethical principles. In addition, the needed improvement is related to the students’ effective application of computer science theory and software development fundamentals to produce computing-based solutions. At the program level, the attainment scores are usually combined with triangulated measurements from other courses to reach a final attainment score and improvement decisions.
As presented in Section 3, the proposed framework enables evaluating student performance as the weighted sum of indicator scores. The results of the evaluated 25 students ranged between 70% and 90%. Although all students met the projected passing grade of the course with their overall score, the tool allowed for the identification of improvements to their intended abilities at the indicator, criterion, and SO levels.
Upon incorporating the suggested modifications to the rubric, the proposed framework is ready for deployment within Computer Science programs that are aligned with ABET requirements and ACM recommendations. Moreover, tuning and customizing the framework is straightforward. Customizations can be applied to the criteria, indicators, rubrics, and the choice of the aggregating statistical formulation.

5.3 Evaluation
Several benefits exist for the framework proposed in this paper. The framework limits evaluator bias and uncertainty, thereby promoting quality in assessment. This is due to the clear measurement structure resulting from the conceptual basis of the framework. At the program level, a main source of measurements is the integrated key CP indicators. In addition, conclusions at different levels of abstraction are possible using the framework. Measurements are made at the indicator, criterion, as well as ABET SO levels of abstraction. The measurement structure and statistics in the framework are not limited to computer science and can be applied to any other discipline. Indicators and rubrics are comprehensive and rich, yet simple enough to be understood by faculty members as well as students. A smooth transition in descriptors is used in the rubrics, which makes it easier for evaluators to pick the appropriate descriptor.
However, the framework poses several challenges to its implementation. A prerequisite for implementing the framework is that a culture of assessment be present. The evaluators should first be trained on how to use the rubrics. They should also be committed to thoroughly reviewing all artifacts being measured, such as presentations, reports, and essays. In addition, the time constraints placed on the evaluation process are also considered a great challenge. To that end, the allocation of four examiners per project can be reconsidered and replaced by a smaller evaluation committee.

6 Conclusion

Computer Science programs rely heavily on senior CPs to demonstrate the student abilities accumulated throughout the program. CPs are rich in requirements and deliverables; this makes them of unique importance in evaluating student performance and assessing their attainment of SOs. In this investigation, we present a framework for systematically, accurately, effectively, and jointly evaluating student performance, assessing learning outcomes, and accordingly assessing SOs. The hierarchy of the developed framework is of three levels that comprise criteria, indicators, and an extensive set of analytic rubrics. A single-institution pilot study is executed to calibrate the proposed rubrics. The study includes several CPs from a wide spectrum of computer science topics. The tool tuning attained a small variance in scores after the calibration of raters. A difference of 2% is found between the scores before and after calibration. The proposed framework is easy to deploy and was found to effectively eliminate subjectivity in assessment and evaluation. Future work includes carrying out a study that involves multiple programs.

Table 9. Assessment results in percent and their corresponding scale point: Beginning (B), Developing (D), Competent (C), and Accomplished (A).

Indicator (Weight)  Before Calibration (BC)  After Calibration (AC)
A.1 (10%)   32 (B)    32 (B)
A.2 (10%)   100 (A)   76 (A)
A.3 (10%)   72 (D)    72 (D)
A.4 (10%)   52 (B)    56 (B)
A.5 (10%)   100 (A)   100 (A)
A.6 (5%)    84 (C)    84 (C)
A.7 (5%)    60 (D)    60 (D)
B.1 (2%)    44 (B)    44 (B)
B.2 (2%)    60 (D)    60 (D)
B.3 (1%)    16 (B)    16 (B)
C.1 (2%)    60 (D)    60 (D)
C.2 (2%)    16 (B)    16 (B)
C.3 (1%)    72 (D)    72 (D)
D.1 (3%)    84 (C)    84 (C)
D.2 (3%)    84 (C)    84 (C)
D.3 (4%)    84 (C)    84 (C)
E.1 (4%)    56 (B)    56 (B)
E.2 (4%)    60 (D)    60 (D)
E.3 (2%)    32 (B)    32 (B)
F.1 (2%)    84 (C)    84 (C)
F.2 (2%)    100 (A)   100 (A)
F.3 (2%)    100 (A)   100 (A)
F.4 (2%)    100 (A)   100 (A)
F.5 (2%)    84 (C)    84 (C)
Weighted Average   70.32%   68.32%

Table 10. Mapping scores to ABET SOs before and after calibration.

SO  CP Indicators                     BC      AC
1   A2, A6                            92%     80%
2   A1, A3, A4, A5, A7                83.2%   78.4%
3   E1, E2, E3, F1, F2, F3, F4, F5    77%     77%
4   C1, C2, C3                        49.3%   49.3%
5   D1, D2, D3                        84%     84%
6   A2, A3, B1, B2, B3                58.4%   53.6%
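As a sanity check on the aggregation, the overall weighted averages in Table 9 can be reproduced from the indicator scores and the Table 1 weights. The sketch below (ours, not part of the original study) reuses the illustrative WEIGHTS table and combined_score function from the sketch in Section 4.

```python
# Before-calibration (BC) indicator scores from Table 9.
bc_scores = {
    "A1": 32, "A2": 100, "A3": 72, "A4": 52, "A5": 100, "A6": 84, "A7": 60,
    "B1": 44, "B2": 60, "B3": 16,
    "C1": 60, "C2": 16, "C3": 72,
    "D1": 84, "D2": 84, "D3": 84,
    "E1": 56, "E2": 60, "E3": 32,
    "F1": 84, "F2": 100, "F3": 100, "F4": 100, "F5": 84,
}

# After calibration (AC), only A2 and A4 changed (see Table 9).
ac_scores = {**bc_scores, "A2": 76, "A4": 56}

print(combined_score(bc_scores))  # 70.32 (up to floating-point rounding)
print(combined_score(ac_scores))  # 68.32 (up to floating-point rounding)
```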

7 References
[1] Bachnak, R. (2005). An approach for successful capstone projects. Proceedings of the Frontiers in Education 35th Annual Conference, Indianapolis, IN, pp. F4D-18.
[2] Yousafzai, J., Damaj, I., El Abd, M. (2015). A unified approach for assessing capstone design projects and student outcomes in computer engineering programs. 2015 IEEE Global Engineering Education Conference (EDUCON), Tallinn, pp. 339. https://doi.org/10.1109/EDUCON.2015.7095993
[3] Damaj, I., Yousafzai, J. (2019). Effective assessment of student outcomes in computer engineering programs using a minimalistic framework. International Journal of Engineering Education, Tempus Publications, 35(1.A): 59-75.
[4] Olarte, J. J., Domínguez, C., Jaime, A., García-Izquierdo, F. J. (2016). Student and staff perceptions of key aspects of computer science engineering capstone projects. IEEE Transactions on Education, 59(1): 45-51. https://doi.org/10.1109/TE.2015.2427118
[5] Pérez, C. D., Elizondo, A. J., García-Izquierdo, F. J., Olarte Larrea, J. J. (2012). Supervision typology in computer science engineering capstone projects. Journal of Engineering Education, 101(4): 679-697. https://doi.org/10.1002/j.2168-9830.2012.tb01124.x
[6] Ahmad, E., Raza, B., Feldt, R. (2011). Assessment and support for software capstone projects at the undergraduate level: A survey and rubrics. Frontiers of Information Technology, 19-21 December 2011, Islamabad, Pakistan, pp. 25-32. https://doi.org/10.1109/FIT.2011.13
[7] Bihari, T., Malkiman, I., Chaabouni, M., Bolinger, J., Ramanathan, J., Ramnath, R., Herold, M. (2011). Enabling scalability, richer experiences and ABET-accreditable learning outcomes in computer science capstone courses through inversion of control. 41st ASEE/IEEE Frontiers in Education Conference, October 12-15, 2011, Rapid City, SD. https://doi.org/10.1109/FIE.2011.6142872
[8] Chen, C., Hong, Y., Chen, P. (2014). Effects of the meetings-flow approach on quality teamwork in the training of software capstone projects. IEEE Transactions on Education, 57(3): 201-208. https://doi.org/10.1109/TE.2014.2305918
[9] Pérez, C. D., Elizondo, A. J., García-Izquierdo, F. J., Olarte Larrea, J. J. (2012). Supervision typology in computer science engineering capstone projects. Journal of Engineering Education, 101(4): 679-697. https://doi.org/10.1002/j.2168-9830.2012.tb01124.x
[10] Utesch, M. C., Hauer, A., Heininger, R., Krcmar, H. (2017). Automated stock trading: developing the serious game FSTG to teach the topic of finite state machines. International Journal of Engineering Pedagogy, 7(1). https://doi.org/10.3991/ijep.v7i1.6524
[11] Campbell, B., Voelker, J., Kremer, C. (2015). An analysis of engineering educational standards and outcomes achieved by a robotics summer camp experience. International Journal of Engineering Pedagogy, 5(4): 12-21. https://doi.org/10.3991/ijep.v5i4.4713
[12] Baumgartner, I. (2014). A set of best practices to design face-to-face teaching sessions for technology-centered university-level computing courses. International Journal of Engineering Pedagogy (iJEP), 4(4): 59-66. https://doi.org/10.3991/ijep.v4i4.4000
[13] ABET CAC Criteria 2018-2019, http://www.abet.org/accreditation/accreditation-criteria/cac-18-19/, accessed April 4, 2019.
[14] CSE Program: Objectives, Outcomes, Assessments, Program Improvements, The Ohio State University, accessed October 10, 2019. https://cse.osu.edu/content/cse-program-objectives-outcomes-assessments-program-improvements
[15] Learning Outcomes Assessment, University of Idaho, accessed October 10, 2019. http://www.webpages.uidaho.edu/ira/assess
[16] Damaj, I., Ater Kranov, A. (2017). Sustainable practices in technical education: A quality assurance framework. International Journal of Engineering Education, 33: 1627-1642.
[17] ACM Curricula Recommendations, Curriculum Guidelines for Undergraduate Programs in Computer Science, accessed October 10, 2019. https://www.acm.org/binaries/content/assets/education/cs2013_web_final.pdf

8 Authors

Fatima K. Abu Salem received her BS and MS degrees in Mathematics from the American University of Beirut, Beirut, Lebanon, and the D.Phil. degree in Computing from the University of Oxford, Oxford, U.K. She is currently an Associate Professor with the Computer Science Department, American University of Beirut. Her current research interests include computer algebra, parallel computing, and data science for the public good. Email: [email protected]
Issam W. Damaj received his PhD in Computer Science from London South Bank University, London, UK, his ME in Computer and Communications Engineering from the American University of Beirut, and his BE in Computer Engineering from Beirut Arab University (BAU), Beirut, Lebanon. He is an Associate Professor with the Electrical and Computer Engineering Department, BAU, where he is also the Director of the Center for Quality Assurance. His research interests include hardware design, smart cities, and engineering education. Email: [email protected]


Rached N. Zantout received his BE from the American University of Beirut, Lebanon in 1988, his MSc from the University of Florida in 1990, and his PhD from the Ohio State University in 1994, all degrees being in Electrical Engineering. He is a Professor at the Electrical and Computer Engineering Department of the College of Engineering at Rafik Hariri University, Mechref, Lebanon. His research interests include Robotics, Artificial Intelligence, and Natural Language Processing. Email: [email protected]
Lama A. Hamandi received her BE from the American University of Beirut, Lebanon, and her MSc and PhD degrees from the Ohio State University, all degrees being in Electrical Engineering. She is a Senior Lecturer at the Electrical and Computer Engineering Department of the Maroun Semaan Faculty of Engineering and Architecture at the American University of Beirut, Beirut, Lebanon. Her research interests include Parallel Processing, Natural Language Processing, and Computational Linguistics. Email: [email protected]

Article submitted 2019-10-12. Resubmitted 2019-11-26. Final acceptance 2019-11-27. Final version
published as submitted by the authors.
