Assessment from an historical perspective [Original paper written in 2003 for a faculty workshop – Judy Redder]
Higher education has shifted from a traditional view of what instructors provide to a concentration on the practical concerns of what students learn, achieve, and become. The paradigm shift from teacher-centered to learner-centered education is output-focused, in that the primary measure of program success is what students actually know and are able to do. Learner-centered education is also competency-based, with learning objectives/outcomes attached to the important skills and knowledge defined by the institution, academic programs, and departments. Learner-centered education is committed to continual improvement through ongoing assessment of student learning (Huba & Freed, 2000).
To appreciate this paradigm shift in education, I believe it helps to understand assessment from an historical perspective. Ralph Tyler (1949) pioneered the concept of an objective-based approach to educational evaluation. In the 1930s, he operationalized his theories in an eight-year study focusing on alternative teaching methods. From his study he expanded his evaluation work to include:
1. defining appropriate learning objectives;
2. establishing useful learning experiences;
3. organizing learning experiences to have a maximum cumulative effect; and
4. evaluating the curriculum and revising those aspects of the curriculum that did not prove to be effective (Worthen & Sanders, 1987).
Assessment of student learning outcomes plays nicely off Tyler's work, as does Bloom's 1956 Taxonomy of Educational Objectives. Bloom's taxonomy is used routinely as the framework for classifying student learning outcomes (Worthen & Sanders, 1987). In the 1960s, educational psychologists began to take a systems look at teaching and learning. Their research focused on goals and objectives, the learner, the instructor, the material, student performance assessments, and the learning environment. They concluded that a "critical systems concept included the idea of feedback; that is, system performance could be improved by collecting data and feeding it back into the system to regulate and refine the system" (Carey & Gregory, 2003, p. 216).
As one might guess, the use of data as part of a feedback system encouraged measurement and evaluation specialists such as Cronbach (1963) and Scriven (1967) to define "evaluation for improved student learning outcomes" as formative assessment. Their concepts of instructional systems and formative evaluation are largely what we now refer to as outcomes assessment. And, in the 1990s, outcomes assessment became linked with educational accountability.
The maturation of outcomes assessment has followed two parallel, yet independent, paths: (1) outcomes assessment for accreditation (also referred to as assessment for accountability, and associated with a regulatory process); and (2) outcomes assessment for improvement of teaching, learning, and programs (promoted by Tom Angelo, Patricia Cross, and others). The theory and philosophy of outcomes
Prepared by Judy Redder 1
assessment in these two parallel paths of development are remarkably similar, and Carey & Gregory (2003) suggest that there is a natural tendency for higher education at many levels to incorporate the two paths into uniform policies and practices.
It is at this juncture that some discomfort with assessment seems to surface. Peterson and Einarson (2001) surveyed 2,523 institutions (with a 55% return rate) to understand how higher education approaches, supports, and promotes undergraduate student assessment, and how institutions use assessment data. They found that institutions rarely used assessment data to reward or punish academic units in the budget process, and that institutions seldom linked faculty evaluation and reward policies to assessment involvement. Barak & Sweeney (1995) have suggested that institutions most often use assessment information in decisions around program review and least often in decisions regarding faculty reward (Carey & Gregory, 2003).
It is interesting to note that strategic planning and quality management tools have used outcomes assessment for years in both the public and private sectors, and higher education has shared and adopted many of these tools for its assessment practices. According to Seymour (1996), educational assessment is basically about doing what is necessary to achieve continuous improvement in the educational process.
Richard Frye, Ph.D., of Western Washington University posits that a "Simple Theory of Excellence" includes Best Policies and Practices, plus High Student Involvement.

A Simple Theory of Excellence

High Student Involvement: Academic; Student/Faculty; Student to Student; Cultural Diversity; Community Service

Best Policies & Practices: Application; Faculty Modeling; New Situations; Collaboration; Rich Feedback; Curricular Goals

High Student Involvement + Best Policies & Practices = BEST LEARNING
Richard Frye's assessment theory is supported by the work of Wiggins, Tom Angelo, Patricia Cross, the AAHE Assessment Forum (1992), Sandy Astin, Trudy Banta, Clincy, Perry, Guba, and others. Frye's work, like theirs, is grounded in student learning theory.
However, to address the specific question, "How do we know that programs can be improved using student learning assessment?" I contacted Barbara Walvoord, Trudy Banta, and others to learn directly from them and their expert views on this question. Here are some of their candid responses:
"All of our training as academics teaches us to gather data systematically and responsibly, and use that data, rather than thinking we understand the problem or can make changes by the seat of our pants. Assessment is part of the academic way of reasoning."
“Assessment is a tool whereby those who are motivated to address
problems and make changes can come to understand the nature and
contributing factors to their problems and can discern what might be
helpful. In the absence of the motivation to learn and change, assessment
is a futile exercise.”
"In an individual institution or department, all of us can point to changes that have worked well because they were based on good data and careful …"
“I also might add, in thumbing through numerous journals, surfing the
web, doing literature searches, library searches, etc., I found that much of
the current research that connects program improvement to assessment of
student learning outcomes is qualitative and quantitative in nature and
consists of case studies and action research.”
Fredericks Volkwein (Penn State) hit the nail on the head with his statement that "ideal assessment processes emerge from a partnership between administration and faculty. While leadership is needed, it is the faculty that must be at the center of any institutional outcomes assessment strategy. The institution facilitates faculty involvement through information, guidance, and mechanisms of support."
Alexander, C.N. & Langer, E.J. (Eds.). (1990). Higher Stages of Human Development:
Perspectives on Adult Growth. New York: Oxford University Press.
Angelo, T. & Cross, P. (1993). Classroom Assessment Techniques: A Handbook for Faculty (2nd ed.). San Francisco: Jossey-Bass.
Banta, T.W. & Associates. (1993). Making a Difference: Outcomes of a Decade of Assessment in Higher Education. San Francisco: Jossey-Bass.
Banta, T.W. (1996). Assessment in Practice: Putting Principles to Work on College Campuses. San Francisco: Jossey-Bass.
Carey, J., & Gregory (2003). Toward improving student learning: Policy issues and design structures in course-level outcomes assessment. Assessment & Evaluation in Higher Education, 28(3).
Dick, W., & Carey, L. (1978). The Systematic Design of Instruction. Glenview, IL: Scott, Foresman.
Seymour, D. (1996). High Performing Colleges. Maryville, MO: Prescott Publishing Co.
Scriven, M. (1967). The methodology of evaluation. In R. Tyler, R. Gagne, & M. Scriven (Eds.), Perspectives of Curriculum Evaluation (AERA Monograph Series on Curriculum Evaluation). Chicago, IL: Rand McNally.
Tyler, R. (1949). Basic Principles of Curriculum and Instruction. Chicago: University of Chicago Press.
Walvoord, B., & Anderson, V. (1998). Effective Grading: A Tool for Learning and Assessment. San Francisco: Jossey-Bass.
Wiggins, G.P. (1993). Assessing Student Performances: Exploring the Purpose and
Limits of Testing. San Francisco: Jossey-Bass.
Worthen, B. & Sanders, J. (1998). Educational Evaluation: Alternative Approaches and Practical Guidelines. White Plains, NY: Pitman Publishing, Inc.
Additional Must-Read Resources
9 Principles of Good Practice for Assessing Student Learning.
The nine principles form the basis for distinguishing good assessment practices
from the not-so-good.
"What Do We Know About Students' Learning And How Do We Know It?"
An interesting discussion on teaching and learning, by Patricia Cross.
“Implementing the Seven Principles: Technology as Lever”
A discussion of the Seven Principles for Good Practice in Undergraduate
Education in the context of communication and information technologies, by
Steve Ehrmann and Arthur Chickering.
“Organizing for Learning: A New Imperative”
A terse but useful summary of what is known about learning, promoting learning,
and institutional change, by Peter Ewell.
“The New Conversations About Learning: Insights From Neuroscience and
Anthropology, Cognitive Science and Work-Place Studies”
An in-depth exploration of the implications of new knowledge about learning for
the organization and practice of higher education, by Ted Marchese.