Unit 8: Formative Evaluation, Revision, and Summative Evaluation

AIL 601
George E. Marsh II, Professor

Discrepancy Evaluation Model (DEM) Components

The discrepancy evaluation model is based on standards that are mutually agreed to by all members of a project at the outset.  The standards are the most important part of development, and all later evaluations are based on the discrepancy between standards and actual performance.  This is essentially a systems theory approach with three major divisions: input, process, and output.  A DEM may show that a program, or parts of it, failed for lack of sufficient inputs, which could be money, qualified personnel, or other necessary prerequisites.  After standards are developed, it is possible at any point in the process to determine whether a discrepancy exists, which can be done in many ways with available data and instruments developed for the purpose.  In theory, the discrepancy identified in a formative assessment can be used to determine what needs to be done to come up to standards, or even to eliminate some element and start over.


A standard can be expressed in the form of objectives or outcomes, and it can be enumerated by reference to specific parts of the design when the project is planned.  In actual practice, the standards for a project should be stated in such a way that it is possible to determine whether they have been met.  This is similar to the use of objectives in behavioral terms.  If they cannot be so stated, then some consideration needs to be given to defining terms.  If a standard is that subjects will "appreciate" classical music, some attention needs to be given to what is meant by appreciation.  Otherwise, how will you know if you reach the standard?  For many administrative tasks the standard may be simple and straightforward, such as organize a committee, hire a staff, open an office, and so forth, but even in these cases there may be more descriptive or qualifying terms.  For example, an office may have to meet certain criteria imposed by the ADA or another external agency.
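The point about measurable standards can be made concrete with a small sketch.  Here the vague standard "appreciate classical music" is operationalized as a checkable criterion; the particular measures and thresholds (concert attendance, a 5-point survey rating) are invented for illustration and are not part of the DEM itself.

```python
# A vague standard becomes evaluable only once it is operationalized.
# Here "appreciation" is hypothetically defined as attending at least
# two concerts and rating enjoyment >= 4 on a 5-point survey item.

def meets_appreciation_standard(concerts_attended: int, enjoyment_rating: int) -> bool:
    """Illustrative, operationalized version of 'appreciates classical music'."""
    return concerts_attended >= 2 and enjoyment_rating >= 4

print(meets_appreciation_standard(3, 5))  # meets the operational criterion
print(meets_appreciation_standard(0, 2))  # does not meet it
```

However the criterion is defined, the essential move is the same: the standard is restated so that, given data, a yes-or-no judgment is possible.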


Performance is simply a matter of determining whether the project meets standards, and it can be considered within the divisions of input, process, and output.


The measure of the difference between a standard and what is actually happening, or has happened, is a discrepancy. If a project activity does not meet standards, then there is a discrepancy.
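The standard-performance-discrepancy relationship can be sketched in a few lines.  The standards and measurements below are invented for illustration; in an actual project each would come from the design agreed to at the outset.

```python
# Minimal sketch of a discrepancy evaluation: for each division of the
# system (input, process, output), compare actual performance against
# the agreed standard and report any shortfall.

standards = {            # hypothetical agreed-upon standards
    "input":   {"qualified_staff": 5, "budget": 100_000},
    "process": {"lessons_delivered": 40},
    "output":  {"mean_posttest_score": 80},
}

performance = {          # hypothetical measured performance
    "input":   {"qualified_staff": 4, "budget": 100_000},
    "process": {"lessons_delivered": 40},
    "output":  {"mean_posttest_score": 72},
}

def find_discrepancies(standards, performance):
    """Return {division: {measure: shortfall}} wherever performance < standard."""
    report = {}
    for division, measures in standards.items():
        for measure, required in measures.items():
            actual = performance[division][measure]
            if actual < required:
                report.setdefault(division, {})[measure] = required - actual
    return report

print(find_discrepancies(standards, performance))
# {'input': {'qualified_staff': 1}, 'output': {'mean_posttest_score': 8}}
```

The report shows, for instance, that the program is one qualified staff member short on the input side, the sort of finding that in formative use would prompt either corrective action or a revision of the standard.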

Data Collection

Production Function

Much of the research in education has been aimed at finding a production function (a mathematical expression of the relationship between inputs and outputs in education), similar to that used in manufacturing.  It must be remembered that the shape of modern education has its roots in scientific management, so research approaches have regarded schooling as something done to students, instead of regarding education as something that students do for themselves. Traditional schools, school-based management, total quality management (TQM), charter schools, voucher systems, and other recent trends in education are still based on the concept of the school as a manufacturer, with students as the raw materials.  Many educational research paradigms are predicated on the assumption that the task is to relate variations in measured achievement or attitudes of pupils to variations in the observed behaviors or other traits of teachers, which have included gender, age, amount of training, type of training, personality, type of questions used, and so forth, or to sets of circumstances, behaviors, or routines that can be reliably replicated to produce higher achievement scores (Rosenshine & Stevens).  Some investigators, such as Walberg, presume to reveal the precise degree of relationship between teachers' behaviors (i.e., questions, classroom organization, homework, seatwork) and student achievement scores.  However, Monk (1990; 1992) concluded that production studies of education have not yielded very much useful knowledge.
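What a production function claims can be illustrated with a deliberately simplified sketch.  A Cobb-Douglas form is a common choice in this literature; the inputs, constant, and elasticities below are invented for illustration and carry none of the empirical weight that, as the paragraph notes, such studies have failed to establish.

```python
# Toy Cobb-Douglas education production function:
#   achievement = A * (teacher_hours ** a) * (spending ** b)
# The constant A and the elasticities a and b are hypothetical.

def predicted_achievement(teacher_hours: float, spending: float,
                          A: float = 2.0, a: float = 0.4, b: float = 0.2) -> float:
    """Predicted achievement under the toy production function."""
    return A * (teacher_hours ** a) * (spending ** b)

# Doubling one input raises predicted output by a factor of 2**elasticity --
# exactly the kind of clean input-output relationship that, as Monk argues,
# education data rarely support.
base = predicted_achievement(100, 5000)
doubled = predicted_achievement(200, 5000)
print(round(doubled / base, 3))  # 2 ** 0.4, about 1.32
```

The sketch makes the critique easier to see: the function presumes that inputs can be measured, attributed to individual students, and converted into output, assumptions the following paragraphs call into question.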

Unlike manufacturing or service industries, outcomes, inputs, and processes in education are difficult to identify, isolate, and investigate.  In education, outcomes are confounded--multiple, jointly produced, and not easily transformed into standardized units for comparison.  Unlike a factory, it is nearly impossible to track inputs directly to students.  As Monk illustrated, a teacher may invest considerable time providing tutorial instruction for one student, but the student may "... decline the assistance, either overtly or covertly."

Relationships between aptitude (entry ability) and the criterion (what the student is to learn) are determined by the nature and quality of learning experiences, but primarily by ability and motivation.  Some students fall farther behind as the pace of the curriculum moves forward.  Instruction can vary significantly across classrooms, particularly in the extent to which it meets the needs of a learner.  At the microlevel, any differences may be attributed to a myriad of variables, such as the child's intelligence, motivation, or SES, or to the classroom environment, curriculum, teacher, peers, parents, nutrition, and so forth.  There may be a fundamental problem with the production metaphor as applied to education, because in school it is not evident what (or who) the raw materials are, nor who is doing the producing, nor what the product is.  Quite unlike manufacturing, few schools have any real control over their raw materials and basic inputs, except for the most prestigious and exclusive private schools.  Under this premise, the most productive schools simply have the best raw materials.

As can be seen, the difficulties in educational research are related to the roles of students in the learning process.  Are students producers, or raw materials shaped by teachers?  If students are relatively free agents, as in constructivist models of learning, one set of assumptions is invoked.  If they are objects to be manipulated, as in behavioral models of learning, clearly the teacher is entirely responsible for productivity.  This helps explain the problems researchers and policymakers face in constructing accountability models that delineate a production relationship in education.  As Levin (1993) explained, it would be interesting to imagine a factory in which the raw materials had minds of their own and could make independent decisions about whether or not they would be part of whatever was being manufactured.  Students decide whether to attend, pay attention, undertake work seriously, and concentrate on grades (Doyle, 1986).

This only underscores the importance of deciding how to consider the purpose of a school or any educational program.  If it is not quite like manufacturing, what is it?  Attempts in recent years to improve schools have adopted yet another manufacturing concept, Total Quality Management (TQM), developed by W. Edwards Deming. The Deming story is almost legendary: after World War II, armed with the principles of total quality management, he could find no followers in the United States, so he took his program to Japan and helped transform the Japanese economy into the most productive in the world.  Deming summarized his principles of management in 14 points.  Three important points are summarized here:

Deming observed that most problems in organizations are created by the structure and not by the workers. Efforts to change the productivity of an organization must focus on the structure and not on individual workers.  The causes of low quality and low productivity belong to the system and lie beyond the power of the work force.  If education were to follow Deming's recommendations, we would abolish annual merit ratings, management by objectives, testing, and SAT scores.

As Deming showed, evaluation of another person does not create excellence.  In schools the board assesses the superintendent, who assesses the principals.  The principals assess the teachers, and teachers assess the students.  Ultimately, the state department of education, which is usually not assessed by anyone, assesses the school district.  Giving outstanding teaching awards and bonuses will not alter the system, nor will posting achievement scores in the local newspaper.  A state department grading system for school districts is just another elaboration of the problem of assessment.  For schools to improve, it is necessary that the system change to engage in self-assessment.  The best teachers spend their time improving rather than assessing (blaming) their circumstances, the principal, their students, their students' parents, and so forth.  Apparently we may never reach the levels in education suggested by Deming, for schools have actually become more tied to inspection, evaluation, and assessment than ever, something that is certain to be a central focus of the upcoming presidential campaign.

There is no reason to expect that program evaluation for computers or multimedia applications and projects would be different from any other kind of program evaluation in education and training.  While educators and IT personnel will be required to use evaluation, there are contradictory forces at work.  At the same time that TQM and other advocates are saying that we need to change the system and eliminate evaluation in favor of systemic self-evaluation, we find more evaluation being required by state governments and federal agencies.  While constructivist philosophy seems to be spreading throughout education at all levels, we are bound more tightly to imposed instructional procedures and assessments.  Perhaps this is an illustration of the Chinese curse, "May you live in interesting times."


Gredler, M.E. (1996). Program evaluation. Englewood Cliffs, NJ: Prentice-Hall.

Levin, B. (1993). Students and educational productivity. Education Policy Analysis Archives, 1(5).

Monk, D. (1990). Educational finance: An economic approach. New York: McGraw-Hill.

Monk, D. (1992). Education productivity research: An update and assessment of its role in education finance reform. Educational Evaluation and Policy Analysis, 14(4), 307-332.

Worthen, B.R., & Sanders, J.R. (1987). Educational evaluation: Alternative approaches and practical guidelines. White Plains, NY: Longman.