Sunday 18 June 2017


Goal-Free Model
In the goal-free evaluation model developed by Michael Scriven (1991), the evaluation looks at a program's actual effect on identified needs. In other words, program goals are not the criteria on which the evaluation is based. Instead, the evaluation examines how and what the program is doing to address needs in the client population. With this model, you observe without a checklist, but record all data accurately and determine their importance and quality. Categories naturally emerge from your observations. This model of evaluation can use all forms of obtrusive methods—those methods a subject is aware of, such as tests—as well as unobtrusive ones—methods that a subject is not aware of, such as a hidden camera—to gather data. The evaluator has no preconceived notions regarding the outcome of the program—that is, goals. The staff should not contaminate the evaluator's method with goal statements. The evaluator is trying to form a description of the program, identify processes accurately, and determine their importance to the program. As the evaluator, you are gathering data on things that are actually happening and evaluating their importance in meeting the needs of the client population.
A good example of this model is the process followed by Consumers Union, publisher of Consumer Reports, in which the manufacturer's intent for the product is irrelevant to its actual usefulness to the consumer.
The goal-free model is the most difficult to use, especially when the evaluator is part of the program or project; yet it is a popular method because it can be used within a program that has many different projects occurring simultaneously. In such a situation the same client population participates in a number of activities, and it is difficult to separate the results of two projects' activities. In fact, program results might come from the interactions between two or more projects' activities.
For example, an evaluator might be asked to evaluate the effectiveness of an adult basic education (ABE) project housed within the program of a local adult learning center (ALC). Also housed in that program are workplace literacy, welfare-to-work, and adult computer literacy projects. Clients of the adult learning center may participate in any or all of these projects. Thus it would be difficult, if not impossible, to isolate the results of just one project's activities. A goal-free evaluation would examine the overall results for the clients of the ALC program, which would be more meaningful than individual evaluations of each project.
The person who performs the goal-free evaluation of the ABE project may have no subject-matter expertise in the field of adult education. This point has become a topic of debate among many experts. Some say the evaluator should have expertise in the field being evaluated; others say no expertise is better (Rossi and Freeman, 1993). The issue, of course, is preconceived notions. Some scholars say that an evaluator who is not familiar with the nuances, ideologies, and standards of a particular professional area will presumably not be biased when observing and collecting data on the activities of a program in that area. They maintain, for example, that a person who is evaluating a program to train dental assistants should not be a person trained in the dental profession. But other scholars allege that a person not aware of the nuances, ideologies, and standards of the dental profession may miss a good deal of what is important to the evaluation. Both sides agree that the evaluator must attempt to be an unbiased observer and be adept at observation and capable of using multiple data collection methods (Wholey, Hatry, and Newcomer, 1994; 2004).
Once the data have been collected, the evaluator attempts to draw some conclusions about the program's impact on addressing client needs. This information is then delivered to the parties interested in the evaluation results. Again, an evaluator using this model makes a deliberate attempt not to know about program goals, written proposals, or existing brochures. He or she simply studies the outcomes and reports on them.
The goal-free model works best for qualitative evaluation because the evaluator is looking at actual effects rather than anticipated effects for which quantitative tools have been designed.
Interestingly, Scriven suggests using two goal-free evaluators, each working independently (Popham, 1974). In this way, the evaluation does not rely solely on the observations and interpretations of one person.
As a program manager, you might find it impossible to use the goal-free model yourself because you have intimate knowledge of the project and would find it nearly impossible to ignore that knowledge in conducting an evaluation. Similarly, any internal personnel you might employ to conduct this type of evaluation would also have this knowledge. Consequently, you should probably seek an external, third-party evaluator who has little or no knowledge of the intricacies or nuances of the program to perform a goal-free evaluation.

Transaction Model

The transaction model, first proposed by R. M. Rippey (1973), concentrates activity between you, as both evaluator and participant, and the project staff. The main beneficiaries of an evaluation using this model are the clients and practitioners.
This model combines monitoring with process evaluation through a continuous back-and-forth between evaluator and staff. The evaluator is an active participant, giving constant feedback. In effect, the evaluator is or acts as one of the project staff members.
The evaluator uses a variety of observational and interview techniques to obtain information from the program staff and clients. This model usually has a goal-based orientation. Instead of trying to achieve objectivity as in the previous models, the evaluator uses subjectivity in the transaction model.
Using the previous example of the adult learning center, the transaction evaluator might be one of the teachers of the ABE project who is assigned to follow a group of clients through the other projects in an attempt to distinguish any measurable results coming from a single project. The evaluator is one of the staff of the ALC, participating in and providing project activities. The findings are shared with the staff of all the projects to improve both individual projects and the overall program.

Decision-Making Model

The decision-making model developed by Daniel Stufflebeam (Madaus, Scriven, and Stufflebeam, 1983) is employed to make decisions regarding the future use of the program. In this case, you are less concerned with how the program is performing presently. Instead you are concerned with its long-range effects, such as the number of cancer patients who survive in a five-year trial or the number who survive in this program as compared to another program with a different approach. The focus is on decisions that need to be made in the future.
For example, an adult education program might have three different commercial packages for teaching people with a low literacy level to read English. In previous evaluations all three packages have proven effective in teaching reading; however, the program's sponsors need to cut funds, and a decision must be made to discontinue the use of one or more of the packages. This is a decision-based situation that requires focus not on the client, the staff, or the activity but on how best to cut operating expenses.
This model places no restrictions on the methodology you use to collect data. Both quantitative methods—such as tests and records—and qualitative methods—such as interviews, observations, and surveys—might be employed. The choice depends on what the sponsor wants to know in order to make the decision. The decision-making model can be used to structure formative evaluations, but it is especially well suited to summative evaluation.
