10/29/2009

Respective Graduate Programs For Essay Help

The Implementation

Once the samples were selected and the respective graduate programs identified, each program was contacted by telephone and an attempt was made to reach a faculty member within the program who had admissions decision-making authority. In many cases, multiple telephone calls were required to locate and speak directly with an appropriate program admissions decision-maker.

Each participant in the survey was read an introduction to the study and an overview of the purpose for the telephone survey. An oral informed consent statement was also read, and each participant was asked to agree to this consent statement. Lastly, each participant was asked permission to record the balance of the telephone call on audio tape for transcription later.
The participants were then asked the set of four questions pertaining to their perceptions of using essay scores generated partially by computer.
Participants from arts and sciences and social sciences programs were asked to speculate on how such scores would be interpreted and used if available, while the business program participants were asked to respond concretely, if possible, to how such scores are interpreted and used by their programs.
Specifically, each of these decision-makers was asked to respond to the following questions:
  • Has your receipt of essays with scores that are based on both human scoring and computer scoring affected, as a matter of official policy at your institution, the way the essay scores are interpreted and used, such as the relative weight that is placed on the scores in the admissions process?
  • Might this, in your opinion, unofficially affect the interpretation or use of the scores in the admissions process?
  • How comfortable do you feel personally with interpreting and using these scores? Is it any different from using totally human-generated scores?
  • In light of how you use essay scores in admissions decisions, do you believe that your using an average of one human-generated score and one computer-generated score, instead of an average of two rater-generated scores, might potentially create an unfairness to an applicant?
During pilot testing of these questions, colleagues of the researcher suggested that making reference to "human scores," rather than the clumsier term "rater-assigned scores," while perhaps demeaning to raters, might simplify the questions for participants. This suggestion was incorporated into the text of the questions, as shown above.
