Using the Free-Response Scoring Tool To Automatically Score the Formulating-Hypotheses Item. GRE Board Professional Report No. 90-02bP. Kaplan, Randy M., Bennett, Randy Elliot [microform]
Details
- Material type
- Microfiche
- Language code
- Text language - English
- Report number
- ETS-RR-94-08
- Call number
- 370.78 E68
- Title/Author
- Using the Free-Response Scoring Tool To Automatically Score the Formulating-Hypotheses Item. GRE Board Professional Report No. 90-02bP. : Kaplan, Randy M., Bennett, Randy Elliot - [microform]
- Publication info
- U.S.; New Jersey : Educational Testing Service, Princeton, N.J., Jun 94
- Physical description
- 41; 1
- Series title
- ERIC Reports
- Notes
- 41p.
- Abstract
- Summary: This study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were generated for 30 examinees' responses to each of 8 items by a semantic pattern-matching program and independently by 5 human raters. On its initial scoring run, the program agreed highly with the raters' mean item scores for some questions and improved its concurrence substantially as modifications to the automatic scoring process were made. By the final run, correlations between the program and the raters on item scores ranged from .89 to .97, and mean human-machine discrepancies ran from .6 to 1.1 on a 16-point scale. At the individual hypothesis level, the proportion agreement, given the large disproportion of correct responses in the sample, was little better than chance. F-H items might be more effectively scored by a semiautomatic system that combines machine processing with a small number of human judges, and a preliminary configuration for such a process is presented. Appendix A discusses scoring iterations and modifications to the tool, and Appendix B presents changes to the scoring tool's interface. (Contains 5 figures, 9 tables, and 14 references.) (Author/SLD)
- Reproduction note
- Microfiche. Springfield, VA : ERIC Document Reproduction Service. microfiches ; 11×15 cm.
- Funding information
- Graduate Record Examinations Board, Princeton, N.J.
- General subject
- Education
- Keywords
- Automation; Computer Assisted Testing; Correlation; Higher Education; Hypothesis Testing; Responses; Scores; Scoring; Semantics; Test Items; Free Response Test Items; Hypothesis Formulation; Pattern Matching
- Added author
- Kaplan, Randy M.
- Added author
- Bennett, Randy Elliot
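The item-scoring procedure the abstract describes (judge each explanation right or wrong against a key, then sum the creditable ones) can be sketched roughly as follows. This is an illustrative toy only: the regex patterns, the sample responses, and the `score_item` function are invented for this sketch and are not taken from the study's actual scoring tool.

```python
import re

# Illustrative key for one hypothetical F-H item: each regex stands for one
# creditable hypothesis class. These patterns are invented examples, not
# material from the study.
CREDITABLE_PATTERNS = [
    re.compile(r"budget.*cut", re.IGNORECASE),
    re.compile(r"population.*(decline|drop|decrease)", re.IGNORECASE),
    re.compile(r"new competitor", re.IGNORECASE),
]

def score_item(explanations):
    """Judge each explanation right or wrong against the key; sum the credits."""
    credits = 0
    for text in explanations:
        if any(pattern.search(text) for pattern in CREDITABLE_PATTERNS):
            credits += 1
    return credits

responses = [
    "Maybe the budget was cut last year.",   # creditable
    "A new competitor opened nearby.",       # creditable
    "The weather was bad.",                  # not in the key: no credit
]
print(score_item(responses))  # 2
```

The study's actual tool used semantic pattern matching rather than plain regexes, which is why the abstract reports chance-level agreement at the individual-hypothesis level alongside high item-score correlations: item totals can agree even when individual judgments differ.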
MARC
■001PCUL00371060
■002ED385558
■00520020803010340
■007heuumu---buua
■008980930s1994 us b 000 0 eng d
■040 ▼apcul
■0410 ▼aEnglish
■088 ▼aETS-RR-94-08
■090 ▼a370.78▼bE68
■24500▼aUsing the Free-Response Scoring Tool To Automatically Score the Formulating-Hypotheses Item. GRE Board Professional Report No. 90-02bP.▼cKaplan, Randy M., Bennett, Randy Elliot▼h[microform]
■260 ▼aU.S.; New Jersey▼bEducational Testing Service, Princeton, N.J.▼cJun 94
■300 ▼a41; 1
■440 0▼aERIC Reports
■500 ▼a41p.
■520 ▼aThis study explores the potential for using a computer-based scoring procedure for the formulating-hypotheses (F-H) item. This item type presents a situation and asks the examinee to generate explanations for it. Each explanation is judged right or wrong, and the number of creditable explanations is summed to produce an item score. Scores were generated for 30 examinees' responses to each of 8 items by a semantic pattern-matching program and independently by 5 human raters. On its initial scoring run, the program agreed highly with the raters' mean item scores for some questions and improved its concurrence substantially as modifications to the automatic scoring process were made. By the final run, correlations between the program and the raters on item scores ranged from .89 to .97, and mean human-machine discrepancies ran from .6 to 1.1 on a 16-point scale. At the individual hypothesis level, the proportion agreement, given the large disproportion of correct responses in the sample, was little better than chance. F-H items might be more effectively scored by a semiautomatic system that combines machine processing with a small number of human judges, and a preliminary configuration for such a process is presented. Appendix A discusses scoring iterations and modifications to the tool, and Appendix B presents changes to the scoring tool's interface. (Contains 5 figures, 9 tables, and 14 references.) (Author/SLD)
■533 ▼aMicrofiche.▼bSpringfield, VA▼cERIC Document Reproduction Service.▼emicrofiches ; 11×15 cm.
■536 ▼aGraduate Record Examinations Board, Princeton, N.J.
■650 4▼xEducation
■653 ▼aAutomation▼aComputer Assisted Testing▼aCorrelation▼aHigher Education▼aHypothesis Testing▼aResponses▼aScores▼aScoring▼aSemantics▼aTest Items▼aFree Response Test Items▼aHypothesis Formulation▼aPattern Matching
■7001 ▼aKaplan, Randy M.
■7001 ▼aBennett, Randy Elliot
■999 ▼a142


