The Routledge international handbook of automated essay evaluation / edited by Mark D. Shermis and Joshua Wilson
- Published
- New York, NY : Routledge, 2024.
- Physical Description
- 1 online resource
- Additional Creators
- Shermis, Mark D., 1953- and Wilson, Joshua (Joshua Aaron)
- Access Online
- Taylor & Francis: ezaccess.libraries.psu.edu
- Contents
- Introduction to automated evaluation / Mark D. Shermis and Joshua Wilson
- Automated essay evaluation at scale : hybrid automated scoring/handscoring in the summative assessment program / Corey Palermo and Arianto Wibowo
- Exploration of the stacking ensemble learning algorithm for automated scoring of constructed-response items in reading assessment / Hong Jiao, Shuangshuang Xu and Manqian Liao
- Scoring essays written in Persian using a transformer-based model : implications for multilingual AES / Tahereh Firoozi and Mark J. Gierl
- SmartWriting-Mandarin : an automated essay scoring system for Chinese as a foreign language learners / Tao-Hsing Chang and Yao-Ting Sung
- NLP application in the Hebrew language for assessment and learning / Yoav Cohen, Anat Ben-Simon, Anat Bar-Siman-Tov, Yona Doleve, Effi Levi and Tzur Karelitz
- Automated scoring for NAEP short-form constructed responses in reading / Mark D. Shermis
- Automated scoring and feedback for spoken language / Klaus Zechner and Ching-Ni Hsieh
- Automated scoring of math constructed-response items / Scott Hellman, Alejandro Andrade, Kyle Habermehl, Alicia Bouy and Lee Becker
- We write automated scoring : using ChatGPT for scoring in large scale writing research projects / Kausalai (Kay) Wijekumar, Debra McKeown, Shuai Zhang, Pui-Wa Lei, Nikolaus Hruska and Pablo Pirnay-Dummer
- Exploring the role of automated writing evaluation as a formative assessment tool supporting self-regulated learning in writing / Joshua Wilson and Charles MacArthur
- Supporting students' text-based evidence use via formative automated writing and revision assessment / Rip Correnti, Elaine Lin Wang, Lindsay Claire Matsumura, Diane Litman, Zhexiong Liu and Tianwen Li
- The use of AWE in non-English majors : student responses to automated feedback and the impact of feedback accuracy / Aysel Saricaoglu and Zeynep Bilki
- Relationships between teachers' perceptions and use of AWE and gains in students' writing performance / Amanda Delgado, Joshua Wilson, Corey Palermo, Tania M. Cruz Cordero, Matthew C. Myers, Halley Eacker, Andrew Potter, Jessica Coles and Saimou Zhang
- Automated writing trait analysis / Paul Deane
- Advances in automating feedback for argumentative writing : feedback prize as a case study / Perpetual Baffour and Scott Crossley
- Automated feedback in formative assessment / Harry A. Layman
- Using automated scoring to support rating quality analyses for human raters / Stefanie A. Wind
- Calibrating and evaluating automated scoring engines and human raters over time using measurement models / Stefanie A. Wind and Yangmeng Xu
- AI scoring and writing fairness / Mark D. Shermis
- Automating bias in writing evaluation : sources, barriers, and recommendations / Maria Goldshtein, Amin G. Alhashim and Rod D. Roscoe
- Explainable AI and AWE : balancing tensions between transparency and predictive accuracy / David Boulanger and Vivekanandan Suresh Kumar
- Validity argument roadmap for automated scoring / David Dorsey, Hillary Michaels and Steve Ferrara
- Redesigning automated scoring engines to include deep learning models / Sue Lottridge, Chris Ormerod and Milan Patel
- Automated short-response scoring for automated item generation in science assessments / Jinnie Shin and Mark J. Gierl
- Latent Dirichlet allocation of constructed responses / Jordan M. Wheeler, Shiyu Wang and Allan S. Cohen
- Computational language as a window into cognitive processing / Peter W. Foltz and Chelsea Chandler
- Expanding AWE to incorporate reading and writing evaluation / Laura K. Allen, Püren Öncel and Lauren E. Flynn
- The two U's in the future of automated essay evaluation : universal access and user-centered design / Danielle S. McNamara and Andrew Potter.
- Summary
- "The Routledge International Handbook of Automated Essay Evaluation (AEE) is a definitive guide at the intersection of automation, artificial intelligence, and education. This volume encapsulates the ongoing advancement of AEE, reflecting its application in both large-scale and classroom-based assessments to support teaching and learning endeavours"--
- Subject(s)
- Grading and marking (Students)—Data processing
- Educational tests and measurements—Data processing
- Essay—Evaluation
- Composition (Language arts)—Evaluation
- Artificial intelligence—Educational applications
- PSYCHOLOGY / Assessment, Testing & Measurement
- EDUCATION / Testing & Measurement
- COMPUTERS / Artificial Intelligence
- ISBN
- 9781003397618 (ebook)
- 1003397611 (ebook)
- 9781040033241 (electronic bk. : PDF)
- 1040033245 (electronic bk. : PDF)
- 9781040033340 (electronic bk. : EPUB)
- 1040033342 (electronic bk. : EPUB)
- 9781032502564 (hardback)
- 9781032502571 (paperback)