Workshop D – Universität Innsbruck

Generative AI for content generation and automated scoring: no-code and low-code solutions

With the advancement of generative AI technology, such as GPT, test development organizations and language testing researchers have started to explore the potential of leveraging generative AI to automate aspects of test development. One example application is the automated generation of test content, such as reading comprehension passages at target CEFR levels, multiple-choice questions, short-answer questions, and graphics. A second application is using prompt engineering so that a large language model directly generates scores and feedback for essays. This bypasses the usual automated essay scoring approach, wherein expert human raters rate a sample of essays and models are developed to predict the human scores.

The purpose of this workshop is to provide an introduction to the large language models (LLMs) that are available for automating content generation and the automated scoring of writing. We will introduce best practices for writing prompts in ChatGPT, refining prompts iteratively to get the intended outcomes, and building scalable production solutions through API interactions. This means that we will (a) use freely available online interfaces such as ChatGPT and Claude.ai to try out prompts, and (b) use Python code, which we will provide, to instruct large language models to complete tasks at scale, e.g., for hundreds of essays rather than one at a time.
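As a rough illustration of what "at scale" via an API means, the sketch below scores a batch of essays in a loop. Everything in it (the rubric text, `build_scoring_prompt`, and the stubbed `call_llm`) is our own invention, not the workshop's materials; the stub stands in for a real authenticated API client call.

```python
# Minimal sketch: batch-scoring essays with an LLM API.
# The rubric and function names are illustrative placeholders.

RUBRIC = """Score the essay from 1 to 5 on task achievement, coherence,
and language use. Return only the integer score."""

def build_scoring_prompt(essay: str, rubric: str = RUBRIC) -> str:
    """Combine a scoring rubric and an essay into one zero-shot prompt."""
    return f"{rubric}\n\nEssay:\n{essay}\n\nScore:"

def call_llm(prompt: str) -> str:
    """Placeholder for a real API call (e.g. via OpenAI's Python client)."""
    return "3"  # stubbed model response

def score_essays(essays: list[str]) -> list[int]:
    # One API round-trip per essay; hundreds of essays become a simple loop.
    return [int(call_llm(build_scoring_prompt(e)).strip()) for e in essays]

scores = score_essays(["My summer holiday was...", "Climate change is..."])
print(scores)
```

In a real pipeline the stub would be replaced by an authenticated client call, with retries and rate-limit handling around the loop.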

We will focus on applying Generative AI technology for the following tasks:

  • Generate test content and test items
  • Score student essays
  • Provide qualitative feedback on student essays

Participants will be asked to apply different prompt engineering approaches (e.g. few-shot and zero-shot learning) to:

  • Write reading comprehension passages at different CEFR levels
  • Generate multiple-choice, cloze, and short-answer questions for reading comprehension passages
  • Generate graphics as visual support for reading comprehension passages
  • Assign rubric-aligned scores to student essays

Participants will also:

  • Discuss the best approaches to prompt engineering
  • Critique each other’s prompts
  • Compare the pros and cons of different LLMs
  • Discuss the ethical implications of using LLMs in test development
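The zero-shot/few-shot contrast named above can be sketched as follows; the example essays, scores, and function names are invented for illustration and are not taken from the workshop materials.

```python
# Illustrative only: a zero-shot prompt states the task directly, while a
# few-shot prompt prepends worked examples so the model can infer the scale.

ZERO_SHOT = (
    "Rate this essay from 1 to 5 according to the rubric.\n"
    "Essay: {essay}\nScore:"
)

# Invented example essays with invented scores, used as the "shots".
EXAMPLES = [
    ("The weather are nice and I goes to park.", 2),
    ("Last summer I travelled to Spain, which broadened my horizons.", 5),
]

def few_shot_prompt(essay: str) -> str:
    """Prepend scored example essays before the essay to be rated."""
    shots = "\n".join(f"Essay: {e}\nScore: {s}" for e, s in EXAMPLES)
    return f"Rate each essay from 1 to 5.\n{shots}\nEssay: {essay}\nScore:"

print(few_shot_prompt("I like to reading books in free time."))
```

The same scaffolding carries over to the other tasks in the list, e.g. pasting sample CEFR-level passages as shots before asking for a new passage.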

This workshop will be suitable for participants who have experimented with ChatGPT but want to take their usage to a more professional level, and who have little or no experience with Python or APIs. In advance of the workshop, participants will be expected to set up an account and payment method for OpenAI’s API access (instructions will be provided).

Speakers: Alistair Van Moere and Jing Wei

Alistair Van Moere


Alistair Van Moere is President of MetaMetrics Inc and Research Professor at the University of North Carolina at Chapel Hill.

Alistair drives innovation in educational AI and assessments, and manages the Lexile Framework, which reaches 35 million students every year. Before joining MetaMetrics, Alistair was President at Pearson, where he managed artificial intelligence scoring services for tens of millions of students in speaking and writing programs. He has worked as a teacher, university lecturer, assessment developer, and ed-tech executive in the US, UK, Japan, and Thailand. He has an MA in Language Teaching, a Ph.D. in Language Testing, and an MBA, and has authored over 20 research publications on assessment technologies.

Jing Wei


Jing is responsible for shaping MetaMetrics’ AI capabilities and impact, providing thought leadership on using AI to drive business value, and integrating innovative solutions into MetaMetrics’ existing and future products. Prior to joining MetaMetrics, Jing served as a research scientist at the Center for Applied Linguistics, leading the development and validation of a portfolio of high-stakes digital assessments used by millions of students every year. Jing brings 15+ years of experience in test development, measurement and statistics, machine learning, and product development. She holds a bachelor’s degree in English from Shanghai International Studies University, an M.Phil. in Second Language Education from the University of Cambridge, and a Ph.D. in Language Testing from New York University.
