SIG sessions – Universität Innsbruck

SIG SESSIONS


SIG SESSION 1:

Automated Language Assessment Special Interest Group (ALASIG)

Chairs: Jing Xu & Xiaoming Xi

Location: Aula

Automated scoring is probably the most widely used AI application in language assessment. In the past fifteen years, we have seen increasing use of automated scoring technology in both writing and speaking assessment for high-stakes purposes. More recently, the use of deep learning methods to train automated scoring engines has become widespread. The deep learning approach typically achieves higher scoring accuracy than traditional training methods such as regression or classification trees, but it is difficult to interpret and is therefore often dubbed the "black box" approach.
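For readers less familiar with the contrast the debate turns on, the sketch below illustrates it in miniature. It is a hypothetical example only, not part of the SIG session: the feature names, scores, and model choices (scikit-learn's Ridge and MLPRegressor) are invented for illustration. A linear scorer exposes coefficients that can be inspected and explained, while a small neural network trades that transparency for flexibility.

    # Minimal, hypothetical sketch: interpretable regression scorer vs. a small neural scorer.
    # All features and scores are fabricated for illustration.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic "essay features" (e.g., length, lexical diversity, error rate).
    X = rng.normal(size=(500, 3))
    # Fabricated human scores on a 0-6 scale, driven mostly by the first two features.
    y = np.clip(3 + 1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2]
                + rng.normal(scale=0.3, size=500), 0, 6)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Traditional approach: a linear model whose weights can be read off and explained.
    linear = Ridge().fit(X_train, y_train)
    print("linear coefficients:", linear.coef_)
    print("linear R^2:", linear.score(X_test, y_test))

    # "Deep" approach (here just a small multilayer perceptron): often more accurate on
    # real, high-dimensional features, but its learned weights resist simple interpretation.
    mlp = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                       random_state=0).fit(X_train, y_train)
    print("MLP R^2:", mlp.score(X_test, y_test))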

The ALA SIG meeting at LTRC 2024 will feature a debate on the topic “Using an automated scoring engine trained with a deep learning method as the sole rater in high-stakes speaking tests is warranted.” The participants of the debate are:

Yes (Affirmative Team)

  • Alistair Van Moere (MetaMetrics, Inc.)
  • Eunice Jang (University of Toronto)

No (Opposing Team)

  • Lianzhen He (Zhejiang University)
  • Barry O'Sullivan (British Council)

Voting will be open before and after the debate. We invite all interested LTRC attendees to participate.


SIG SESSION 2:

Integrated Assessment & Language Assessment Literacy Special Interest Group (joint session of IASIG and LALSIG)

Chairs: Rebecca Yaeger, Xun Yan, Sharry Vahed, Elsa Fernanda Gonzalez, Gladys Quevedo-Camargo

Location: HS 1

Session Title: Language Assessment Literacy for Integrated Assessment: Steps Towards a Preliminary Definition and Research Agenda

Panelists: Tineke Brunfaut, Gladys Quevedo-Camargo, Jin Yan

Session Description:

Recent years have seen a growth of interest in integrated assessment and an explosion of research on integrated tasks. However, training materials for integrated assessment literacy lag behind those for the design of independent tasks, leaving many current test developers, researchers, and language teachers to pursue these skills on their own time. According to multiple studies, the majority of in-service university EFL instructors report having received no training on integrated assessment (Gan & Lam, 2020; Sayyadi, 2022); however, language teachers indicate interest in learning more about LAL for integrated assessment (Harsch et al., 2021; Makipaa & Soltyska, 2023), and the value instructors place on integrated assessment increases with years of experience (Hakim, 2015). At present, little is known about the development of Language Assessment Literacy for Integrated Assessment (LALIA). This panel represents an introductory attempt to define this critical concept and to outline a research agenda for a better understanding of how LALIA unfolds across different contexts. Panelists in this session represent the perspectives of language assessment researchers, test developers, and teacher trainers. They will share their own trajectories of developing LALIA, respond to questions generated by a SIG member survey, and field questions from the audience.


SIG SESSION 3:

Test-taker Insights in Language Assessment Special Interest Group (TILASIG)

Chairs: Andy Jiahao Liu & Ray Jui-Teng Liao

Location: HS 2

Session title: Embracing Artificial Intelligence in Classroom Assessments: Global Views

Panelists: Ahmet Dursun, Ari Huhta, Geoff LaFlair, Spiros Papageorgiou

Session Description:

The rapid development of artificial intelligence tools, particularly generative AI technologies (e.g., ChatGPT), has brought new opportunities and challenges to the field of language testing and assessment. Although the application of AI technologies is already common in large-scale tests for item development and scoring, much less is known about their use in classroom assessments. Bringing together researchers and practitioners from universities and test companies, this one-hour panel presents multiple possibilities for embracing artificial intelligence tools in classroom assessments.

During this session, the three panelists (Dr. Ahmet Dursun, Dr. Ari Huhta, and Dr. Geoffrey LaFlair) will first present their respective research and expertise related to AI and classroom assessments and highlight their key takeaways. The discussant, Dr. Spiros Papageorgiou, will then open the floor for discussion. Attendees will also have the chance to discuss various issues, practices, and challenges related to the use of AI tools in classroom assessments.

We invite you to contribute and share questions for our panelists in advance here:

https://forms.office.com/r/JZSap9KTZZ

Panel Schedule at a Glance
Time | Agenda | Panelists
1:30–1:35 | Welcome Greetings and Introduction | Andy Jiahao Liu, Ray Jui-Teng Liao
1:35–1:45 | Panel Presentation #1: Exploring Students’ Perceptions of AI in the Language Classroom: The Role of Educational Experience, Self-Efficacy, and Self-Accountability | Ahmet Dursun
1:45–1:55 | Panel Presentation #2: Using Artificial Intelligence for Supporting Reading and Other Language Skills in FL English via Diagnostic and Dynamic Assessment | Ari Huhta
1:55–2:05 | Panel Presentation #3: Embracing AI: Opportunities for Language Program Assessments | Geoffrey LaFlair
2:05–2:20 | Discussion | Spiros Papageorgiou
2:20–2:30 | Questions and Remarks |

SIG SESSION 4:

Young Learner Special Interest Group (YLSIG)

Chairs: Mark Chapman, Veronika Timpe-Laughlin, Jeanne Beck

Location: HS 3

This hour-long meeting will feature three expert-led demonstrations and a discussion on contemporary young learner assessment tools.

Demonstrations:

Dr. Gordon West (WIDA): WIDA Screener for Kindergarten

Dr. Evelina Galaczi (Cambridge): Cambridge English Qualifications Digital for Young Learners

Dr. Veronika Timpe-Laughlin (ETS): TOEFL® Junior Speaking

Dr. Fumiyo Nakatsuhara (University of Bedfordshire): Discussion

The meeting aims to provide insights into different international assessments for young language learners (YLLs), showcase approaches to creating assessments for YLLs, and facilitate networking among researchers and practitioners in the field.
