KEYNOTE SPEAKERS

Prof. Jo-Anne Baird

University of Oxford

Title: Raising educational standards

Abstract: A perennial challenge for the field is how assessments can support learning and thereby raise educational standards. Part of the answer to this must involve clear messaging about what the educational standards are, to provide goals for teachers and learners.

In this address, I will consider how standards are thought about differently in three assessment paradigms: construct-, curriculum- and outcomes-based approaches. I have previously characterised these paradigms as prototypical ways of thinking about assessments, since many assessment systems are hybrid versions, borrowing across paradigms. In each case, the philosophy of assessment differs in terms of the attribute of interest, the definition of standards and expectations of outcomes. This creates different relations between what counts as valuable learning, how we think it ought to be assessed and which quality criteria we prioritise. Different paradigms promise information of different kinds about standards for teachers and learners.

In research conducted with over 900 stakeholders in Scotland during the pandemic, we investigated how stakeholders think of national assessments and their standards. One of our main findings was that they think of the same assessments in relation to different paradigms. I will outline how this raised significant issues for the management and communication of standards by exam boards that go beyond misunderstandings or lack of assessment literacy. In a smaller-scale qualitative study in Wales, we investigated how industry insiders and teachers thought about standards and their communication. A striking finding from this study was that not only were teachers unaware of the standard-setting processes, but they did not believe that they needed to know. This raises questions about exactly what teachers and learners need to know about standards and whether our industry-insider perspective has led to miscommunication.

Bio: Professor Jo-Anne Baird is Professor of Educational Assessment and Director of the Oxford University Centre for Educational Assessment. Her research addresses systemic aspects of educational assessment, and she works extensively with governments and assessment organisations to support the development and evaluation of national and international assessment systems.

Dr. Matt Glanville

International Baccalaureate

Title: All the World Is a Stage – Enter the Androids: Navigating Current and Future Challenges of International Baccalaureate Assessment

Abstract: The International Baccalaureate (IB) has an unusual role in aiming to offer an educational philosophy and approach for the whole world. In assessment terms, this presents the IB with a number of challenges – some of which will be familiar to national assessment systems and some of which are unique. After providing a brief overview of the IB and its qualifications, I will reflect on these unusual and interesting matters.
However, the assessment landscape is not standing still, and developments in technology and society are affecting many of the foundations of education. In the second half of my presentation, I will talk about how the IB is thinking about these changes (AI being key amongst them) and speculate on what the known unknowns may be for future assessment.

Bio: Dr Matt Glanville is the Director of Assessment at the International Baccalaureate (IB), where he is responsible for the assessment models across the Middle Years Programme, Career-related Programme, and Diploma Programme. These models serve more than 250,000 students annually, in over 150 countries and across more than 250 subjects.

Dr. Militsa Ivanova

Winner of the Kathleen Tattersall New Researcher Award

Title: Who Tries and When in the Digital Age: Measuring and Modeling Test-Taking Effort through Process Data in Large-Scale Assessments 

Abstract: Achievement tests aim for valid estimation of proficiency, but inadequate test-taking effort can introduce construct-irrelevant variance and threaten score validity, particularly in low-stakes contexts. Since the 2010s, international programmes like the Programme for International Student Assessment (PISA) have tried to measure effort, mainly through self-reports. With the digitalization of large-scale assessments, the large amount of process data now available allows for tracing examinees’ test-taking behavior, avoiding the biases of self-reports. Process data variables, such as response time, have proven valuable in estimating examinee effort on multiple-choice items, but research on constructed-response items – often linked to lower effort – remains limited. The literature on how thresholds can be determined to distinguish effortless from effortful responses also remains inconclusive. While individual characteristics have been widely studied as predictors of effort, family- and school-level factors have not. Cross-national differences in effort and its relationship to performance are evident, but predictions of effort across countries are yet to be explored in depth. This presentation will discuss indicators, thresholds, and predictors of test-taking effort using process data.

Bio: Militsa G. Ivanova earned her Ph.D. in Clinical Psychology from the University of Cyprus in June 2024. She also holds a BS in Social and Behavioral Sciences with a major in Psychology from European University Cyprus and an MA in School Psychology from the University of Cyprus. She is a licensed School Psychologist in Cyprus. During her graduate studies, she taught research methodology and contributed to several funded projects focused on student motivation, test-taking engagement, and responders’ behavior in large-scale assessments such as the Trends in International Mathematics and Science Study (TIMSS) and the Programme for International Student Assessment (PISA). In 2019, she completed an ERASMUS+ funded internship at Umeå University in Sweden, where she expanded her research skills and statistical expertise.

Prof. Jennifer Randall

University of Michigan

Title: Assessment as a Tool for Liberation: Come Dream with Me

Abstract: This talk reimagines educational assessment as a transformative force for justice and liberation, rather than a mechanism of oppression and marginalization. Drawing on critical pedagogies of care (Noddings) and discomfort (Boler), I illuminate how assessment systems can elevate learners’ critical consciousness, respond to the expressed needs of marginalized communities, and affirm diverse ways of knowing. Through personal narrative (i.e., storytelling) and scholarship, I describe the historical and ongoing harms perpetuated by conventional assessment practices that center whiteness, and I advance a liberatory framework for assessment that disrupts these practices. I propose a set of principles, framed within a shared responsibility for justice, that center the voices and lived experiences of rights-holders and actively seek to disrupt structures of inequity. Ultimately, I invite my colleagues to dream with me about assessment systems that not only measure academic achievement, but actively foster growth, social justice, inclusion, and liberation for all learners.

Bio: Professor Jennifer Randall is the Dunn Family Chair of Psychometrics and Test Development in the School of Education at the University of Michigan, and the founder of the Center for Measurement Justice. Her work pioneers the development of justice-oriented educational assessment, challenging traditional paradigms that often marginalize underserved student populations.