
Reflections about LAK26 and where the field is heading

I was lucky this week to attend the Learning Analytics '26 conference in Bergen, Norway. This year's conference focused on the synergies between LA and Generative AI, a shift that has intensified over the last few years. Traditional lines of work, such as collecting LMS log data and presenting it in LA dashboards, have given way to efforts to capture how, and to what extent, students learn with GenAI.

This shift is also reflected in the workshop topics. In the CROSSMMLA workshop, for instance, we explored GenAI as a "sensor for semantics" that can be integrated with a variety of modalities to analyse the learning process and add a layer of deeper understanding to typically unstructured and messy multimodal data.

Since my first LAK in 2016, I have eagerly followed the development of the field, remaining positive about, yet critical of, the research community's openness to new, theory-informed, technically rich approaches.

This year, however, the progressive shift towards GenAI at LAK left me not with enthusiasm but with a sense of unease about the field.

First of all, there is the realisation that scientific discourse has pivoted almost exclusively toward how to make LLMs work for specific educational purposes, regardless of whether they are suitable, or offer any advantage over more parsimonious approaches. This includes how to train, fine-tune, and, more generally, "tame" LLMs, as well as how to deal with their side effects, such as fabricated results and incorrect information.

But very few of these works have addressed why these systems should be used in the first place, nor have they explored the broader consequences of using LLMs, e.g., resource exploitation, data labour by underpaid workers, and copyright infringement.

The dominant scientific imperative is to use LLMs as a research method, regardless of the results they produce and of whether their use offers an actual advantage for students and teachers, or a more powerful scientific approach.

It seems to me that science has also fallen victim to the hype rhetoric of "use GenAI or be left behind." It is sad but true: LA research is slowly being swept away by GenAI.

For me, the critique of GenAI and the economics of hyperscale poses an ethical dilemma; many fellow scientists, however, do not see it as an ethical problem at all.

Adapting to GenAI seems imperative in the current era, where LLM use is pervasive and adoption is unprecedented. While I accept that this technology is here to stay, I am not blindly buying into it, and I believe that researchers cannot absolve themselves of the responsibility to examine the social ramifications of a technology just because it is widely used.

There is no straightforward positioning here. If we do not want to be swept away even further by GenAI and the corporations behind it, we have to strengthen our critical thinking, questioning how and why we do things and what the net advantages really are.


New pub: Are rubrics all you need? Towards rubric-based automatic short answer scoring

The latest paper led by Sebastian Gombert has been published in the Proceedings of the LAK26: 16th International Learning Analytics and Knowledge Conference (LAK26).

"Are rubrics all you need? Towards rubric-based automatic short answer scoring via guided rubric-answer alignment"

In educational assessment, rubrics are central because they define clear criteria for evaluating learner responses and specify what counts as relevant evidence. Yet, most automatic short answer scoring approaches make little to no explicit use of rubrics, or treat them only as additional side information. This paper turns that around and asks what happens if rubrics themselves become the primary scoring reference for automated systems.

The authors introduce the task of rubric-based automatic short-answer scoring, in which the model uses the scoring rubric as an explicit anchor rather than relying solely on large sets of labelled student responses. To implement this idea, they propose a guided rubric–answer alignment procedure, in which each student's answer is aligned directly with rubric criteria and level descriptors rather than with other answers.

Building on this concept, the paper presents two new transformer-based architectures, GRAASP and ToLeGRAA, which use attention mechanisms to focus on the most relevant rubric information when predicting scores. These architectures aim to make scoring more transparent and more faithful to the assessment design, and they promise greater robustness when tasks change because the scoring logic is driven by the rubric rather than solely by historical training data.
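GRAASP and ToLeGRAA themselves are transformer architectures described in the paper; purely as a rough intuition for the general attention-over-rubric idea, a toy sketch might look like the following. All function names, vectors, and point values here are invented for illustration and are not taken from the paper:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def rubric_attention_score(answer_vec, criterion_vecs, criterion_points):
    """Attend over rubric criteria: the dot-product similarity between
    the answer and each criterion becomes an attention weight, and the
    score is the attention-weighted sum of the criteria's point values."""
    sims = [sum(a * c for a, c in zip(answer_vec, cv)) for cv in criterion_vecs]
    weights = softmax(sims)
    return sum(w * p for w, p in zip(weights, criterion_points))

# Toy 3-dimensional "embeddings" (entirely made up for illustration):
answer = [0.9, 0.1, 0.0]           # answer mostly matches criterion 1
criteria = [[1.0, 0.0, 0.0],       # criterion 1, worth 2 points
            [0.0, 1.0, 0.0]]       # criterion 2, worth 0 points
points = [2.0, 0.0]

score = rubric_attention_score(answer, criteria, points)
# The answer's high similarity to criterion 1 pulls the score toward 2 points.
```

The point of the sketch is only that the scoring signal flows through the rubric representation: change the rubric, and the same answer is scored differently, which is what makes the approach more transferable across tasks than purely answer-to-answer comparison.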

This work aligns with a broader agenda in our group: designing AI systems that are tightly coupled with pedagogical artefacts such as rubrics, feedback guidelines, and learning objectives, instead of treating AI as a detached black box. By placing rubrics at the centre of the modelling process, this research opens a path towards more interpretable, educator-aligned automatic assessment tools that can better support teaching and learning.

Check it here (Open Access PDF via ACM):

Gombert, S., Sun, Z., Zehner, F., Lossjew, J., Wyrwich, T., Czinczel, B. K., Bednorz, D., Kubsch, M., Di Mitri, D., Neumann, K., & Drachsler, H. (2026). Are rubrics all you need? Towards rubric-based automatic short answer scoring via guided rubric-answer alignment. Proceedings of the LAK26: 16th International Learning Analytics and Knowledge Conference, 272–282. https://dl.acm.org/doi/10.1145/3785022.3785064