Arthur Rump

Research

A Refined Model of Ill-definedness in Project-Based Learning

Conference paper for the Educators Symposium at MODELS 2022 - with Vadim Zaytsev

Project-based courses are crucial for gaining practically relevant knowledge in modelling and programming education. However, they fall into the "ill-defined" domain: there are many possible solutions; the quality of a deliverable is subjective and not formally assessable; reaching the goals means designing new artefacts and analysing new information; and the problem cannot always be divided into independent tasks. In this paper, we refine the existing two-dimensional (verifiability and solution space) classification of ill-defined classes of problems, contemplate methods and approaches for the assessment of projects, and apply the model to analyse two study units of two different computer science programmes.

Automated Assessment of Learning Objectives in Programming Assignments

Conference paper for ITS 2021 - with Ansgar Fehnker and Angelika Mader

Individual feedback is a core ingredient of a personalised learning path. However, it is also time-intensive and, as a teaching form, not easily scalable. To make individual feedback realisable for larger groups of students, we develop tool support for teaching assistants to use in the process of giving feedback. In this paper, we introduce Apollo, a tool that automatically analyses code uploaded by students with respect to their progression towards the learning objectives of the course. First, typical learning objectives in Computer Science courses are analysed for their suitability for automated assessment. A set of learning objectives is then analysed further to get an understanding of what achievement of these objectives looks like in code. Finally, this is implemented in Apollo, a tool that assesses the achievement of learning objectives in Processing projects. Early results suggest agreement in assessment between Apollo and teaching assistants.

Atelier – Tutor Moderated Comments in Programming Education

Poster for EC-TEL 2021 - with Ansgar Fehnker and Angelika Mader

In the programming course of our engineering design degree, tutorials are the focal point of learning. This is especially so because we employ a tinkering-based educational approach, in which students explore the material from the very beginning through self-defined projects. The assignment defines ingredients to use and sets expectations, but students are free to set their own design goals. In this setting, tutorials are an important place for feedback and learning, and we have developed an online platform that supports tutors during tutorials. This paper reports on the educational philosophy and its underpinnings, as well as on results from applying the tool in two first-year courses.

Atelier: An Online Platform for Programming Tutorials

Poster for CSERC 2020 - with Ansgar Fehnker, Angelika Mader, Margot Rutgers, Lotte Steenmeijer and Chris Witteveen

The aim of the Atelier project is to develop an online platform that creates an atelier-like setting that emphasises collaboration and the sharing of ideas. It is built for the Community of Practice of students, student assistants, and lecturers involved in teaching programming in Processing in the first year of the bachelor programme Creative Technology (CreaTe) at the University of Twente. CreaTe is a design programme with an engineering background in Computer Science and Electrical Engineering. It espouses its own design philosophy, which emphasises autonomous design, creative thinking, multidisciplinary teams, tinkering and reflection. The motivation behind Atelier is to help create a Community of Practice in which face-to-face tutoring is central; it is consciously not intended to replace face-to-face tutoring.

Automated Assessment of Learning Objectives in Programming Assignments

Bachelor Thesis

With online forms of education, it has become harder to ‘gauge the room’ and get an impression of how well students are following along. We introduce Apollo, a tool that automatically analyses code uploaded by students to get an overview of their progression towards the learning objectives of the course. First, typical learning objectives in Computer Science courses are analysed for their suitability for automated assessment. A set of learning objectives is then analysed further to get an understanding of what achievement of these objectives looks like in code. Finally, this is implemented in Apollo, a tool that assesses the achievement of learning objectives in Processing projects. Validation of the tool is not conclusive, but early results suggest agreement in assessment between Apollo and teaching assistants.