SE Seminar Schedule

This is the schedule for the SE division seminar in the academic year 2021 / 2022. Common types of seminars are trial talks (for upcoming conferences, defenses, etc.), presentations of ongoing research, lectures by academic guests, and other research talks.

Date & Time: 05.11.2021 (10:00)
Presenter: Khan Mohammad Habibullah
Title: Non-functional Requirements for Machine Learning: Understanding Current Use and Challenges in Industry
Talk Type: Post-conference talk and current research on NFRs for ML
Location: Hybrid: Jupiter 473; password: 067177
Abstract: Machine Learning (ML) is an application of Artificial Intelligence (AI) that uses big data to produce complex predictions and decision-making systems, which would be challenging to obtain otherwise. To ensure the success of ML-enabled systems, it is essential to be aware of certain qualities of ML solutions (performance, transparency, fairness), known from a Requirements Engineering (RE) perspective as non-functional requirements (NFRs). However, when systems involve ML, NFRs for traditional software may not apply in the same ways; some NFRs may become more or less important; NFRs may be defined over the ML model, the data, or the entire system; and NFRs for ML may be measured differently. In this work, we aim to understand the state of the art and the challenges of dealing with NFRs for ML in industry. We interviewed ten engineering practitioners working with NFRs and ML. We find examples of (1) the identification and measurement of NFRs for ML, (2) the identification of more and less important NFRs for ML, and (3) the challenges associated with NFRs and ML in industry. This knowledge paints a picture of how ML-related NFRs are treated in practice and helps to guide future RE-for-ML efforts.

Date & Time: 05.11.2021 (10:45)
Presenter: Christoph Laaber (Simula, Norway)
Title: Variability of Microbenchmark Results and How to Deal with It
Talk Type: External visitor
Location: Hybrid: Jupiter 473; password: 067177
Abstract: Performance variability is a well-known challenge in software systems research and in performance engineering practice. Microbenchmarks, a form of performance testing, are not immune to performance variability, which can lead to unreliable results from which one cannot draw sound conclusions. The reasons for performance variability are manifold, e.g., the environment the benchmark is executed in, the way the benchmark is written, or the measurement methodology that is applied. In this talk, I will show empirical evidence on the extent of benchmark result variability and elaborate on two techniques that can help deal with it.

Date & Time: 19.11.2021 (10:00)
Presenter: Hamdy Michael Ayas
Title: The journey of migrating towards microservices
Talk Type: Conference presentation practice
Location: Hybrid: Jupiter 473; password: 701684
Abstract: We will showcase the evolutionary and iterative nature of the migration journey towards microservices at the architectural and system-implementation levels. We also identify 18 detailed activities that take place at these levels, categorized into four phases: 1) designing the architecture, 2) altering the system, 3) setting up supporting artifacts, and 4) implementing additional technical artifacts.

Date & Time: 14.01.2022 (10:00)

Date & Time: 28.01.2022 (10:00)
Presenter: Peter Samoaa

Date & Time: 11.02.2022 (10:00)
Presenter: Weixing Zhang

Registered students in 2021 / 2022:

  • Hazem Samoaa
  • Aiswarya Raj Munappy
  • Weixing Zhang
  • Hamdy Ayas
  • Khan Mohammad Habibullah
  • Linda Erlenhov