

SE Seminar Schedule

This is the schedule for the SE division seminar in the academic year 2021 / 2022. Common types of seminars are trial talks (for upcoming conferences, defenses, etc.), presentations of ongoing research, lectures by academic guests, and other research talks.

Date & Time: 05.11.2021 (10:00)
Presenter: Khan Mohammad Habibullah
Title: Non-functional Requirements for Machine Learning: Understanding Current Use and Challenges in Industry
Talk Type: Post-conference talk and current research on NFRs for ML
Abstract: Machine Learning (ML) is an application of Artificial Intelligence (AI) that uses big data to produce complex predictions and decision-making systems, which would be challenging to obtain otherwise. To ensure the success of ML-enabled systems, it is essential to be aware of certain qualities of ML solutions (performance, transparency, fairness), known from a Requirements Engineering (RE) perspective as non-functional requirements (NFRs). However, when systems involve ML, NFRs for traditional software may not apply in the same ways; some NFRs may become more prominent or less important; NFRs may be defined over the ML model, the data, or the entire system; and NFRs for ML may be measured differently. In this work, we aim to understand the state of the art and the challenges of dealing with NFRs for ML in industry. We interviewed ten engineering practitioners working with NFRs and ML. We find examples of (1) the identification and measurement of NFRs for ML, (2) the identification of more and less important NFRs for ML, and (3) the challenges associated with NFRs and ML in industry. This knowledge paints a picture of how ML-related NFRs are treated in practice and helps to guide future RE-for-ML efforts.
Location: Hybrid: Jupiter 473 and Zoom (password: 067177)

Date & Time: 05.11.2021 (10:45)
Presenter: Christoph Laaber (Simula, Norway)
Title: Variability of Microbenchmark Results and How to Deal with It
Talk Type: External visitor
Abstract: Performance variability is a well-known challenge in software systems research and in performance engineering practice. Microbenchmarks, a performance testing technique, are not immune to performance variability, which can lead to unreliable results from which no sound conclusions can be drawn. The reasons for performance variability are manifold, e.g., the environment the benchmark is executed in, the way the benchmark is written, or the measurement methodology that is applied. In this talk, I will show empirical evidence on the extent of benchmark result variability and elaborate on two techniques that help deal with it.
Location: Hybrid: Jupiter 473 and Zoom (password: 067177)

Date & Time: 19.11.2021 (10:00)
Presenter: Hamdy Michael Ayas
Title: The journey of migrating towards microservices
Talk Type: Conference presentation practice
Abstract: We will showcase the evolutionary and iterative nature of the migration journey towards microservices at the architectural level and the system-implementation level. We also identify 18 detailed activities that take place at these levels, categorized into four phases: 1) designing the architecture, 2) altering the system, 3) setting up supporting artifacts, and 4) implementing additional technical artifacts.
Location: Hybrid: Jupiter 473 and Zoom (password: 701684)

Date & Time: 10.12.2021 (13:00)
Presenter: Ricardo Diniz Caldas
Title: Engineering Software for Resilient Cyber-Physical Systems
Talk Type: Licentiate dry-run
Abstract: Resilient cyber-physical systems (CPS) should avoid, withstand, recover from, and evolve and adapt to cope with adversity stemming from computation, networking, or the physical environment. From an engineering point of view, the usefulness of such systems is hindered by their lack of ability to adapt to and overcome unknown stimuli, ever-changing and conflicting objectives, and deprecated internal components. Software as a tool for self-management is a key instrument for dealing with uncertainty. In this presentation, I discuss the design, verification, and validation of resilient CPS from a software viewpoint, and the implications thereof.
Location: Hybrid: Jupiter 473 and Zoom

Date & Time: 14.01.2022 (10:00)
Presenter: Aiswarya Raj Munappy
Title: Maturity Assessment of Data Pipelines
Talk Type: Conference presentation practice
Abstract: TBA

Date & Time: 28.01.2022 (10:00)
Presenter: Peter Samoaa
Title: Source Code Representation for Deep Learning in Software Engineering
Abstract: The usage of deep learning (DL) approaches for software engineering has attracted much attention. However, in order to use DL, source code needs to be formatted to fit the expected input form of DL models. This problem is known as source code representation. Source code can be represented via different approaches, most importantly tree-based, text-based, and graph-based approaches. In this paper, we use a systematic literature review (SLR) to investigate in detail the representation approaches adopted in 103 studies that use DL in the context of software engineering. We show that each way of representing source code can provide a different, yet orthogonal, view of the same source code. Thus, different software engineering tasks might require different (combinations of) code representation approaches, depending on the nature and complexity of the task. In particular, we show that it is crucial to define whether the DL approach requires lexical, syntactical, or semantic code information. Our analysis shows that a wide range of representations and combinations of representations (hybrid representations) are used to solve a wide range of common software engineering problems. However, we also observe a lack of generalizability of the presented approaches to other tasks, and a lack of validation based on industrial datasets.

Date & Time: 11.02.2022 (10:00)
Presenter: Weixing Zhang

Registered students in 2021 / 2022:

  • Hazem Samoaa
  • Aiswarya Raj Munappy
  • Weixing Zhang
  • Hamdy Ayas
  • Khan Mohammad Habibullah
  • Linda Erlenhov