SE Seminar Schedule

This is the schedule for the SE division seminar in the academic year 2020/2021. Common seminar types include trial talks (for upcoming conferences, defenses, etc.), presentations of ongoing research, lectures by academic guests, and other research talks.

Each entry below lists the date and time, presenter, title, talk type, abstract, and location.

Date & Time: 23.10.2020 (13:00)
Presenter: Linda Erlenhov
Title: An Empirical Study of Bots in Software Development: Characteristics and Challenges from a Practitioner’s Perspective
Talk Type: Trial Talk for FSE'20
Abstract: Preprint
Location: Zoom https://chalmers.zoom.us/j/65970011975 (Password: 672991)

Date & Time: 06.11.2020 (13:00)
Presenter: Vasilii Mosin
Title: Complex Performance Analysis of Autoencoder-Based Approaches for Anomaly Detection in Driving Scenario Images
Talk Type: Trial Talk for SCSSS'20
Abstract: Deep learning algorithms are used in the automotive industry for solving different perception tasks, for example, object detection on images from an onboard camera. It is known that these algorithms can fail when used on data which significantly differs from the training data. In order to minimize the risks related to these failures, anomaly detection techniques can be applied. Widely used anomaly detection methods for images are based on autoencoders, a specific type of artificial neural network. We provide a complex performance analysis of autoencoder-based methods for anomaly detection in driving scenario images.
Note: a minimal sketch of the reconstruction-error idea behind such methods is included after the schedule.
Location: Zoom https://chalmers.zoom.us/j/65970011975 (Password: 672991)

Date & Time: 06.11.2020 (approx. 13:45)
Presenter: Razan Ghzouli
Title: Behavior Trees in Action: A Study of Robotics Applications
Talk Type: Trial Talk for SLE'20
Abstract: Preprint
Location: Zoom https://chalmers.zoom.us/j/65970011975 (Password: 672991)

Date & Time: 20.11.2020 (13:00)
Presenter: Peter Samoaa
Title: Designing and Implementing an AI Pipeline for Measuring the Brand Loyalty Through Social Media Text Mining
Talk Type: Trial Talk for SOFSEM'21
Abstract: Enhancing customer relationships through social media is an area of high relevance for companies. To this aim, Social Business Intelligence (SBI) plays a crucial role by supporting companies in combining corporate data with user-generated content, usually available as textual clips on social media. Unfortunately, SBI research is often constrained by the lack of publicly available, real-world data for experimental activities. In this paper, we describe our experience in extracting social data and processing them through an enrichment pipeline for brand analysis. As a first step, we collect texts from social media and annotate them based on predefined metrics for brand analysis, using features such as sentiment and geolocation. Annotations rely on various learning and natural language processing approaches, including deep learning and geographical ontologies. Structured data obtained from the annotation process are then stored in a distributed data warehouse for further analysis. Preliminary results, obtained from the analysis of three well-known ICT brands, using data gathered from Twitter, news portals, and Amazon product reviews, show that different evaluation metrics can lead to different outcomes, indicating that no single metric is dominant for all brand analysis use cases.
Location: Zoom https://chalmers.zoom.us/j/62854069357 (Password: 830937)

Date & Time: 20.11.2020 (13:45)
Presenter: Meenu John (University of Malmö)
Title: AI Deployment Architecture: Multi-Case Study for Key Factor Identification
Talk Type: Trial Talk for APSEC'20
Abstract: Machine learning and deep learning techniques are becoming increasingly popular and critical for companies as part of their systems. However, although the development and prototyping of ML/DL systems are common across companies, the transition from prototype to production-quality deployment models is challenging. One of the key challenges is how to determine the selection of an optimal architecture for AI deployment. Based on our previous research, and in order to offer support and guidance to practitioners, we developed a framework in which we present five architectural alternatives for AI deployment, ranging from centralized to fully decentralized edge architectures. As part of our research, we validated the framework in software-intensive embedded system companies and identified key challenges that they face when deploying ML/DL models. In this paper, and to further advance our research on this topic, we identify the key factors that help practitioners determine which architecture to select for deployment of ML/DL models. For this, we conducted a follow-up study involving interviews and workshops in seven case companies in the embedded systems domain. Based on our findings, we identify three key factors and develop a framework in which we outline how prioritization and trade-offs between these factors result in a certain architecture. With our framework, we provide practitioners with guidance on how to select the optimal architecture for a certain AI deployment. The contribution of the paper is threefold. First, we identify key factors critical for AI system deployment. Second, we present a framework in which we outline how prioritization and trade-offs between these factors result in the selection of a certain architecture. Third, we discuss additional factors that may or may not influence the selection of an optimal architecture.
Location: Zoom (same meeting as the 13:00 talk above)

Date & Time: 04.12.2020 (13:00)
Presenter: Samuel Idowu
Title: Machine Learning Asset Management Tools
Talk Type: Ongoing research
Abstract: Machine learning (ML) techniques are becoming essential components of many software systems today, causing an increasing need to adapt traditional software engineering practices and tools to ML-based software development. This need is especially pronounced due to the challenges associated with the large-scale development and deployment of ML systems. Among the most commonly reported challenges during the development, production, and operation of ML-based systems are experiment management, dependency management, monitoring, and logging. In recent years, we have seen several efforts to address these issues with an increasing number of tools for tracking and managing ML experiments and their assets. To facilitate research and practice on engineering intelligent systems, it is essential to understand the nature of these tools. What kind of support do they provide? What asset types do they track? What operations are carried out on the tracked assets? What are their commonalities and variabilities? To improve the empirical understanding of the asset management capabilities in available tools, we present a feature-based survey of 17 ML asset management tools identified in a systematic search. We overview these tools' features for supporting various asset types and essential supported operations. We found that most of the reviewed tools depend on traditional version control systems, while only a few support an asset granularity level that differentiates between important ML assets, such as datasets and models.
Note: a toy sketch of what tracking experiment assets can look like is included after the schedule.
Location: Zoom https://chalmers.zoom.us/j/67030246934 (Password: 844107)

Date & Time: 18.12.2020 (13:00)
Free slot.

Date & Time: 08.01.2021 (13:00)
Presenter: Joel Scheuner
Title: Serverless Application Benchmarking
Talk Type: Ongoing research
Abstract: TBA
Location: Zoom https://chalmers.zoom.us/j/65079030510 (Password: 853559)

Date & Time: 22.01.2021 (13:00)
Presenter: Eric Knauss
Title: Constructive Master’s Thesis Work in Industry: Guidelines for Applying Design Science Research
Talk Type: Faculty Talk
Abstract: Software engineering researchers and practitioners rely on empirical evidence from the field. Thus, education of software engineers must include strong and applied education in empirical research methods. For most students, the master’s thesis is the last, but also most applied, form of this education in their studies. Thesis work in collaboration with industry especially requires that concerns of stakeholders from academia and practice are carefully balanced. It is possible, yet difficult, to do high-impact empirical work within the timeframe of a typical thesis. In particular, if this research aims to provide practical value to industry, academic quality can suffer. Even though constructive research methods such as Design Science Research (DSR) exist, thesis projects repeatedly struggle to apply them. DSR enables balancing such concerns by providing room both for knowledge questions and for design work. Yet, only limited experience exists in our field on how to make this research method work within the context of a master’s thesis. To enable running design science master’s theses in collaboration with industry, we complement existing method descriptions and guidelines with our own experience and pragmatic advice to students, examiners, and supervisors in academia and industry. This paper itself is based on DSR. Based on 12 design science theses over the last seven years, we collect common pitfalls and good practice from analysing the theses, the student-supervisor interaction, the supervisor-industry interaction, the examiner feedback, and, where available, reviewer comments on publications that are based on such theses. We provide concrete advice for framing research questions and structuring a report, as well as for planning and conducting empirical work with practitioners.
Location: Zoom https://chalmers.zoom.us/j/69933630160

Date & Time: 05.02.2021 (13:00)
Presenter: David Issa Mattos
Title: Bayesian Bradley-Terry models: from primates to machine learning
Talk Type: Ongoing Research
Abstract: This presentation has a little something for everyone: it has SE, ML, Bayesian statistics, surveys, R, math, gorillas using technology, and football. We will start by discussing paired-comparison data with food preferences in gorillas using touchscreen displays. From there we will move to an introduction of the Bradley-Terry model and why you should care about it in SE. We discuss two SE applications: surveys and benchmark experiments. For benchmark experiments, we discuss how to rank automated labeling techniques with the Bradley-Terry model with random effects. Finally, we conclude the presentation by showing an application of the Davidson model with data from the Brazilian national football league. Of course, everything is Bayesian and done in R with the bpcs package (that we created). We make all code available in an R notebook.
Note: a short sketch of the basic (non-Bayesian) Bradley-Terry model is included after the schedule.
Location: Zoom https://chalmers.zoom.us/j/66699779945 (Password: 412378)

Date & Time: 19.02.2021 (13:00)
Free slot.

Date & Time: 05.03.2021 (13:00)
Presenter: Aiswarya Raj Munappy
Title: Modelling Data Pipelines
Talk Type: Ongoing Research
Abstract: Data pipelines play an important role throughout the data management process. They automate the steps ranging from data generation to data reception, thereby reducing human intervention. A failure or fault in a single step of a data pipeline has cascading effects that might result in hours of manual intervention and clean-up. Data pipeline failure due to faults at different stages of data pipelines is a common challenge that eventually leads to significant performance degradation of data-intensive systems. To ensure early detection of these faults and to increase the quality of the data products, continuous monitoring and fault detection mechanisms should be included in the data pipeline. In this study, we explore the need for incorporating automated fault detection mechanisms and mitigation strategies at different stages of the data pipeline. Further, we identify faults at different stages of the data pipeline and possible mitigation strategies that can be adopted to reduce the impact of data pipeline faults, thereby improving the quality of data products. The idea of incorporating fault detection and mitigation strategies is validated by realizing a small part of the data pipeline using action research in the analytics team at a large software-intensive organization within the telecommunication domain.
Note: a small illustrative sketch of wrapping pipeline stages with fault detection and mitigation is included after the schedule.
Location: Zoom https://chalmers.zoom.us/j/4604877890

Date & Time: 19.03.2021 (13:00)
Presenter: Peter Samoaa
Title: An Exploratory Study of the Impact of Parameterization on JMH Measurement Results in Open-Source Projects
Talk Type: Trial Talk for ICPE'21
Abstract: Preprint (https://xleitix.github.io/icet/preprints/icpe21.pdf)
Location: Zoom https://chalmers.zoom.us/j/67468132279

Date & Time: 02.04.2021 (13:00)
No seminar (Easter).

Date & Time: 16.04.2021 (13:00)
Presenter: Chi Zhang
Title: Pedestrian Trajectory Prediction with Social-Interaction Weighted Network
Talk Type: Trial Talk for SHAPE-IT project meeting (and IEEE-IVS conference)
Abstract: In this paper, we present the Social Interaction-Weighted Spatio-Temporal Convolutional Neural Network (Social-IWSTCNN), which includes both the spatial and the temporal features. We propose a novel design, namely the Social Interaction Extractor, to learn the spatial and social interaction features of pedestrians. We use the recently released large-scale Waymo Open Dataset in urban traffic scenarios to analyze the performance of our proposed algorithm in comparison to the state-of-the-art models. The results show that our algorithm outperforms state-of-the-art algorithms such as Social-LSTM, Social-GAN, and Social-STGCNN on both Average Displacement Error (ADE) and Final Displacement Error (FDE).
Location: Zoom https://gu-se.zoom.us/j/62602987368?pwd=Z290MUFXK1g0N0V1bEhGUmJ2MWFjQT09 (Passcode: 660880)

Date & Time: 30.04.2021 (13:00)
Presenter: Ricardo Caldas
Title: TBA
Talk Type: Trial Talk (venue TBA)
Abstract: TBA
Location: Zoom (link TBA)

Date & Time: 14.05.2021 (13:00)
Presenter: Afonso Fontes
Title: TBA
Talk Type: Trial Talk (venue TBA)
Abstract: TBA
Location: Zoom (link TBA)

Date & Time: 28.05.2021 (13:00)
Presenter: Hamdy Michael Ayas

Date & Time: 11.06.2021 (13:00)
Presenter: Weixing Zhang
Title: TBA
Talk Type: TBA
Abstract: TBA
Location: TBA
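
Illustrative code sketches (referenced from the schedule entries above)

The 06.11.2020 talk is about autoencoder-based anomaly detection. As a rough illustration of the underlying idea, an autoencoder trained on normal images reconstructs them well, so images with unusually high reconstruction error can be flagged as anomalies. The sketch below shows this in PyTorch; the toy architecture, 64x64 image size, random stand-in data, and quantile threshold are assumptions for illustration only, not the methods analyzed in the talk.

```python
# Minimal sketch of autoencoder-based anomaly detection via reconstruction error.
# Architecture, image size, data, and threshold rule are illustrative assumptions.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_errors(model, images):
    """Per-image mean squared reconstruction error."""
    with torch.no_grad():
        recon = model(images)
        return ((images - recon) ** 2).mean(dim=(1, 2, 3))

if __name__ == "__main__":
    model = ConvAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    # Stand-in for "normal" driving-scenario training images.
    normal_batch = torch.rand(8, 3, 64, 64)
    for _ in range(5):  # a few toy training steps
        optimizer.zero_grad()
        loss = loss_fn(model(normal_batch), normal_batch)
        loss.backward()
        optimizer.step()

    # Flag test images whose reconstruction error exceeds a threshold
    # calibrated on normal data (here: a simple 95% quantile).
    threshold = reconstruction_errors(model, normal_batch).quantile(0.95)
    test_batch = torch.rand(4, 3, 64, 64)
    anomalous = reconstruction_errors(model, test_batch) > threshold
    print(anomalous)
```

In practice the threshold would be calibrated on held-out normal data, and the choice of architecture and error metric is exactly the kind of design decision the talk evaluates.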
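
The 04.12.2020 talk surveys ML asset management tools that track assets such as datasets, models, parameters, and metrics across experiment runs. The following toy sketch only illustrates what recording such assets can look like; the file layout, field names, and hashing scheme are hypothetical and do not correspond to any surveyed tool.

```python
# Toy illustration of ML experiment/asset tracking: record the dataset,
# parameters, metrics, and model artifact of one run as a JSON entry.
# File layout, field names, and hashing scheme are hypothetical.
import hashlib
import json
import time
from pathlib import Path

TRACKING_FILE = Path("experiments.json")

def file_fingerprint(path):
    """Content hash used to tell dataset/model versions apart."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()[:12]

def log_run(dataset_path, model_path, params, metrics):
    runs = json.loads(TRACKING_FILE.read_text()) if TRACKING_FILE.exists() else []
    runs.append({
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
        "dataset": {"path": str(dataset_path), "hash": file_fingerprint(dataset_path)},
        "model": {"path": str(model_path), "hash": file_fingerprint(model_path)},
        "params": params,
        "metrics": metrics,
    })
    TRACKING_FILE.write_text(json.dumps(runs, indent=2))

if __name__ == "__main__":
    # Stand-in artifacts for the example.
    Path("data.csv").write_text("x,y\n1,2\n")
    Path("model.bin").write_bytes(b"\x00\x01")
    log_run("data.csv", "model.bin",
            params={"learning_rate": 0.001, "epochs": 10},
            metrics={"accuracy": 0.93})
    print(TRACKING_FILE.read_text())
```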
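
The 05.02.2021 talk builds on the Bradley-Terry model for paired-comparison data, in which each item i has an ability a_i and P(i beats j) = exp(a_i) / (exp(a_i) + exp(a_j)). The sketch below fits these abilities by plain maximum likelihood on toy data; the Bayesian formulation with random effects presented in the talk (via the bpcs package in R) goes well beyond this minimal version.

```python
# Minimal Bradley-Terry sketch: P(i beats j) = exp(a_i) / (exp(a_i) + exp(a_j)).
# Plain maximum-likelihood fit on toy data; not the Bayesian bpcs model from the talk.
import numpy as np
from scipy.optimize import minimize

# Toy paired-comparison data: (winner_index, loser_index) over 3 items.
comparisons = [(0, 1), (1, 0), (0, 1), (1, 2), (2, 1), (0, 2), (2, 0), (1, 2)]
n_items = 3
winners = np.array([w for w, _ in comparisons])
losers = np.array([l for _, l in comparisons])

def neg_log_likelihood(abilities):
    diff = abilities[winners] - abilities[losers]
    # -log P(winner beats loser) = log(1 + exp(-(a_w - a_l))), computed stably.
    return np.sum(np.logaddexp(0.0, -diff))

def objective(free_params):
    # Fix the first ability at 0 for identifiability; optimize the rest.
    return neg_log_likelihood(np.concatenate(([0.0], free_params)))

result = minimize(objective, x0=np.zeros(n_items - 1), method="BFGS")
abilities = np.concatenate(([0.0], result.x))
print("Estimated abilities:", abilities)
print("Ranking (best first):", list(np.argsort(-abilities)))
```

The same likelihood underlies ranking alternatives in surveys and benchmark experiments; the talk's version adds priors and random effects to handle repeated measurements.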
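
The 05.03.2021 talk argues for adding fault detection and mitigation at every stage of a data pipeline. One common pattern, sketched below purely for illustration, is to wrap each stage with output validation, bounded retries, and logging; the stage names, checks, and fallback policy are hypothetical and not the pipeline studied in the talk.

```python
# Illustrative sketch: wrap each data-pipeline stage with fault detection
# (output validation) and simple mitigation (bounded retries, then log and pass through).
# Stage names, checks, and policies are hypothetical examples.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def run_stage(name, stage_fn, data, validate, retries=2, backoff_seconds=1.0):
    """Run one pipeline stage, retrying on errors or invalid output."""
    for attempt in range(retries + 1):
        try:
            result = stage_fn(data)
            if validate(result):
                return result
            log.warning("%s produced invalid output (attempt %d)", name, attempt + 1)
        except Exception:
            log.exception("%s raised an error (attempt %d)", name, attempt + 1)
        if attempt < retries:
            time.sleep(backoff_seconds)
    # Mitigation of last resort: record the fault and pass the input through unchanged.
    log.error("%s failed after %d attempts; passing data through unchanged", name, retries + 1)
    return data

# Hypothetical stages: collect raw records, then clean them.
def collect(_):
    return [{"id": 1, "value": 10}, {"id": 2, "value": None}]

def clean(records):
    return [r for r in records if r["value"] is not None]

if __name__ == "__main__":
    data = run_stage("collect", collect, None,
                     validate=lambda rs: isinstance(rs, list) and len(rs) > 0)
    data = run_stage("clean", clean, data,
                     validate=lambda rs: all(r["value"] is not None for r in rs))
    print(data)
```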

Registered students in 2020/2021:

  • Hamdy Michael Ayas
  • Afonso H. Fontes
  • Katja Tuma
  • Chi Zhang
  • Joel Scheuner
  • Linda Erlenhov
  • Hazem Samoaa
  • Samuel Idowu
  • Ricardo Diniz Caldas
  • Mukelabai Mukelabai
  • Razan Ghzouli
  • Weixing Zhang
  • Aiswarya Raj Munappy
  • Meenu John (University of Malmö)