SE Seminar Schedule

This is the schedule for the SE division seminar in the academic year 2020/2021. Common types of seminars are trial talks (for upcoming conferences, defenses, etc.), presentations of ongoing research, lectures by academic guests, and other research talks.

Date & Time: 23.10.2020 (13:00)
Presenter: Linda Erlenhov
Title: An Empirical Study of Bots in Software Development: Characteristics and Challenges from a Practitioner’s Perspective
Talk Type: Trial Talk for FSE'20
Abstract: Preprint
Location: Zoom https://chalmers.zoom.us/j/65970011975 (Password: 672991)

Date & Time: 06.11.2020 (13:00)
Presenter: Vasilii Mosin
Title: Complex Performance Analysis of Autoencoder-Based Approaches for Anomaly Detection in Driving Scenario Images
Talk Type: Trial Talk for SCSSS'20
Abstract: Deep learning algorithms are used in the automotive industry for solving different perception tasks, for example, object detection on images from an onboard camera. It is known that these algorithms can fail when used on data which significantly differs from the training data. In order to minimize the risks related to these failures, anomaly detection techniques can be applied. Widely used anomaly detection methods for images are based on autoencoders – specific types of artificial neural networks. We provide a complex performance analysis of autoencoder-based methods for anomaly detection in driving scenario images.
Location: Zoom https://chalmers.zoom.us/j/65970011975 (Password: 672991)
See also: the autoencoder sketch after the schedule.

Date & Time: 06.11.2020 (ca. 13:45)
Presenter: Razan Ghzouli
Title: Behavior Trees in Action: A Study of Robotics Applications
Talk Type: Trial Talk for SLE'20
Abstract: Preprint
Location: Zoom https://chalmers.zoom.us/j/65970011975 (Password: 672991)

Date & Time: 20.11.2020 (13:00)
Presenter: Peter Samoaa
Title: Designing and Implementing an AI pipeline for Measuring the Brand Loyalty Through Social Media Text Mining
Talk Type: Trial Talk for SOFSEM'21
Abstract: Enhancing customer relationships through social media is an area of high relevance for companies. To this end, Social Business Intelligence (SBI) plays a crucial role by supporting companies in combining corporate data with user-generated content, usually available as textual clips on social media. Unfortunately, SBI research is often constrained by the lack of publicly available, real-world data for experimental activities. In this paper, we describe our experience in extracting social data and processing them through an enrichment pipeline for brand analysis. As a first step, we collect texts from social media and annotate them based on predefined metrics for brand analysis, using features such as sentiment and geolocation. Annotations rely on various learning and natural language processing approaches, including deep learning and geographical ontologies. Structured data obtained from the annotation process are then stored in a distributed data warehouse for further analysis. Preliminary results, obtained from the analysis of three well-known ICT brands, using data gathered from Twitter, news portals, and Amazon product reviews, show that different evaluation metrics can lead to different outcomes, indicating that no single metric is dominant for all brand analysis use cases.
Location: Zoom https://chalmers.zoom.us/j/62854069357 (Password: 830937)
See also: the enrichment-pipeline sketch after the schedule.

Date & Time: 20.11.2020 (13:45)
Presenter: Meenu John (Malmö University)
Title: AI Deployment Architecture: Multi-Case Study for Key Factor Identification
Talk Type: Trial Talk for APSEC'20
Abstract: Machine learning and deep learning techniques are becoming increasingly popular and critical for companies as part of their systems. However, although the development and prototyping of ML/DL systems are common across companies, the transition from prototype to production-quality deployment models is challenging. One of the key challenges is how to select an optimal architecture for AI deployment. Based on our previous research, and in order to offer support and guidance to practitioners, we developed a framework in which we present five architectural alternatives for AI deployment ranging from centralized to fully decentralized edge architectures. As part of our research, we validated the framework in software-intensive embedded system companies and identified key challenges that they face when deploying ML/DL models. In this paper, and to further advance our research on this topic, we identify the key factors that help practitioners determine what architecture to select for deployment of ML/DL models. For this, we conducted a follow-up study involving interviews and workshops in seven case companies in the embedded systems domain. Based on our findings, we identify three key factors and we develop a framework in which we outline how prioritization and trade-offs between these result in a certain architecture. With our framework, we provide practitioners with guidance on how to select the optimal architecture for a certain AI deployment. The contribution of the paper is threefold. First, we identify key factors critical for AI system deployment. Second, we present a framework in which we outline how prioritization and trade-offs between these factors result in the selection of a certain architecture. Third, we discuss additional factors that may or may not influence the selection of an optimal architecture.
Location: Zoom (same meeting as the 13:00 talk above)
See also: the architecture-selection sketch after the schedule.

Date & Time: 04.12.2020 (13:00)
Presenter: Samuel Idowu
Title: Machine Learning Asset Management Tools
Talk Type: Ongoing research
Abstract: Machine learning (ML) techniques are becoming essential components of many software systems today, causing an increasing need to adapt traditional software engineering practices and tools to ML-based software development. This need is especially pronounced due to the challenges associated with the large-scale development and deployment of ML systems. Among the most commonly reported challenges during the development, production, and operation of ML-based systems are experiment management, dependency management, monitoring, and logging. In recent years, we have seen several efforts to address these issues with an increasing number of tools for tracking and managing ML experiments and their assets. To facilitate research and practice on engineering intelligent systems, it is essential to understand the nature of these tools. What kind of support do they provide? What asset types do they track? What operations are carried out on the tracked assets? What are their commonalities and variabilities? To improve the empirical understanding of the asset management capabilities in available tools, we present a feature-based survey of 17 ML asset management tools identified in a systematic search. We give an overview of these tools' features for supporting various asset types and the essential operations they provide. We found that most of the reviewed tools depend on traditional version control systems, while only a few support an asset granularity level that differentiates between important ML assets, such as datasets and models.
Location: Zoom https://chalmers.zoom.us/j/67030246934 (Password: 844107)
See also: the asset-tracking sketch after the schedule.

Date & Time: 18.12.2020 (13:00)
Presenter: Free slot

Date & Time: 08.01.2021 (13:00)
Presenter: Joel Scheuner
Title: Serverless Application Benchmarking
Talk Type: Ongoing research
Abstract: TBA
Location: Zoom https://chalmers.zoom.us/j/65079030510 (Password: 853559)

Date & Time: 22.01.2021 (13:00)
Presenter: Eric Knauss
Title: Constructive Master’s Thesis Work in Industry: Guidelines for Applying Design Science Research
Talk Type: Faculty Talk
Abstract: Software engineering researchers and practitioners rely on empirical evidence from the field. Thus, education of software engineers must include strong and applied education in empirical research methods. For most students, the master’s thesis is the last, but also the most applied, form of this education in their studies. Especially thesis work in collaboration with industry requires that concerns of stakeholders from academia and practice are carefully balanced. It is possible, yet difficult, to do high-impact empirical work within the timeframe of a typical thesis. In particular, if this research aims to provide practical value to industry, academic quality can suffer. Even though constructive research methods such as Design Science Research (DSR) exist, thesis projects repeatedly struggle to apply them. DSR enables balancing such concerns by providing room both for knowledge questions and design work. Yet, only limited experience exists in our field on how to make this research method work within the context of a master’s thesis. To enable running design science master’s theses in collaboration with industry, we complement existing method descriptions and guidelines with our own experience and pragmatic advice to students, examiners, and supervisors in academia and industry. This paper itself is based on DSR. Drawing on 12 design science theses over the last seven years, we collect common pitfalls and good practice from analysing the theses, the student-supervisor interaction, the supervisor-industry interaction, the examiner feedback, and, where available, reviewer comments on publications that are based on such theses. We provide concrete advice on framing research questions, structuring a report, and planning and conducting empirical work with practitioners.
Location: Zoom https://chalmers.zoom.us/j/69933630160

Date & Time: 05.02.2021 (13:00)
Presenter: Free slot

Date & Time: 19.02.2021 (13:00)
Presenter: Hamdy Michael Ayas
Title: TBA
Talk Type: Trial Talk for TBA
Abstract: TBA
Location: Zoom (link TBA)

Date & Time: 05.03.2021 (13:00)
Presenter: Aiswarya Raj Munappy
Title: AI Impact on Data Pipelines
Talk Type: Ongoing research
Abstract: TBA
Location: Zoom (link TBA)

Date & Time: 19.03.2021 (13:00)
Presenter: Afonso Fontes
Title: TBA
Talk Type: Trial Talk for TBA
Abstract: TBA
Location: Zoom (link TBA)

Date & Time: 02.04.2021 (13:00)
Presenter: Chi Zhang
Title: TBA
Talk Type: TBA
Abstract: TBA
Location: TBA

Date & Time: 16.04.2021 (13:00)
Presenter: Free slot

Date & Time: 30.04.2021 (13:00)
Presenter: Ricardo Caldas
Title: TBA
Talk Type: Trial Talk for TBA
Abstract: TBA
Location: Zoom (link TBA)

Date & Time: 14.05.2021 (13:00)
Presenter: Free slot

Date & Time: 28.05.2021 (13:00)
Presenter: Free slot

Date & Time: 11.06.2021 (13:00)
Presenter: Weixing Zhang
Title: TBA
Talk Type: TBA
Abstract: TBA
Location: TBA
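
Illustrative Code Sketches

The sketches below are editorial illustrations of techniques mentioned in the abstracts above. They are minimal, self-contained Python examples under assumptions stated in each lead-in, not the presenters' actual implementations.

The first sketch relates to Vasilii Mosin's talk (06.11.2020): an autoencoder is trained to reconstruct normal driving images, and inputs with unusually high reconstruction error are flagged as anomalies. The network architecture, the percentile-based threshold, and the random stand-in data are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ImageAutoencoder(nn.Module):
        """Small MLP autoencoder over flattened grayscale images."""
        def __init__(self, n_pixels: int = 64 * 64, latent_dim: int = 32):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(n_pixels, 256), nn.ReLU(),
                nn.Linear(256, latent_dim),
            )
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, 256), nn.ReLU(),
                nn.Linear(256, n_pixels), nn.Sigmoid(),
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def anomaly_scores(model: nn.Module, images: torch.Tensor) -> torch.Tensor:
        """Per-image reconstruction error; high error suggests an anomaly."""
        with torch.no_grad():
            recon = model(images)
        return ((images - recon) ** 2).mean(dim=1)

    # Train on 'normal' driving images only (random data as a stand-in).
    model = ImageAutoencoder()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    normal_images = torch.rand(512, 64 * 64)  # placeholder for real data
    for _ in range(10):  # a few epochs, for illustration only
        optimizer.zero_grad()
        loss = nn.functional.mse_loss(model(normal_images), normal_images)
        loss.backward()
        optimizer.step()

    # Flag images whose reconstruction error exceeds a threshold calibrated
    # on normal data (here: the 95th percentile of training errors).
    threshold = anomaly_scores(model, normal_images).quantile(0.95)
    new_images = torch.rand(8, 64 * 64)
    flags = anomaly_scores(model, new_images) > threshold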
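
The second sketch relates to Peter Samoaa's talk (20.11.2020): textual clips flow through an enrichment pipeline that annotates each clip with sentiment and geolocation and emits structured records for a data warehouse. The toy lexicon and gazetteer are stand-ins for the deep-learning and ontology-based annotators described in the abstract; all names and data are hypothetical.

    from dataclasses import dataclass, asdict
    from typing import Optional

    POSITIVE = {"great", "love", "excellent", "fast"}
    NEGATIVE = {"bad", "broken", "slow", "hate"}
    GAZETTEER = {"gothenburg": "Sweden", "berlin": "Germany"}  # toy geo ontology

    @dataclass
    class AnnotatedClip:
        brand: str
        text: str
        sentiment: float        # polarity in [-1.0, 1.0]
        country: Optional[str]  # None if no place was recognized

    def sentiment_score(text: str) -> float:
        """Toy lexicon-based polarity; a stand-in for a learned model."""
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

    def geolocate(text: str) -> Optional[str]:
        """Toy gazetteer lookup; a stand-in for a geographical ontology."""
        for place, country in GAZETTEER.items():
            if place in text.lower():
                return country
        return None

    def enrich(brand: str, clips: list) -> list:
        """Annotate raw clips and return flat records for a warehouse."""
        return [asdict(AnnotatedClip(brand, c, sentiment_score(c), geolocate(c)))
                for c in clips]

    rows = enrich("AcmePhone", [
        "Love the new AcmePhone, great camera",
        "AcmePhone support in Gothenburg was slow and bad",
    ])
    # 'rows' are plain dicts, ready to load into a warehouse table.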
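
The third sketch relates to Meenu John's talk (20.11.2020): the abstract describes a framework in which prioritization of key factors leads to one of five deployment alternatives, from centralized cloud to fully decentralized edge. The factor names, scoring, and bucketing below are editorial assumptions, not the paper's actual framework.

    # Five alternatives, ordered from fully centralized to fully decentralized.
    ARCHITECTURES = [
        "centralized cloud",
        "cloud with edge inference",
        "hybrid cloud/edge",
        "federated edge",
        "fully decentralized edge",
    ]

    def select_architecture(latency_sensitivity: float,
                            privacy_requirements: float,
                            edge_compute_budget: float) -> str:
        """Each factor is scored in [0, 1]; higher scores push deployment
        toward the edge. The mean score is bucketed into the alternatives."""
        pressure = (latency_sensitivity + privacy_requirements
                    + edge_compute_budget) / 3
        index = min(int(pressure * len(ARCHITECTURES)), len(ARCHITECTURES) - 1)
        return ARCHITECTURES[index]

    # Strict latency and privacy needs with decent edge hardware:
    print(select_architecture(0.8, 0.7, 0.6))  # -> "federated edge"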
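
The last sketch relates to Samuel Idowu's talk (04.12.2020): it shows the kind of typed asset tracking the surveyed tools provide, recording datasets, parameters, and metrics per experiment run at a finer granularity than a plain version control system. The API is a hypothetical illustration, not any specific tool's interface.

    import hashlib
    import json
    import time
    from dataclasses import dataclass, field

    @dataclass
    class Run:
        """One experiment run with typed, named assets (not opaque files)."""
        experiment: str
        started: float = field(default_factory=time.time)
        assets: dict = field(default_factory=dict)  # name -> typed record

        def log_dataset(self, name: str, content: bytes) -> None:
            # Store a content hash so the exact dataset version is traceable.
            digest = hashlib.sha256(content).hexdigest()
            self.assets[name] = {"type": "dataset", "sha256": digest}

        def log_params(self, **params) -> None:
            self.assets["params"] = {"type": "parameters", **params}

        def log_metric(self, name: str, value: float) -> None:
            self.assets[name] = {"type": "metric", "value": value}

        def save(self, path: str) -> None:
            with open(path, "w") as f:
                json.dump({"experiment": self.experiment,
                           "started": self.started,
                           "assets": self.assets}, f, indent=2)

    run = Run("anomaly-detector-v1")
    run.log_dataset("training-images", b"...raw bytes of the dataset...")
    run.log_params(latent_dim=32, learning_rate=1e-3)
    run.log_metric("val_reconstruction_error", 0.042)
    run.save("run-001.json")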

Registered students in 2020/2021:

  • Hamdy Michael Ayas
  • Afonso H. Fontes
  • Katja Tuma
  • Chi Zhang
  • Joel Scheuner
  • Linda Erlenhov
  • Hazem Samoaa
  • Samuel Idowu
  • Ricardo Diniz Caldas
  • Mukelabai Mukelabai
  • Razan Ghzouli
  • Weixing Zhang
  • Aiswarya Raj Munappy
  • Meenu John (Malmö University)