Keynotes

Intelligence Testing of Autonomous Software Systems

Arnaud Gotlieb

Tuesday, October 15, 2019

Abstract: Autonomous Software Systems (ASS) are systems able to plan and execute complex functions with limited human intervention, i.e., systems with self-decision capabilities. They usually complement humans' capacity to deal with unexpected events such as faults or hazards, and they make decisions based on vast amounts of uncertain data. Testing ASS is highly challenging, as their requirements in terms of safety, performance, robustness and reliability evolve with their level of autonomy. My talk will address the challenges of testing ASS and will present some cases where Artificial Intelligence techniques have been successfully used to deploy automated testing methods.

Short Bio: Arnaud Gotlieb, chief research scientist at Simula Research Laboratory in Norway, is an expert on the application of Artificial Intelligence to the validation of software-intensive systems and cyber-physical systems, including industrial robotics and autonomous systems. Dr. Gotlieb has co-authored more than one hundred and twenty publications in Artificial Intelligence and Software Engineering and has developed several tools for testing critical software systems. He has participated in many French national projects (e.g., RNTL-INKA, ACI-V3F, ANR-CAT, ANR-U3CAT) and was the scientific coordinator of the ANR-CAVERN project, which explored the capabilities of constraint programming for program verification. From 2011 to 2019, Dr. Gotlieb led the Certus center at Simula, dedicated to software validation and verification. He was recently awarded the prestigious RCN FRINATEK grant for the T-LARGO project on testing learning robots (2018-2022). He participates in the H2020 AI4EU project (2019-2022), leading the industrial pilot experiments of the European AI-on-demand platform. Dr. Gotlieb has served on many program committees, including IJCAI, CP, ICSE-SEIP, ICST and ISSRE. He co-chaired the scientific program of QSIC 2013 (Int. Conf. on Quality Software), the SEIP track of ICSE 2014, and the “Testing and Verification” track of CP (Principles and Practice of Constraint Programming) from 2016 to 2019. He co-chaired the first IEEE International Conference on Artificial Intelligence Testing in April 2019.

Testing Human-Centric Cyber-Physical Systems

Mauro Pezzè

Thursday, October 17, 2019

Abstract: Human-centric cyber-physical systems are systems where software, devices and people seamlessly and endlessly interact with evolving goals, requirements and constraints. They increasingly pervade our lives, and span from simple mobile and Web applications, like recommendation systems and virtual shops, to complex evolving systems, like autonomous vehicles and smart cities. In this talk, I will give a broad and visionary view of the emerging issues and opportunities in the verification of human-centric cyber-physical systems. I will introduce their main features and survey the main open challenges of testing them. I will discuss the scope and limitations of the most recent research results in software testing, give an overview of the ongoing, partial but promising, research activities in our USI-Star and UniMiB-LTA laboratories, and propose my vision of the most challenging open research issues.

Short bio: Mauro Pezzè is a professor of software engineering at USI – Università della Svizzera italiana, Lugano, Switzerland, where he coordinates the STAR – Software Testing and Analysis Research Lab and where he served as Dean. Mauro Pezzè is also a professor at the Università degli Studi di Milano-Bicocca, where he coordinates the LTA – Laboratory for Software Testing and Analysis. Mauro Pezzè is editor-in-chief of ACM TOSEM (Transactions on Software Engineering and Methodology), and has served on the editorial boards of IEEE TSE (Transactions on Software Engineering) and STVR (Software Testing, Verification and Reliability). He served as program chair of ICSE, the International Conference on Software Engineering, in 2012, and as program and general chair of ISSTA, the ACM International Symposium on Software Testing and Analysis, in 2006 and 2013, respectively. He is co-author of the influential book ‘Software Testing and Analysis: Process, Principles, and Techniques’, and is known for his work on software testing, program analysis, and self-healing and self-adaptive software systems.
During his career, Mauro Pezzè has had the opportunity to visit, as a student, researcher and professor, the University of Edinburgh, the University of California, Irvine, and the National University of Singapore. He has advised over 35 PhD students, many of whom are prominent members of the academic and industrial communities.

Invited Tutorial

Symbolic Execution in Testing: Illustration with the PathCrawler and Diversity Tools

Boutheina Bannour, Arnault Lapitre, Nicky Williams

Wednesday, October 16, 2019

Abstract: Two test-generation tools developed at CEA LIST are based on symbolic execution: PathCrawler, which is code-based, and Diversity, which is model-based. This tutorial will start with an introductory reminder of the principles of symbolic execution and of why and how it is widely used in testing. The PathCrawler and Diversity tools will then be used to show test generation in practice.
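As a minimal sketch of the underlying idea (a hypothetical example, not taken from the tutorial materials), symbolic execution treats program inputs as symbolic values, collects the branch conditions along each execution path, and hands each resulting path condition to a constraint solver to obtain one concrete test input per feasible path:

    #include <assert.h>
    #include <stdio.h>

    /* Symbolic execution views x and y as symbols X and Y and forks on
     * the branch condition, yielding two path conditions:
     *   path 1:  X > Y    -> return X - Y
     *   path 2:  X <= Y   -> return Y - X
     * A constraint solver then produces one concrete input per feasible
     * path, e.g. (x=1, y=0) for path 1 and (x=0, y=1) for path 2.
     */
    int abs_diff(int x, int y) {
        if (x > y)
            return x - y;
        return y - x;
    }

    int main(void) {
        /* Concrete inputs obtained by solving each path condition. */
        assert(abs_diff(1, 0) == 1);  /* exercises path 1: X > Y  */
        assert(abs_diff(0, 1) == 1);  /* exercises path 2: X <= Y */
        printf("both paths covered\n");
        return 0;
    }

A code-based tool such as PathCrawler automates this input-generation loop on real C units, which is what the hands-on part of the tutorial demonstrates.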

PathCrawler generates test-case inputs to ensure structural coverage of C code in unit testing. The user can choose between different coverage criteria, and PathCrawler outputs detailed coverage information for each test case and a justification of coverage failures. Development of PathCrawler started in 2002; over the years it has been extended to treat a larger subset of C programs and has been applied to many different verification problems, most often on embedded software. In 2010 it was made publicly available as an online test server for evaluation and use in teaching, and it is this version that will be used in the tutorial.

Diversity is an extensible tool for the development of model-based formal analyses using symbolic execution. It comes with an expressive entry language that captures the semantics of a wide range of classical models. One major application domain for Diversity is model-based testing (MBT), and the development of the tool has been mainly driven by needs arising from MBT, such as time modeling, customizable tools or selection criteria. The objective of the introductory tutorial on Diversity is to become familiar with the tool's GUI, entry language and symbolic exploration process and, finally, with some built-in analysis modules: exhaustive exploration with stopping criteria, test oracles, and behavior-selection heuristics.

The tutorial materials will be made available online.