9th Joint Meeting of
the European Software Engineering Conference and
the ACM SIGSOFT Symposium
on the Foundations of Software Engineering

Tutorial Program

  1. Using Continuous Prediction and Tradeoffs to Achieve Project Success
  2. Software Engineering for Mobile Apps: Research Accomplishments and Future Challenges
  3. Statistics in Software Engineering: Pitfalls and Good Practices
  4. DevOps and Continuous Delivery - Simplifying Workload Development, Deployment and Management CANCELLED
  5. Symbolic Execution For Program Debugging and Repair
  6. Git & GitHub Foundations for Educators
  7. Energy-Aware and Energy-Efficient Software
  8. Do's and Don'ts for Empirical Software Engineering: Three different perspectives

9:30 - 11:00

Using Continuous Prediction and Tradeoffs to Achieve Project Success

Murray Cantor and Peri Tarr (IBM Research)

Location: Pavlovsky Hall

'Project success' in software engineering occurs when development teams produce products that provide the value desired by the funding stakeholders. Software engineering processes are full of uncertainty, however, and many things can go wrong that prevent teams from delivering the desired set of features, at the desired quality level, by a given date. A prevalent problem in software engineering today is therefore the disconnect between the funding stakeholders, who expect some return on their investment in the project, and the project team, which knows it is not possible to commit simultaneously to scope, quality, and time. This tutorial describes a set of predictive analytic techniques that facilitate ongoing conversations between stakeholders and development teams, allowing them to jointly manage the uncertainties and tradeoffs in software development, in both traditional and continuous delivery contexts. It covers modern development methods, semantic web and collaboration principles, and the application of predictive analytics and optimization techniques.

Software Engineering for Mobile Apps: Research Accomplishments and Future Challenges

Emad Shihab (Rochester Institute of Technology) and Ahmed E. Hassan (Queen's University, Canada)

Location: Mariinsky Hall

Over the past few years, we have seen a boom in the popularity of mobile devices and of the mobile apps that run on these devices. These modern apps bring a whole slew of new challenges to software practitioners. Traditional mobile challenges such as limited processing power are no longer as relevant; instead, a new set of software engineering challenges has emerged from the highly-connected nature of these devices, their unique distribution channels (i.e., app markets like the Apple App Store and Google Play), and novel revenue models (e.g., freemium and subscription apps). This tutorial presents the latest research in Mobile Software Engineering (MSE). First, we present the differences between mobile apps and traditional desktop applications. Then, we discuss the state-of-the-art research on MSE code and user-perceived quality. We also highlight recent findings on the impact of various mobile-specific issues (e.g., monetization, mobility) on traditional software engineering problems. Lastly, the tutorial will present future challenges and opportunities in the area of MSE and provide some resources to further enhance research in this emerging area.

11:30 - 13:00

Statistics in Software Engineering: Pitfalls and Good Practices

Audris Mockus (Avaya Labs), Ahmed E. Hassan (Queen's University, Canada), and Meiyappan Nagappan (Queen's University, Canada)

Location: Mariinsky Hall

The reliance of the Software Engineering (SE) community on data and on quantitative analysis has grown tremendously, yet a typical SE researcher or practitioner has limited exposure to these domains. The peculiarities of the highly-structured and large-scale data from version control systems and other data sources used in SE make it difficult for an SE researcher to avoid numerous pitfalls. The purpose of this tutorial is to illustrate the most common pitfalls in the statistical analysis of software repositories. In particular, we will discuss how to clean and transform software data; which models or tests to use, when, and why; and how to check model assumptions. At the end of the tutorial, participants will be familiar with ways to address the most common challenges associated with the statistical analysis of software repositories. Sample datasets and R scripts illustrating the various components of the tutorial will also be shared.

Prerequisite: The focus of the presentation will be on how to solve concrete SE problems, but attendees should be familiar with basic ideas in probability and statistics, such as probability distributions and regression models.
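To give a flavor of one such pitfall, here is a purely illustrative sketch in Python (the tutorial itself shares R scripts; the data below is simulated, not from the tutorial materials). Repository measures such as commit sizes are typically heavily right-skewed, so the mean is a misleading summary of a "typical" value, while a log transformation makes the data far better behaved:

```python
import math
import random
import statistics

# Hypothetical data: simulate 10,000 commit sizes from a lognormal
# distribution, mimicking the heavy right skew of real repository data.
random.seed(1)
commit_sizes = [random.lognormvariate(3.0, 1.5) for _ in range(10_000)]

# Pitfall: on skewed data the mean is pulled far above the median, so it
# does not describe a "typical" commit, and tests assuming normality can
# be badly misled.
mean_size = statistics.mean(commit_sizes)
median_size = statistics.median(commit_sizes)
print(f"raw scale: mean = {mean_size:.1f}, median = {median_size:.1f}")

# Common remedy: analyze log-transformed values, which for this kind of
# data are approximately normal, so mean and median nearly agree.
log_sizes = [math.log(s) for s in commit_sizes]
print(f"log scale: mean = {statistics.mean(log_sizes):.2f}, "
      f"median = {statistics.median(log_sizes):.2f}")
```

On the raw scale the mean comes out several times larger than the median; after the log transform the two nearly coincide, which is exactly the kind of check on model assumptions the tutorial discusses.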

DevOps and Continuous Delivery - Simplifying Workload Development, Deployment and Management

Florian Rosenberg and Tamar Eilam (IBM Research)

CANCELLED

14:30 - 16:00

Symbolic Execution For Program Debugging and Repair

Abhik Roychoudhury (Nat'l Univ. of Singapore) and Satish Chandra (IBM Research)

Location: Pavlovsky Hall

Symbolic execution refers to executing a program with symbolic, or un-instantiated, inputs, as opposed to concrete inputs. Symbolic execution along a program path yields a path condition, representing the set of all inputs that execute the path, as well as an output expression capturing the input-output relationship. In this way, symbolic execution allows a whole set of concrete executions with "equivalent" behavior to be reasoned about once, as a group, rather than each execution individually. In this tutorial we promote, and prompt the community to think about, another use of symbolic execution: one in which the objective is to debug and possibly repair a program. We will show how symbolic execution can help uncover the intended program behavior via a variety of methods, and also show how fixing or repairing a program can benefit from symbolic execution. Attendees of this tutorial will gain a working knowledge of symbolic execution technology. They will also gain insight into how this technology can be used to build useful tools that enhance software quality and programmer productivity. No prior background is assumed of attendees; the tutorial will be self-contained.
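To make the notion of a path condition concrete, here is a minimal, purely illustrative Python sketch (hand-rolled for this program; real engines such as KLEE operate on compiled code and use SMT solvers). It symbolically "executes" a toy program equivalent to `if x > 5: y = x + 1 else: y = 2 * x; return y`, forking at the branch and recording, for each path, the path condition and the symbolic output expression:

```python
def sym_exec(stmts, env, pc):
    """Symbolically execute a list of statements.
    env maps variables to symbolic expressions (plain strings);
    pc is the list of branch constraints along the current path.
    Yields (path_condition, symbolic_output) pairs, one per path."""
    if not stmts:
        return
    stmt, rest = stmts[0], stmts[1:]
    kind = stmt[0]
    if kind == "assign":                       # ('assign', var, template)
        _, var, template = stmt
        new_env = dict(env)
        new_env[var] = template.format(**env)  # substitute symbolic values
        yield from sym_exec(rest, new_env, pc)
    elif kind == "if":                         # ('if', cond, then, else)
        _, cond, then_branch, else_branch = stmt
        c = cond.format(**env)
        # Fork: explore both branches, each under its own constraint.
        yield from sym_exec(then_branch + rest, env, pc + [c])
        yield from sym_exec(else_branch + rest, env, pc + [f"!({c})"])
    elif kind == "return":                     # ('return', template)
        _, template = stmt
        yield (" && ".join(pc) or "true", template.format(**env))

# Toy program: if x > 5: y = x + 1 else: y = 2 * x; return y
prog = [
    ("if", "{x} > 5",
        [("assign", "y", "{x} + 1")],
        [("assign", "y", "2 * {x}")]),
    ("return", "{y}"),
]

for path_condition, output in sym_exec(prog, {"x": "x"}, []):
    print(path_condition, "-->", output)
```

Running this prints one line per path, e.g. `x > 5 --> x + 1` and `!(x > 5) --> 2 * x`: each path condition characterizes all concrete inputs driving execution down that path, and the paired expression summarizes the input-output behavior on it, which is precisely the per-path reasoning the abstract describes.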

Git & GitHub Foundations for Educators

John Britton (GitHub)

Location: Mariinsky Hall

Professional software developers depend on version control every day, and that dependency continues to grow as technology advances. Students with experience using version control are better prepared and more qualified to work in industry. Let us expose students to version control early in their training, not only as an additional skill, but as a tool to improve the learning experience. This tutorial on Git and GitHub will explore the concepts and application of distributed version control and how to effectively begin using it in the classroom. It steps through the foundations of Git and GitHub via practical, everyday commands, workflow ideas, and tips. As a concluding topic, it covers strategies for individual and group work submission, grading, and feedback. Although this tutorial is targeted at teachers, anyone interested in learning about Git and GitHub will benefit by attending.

NOTE: You may want to bring a laptop to follow along, but it's not necessary.

16:30 - 18:00

Energy-Aware and Energy-Efficient Software

Yu David Liu (SUNY Binghamton)

Location: Pavlovsky Hall

From smartphone apps to sensor network applications to data center jobs, energy efficiency is increasingly becoming a critical goal of modern software engineering. This tutorial offers a bird's-eye view of the emerging field of energy-aware and energy-efficient software development, with a focus on programming. The first part of the tutorial introduces general strategies for application-level energy management, such as regulating software and hardware interactions and balancing the trade-off between energy efficiency and quality of service. The second part presents four programming systems as case studies -- namely, Eon, Green, Energy Types, and Green Streams -- and elucidates, from distinct perspectives, the role of application software research in green computing. The primary target audience is researchers and practitioners interested in green programming technologies. As the tutorial focuses on the recurring themes of energy management rather than idiosyncratic programming details, it may also offer insights to researchers interested in the broader scope of green software engineering, such as software architectures, design patterns, software quality, and domain-specific modeling.

Do's and Don'ts for Empirical Software Engineering: Three different perspectives

Massimiliano Di Penta (University of Sannio), Jens Knodel (Fraunhofer Institute for Experimental Software Engineering IESE), and Carl Worms (Credit Suisse)

Location: Mariinsky Hall

Improving the body of knowledge in software engineering, and facilitating its adoption in practice, requires two fundamental stages: (1) the development of novel approaches, and (2) their empirical evaluation. This tutorial consolidates experiences and lessons learned in planning and conducting empirical evaluations from three different perspectives: academia, technology transfer in applied research, and industrial practice. Depending on the context in which an empirical evaluation needs to be conducted, e.g., a classroom in an academic context, a live project in industry, or a pilot, experimental project, different kinds of studies (controlled experiments, quasi-experiments, or case studies) need to be conducted. In much the same way, while some studies may require rigorous statistical procedures to test specific hypotheses, in other cases qualitative analysis may be much more appropriate. Then, whenever possible, combining quantitative and qualitative analyses allows practical explanations to be provided for purely quantitative findings. The aim of this tutorial is to provide guidelines on what kinds of empirical evaluation should be conducted in different circumstances, based on the research questions to be addressed and on the context in which the study will be conducted, and on how such an evaluation should be properly planned.
Specifically, the tutorial will provide insights on (a) how to design and set up empirical studies depending on the context in which the study will be conducted; (b) when to aim for statistically valid statements, when it is more appropriate to draw only qualitative insights from a study, and when the two should be combined; (c) how to conceive effective qualitative studies as opposed to quantitative ones; (d) how to avoid "hidden assumptions" corrupting the validity of empirical studies; and (e) how to report study results in a way that is effective for different stakeholders, e.g., from academia or industry.
