Elsevier

Information and Software Technology

Utilising CI environment for efficient and effective testing of NFRs

Abstract

Context

Continuous integration (CI) is a practice that aims to continuously verify quality aspects of a software-intensive system, for both functional and non-functional requirements (NFRs). Functional requirements are the inputs of development and can be tested in isolation, using either manual or automated tests. In contrast, some NFRs are difficult to test without functionality, because NFRs often express quality aspects of that functionality rather than discrete features of their own. This lack of testability makes NFR testing complicated and, therefore, underrepresented in industrial practice. However, the emergence of CI has radically affected software development and created new avenues for software quality evaluation and quality information acquisition. Research has, consequently, been devoted to utilising this additional information for more efficient and effective NFR verification.

Objective

We aim to identify the state-of-the-art of utilising the CI environment for NFR testing, hereinafter referred to as CI-NFR testing.

Method

Through rigorous selection, we identified, from an initial set of 747 papers, 47 papers that describe how NFRs are tested in a CI environment. In this SLR, an evidence-based analysis, through coding, is performed on the identified papers.

Results

Firstly, the selected papers describe ten CI approaches, each involving different tools, and nine different NFRs were reported to be tested. Secondly, although feasible, CI-NFR testing is associated with eight challenges that adversely affect its adoption. Thirdly, the identified CI-NFR testing processes are tool-driven, but there is a lack of NFR testing tools that can be used in the CI environment. Finally, we propose a CI framework for NFR testing.

Conclusion

A synthesised CI framework is proposed for testing various NFRs, and associated CI tools are also mapped. This contribution is valuable, as the results of the study also show that CI-NFR testing can help improve the quality of NFR testing in practice.

Introduction

Frequent integration and automated testing [1] in rapid iterations to enhance software quality have become an important competence in software development [2].

Continuous Integration (CI) [2], [3] is widely adopted to improve software quality [4], [5], [6], [7], [8] by verifying and validating each change to a product in fast iterations using automation tools and technologies [9].

By utilising the CI environment, developers' commits or pull requests are verified and validated continuously, as early as possible, against the requirements. Generally, a CI environment contains at least a CI server such as Jenkins, a source code management system that hosts the source code [10], and hardware infrastructure to execute several types of tests [11]. For example, a developer first pushes a commit as a pull request to GitHub; this pull request is then tested by Jenkins [10] on cloud infrastructure connected to GitHub; finally, the developer collects the test results in the GitHub project.

Consequently, developers get fast feedback about their changes and the quality of the source code, which prevents faults from slipping through to later stages of the software development lifecycle.
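The commit-triggered verification loop described above can be sketched as a small script that a CI server might run for each pull request. This is a minimal illustration only: the stage names and commands are assumptions for the sketch, not a real Jenkins or GitHub configuration.

```python
import subprocess
import sys

# Illustrative pipeline stages a CI server might run per pull request.
# The commands are placeholders (assumptions), not real project commands.
STAGES = [
    ("build", [sys.executable, "-c", "pass"]),       # stand-in for a compile step
    ("unit-tests", [sys.executable, "-c", "pass"]),  # stand-in for the test suite
]

def run_pipeline(stages):
    """Run stages in order, stopping at the first failure (fail fast),
    and return a pass/fail result per executed stage for developer feedback."""
    results = {}
    for name, cmd in stages:
        ok = subprocess.run(cmd).returncode == 0
        results[name] = ok
        if not ok:
            break  # later stages are skipped; feedback is returned immediately
    return results

if __name__ == "__main__":
    print(run_pipeline(STAGES))
```

The fail-fast loop mirrors what gives developers rapid feedback in practice: a failing stage short-circuits the pipeline so the result reaches the author quickly.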

However, in many software companies the existing CI environments mainly focus on software functionality [2], [5], [8], while the non-functional requirements (NFRs), i.e. the qualities of the system [12], are still mostly tested manually [13].

NFRs can be divided into internal and external qualities: external NFRs include user experience, performance, and security, while internal NFRs include maintainability, extendability, and testability of the software [14]. Moreover, the functionality of a software system may not be usable without certain NFRs being satisfied [12]. This implies that the qualities of the system, i.e. the NFRs, provide at least as much value to customers and developers as the functionality does, which makes research into better quality evaluation, analysis, and assurance important for industrial practice.

According to the research of Mairiza et al. [14], the most commonly considered NFRs in the academic literature include performance, reliability, usability, security, and maintainability, as shown in Table 2. However, the list of NFR types is considerably larger, and given the importance of software quality, research into more types is warranted.

Some NFRs are difficult to test independently of functionality, because they are not confined to a single functional part but manifest at the overall system level (e.g. performance) [11]. Other NFRs are simply hard to test at all (e.g. hedonic aspects, usability) and are not easy to automate from a quality assurance perspective. There is research suggesting that NFRs can be tested efficiently and effectively by making use of a CI environment [13], but to our knowledge no comprehensive study has been performed to evaluate the state-of-the-art (SOTA) on the topic.
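As a concrete illustration of how one system-level NFR (performance) can nonetheless be automated in a CI job, a test can exercise an operation end-to-end and assert a latency budget. The operation and the 50 ms budget below are invented for illustration; a real CI job would call the system under test.

```python
import time

def operation_under_test(n=10_000):
    # Stand-in for an end-to-end system operation (an assumption for this
    # sketch); a real CI job would exercise the deployed system instead.
    return sum(i * i for i in range(n))

def latency_seconds(fn, *args):
    """Measure the wall-clock latency of one call to fn."""
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# The performance NFR expressed as an executable gate; the 50 ms budget
# is an illustrative assumption, not derived from any real requirement.
LATENCY_BUDGET_S = 0.050

def performance_gate():
    """Return True if the operation meets its latency budget; a CI server
    can fail the build otherwise."""
    return latency_seconds(operation_under_test) <= LATENCY_BUDGET_S
```

Expressing the quality requirement as an executable pass/fail check is what allows it to run unattended on every commit, which is precisely the property that manual NFR testing lacks.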

The underutilisation of the CI environment for NFR testing, hereinafter referred to as CI-NFR testing, despite its confirmed benefits [13] in terms of effectiveness and efficiency, indicates that more work should be done on the NFR aspects. This led us to investigate which CI approaches and tools are effective for evaluating NFRs, and which industrial practices and challenges have been reported and documented in this area. To consolidate the existing knowledge on using a CI environment and associated tests to verify NFRs, we conducted this Systematic Literature Review (SLR). Furthermore, to clarify, in this study we are not interested in the NFRs of the CI environment itself, but in the NFRs of a system or project developed using a CI environment.

This paper is organised as follows: Section 2 introduces related work. Section 3 illustrates the systematic literature review (SLR) method. Section 4 presents the results of the SLR based on the extracted data. Section 5 discusses the results, and Section 6 presents the proposed CI framework for NFR testing. The conclusions and ideas for future work are in Section 7.

Section snippets

Related work

We identified four existing SLRs related to the topic of our SLR, as shown in Table 1. These SLRs were not included among our selected papers, since they are themselves secondary studies and our review focuses on primary studies.

The four SLRs in Table 1 were summarised as follows:

In 2014 Stahl and Bosch [15] investigated software continuous integration (CI) and proposed a descriptive model by focussing on CI processes for industrial practices. They mentioned that non-functional system tests are a part of the CI automation

Methodology

In this section we introduce our research questions, search strategies, inclusion and exclusion criteria, inter-rater reliability analysis, data collection, and synthesis.

Results

This section reports the results from analysing and synthesising the extracted data to answer the research questions. The results are based directly on the synthesised data, together with our interpretations.

Discussion

In this section, we interpret and reflect on the results in the previous section.

Proposed CI framework

We propose a baseline CI framework, based on the results of this SLR, which aims to facilitate effective and efficient NFR testing. Moreover, we map all extracted CI tools and testable NFRs to each component of the proposed CI framework.

The proposed CI framework, including optional tools and which components of the CI chain focus on which NFRs, is illustrated in Fig. 17.

Conclusion and future work

This section presents the conclusions of this SLR, the CI framework proposal, and the future work.

Acknowledgments

We would like to acknowledge that this work was supported by the KKS foundation through the S.E.R.T. Research Profile project at Blekinge Institute of Technology.

References (57)

  • et al.

    Evolution of software in automated production systems: challenges and research directions

    J. Syst. Softw.

    (2015)

  • R. Nouacer et al.

    Equitas: a tool-chain for functional safety and reliability improvement in automotive systems

    Microprocess. Microsyst.

    (2016)

  • D. Preuveneers et al.

    Systematic scalability assessment for feature oriented multi-tenant services

    J. Syst. Softw.

    (2016)

  • E. Laukkanen et al.

Problems, causes and solutions when adopting continuous delivery: a systematic literature review

    Inf. Softw. Technol.

    (2017)

  • P. Rodríguez et al.

    Continuous deployment of software intensive products and services: a systematic mapping study

    J. Syst. Softw.

    (2017)

  • D. Ståhl et al.

    Modeling continuous integration practice differences in industry software development

    J. Syst. Softw.

    (2014)

  • B. Fitzgerald et al.

    Continuous software engineering: a roadmap and agenda

    J. Syst. Softw.

    (2017)

  • L. Chen

    Continuous delivery: huge benefits, but challenges too

    IEEE Softw.

    (2015)

  • J. Bosch

    Continuous software engineering: an introduction

    Continuous software engineering

    (2014)

  • N. Rathod et al.

Test orchestration: a framework for continuous integration and continuous deployment

    Pervasive Computing (ICPC), 2015 International Conference on

    (2015)

  • E. Knauss et al.

    Continuous integration beyond the team: a tooling perspective on challenges in the automotive industry

    Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement

    (2016)

  • A. Miller

    A hundred days of continuous integration

    Agile, 2008. AGILE'08. Conference

    (2008)

  • S. Dösinger et al.

    Communicating continuous integration servers for increasing effectiveness of automated testing

    Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering

    (2012)

  • A. Janes et al.

    A continuous software quality monitoring approach for small and medium enterprises

    Proceedings of the 8th ACM/SPEC on International Conference on Performance Engineering Companion

    (2017)

  • M. Shahin et al.

    Continuous integration, delivery and deployment: a systematic review on approaches, tools, challenges and practices

    IEEE Access

    (2017)

  • R. Abreu et al.

    Codeaware: sensor-based fine-grained monitoring and management of software artifacts

    Software Engineering (ICSE), 2015 IEEE/ACM 37th IEEE International Conference on

    (2015)

  • K.-T. Rehmann et al.

    Performance monitoring in SAP HANA's continuous integration process

    ACM SIGMETRICS Perform. Eval. Rev.

    (2016)

  • L. Chung et al.

    On non-functional requirements in software engineering

    Conceptual Modeling: Foundations and Applications

    (2009)

  • K.V.R. Paixão et al.

    On the interplay between non-functional requirements and builds on continuous integration

    Proceedings of the 14th International Conference on Mining Software Repositories

    (2017)

  • D. Mairiza et al.

    An investigation into the notion of non-functional requirements

    Proceedings of the 2010 ACM Symposium on Applied Computing

    (2010)

  • K.L. Gwet

    Computing inter-rater reliability and its variance in the presence of high agreement

Br. J. Math. Stat. Psychol.

    (2008)

  • A. Agresti

    Categorical Data Analysis

    (2013), ...

  • J.L. Fleiss

    Measuring nominal scale agreement among many raters

    Psychol. Bull.

    (1971)

  • J. Cohen

    A coefficient of agreement for nominal scales

    Educ. Psychol. Meas.

    (1960)

  • D.G. Altman

    Practical Statistics for Medical Research

    (1990)

  • J.R. Landis et al.

    The measurement of observer agreement for categorical data

    Biometrics

    (1977)

  • C. Wohlin

    Guidelines for snowballing in systematic literature studies and a replication in software engineering

    Proceedings of the 18th International Conference on Evaluation and Assessment in Software Engineering

    (2014)

  • D.S. Cruzes et al.

    Recommended steps for thematic synthesis in software engineering

    2011 International Symposium on Empirical Software Engineering and Measurement

    (2011)
