A+ Seminar Series - Second Meeting
-
Day - Time:
25 March 2020, 10:00
-
Place:
Area della Ricerca CNR di Pisa - Room: C-29
Speakers
Antonia Bertolino, Alejandro Moreo Fernandez
Referent
Fabrizio Falchi
Abstracts

Antonia Bertolino - "Adaptive Test Case Allocation, Selection and Generation using Coverage Spectrum and Operational Profile"

Abstract: We present an adaptive software testing strategy for test case allocation, selection and generation, based on the combined use of the operational profile and the coverage spectrum, aimed at achieving high delivered reliability of the program under test. Operational profile-based testing is a black-box technique considered well suited when reliability is a major concern, as it selects the test cases having the largest impact on failure probability in operation. The coverage spectrum is a characterization of a program's behavior in terms of the code entities (e.g., branches, statements, functions) that are covered as the program executes. The proposed strategy, named covrel+, complements operational profile information with white-box coverage measures, so as to adaptively select/generate the most effective test cases for improving reliability as testing proceeds. We assess covrel+ through experiments with subjects commonly used in software testing research, comparing the results with traditional operational testing. The results show that exploiting operational and coverage data in an integrated, adaptive way generally outperforms operational testing at achieving a given reliability target, or at detecting faults under the same testing budget, and that covrel+ is better than operational testing at detecting hard-to-detect faults.

Alejandro Moreo Fernandez - "Funnelling: A New Ensemble Method for Heterogeneous Transfer Learning and its Application to Cross-Lingual Text Classification"

Abstract: Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when "naïvely" classifying each document via its corresponding language-specific classifier. To obtain an increase in classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle "multilabel" CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system in which all documents, irrespective of language, are classified by the same (second-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by the first-tier, language-dependent classifiers. This allows the classification of all test documents, of any language, to benefit from the information present in all training documents, of any language. We present substantial experiments, run on publicly available multilingual text collections, in which funnelling is shown to significantly outperform a number of state-of-the-art baselines. All code and datasets (in vector form) are made publicly available.
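
As a rough illustration of the first talk's core idea, the Python sketch below shows one way an adaptive loop could combine an operational profile (how often each operation occurs in the field) with coverage feedback (which code entities each test exercises) when choosing the next test to run. The data structures, names (TestCase, select_next_test, adaptive_testing) and the scoring rule are purely hypothetical; the actual covrel+ allocation, selection and generation rules are those defined in the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCase:
    id: str
    operation: str   # the user operation (profile element) this test exercises

def select_next_test(candidates, op_profile, spectrum, covered):
    # Illustrative score: prefer tests whose operation is frequent in the field
    # (operational profile) and that hit code entities not yet covered (spectrum).
    def score(t):
        p_op = op_profile.get(t.operation, 0.0)
        new_entities = len(spectrum[t.id] - covered)
        return p_op * (1 + new_entities)
    return max(candidates, key=score)

def adaptive_testing(candidates, op_profile, spectrum, budget):
    covered, executed = set(), []
    pool = list(candidates)
    for _ in range(min(budget, len(pool))):
        t = select_next_test(pool, op_profile, spectrum, covered)
        executed.append(t)
        covered |= spectrum[t.id]   # coverage observed so far drives the next choice
        pool.remove(t)
    return executed

# Toy example with made-up tests, profile and coverage spectra
tests = [TestCase("t1", "login"), TestCase("t2", "search"), TestCase("t3", "checkout")]
profile = {"login": 0.5, "search": 0.4, "checkout": 0.1}
spectrum = {"t1": {"b1", "b2"}, "t2": {"b2", "b3"}, "t3": {"b4", "b5", "b6"}}
print([t.id for t in adaptive_testing(tests, profile, spectrum, budget=2)])
```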
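For the second talk, the following is a minimal sketch of the two-tier funnelling architecture, assuming scikit-learn. It is a single-label simplification (the paper addresses multilabel CLC and uses probability calibration) and assumes every class appears in each language's training set, so that all first-tier posterior vectors share the same columns; the class name Funnelling and its methods are illustrative, not the authors' released code.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

class Funnelling:
    """Two-tier sketch: per-language first-tier classifiers emit class
    posteriors; one shared second-tier classifier is trained on those
    posteriors, which form a language-independent feature space."""

    def __init__(self):
        self.vectorizers, self.first_tier = {}, {}
        self.meta = LogisticRegression(max_iter=1000)   # second-tier classifier

    def fit(self, docs_by_lang, labels_by_lang):
        Z, y = [], []
        for lang, docs in docs_by_lang.items():
            vec = TfidfVectorizer()
            X = vec.fit_transform(docs)                  # language-specific features
            clf = LogisticRegression(max_iter=1000).fit(X, labels_by_lang[lang])
            self.vectorizers[lang], self.first_tier[lang] = vec, clf
            Z.append(clf.predict_proba(X))               # posteriors = shared space
            y.extend(labels_by_lang[lang])
        self.meta.fit(np.vstack(Z), y)                   # second tier sees all languages
        return self

    def predict(self, docs, lang):
        X = self.vectorizers[lang].transform(docs)
        return self.meta.predict(self.first_tier[lang].predict_proba(X))

# Toy usage with made-up two-language data
model = Funnelling().fit(
    {"en": ["great movie", "terrible plot"], "it": ["bel film", "trama pessima"]},
    {"en": ["pos", "neg"], "it": ["pos", "neg"]})
print(model.predict(["ottimo film"], lang="it"))
```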