These are the contents of the Verification and Validation (CIS841) course offered at Kansas State University in Fall '15-'17. It is an upper-level graduate course that requires students to have taken graduate-level courses in software engineering and software specification and to have some experience developing software.
At the beginning of the semester, a subset of papers from the following list is selected for discussion. We consider the interests and strengths of the students enrolled in the course while selecting the papers. For example, if most of the students are employed in industry, then we pick more industry-oriented papers (i.e., tools, tech transfer, studies). Each student is then assigned a set of papers from this subset.
In each class, one student presents an assigned paper while the rest of the class participates in discussing it. The presenter is required to be familiar with the entire content of the paper and to try to answer any questions raised during the discussion. At times, the presenter may not understand some parts of the paper. In such situations, he/she can either consult the instructor about those parts before the presentation or state his/her understanding and seek clarification from the instructor in class while presenting. The instructor steers the in-class discussion (e.g., by asking questions or requesting clarifications about specific parts of the paper) and resolves any doubts that were left unanswered by the presenter.
When there are more than a few students in a class, students are required to summarize the paper before the class, update the summary after discussing the paper, and then submit the updated summary to the instructor for evaluation.
- A Decade of Software Model Checking with SLAM. Thomas Ball, Vladimir Levin, and Sriram Rajamani. CACM'11. Model Checking
- A Randomized Scheduler with Probabilistic Guarantees of Finding Bugs. Sebastian Burckhardt, Pravesh Kothari, Madanlal Musuvathi, and Santosh Nagarakatte. ASPLOS'10. Bug Finding
- A Survey of Symbolic Execution Techniques. Roberto Baldoni, Emilio Coppa, Daniele Cono D'Elia, Camil Demetrescu, and Irene Finocchi. arXiv'16. Symbolic Execution
- Achievements, open problems and challenges for search based software testing. Mark Harman, Yue Jia and Yuanyuan Zhang. ICST'15. Testing
- Addressing Dynamic Issues of Program Model Checking. Flavio Lerda and Willem Visser. SPIN'01. Model Checking
- An Analysis and Survey of the Development of Mutation Testing. Yue Jia and Mark Harman. TSE'11. Testing
- An introduction to Property-based Testing. Testing
- Astrée: From Research to Industry. David Delmas and Jean Souyris. SAS'07. Static Analysis
- Automated Whitebox Fuzz Testing. Patrice Godefroid, Michael Levin, and David Molnar. NDSS'08. Fuzzing, Testing
- Bandera: a source-level interface for model checking Java programs. James C. Corbett, Matthew B. Dwyer, John Hatcliff, and Robby. ICSE'00. Model Checking, Tooling
- Basic Concepts and Taxonomy of Dependable and Secure Computing. Algirdas Avizienis, Jean-Claude Laprie, Brian Randell, and Carl Landwehr. IEEE TDSC'04. Concepts
- Boeing Flies on 99% Ada.
- CUTE: a concolic unit testing engine for C. Koushik Sen, Darko Marinov, and Gul Agha. FSE'05. Symbolic Execution, Testing
- Cadena: An Integrated Development, Analysis, and Verification Environment for Component-based Systems. John Hatcliff, William Deng, Matthew B. Dwyer, Georg Jung, and Venkatesh Prasad Ranganath. ICSE'03. MDD, Tooling
- [Choosing Properties for Property-based Testing](http://fsharpforfunandprofit.com/posts/property-based-testing-2/). Testing
- Compatibility Testing via Patterns-based Trace Comparison. Venkatesh-Prasad Ranganath, Pradip Vallathol, and Pankaj Gupta. ASE'14. Tech Transfer, Tooling
- DART: Directed Automated Random Testing. Patrice Godefroid, Nils Klarlund, and Koushik Sen. PLDI'05. Symbolic Execution, Testing
- DARWIN: An Approach for Debugging Evolving Programs. Dawei Qi, Abhik Roychoudhury, Zhenkai Liang, and Kapil Vaswani. FSE'09. Testing
- Differential Testing for Software. William M. McKeeman. DTJ'98. Testing
- Enabling Efficient Partial Order Reductions for Model Checking Object-Oriented Programs Using Static Calculation of Program Dependences. Venkatesh Prasad Ranganath, John Hatcliff, and Robby. Santos-TR2007-2. Model Checking, POR
- Evaluating Static Analysis Defect Warnings On Production Software. Nathaniel Ayewah, William Pugh, J. David Morgenthaler, John Penix, and YuQian Zhou. PASTE'07. Evaluation, Static Analysis, Tooling
- Evaluating the Effectiveness of Slicing for Model Reduction of Concurrent Object-Oriented Programs. Matthew B. Dwyer, John Hatcliff, Matthew Hoosier, Venkatesh Prasad Ranganath, Robby, and Todd Wallentine. TACAS'06. Evaluation, Program Slicing, Model Checking
- Feedback-directed random test generation. Carlos Pacheco, Shuvendu K. Lahiri, Michael D. Ernst, and Thomas Ball. ICSE'07. Testing
- How AWS uses formal methods. Chris Newcombe, Tim Rath, Fan Zhang, Bogdan Munteanu, Marc Brooker, and Michael Deardeuff. CACM'15. Specification
- How Complex Systems Fail. Richard I. Cook. Systems and Failures
- How Microsoft built, and is still building, Windows 10. Testing
- Isolating Cause-Effect Chains from Computer Programs. Andreas Zeller. FSE'02. Debugging
- Locating Causes of Program Failures. Holger Cleve and Andreas Zeller. ICSE'05. Debugging
- Model-Based Quality Assurance of Protocol Documentation: Tools and Methodology. Wolfgang Grieskamp, Nicolas Kicillof, Keith Stobie, and Victor Braberman. STVR (ICST'08). Specification
- Model-Based Testing: Where does it stand? Robert V. Binder, Bruno Legeard, and Anne Kramer. CACM'15. Evaluation, Testing
- Model Checking Programs. Willem Visser, Klaus Havelund, Guillaume Brat, SeungJoon Park, and Flavio Lerda. ASE'03. Model Checking
- Moving Fast with Software Verification. Cristiano Calcagno, Dino Distefano, Jeremy Dubreil, Dominik Gabi, Pieter Hooimeijer, Martino Luca, Peter O'Hearn, Irene Papakonstantinou, Jim Purbrick, and Dulma Rodriguez. NFM'15. Static Analysis, Tech Transfer
- Pex - White Box Test Generation for .NET. Nikolai Tillmann and Peli de Halleux. TAP'08. Symbolic Execution, Testing
- Race Conditions, Distribution, Interactions -- Testing the Hard Stuff and Staying Sane. Testing
- Realizing quality improvement through test driven development: results and experiences of four industrial teams. Nachiappan Nagappan, E. Michael Maximilien, Thirumalesh Bhat, and Laurie Williams. ESE'08. TDD
- Regression Testing Minimisation, Selection and Prioritisation: A Survey. Shin Yoo and Mark Harman. STVR'12. Concepts, Testing
- SLAM and Static Driver Verifier: Technology Transfer for Formal Methods inside Microsoft. Thomas Ball, Byron Cook, Vladimir Levin, and Sriram Rajamani. IFM'04. Tech Transfer
- Satisfiability Modulo Theories: Introduction and Applications. Leonardo De Moura and Nikolaj Bjorner. CACM'11. SMT
- Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems. Ding Yuan, Yu Luo, Xin Zhuang, Guilherme Renna Rodrigues, Xu Zhao, Yongle Zhang, Pranay U. Jain, and Michael Stumm. OSDI'14. Testing
- Software Model Checking. Ranjit Jhala and Rupak Majumdar. ACM CS'09. Model Checking
- Symbolic Execution for Software Testing: Three Decades Later. Cristian Cadar and Koushik Sen. CACM'13. Symbolic Execution, Testing
- The Model Checker SPIN. Gerard J. Holzmann. TSE'97. Model Checking
- Transferring an Automated Test Generation Tool to Practice: From Pex to Fakes and Code Digger. Nikolai Tillmann, Peli De Halleux, and Tao Xie. ASE'14. Tech Transfer
- Who Builds a House without Drawing Blueprints? Leslie Lamport. CACM'15. Specification
- Why Amazon Chose TLA+. Chris Newcombe. ABZ'14. Specification
Copyright (c) 2015, Venkatesh-Prasad Ranganath
This work is licensed under a Creative Commons Attribution 4.0 International License.
Compiler: Venkatesh-Prasad Ranganath