
A Bibliography of Publications that Cite the PPL

Here are some papers that cite the Parma Polyhedra Library and/or the papers in which we describe innovations that are implemented in the library. If you are aware of a paper that should be mentioned in this bibliography, please let us know.

The corresponding BibTeX source is also available.
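
Many of the entries below exercise the PPL through its convex polyhedra domain, via operations such as constraint addition, convex hull, and widening. For orientation, here is a minimal, illustrative sketch of the PPL's C++ interface (the constraints and variable names are ours, chosen only for the example):

    // Build two closed convex polyhedra, join them, and widen.
    #include <ppl.hh>
    #include <iostream>

    using namespace Parma_Polyhedra_Library;
    using namespace Parma_Polyhedra_Library::IO_Operators;

    int main() {
      Variable x(0);
      Variable y(1);

      // p1 = { x >= 0, y >= 0, x + y <= 2 }: a first loop iterate.
      C_Polyhedron p1(2);
      p1.add_constraint(x >= 0);
      p1.add_constraint(y >= 0);
      p1.add_constraint(x + y <= 2);

      // p2: a later, larger iterate of the same loop.
      C_Polyhedron p2(2);
      p2.add_constraint(x >= 0);
      p2.add_constraint(y >= 0);
      p2.add_constraint(x + y <= 4);
      p2.poly_hull_assign(p1);      // convex hull (join) of the iterates

      // H79 widening of p2 with respect to p1 (p1 must be contained in p2):
      // the unstable bound on x + y is dropped, leaving { x >= 0, y >= 0 }.
      p2.H79_widening_assign(p1);
      std::cout << p2.constraints() << std::endl;
      return 0;
    }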

"Termination Analysis of Java Bytecode"(article)
Authors: Albert, E., Arenas, P., Codish, M., Genaim, S., Puebla, G., Zanardini, D.
Editors: Barthe, G., de Boer, F. S.
Date: 2008
Pages: 2-18
Termination analysis has received considerable attention, traditionally in the context of declarative programming, and recently also for imperative languages. In existing approaches, termination analysis is performed on source programs. However, there are many situations, including mobile code, where only the compiled code is available. In this work we present an automatic termination analysis for sequential Java Bytecode programs. Such analysis presents all of the challenges of analyzing a low-level language as well as those introduced by object-oriented languages. Interestingly, given a bytecode program, we produce a constraint logic program, CLP, whose termination entails termination of the bytecode program. This allows applying the large body of work in termination of CLP programs to termination of Java bytecode. A prototype analyzer is described and initial experimentation is reported.

"Automatic Inference of Upper Bounds for Recurrence Relations in Cost Analysis"(article)
Authors: Albert, E., Arenas, P., Genaim, S., Puebla, G.
Date: 2008
Pages: 221-237
The classical approach to automatic cost analysis consists of two phases. Given a program and some measure of cost, we first produce recurrence relations (RRs) which capture the cost of our program in terms of the size of its input data. Second, we convert such RRs into closed form (i.e., without recurrences). Whereas the first phase has received considerable attention, with a number of cost analyses available for a variety of programming languages, the second phase has received comparatively little attention. In this paper we first study the features of RRs generated by automatic cost analysis and discuss why existing computer algebra systems are not appropriate for automatically obtaining closed form solutions, nor upper bounds on them. Then we present, to our knowledge, the first practical framework for the fully automatic generation of reasonably accurate upper bounds of RRs originating from cost analysis of a wide range of programs. It is based on the inference of ranking functions and loop invariants and on partial evaluation.

"COSTA: Design and Implementation of a Cost and Termination Analyzer for Java Bytecode"(anthos)
Authors: Albert, E., Arenas, P., Genaim, S., Puebla, G., Zanardini, D.
Date: 2008
Pages: 113-132
This paper describes the architecture of COSTA, an abstract interpretation based COSt and Termination Analyzer for Java bytecode. The system receives as input a bytecode program, (a choice of) a resource of interest, and tries to obtain an upper bound on the resource consumption of the program. COSTA provides several non-trivial notions of cost, such as the consumption of the heap, the number of bytecode instructions executed, and the number of calls to a specific method. Additionally, COSTA tries to prove termination of the bytecode program, which implies the boundedness of any resource consumption. Having cost and termination together is interesting, as both analyses share most of the machinery to, respectively, infer cost upper bounds and to prove that the execution length is always finite (i.e., the program terminates). We report on experimental results which show that COSTA can deal with programs of realistic size and complexity, including programs which use Java libraries. To the best of our knowledge, this system provides for the first time evidence that resource usage analysis can be applied to a realistic object-oriented, bytecode programming language.
Revised papers presented at the 6th International Symposium on Formal Methods for Components and Objects (FMCO 2007), Amsterdam, The Netherlands, October 24–26, 2007

"Symbolic Analysis for Improving Simulation Coverage of Simulink/Stateflow Models"(article)
Authors: Alur, R., Kanade, A., Ramesh, S., Shashidhar, K.
Date: 2008
Pages: 89-98
Aimed at verifying safety properties and improving simulation coverage for hybrid systems models of embedded control software, we propose a technique that combines numerical simulation and symbolic methods for computing state-sets. We consider systems with linear dynamics described in the commercial modeling tool Simulink/Stateflow. Given an initial state $x$ and a discrete-time simulation trajectory, our method computes a set of initial states that are guaranteed to be equivalent to $x$, where two initial states are considered to be equivalent if the resulting simulation trajectories contain the same discrete components at each step of the simulation. We illustrate the benefits of our method on two case studies. One case study is a benchmark proposed in the literature for hybrid systems verification and another is a Simulink demo model from MathWorks.

"IMITATOR~II: A Tool for Solving the Good Parameters Problem in Timed Automata"(article)
Authors: André, É.
Date: 2010
Pages: 91-99
We present here IMITATOR II, a new version of IMITATOR, a tool implementing the "inverse method" for parametric timed automata: given a reference valuation of the parameters, it synthesizes a constraint such that, for any valuation satisfying this constraint, the system behaves the same as under the reference valuation in terms of traces, i.e., alternating sequences of locations and actions. IMITATOR II also implements the "behavioral cartography algorithm", allowing us to solve the following good parameters problem: find a set of valuations within a given bounded parametric domain for which the system behaves well. We present new features and optimizations of the tool, and give results of applications to various examples of asynchronous circuits and communication protocols.

"Abstraction Refinement of Linear Programs with Arrays"(article)
Authors: Armando, A., Benerecetti, M., Mantovani, J.
Date: 2007
Pages: 373-388
In previous work we presented a model checking procedure for linear programs, i.e. programs in which variables range over a numeric domain and expressions involve linear combinations of the variables. In this paper we lift our model checking procedure for linear programs to deal with arrays via iterative abstraction refinement. While most approaches are based on predicate abstraction and therefore the abstraction is relative to sets of predicates, in our approach the abstraction is relative to sets of variables and array indexes, and the abstract program can express complex correlations between program variables and array elements. Thus, while arrays are problematic for most of the approaches based on predicate abstraction, our approach treats them in a precise way. This is an important feature as arrays are ubiquitous in programming. We provide a detailed account of both the abstraction and the refinement processes, discuss their implementation in the eureka tool, and present experimental results that confirm the effectiveness of our approach on a number of programs of interest.

"Recent Progress in Continuous and Hybrid Reachability Analysis"(article)
Authors: Asarin, E., Dang, T., Frehse, G., Girard, A., Le Guernic, C., Maler, O.
Date: 2006
Set-based reachability analysis computes all possible states a system may attain, and in this sense provides knowledge about the system with a completeness, or coverage, that a finite number of simulation runs cannot deliver. Due to its inherent complexity, the application of reachability analysis has been limited so far to simple systems, both in the continuous and the hybrid domain. In this paper we present recent advances that, in combination, significantly improve this applicability, and allow us to find a better balance between computational cost and accuracy. The presentation covers, in a unified manner, a variety of methods handling increasingly complex types of continuous dynamics (constant derivative, linear, nonlinear). The improvements include new geometrical objects for representing sets, new approximation schemes, and more flexible combinations of graph-search algorithm and partition refinement. We report briefly some preliminary experiments that have enabled the analysis of systems previously beyond reach.

"Size-Change Termination and Bound Analysis"(article)
Authors: Avery, J.
Date: 2006
Pages: 192-207
Despite its simplicity, the size-change termination principle, presented by Lee, Jones and Ben-Amram in [LJB01], is surprisingly strong and is able to show termination for a large class of programs. A significant limitation for its use, however, is the fact that SCT requires data types to be well-founded, and that all mechanisms used to determine termination must involve decreases in these global, well-founded partial orders. Following is an extension of the size-change principle that allows for non-well-founded data types, and a realization of this principle for integer data types. The extended size-change principle is realized through combining abstract interpretation over the domain of convex polyhedra with the use of size-change graphs. In the cases when data types are well-founded, the method handles every case that is handled by LJB size-change termination. The method has been implemented in a subject language independent shared library, libesct (available at [Ave05a]), as well as for the ANSI C specializer C-Mix_II, handling a subset of its internal language Core-C.

"Generation of Basic Semi-algebraic Invariants Using Convex Polyhedra"(article)
Authors: Bagnara, R., Rodríguez-Carbonell, E., Zaffanella, E.
Date: 2005
Pages: 19-34
A technique for generating invariant polynomial inequalities of bounded degree is presented using the abstract interpretation framework. It is based on overapproximating basic semi-algebraic sets, i.e., sets defined by conjunctions of polynomial inequalities, by means of convex polyhedra. While improving on the existing methods for generating invariant polynomial equalities, since polynomial inequalities are allowed in the guards of the transition system, the approach does not suffer from the prohibitive complexity of the methods based on quantifier-elimination. The application of our implementation to benchmark programs shows that the method produces non-trivial invariants in reasonable time. In some cases the generated invariants are essential to verify safety properties that cannot be proved with classical linear invariants.

"Generation of Basic Semi-algebraic Invariants Using Convex Polyhedra"(report)
Authors: Bagnara, R., Rodríguez-Carbonell, E., Zaffanella, E.
Date: 2005
Available at http://www.lsi.upc.es/dept/techreps/techreps.html

"Analysis of Linear Hybrid Systems in CLP"(anthos)
Authors: Banda, G., Gallagher, J. P.
Date: 2009
Pages: 55-70
In this paper we present a procedure for representing the semantics of linear hybrid automata (LHAs) as constraint logic programs (CLP); flexible and accurate analysis and verification of LHAs can then be performed using generic CLP analysis and transformation tools. LHAs provide an expressive notation for specifying real-time systems. The main contributions are (i) a technique for capturing the reachable states of the continuously changing state variables of the LHA as CLP constraints; (ii) a way of representing events in the LHA as constraints in CLP, along with a product construction on the CLP representation including synchronisation on shared events; (iii) a framework in which various kinds of reasoning about an LHA can be flexibly performed by combining standard CLP transformation and analysis techniques. We give experimental results to support the usefulness of the approach and argue that we contribute to the general field of using static analysis tools for verification.
Revised selected papers presented at the 18th International Symposium on Logic-Based Program Synthesis and Transformation (LOPSTR 2008), Valencia, Spain, July 17–18, 2008

"Decomposition of Decidable First-Order Logics over Integers and Reals"(article)
Authors: Bouchy, F., Finkel, A., Leroux, J.
Date: 2008
Pages: 147-155
We tackle the issue of representing infinite sets of real-valued vectors. This paper introduces an operator for combining integer and real sets. Using this operator, we decompose three well-known logics extending Presburger arithmetic with reals. Our decomposition splits a logic into two parts: one integer, and one decimal (i.e., on the interval $[0, 1[$). We also give a basis for an implementation of our representation.

"Iterating Octagons"(article)
Authors: Bozga, M., Gîrlea, C., Iosif, R.
Date: 2009
Pages: 337-351
In this paper we prove that the transitive closure of a nondeterministic octagonal relation using integer counters can be expressed in Presburger arithmetic. The direct consequence of this fact is that the reachability problem is decidable for flat counter automata with octagonal transition relations. This result improves the previous results of Comon and Jurski [Hubert Comon and Yan Jurski. Multiple counters automata, safety analysis and Presburger arithmetic. In CAV, LNCS 1427, pages 268–279, 1998] and Bozga, Iosif and Lakhnech [Marius Bozga, Radu Iosif, and Yassine Lakhnech. Flat parametric counter automata. In ICALP, LNCS 4052, pages 577–588. Springer-Verlag, 2006] concerning the computation of transitive closures for difference bound relations. The importance of this result is justified by the wide use of octagons for computing sound abstractions of real-life systems [A. Miné. The octagon abstract domain. Higher-Order and Symbolic Computation, 19(1):31–100, 2006]. We have implemented the octagonal transitive closure algorithm in a prototype system for the analysis of counter automata, called FLATA, and we have experimented with a number of test cases.
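
As a toy instance of what Presburger-definability of the closure means (our example, not one from the paper), take a counter incremented by two. Each finite power of the relation is again octagonal, and the full transitive closure, while no longer an octagon, is expressible in Presburger arithmetic:

    % Toy octagonal relation (ours): x' = x + 2, written octagonally.
    R(x,x')   \;\equiv\; x' - x \le 2 \;\wedge\; x - x' \le -2
    R^k(x,x') \;\equiv\; x' = x + 2k
    R^+(x,x') \;\equiv\; \exists k \ge 1.\; x' = x + 2k
              \;\equiv\; x' > x \;\wedge\; x' \equiv x \pmod{2}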

"Safety Verification of Fault Tolerant Goal-based Control Programs with Estimation Uncertainty"(article)
Authors: Braman, J. M. B., Murray, R. M.
Date: 2008
Pages: 27-32
Fault tolerance and safety verification of control systems that have state variable estimation uncertainty are essential for the success of autonomous robotic systems. A software control architecture called mission data system, developed at the Jet Propulsion Laboratory, uses goal networks as the control program for autonomous systems. Certain types of goal networks can be converted into linear hybrid systems and verified for safety using existing symbolic model checking software. A process for calculating the probability of failure of certain classes of verifiable goal networks due to state estimation uncertainty is presented. A verifiable example task is presented and the failure probability of the control program based on estimation uncertainty is found.

"Proving Parameterized Systems: The Use of a Widening Operator and Pseudo-Pipelines in Polyhedral Logic"(report)
Authors: Cachera, D., Morin-Allory, K.
Date: 2005
We propose proof techniques and tools for the formal verification of regular parameterized systems. These systems are expressed in the polyhedral model, which combines affine recurrence equations with index sets of polyhedral shape. We extend a previously defined proof system based on a polyhedral logic with the detection of pseudo-pipelines, which are particular patterns in the variable definitions that generalize the notion of pipeline. The combination of pseudo-pipeline detection with the use of a simple widening operator greatly improves the effectiveness of our proof techniques.

"Languages and Tools for Hybrid Systems Design"(article)
Authors: Carloni, L. P., Passerone, R., Pinto, A., Sangiovanni-Vincentelli, A. L.
Date: 2006
Pages: 1-193
The explosive growth of embedded electronics is bringing information and control systems of increasing complexity to every aspect of our lives. The most challenging designs are safety-critical systems, such as transportation systems (e.g., airplanes, cars, and trains), industrial plants and health care monitoring. The difficulties reside in accommodating constraints both on functionality and implementation. The correct behavior must be guaranteed under diverse states of the environment and potential failures; implementation has to meet cost, size, and power consumption requirements. The design is therefore subject to extensive mathematical analysis and simulation. However, traditional models of information systems do not interface well to the continuously evolving nature of the environment in which these devices operate. Thus, in practice, different mathematical representations have to be mixed to analyze the overall behavior of the system. Hybrid systems are a particular class of mixed models that focus on the combination of discrete and continuous subsystems. There is a wealth of tools and languages that have been proposed over the years to handle hybrid systems. However, each tool makes different assumptions on the environment, resulting in somewhat different notions of hybrid system. This makes it difficult to share information among tools. Thus, the community cannot maximally leverage the substantial amount of work that has been directed to this important topic. In this paper, we review and compare hybrid system tools by highlighting their differences in terms of their underlying semantics, expressive power and mathematical mechanisms. We conclude our review with a comparative summary, which suggests the need for a unifying approach to hybrid systems design. As a step in this direction, we make the case for a semantic-aware interchange format, which would enable the use of joint techniques, make a formal comparison between different approaches possible, and facilitate exporting and importing design representations.

"Reasoning about Synchronization in GALS Systems"(article)
Authors: Chakraborty, S., Mekie, J., Sharma, D. K.
Date: 2006
Pages: 153-169
Correct design of interface circuits is crucial for the development of System-on-Chips (SoC) using off-the-shelf IP cores. For correct operation, an interface circuit must meet strict synchronization timing constraints, and also respect sequencing constraints between events dictated by interfacing protocols and rational clock relations. In this paper, we propose a technique for automatically analyzing the interaction between independently specified synchronization constraints and sequencing constraints between events. We show how this analysis can be used to derive delay constraints for correct operation of interface circuits in a GALS system. Our methodology allows an SoC designer to mix and match different interfacing protocols, rational clock relations and synchronization constraints for communication between a pair of modules, and automatically explore their implications on correct interface circuit design.

"A Sound Floating-Point Polyhedra Abstract Domain"(article)
Authors: Chen, L., Miné, A., Cousot, P.
Date: 2008
Pages: 3-18
The polyhedra abstract domain is one of the most powerful and commonly used numerical abstract domains in the field of static program analysis based on abstract interpretation. In this paper, we present an implementation of the polyhedra domain using floating-point arithmetic without sacrificing soundness. Floating-point arithmetic allows a compact memory representation and an efficient implementation on current hardware, at the cost of some loss of precision due to rounding. Our domain is based on a constraint-only representation and employs sound floating-point variants of Fourier-Motzkin elimination and linear programming. The preliminary experimental results of our prototype are encouraging. To our knowledge, this is the first time that the polyhedra domain is adapted to floating-point arithmetic in a sound way.
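
The soundness ingredient in such floating-point domains is directed rounding: if every arithmetic step is rounded toward plus infinity, computed bounds can only over-approximate the exact ones. A small self-contained C++ sketch of this idea (ours, not the authors' implementation, and assuming nonnegative coefficients):

    // Sound upper bound for a1*x1 + a2*x2, assuming a1, a2 >= 0 and
    // x1 <= x1_max, x2 <= x2_max: with rounding toward +infinity each
    // floating-point operation can only over-approximate the exact value.
    #include <cfenv>
    #include <cstdio>

    #pragma STDC FENV_ACCESS ON

    double upper_bound(double a1, double x1_max, double a2, double x2_max) {
      const int old_mode = std::fegetround();
      std::fesetround(FE_UPWARD);        // round all operations upward
      double r = a1 * x1_max + a2 * x2_max;
      std::fesetround(old_mode);         // restore the rounding mode
      return r;                          // r >= exact bound, hence sound
    }

    int main() {
      std::printf("%.17g\n", upper_bound(0.1, 3.0, 0.2, 7.0));
      return 0;
    }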

"Static Detection of Vulnerabilities in x86 Executables"(article)
Authors: Cova, M., Felmetsger, V., Banks, G., Vigna, G.
Date: 2006
Pages: 269-278
In the last few years, several approaches have been proposed to perform vulnerability analysis of applications written in high-level languages. However, little has been done to automatically identify security-relevant flaws in binary code. In this paper, we present a novel approach to the identification of vulnerabilities in x86 executables in ELF binary format. Our approach is based on static analysis and symbolic execution techniques. We implemented our approach in a proof-of-concept tool and used it to detect taint-style vulnerabilities in binary code. The results of our evaluation show that our approach is both practical and effective.

"An Abstract Interpretation Based Combinator for Modelling While Loops in Constraint Programming"(article)
Authors: Denmat, T., Gotlieb, A., Ducassé, M.
Date: 2007
Pages: 241-255
We present the $w$ constraint combinator that models while loops in Constraint Programming. Embedded in a finite domain constraint solver, it allows programmers to develop non-trivial arithmetical relations using loops, exactly as in an imperative language style. The deduction capabilities of this combinator come from abstract interpretation over the polyhedra abstract domain. This combinator has already demonstrated its utility in constraint-based verification and we argue that it also facilitates the rapid prototyping of arithmetic constraints (e.g. power, gcd or sum).

"Polyhedra-Based Approach for Incremental Validation of Real-Time Systems"(article)
Authors: Doose, D., Mammeri, Z.
Date: 2005
Pages: 184-193
Real-time embedded systems can be used in highly important or even vital tasks (avionic and medical systems, etc.), and thus have strict temporal constraints that need to be validated. Existing solutions use temporal logic, automata or scheduling techniques. However, scheduling techniques are often pessimistic and require an almost complete knowledge of the system, and formal methods can be ill-fitted to manipulate some of the concepts involved in real-time systems. In this article, we propose a method that gives the designer the advantages of formal methods and some simplicity in manipulating real-time systems notions. This method is able to model and validate all the classical features of real-time systems, without any pessimism, while guaranteeing the termination of the validation process. Moreover, its formalism enables the study of systems of which we have only partial knowledge, and thus the validation or invalidation of a system still under design. This last point is very important, since it greatly decreases the cost of design backtracks.

"Automatic Rectangular Refinement of Affine Hybrid Systems"(article)
Authors: Doyen, L., Henzinger, T. A., Raskin, J.-F.
Date: 2005
Pages: 144-161
We show how to automatically construct and refine rectangular abstractions of systems of linear differential equations. From a hybrid automaton whose dynamics are given by a system of linear differential equations, our method computes automatically a sequence of rectangular hybrid automata that are increasingly precise overapproximations of the original hybrid automaton. We prove an optimality criterion for successive refinements. We also show that this method can take into account a safety property to be verified, refining only relevant parts of the state space. The practicability of the method is illustrated on a benchmark case study.

"Automatic Rectangular Refinement of Affine Hybrid Systems"(report)
Authors: Doyen, L., Henzinger, T. A., Raskin, J.-F.
Date: 2005
We show how to automatically construct and refine rectangular abstractions of systems of linear differential equations. From a hybrid automaton whose dynamics are given by a system of linear differential equations, our method computes automatically a sequence of rectangular hybrid automata that are increasingly precise overapproximations of the original hybrid automaton. We prove an optimality criterion for successive refinements. We also show that this method can take into account a safety property to be verified, refining only relevant parts of the state space. The practicability of the method is illustrated on a benchmark case study.

"Algorithmic Analysis of Complex Semantics for Timed and Hybrid Automata"(thesis)
Authors: Doyen, L.
Date: 2006-06
In the field of formal verification of real-time systems, major developments have been recorded in the last fifteen years. It is about logics, automata, process algebra, programming languages, etc. From the beginning, a formalism has played an important role: timed automata and their natural extension, hybrid automata. Those models allow the definition of real-time constraints using real-valued clocks, or more generally analog variables whose evolution is governed by differential equations. They generalize finite automata in that their semantics defines timed words where each symbol is associated with an occurrence timestamp. The decidability and algorithmic analysis of timed and hybrid automata have been intensively studied in the literature. The central result for timed automata is that they are positively decidable. This is not the case for hybrid automata, but semi-algorithmic methods are known when the dynamics is relatively simple, namely a linear relation between the derivatives of the variables. With the increasing complexity of today's systems, those models are however limited in their classical semantics, for modelling realistic implementations or dynamical systems. In this thesis, we study the algorithmics of complex semantics for timed and hybrid automata. On the one hand, we propose implementable semantics for timed automata and we study their computational properties: by contrast with other works, we identify a semantics that is implementable and that has decidable properties. On the other hand, we give new algorithmic approaches to the analysis of hybrid automata whose dynamics is given by an affine function of its variables.

"Fully Automatic Verification of Absence of Errors via Interprocedural Integer Analysis"(thesis)
Authors: Ellenbogen, R.
Date: 2004-12
We present an interprocedural C String Static Verifier (iCSSV), a whole program analysis algorithm for verifying the safety of string operations in C programs. The algorithm automatically proves linear relationships among pointer expressions. The algorithm is conservative, i.e., it infers only valid relationships although it may fail to detect some of them. The algorithm is targeted at programs with "shallow" pointers and complex integer relationships. Therefore, the algorithm combines context-sensitive flow-insensitive pointer analysis of pointer updates with context-sensitive and flow-sensitive integer analysis of properties of allocation sites. Context-sensitivity is achieved by specializing pointer aliases to the context and functional integer analysis. The algorithm is powerful enough to verify the absence of string manipulation errors such as accesses beyond the buffer length and past the null terminating character. Here the interprocedural analysis guarantees that our algorithm is fully automatic, i.e., does not require user annotations or any other intervention. A prototype of the algorithm was implemented. Several novel techniques are employed to make the interprocedural analysis of realistic programs feasible.

"From Model-Checking to Temporal Logic Constraint Solving"(article)
Authors: Fages, F., Rizk, A.
Date: 2009
Pages: 319-334
In this paper, we show how model-checking can be generalized to temporal logic constraint solving, by considering temporal logic formulae with free variables over some domain $\mathcal{D}$, and by computing a validity domain for the variables rather than a truth value for the formula. This allows us to define a continuous degree of satisfaction for a temporal logic formula in a given structure, opening up the field of model-checking to optimization. We illustrate this approach with reverse-engineering problems coming from systems biology, and provide some performance figures on parameter optimization problems with respect to temporal logic specifications.

"Interprozedurale Analyse linearer Ungleichungen"(thesis)
Authors: Flexeder, A.
Date: 2005-07
This thesis describes an intraprocedural as well as an interprocedural data-flow analysis which can statically determine, at every program point, the relationships that hold between the program variables. The intraprocedural analysis, based on a model by Cousot [P. Cousot and N. Halbwachs. Automatic discovery of linear restraints among variables of a program. Conference Record of the 5th Annual ACM Symposium on Principles of Programming Languages, pages 84–96, 1978], interprets linear assignments and conditions, and handles the non-linear constructs by means of non-linear assignments. With this abstraction, one tries to discover linear equality and inequality relationships between the program variables in the form of polyhedra. Since one wants to analyze not just a single function but whole programs, as the interplay of several functions, an interprocedural analysis is needed [M. Mueller-Olm and H. Seidl. Precise Interprocedural Analysis through Linear Algebra. POPL, 2004]. This analysis uses the tools of linear algebra to recognize the affine relationships that hold between the program variables at a particular program point. The treatment of procedure calls is the main focus.
In German

"Strategies for Cooperating Constraint Solvers"(thesis)
Authors: Frank, S., Mai, P. R.
Date: 2002-07
Cooperative constraint solving has been investigated by several different research groups and individuals as it provides a comfortable mechanism to attack multi-domain constraint problems. The theoretical framework of Hofstedt [P. Hofstedt. Cooperation and Coordination of Constraint Solvers. PhD thesis, Technische Universität Dresden, March 2001. Shaker Verlag, Aachen] provided the basis for the prototypical implementation described in [E. Godehardt and D. Seifert. Kooperation und Koordination von Constraint Solvern — Implementierung eines Prototyps. Master's thesis, Technische Universität Berlin, January 2001]. Taking aboard the lessons learned in the prototype, we introduce a revised implementation of the framework, to serve as a flexible basis for the conception and evaluation of advanced strategies for solver cooperation. Several additional enhancements and optimisations over the preceding implementation or the underlying theoretical framework are described, proving the correctness of those changes where necessary. Using the newly implemented framework, we propose and benchmark a set of new cooperation strategies, iteratively refining them to the point where we can offer a set of generally useful (i.e. non-problem-specific) strategies. Finally we introduce a strategy language that allows the user to define problem-specific strategies, either from scratch or derived from other strategies, and demonstrate the effectiveness of the language on a well-known example problem.

"Compositional Verification of Hybrid Systems with Discrete Interaction Using Simulation Relations"(article)
Authors: Frehse, G.
Date: 2004
Simulation relations can be used to verify refinement between a system and its specification, or between models of different complexity. It is known that for the verification of safety properties, simulation between hybrid systems can be defined based on their labeled transition system semantics. We show that for hybrid systems without shared variables, which therefore only interact at discrete events, this simulation preorder is compositional, and present assume-guarantee rules that help to counter the state explosion problem. Some experimental results for simulation checking of linear hybrid automata are provided using a prototype tool with exact arithmetic and unlimited digits.

"Assume-Guarantee Reasoning for Hybrid I/O-Automata by Over-Approximation of Continuous Interaction"(article)
Authors: Frehse, G., Han, Z., Krogh, B.
Date: 2004
This paper extends assume-guarantee reasoning (AGR) to hybrid dynamic systems that interact through continuous state variables. We use simulation relations for timed transition systems to analyze compositions of hybrid I/O automata. This makes it possible to perform compositional reasoning that is conservative in the sense of over-approximating the composed behaviors. In contrast to previous approaches that require global receptivity conditions, circularity is broken in our approach by a state-based nonblocking condition that can be checked in the course of computing the AGR simulation relations. The proposed procedures for AGR are implemented in a computational tool for the case of linear hybrid I/O automata, and the approach is illustrated with a simple example.

"PHAVer: Algorithmic Verification of Hybrid Systems Past HyTech"(article)
Authors: Frehse, G.
Date: 2005
Pages: 258-273
In 1995, HyTech broke new ground as a potentially powerful tool for verifying hybrid systems — yet it has remained severely limited in its applicability to more complex systems. We address the main problems of HyTech with PHAVer, a new tool for the exact verification of safety properties of hybrid systems with piecewise constant bounds on the derivatives. Affine dynamics are handled by on-the-fly overapproximation and by partitioning the state space based on user-definable constraints and the dynamics of the system. PHAVer's exact arithmetic is robust due to the use of the Parma Polyhedra Library, which supports arbitrarily large numbers. To manage the complexity of the polyhedral computations, we propose methods to conservatively limit the number of bits and constraints of polyhedra. Experimental results for a navigation benchmark and a tunnel diode circuit show the effectiveness of the approach.

"Compositional Verification of Hybrid Systems using Simulation Relations"(thesis)
Authors: Frehse, G.
Date: 2005-10

"PHAVer: Algorithmic Verification of Hybrid Systems Past HyTech"(article)
Authors: Frehse, G.
Date: 2008
Pages: 263-279
In 1995, HyTech broke new ground as a potentially powerful tool for verifying hybrid systems — yet its applicability remains limited to relatively simple systems. We address the main problems of HyTech with PHAVer, a new tool for the exact verification of safety properties of hybrid systems with piecewise constant bounds on the derivatives. Affine dynamics are handled by on-the-fly overapproximation, partitioning the state space based on user-definable constraints and the dynamics of the system. PHAVer features exact arithmetic in a robust implementation that, based on the Parma Polyhedra Library, supports arbitrarily large numbers. To manage the complexity of the polyhedral computations, we propose methods to conservatively limit the number of bits and constraints of polyhedra. Experimental results for a navigation benchmark and a tunnel diode circuit show the effectiveness of the approach.

"Time Domain Verification of Oscillator Circuit Properties"(article)
Authors: Frehse, G., Krogh, B. H., Rutenbar, R. A., Maler, O.
Date: 2006
Pages: 9-22
The application of formal methods to analog and mixed signal circuits requires efficient methods for constructing abstractions of circuit behaviors. This paper concerns the verification of properties of oscillator circuits. Generic monitor automata are proposed to facilitate the application of hybrid system reachability computations to characterize time domain features of oscillatory behavior, such as bounds on the signal amplitude and jitter. The approach is illustrated for a nonlinear tunnel-diode circuit model using PHAVer, a hybrid system analysis tool that provides sound verification results based on linear hybrid automata approximations and infinite precision computations.

"Verifying Analog Oscillator Circuits Using Forward/Backward Refinement"(article)
Authors: Frehse, G., Krogh, B. H., Rutenbar, R. A.
Date: 2006
Properties of analog circuits can be verified formally by partitioning the continuous state space and applying hybrid system verification techniques to the resulting abstraction. To verify properties of oscillator circuits, cyclic invariants need to be computed. Methods based on forward reachability have proven to be inefficient and in some cases inadequate in constructing these invariant sets. In this paper we propose a novel approach combining forward- and backward-reachability while iteratively refining partitions at each step. The technique can yield dramatic memory and runtime reductions. We illustrate the effectiveness by verifying, for the first time, the limit cycle oscillation behavior of a third-order model of a differential VCO circuit.
CD-ROM publication

"Towards putting abstract interpretation of Prolog into practice: design, implementation, and evaluation of a tool to verify and optimise Prolog programs"(thesis)
Authors: Gobert, F.
Date: 2007-12
Logic programming is an attractive paradigm that allows the programmer to concentrate on the meaning (the logic) of the problem to be solved - the declarative layer. An execution model is then used as the problem solver - the operational layer. In practice, for efficiency reasons, the semantics of the two layers do not always match. For instance, in Prolog, the computation of solutions is based on an incomplete depth-first search rule, unifications and negations may be unsound, some builtin language primitives are not multidirectional, and there exist extralogical features like the cut or dynamic predicates. A large amount of work has been carried out to reconcile the declarative and operational features of Prolog. Methodologies have been proposed to construct operationally correct and efficient Prolog code. Researchers have designed methods to automate the verification of specific operational properties on which optimisation of logic programs can be based. A few tools have been implemented but there is a lack of a unified framework.

The goal and topic of this thesis is the design, implementation, and evaluation of a static analyser of Prolog programs to integrate 'state-of-the-art' techniques into a unified abstract interpretation framework. Abstract interpretation is an adequate methodology to design, justify, and combine complex analyses. The analyser that we present in this thesis is based on an original proposal that had not previously been implemented. The original framework defines the notion of abstract sequence, which allows one to verify many desirable operational properties of a logic procedure. The properties include verifying type, mode, and sharing of terms, proving termination, sure success or failure, and determinacy of logic procedures, as well as linear relations between the size of input/output terms and the number of solutions to a call. An abstract sequence maintains information about the input and output terms, as well as the non-failure conditions on input terms, and the number of solutions for such inputs. The domains of abstract sequences cooperate and improve each other. The abstract execution is performed during a single global analysis, and abstract sequences are derived at each program point (the information of the various domains is computed simultaneously). The intended operational properties of a procedure are written in formal specifications.

The original framework is an interesting starting point for combining several analyses inside a unified framework. However, it is limited and inaccurate in many ways: it is not implemented, and therefore it has not been validated by experiments; it accepts only a subset of Prolog (without negation, cut, conditional and disjunctive constructs); and some of the proposed domains are not precise enough. The basic framework is only oriented towards the verification of Prolog programs, but it cannot always prove the desirable properties. In this thesis, we implement and evaluate the basic framework, and, more importantly, we overcome its limitations to make it accurate and usable in practice: the improved framework accepts any Prolog program with modules, new abstract domains and operations are added, and the language of specifications is more expressive. We also design and implement an optimiser that generates specialised code. Optimisation is essential in Prolog, but it is not easy to perform by hand and it is error prone. The optimiser uses the analysis information to safely apply source-to-source transformations. Code transformations include clause and literal reordering, introduction of cuts, and removal of redundant literals. The optimiser follows a precise strategy to choose the most rewarding transformations in best order.

This thesis shows the feasibility of a unified framework that integrates many complex analyses in a single global analysis. Practically and theoretically, a single global analysis is more attractive than a combination of a lot of separate analyses and frameworks. Many extensions have been performed to obtain an accurate and usable tool devoted to verification and optimisation of Prolog programs.

"A System to Check Operational Properties of Logic Programs"(article)
Authors: Gobert, F., Le Charlier, B.
Date: 2007
Pages: 245-259
An implemented static analyser of logic programs is presented. The system is based on a unified abstract interpretation framework, which allows the integration of several analyses devoted to verification and optimisation. The analyser is able to verify many desirable properties of logic programs executed with the search rule and other specific features of Prolog. Such operational properties include verifying type, mode, sharing, and linearity of terms, proving termination, occur-check freeness, sure success or failure, and determinacy of logic procedures, as well as linear relations between the size of input/output terms and the number of solutions to a call. It is emphasized how each analysis may contribute to the others.

"Combining Widening and Acceleration in Linear Relation Analysis"(article)
Authors: Gonnord, L., Halbwachs, N.
Date: 2006
Pages: 144-160
Linear Relation Analysis [CH78,Hal79] is one of the first, but still one of the most powerful, abstract interpretations working in an infinite lattice. As such, it makes use of a widening operator to enforce the convergence of fixpoint computations. While the approximation due to widening can be arbitrarily refined by delaying the application of widening, the analysis quickly becomes too expensive with the increase of delay. Previous attempts at improving the precision of widening are not completely satisfactory, since none of them is guaranteed to improve the precision of the result, and they can nevertheless increase the cost of the analysis. In this paper, we investigate an improvement of Linear Relation Analysis consisting in computing, when possible, the exact (abstract) effect of a loop. This technique is fully compatible with the use of widening, and whenever it applies, it improves both the precision and the performance of the analysis.
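
A standard illustrative instance (ours, not taken from the paper) of why the exact effect of a loop beats extrapolation:

    % Loop: x := 0; while x <= 100 do x := x + 2
    % Acceleration yields the exact set of reachable values,
    \{\, x \mid \exists k \ge 0.\; x = 2k \,\wedge\, x \le 102 \,\}
    % whose convex polyhedral approximation is 0 <= x <= 102.
    % Plain widening instead extrapolates the iterates
    % [0,0], [0,2], [0,4], ... to x >= 0, and the upper bound
    % x <= 102 is recovered, if at all, only by a narrowing pass.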

"Numeric Program Analysis Techniques with Applications to Array Analysis and Library Summarization"(thesis)
Authors: Gopan, D.
Date: 2007-08
Numeric program analysis is of great importance for the areas of software engineering, software verification, and security: to identify many program errors, such as out-of-bounds array accesses and integer overflows, which constitute the lion's share of security vulnerabilities reported by CERT, an analyzer needs to establish numeric properties of program variables. Many important program analyses, such as low-level code analysis, memory-cleanness analysis, and shape analysis, rely in some ways on numeric-program-analysis techniques. However, existing numeric abstractions are complex (numeric abstract domains are typically non-distributive, and form infinite-height lattices); thus, obtaining precise numeric-analysis results is by no means a trivial undertaking. In this thesis, we develop a suite of techniques with the common goal of improving the precision and applicability of numeric program analysis. The techniques address various aspects of numeric analysis, such as handling dynamically-allocated memory, dealing with programs that manipulate arrays, improving the precision of extrapolation (widening), and performing interprocedural analysis. The techniques use existing numeric abstractions as building blocks. The communication with existing abstractions is done strictly through a generic abstract-domain interface. The abstractions constructed by our techniques also expose that same interface, and thus, are compatible with existing analysis engines. As a result, our techniques are independent from specific abstractions and specific analysis engines, can be easily incorporated into existing program-analysis tools, and should be readily compatible with new abstractions to be introduced in the future. A practical application of numeric analysis that we consider in this thesis is the automatic generation of summaries for library functions from their low-level implementation (that is, from a library's binary). The source code for library functions is typically not available. This poses a stumbling block for many source-level program analyses. Automatic generation of summary functions will both speed up and improve the accuracy of library-modeling, a process that is currently carried out by hand. This thesis addresses the automatic generation of summaries for memory-safety analysis.

"Lookahead Widening"(article)
Authors: Gopan, D., Reps, T. W.
Date: 2006
Pages: 452-466
We present lookahead widening, a novel technique for using existing widening and narrowing operators to improve the precision of static analysis. This technique is both self-contained and fully automatic in the sense that it does not rely on separate analyses or human involvement. We show how to integrate lookahead widening into existing analyzers with minimal effort. Experimental results indicate that the technique is able to achieve sizable precision improvements at reasonable costs.

"Low-Level Library Analysis and Summarization"(article)
Authors: Gopan, D., Reps, T. W.
Date: 2007
Pages: 68-81
Programs typically make extensive use of libraries, including dynamically linked libraries, which are often not available in source-code form, and hence not analyzable by tools that work at source level (i.e., that analyze intermediate representations created from source code). A common approach is to write library models by hand. A library model is a collection of function stubs and variable declarations that capture some aspect of the library code's behavior. Because these are hand-crafted, they are likely to contain errors, which may cause an analysis to return incorrect results. This paper presents a method to construct summary information for a library function automatically by analyzing its low-level implementation (i.e., the library's binary).

"Guided Static Analysis"(article)
Authors: Gopan, D., Reps, T. W.
Date: 2007
Pages: 349-365
In static analysis, the semantics of the program is expressed as a set of equations. The equations are solved iteratively over some abstract domain. If the abstract domain is distributive and satisfies the ascending-chain condition, an iterative technique yields the most precise solution for the equations. However, if the above properties are not satisfied, the solution obtained is typically imprecise. Moreover, due to the properties of widening operators, the precision loss is sensitive to the order in which the state-space is explored. In this paper, we introduce guided static analysis, a framework for controlling the exploration of the state-space of a program. The framework guides the state-space exploration by applying standard static-analysis techniques to a sequence of modified versions of the analyzed program. As such, the framework does not require any modifications to existing analysis techniques, and thus can be easily integrated into existing static-analysis tools. We present two instantiations of the framework, which improve the precision of widening in (i) loops with multiple phases and (ii) loops in which the transformation performed on each iteration is chosen non-deterministically.

"A Framework for Numeric Analysis of Array Operations"(article)
Authors: Gopan, D., Reps, T. W., Sagiv, M.
Date: 2005
Pages: 338-350
Automatic discovery of relationships among values of array elements is a challenging problem due to the unbounded nature of arrays. We present a framework for analyzing array operations that is capable of capturing numeric properties of array elements. In particular, the analysis is able to establish that all array elements are initialized by an array-initialization loop, as well as to discover numeric constraints on the values of initialized elements. The analysis is based on the combination of canonical abstraction and summarizing numeric domains. We describe a prototype implementation of the analysis and discuss our experience with applying the prototype to several examples, including the verification of correctness of an insertion-sort procedure.

"Constraint-Based Abstraction of a Model Checker for Infinite State Systems"(article)
Authors: Banda, G., Gallagher, J. P.
Date: 2010
Pages: 109-124
Abstract interpretation-based model checking provides an approach to verifying properties of infinite-state systems. In practice, most previous work on abstract model checking is either restricted to verifying universal properties, or develops special techniques for temporal logics such as modal transition systems or other dual transition systems. By contrast we apply completely standard techniques for constructing abstract interpretations to the abstraction of a CTL semantic function, without restricting the kind of properties that can be verified. Furthermore we show that this leads directly to implementation of abstract model checking algorithms for abstract domains based on constraints, making use of an SMT solver.

"Counterexample Driven Refinement for Abstract Interpretation"(article)
Authors: Gulavani, B. S., Rajamani, S. K.
Date: 2006
Pages: 474-488
Abstract interpretation techniques prove properties of programs by computing abstract fixpoints. All such analyses suffer from the possibility of false errors. We present a new counterexample driven refinement technique to reduce false errors in abstract interpretations. Our technique keeps track of the precision losses during forward fixpoint computation, and does a precise backward propagation from the error to either confirm the error as a true error, or identify a refinement so as to avoid the false error. Our technique is quite simple, and is independent of the specific abstract domain used. An implementation of our technique for affine transition systems is able to prove invariants generated by the StInG tool [19] without doing any specialized analysis for linear relations. Thus, we hope that the technique can work for other abstract domains as well. We sketch how our technique can be used to perform shape analysis by simply defining an appropriate widening operator over shape graphs.

"A Combination Framework for Tracking Partition Sizes"(article)
Authors: Gulwani, S., Lev-Ami, T., Sagiv, S.
Date: 2009
Pages: 239-251
We describe an abstract interpretation based framework for proving relationships between sizes of memory partitions. Instances of this framework can prove traditional properties such as memory safety and program termination but can also establish upper bounds on usage of dynamically allocated memory. Our framework also stands out in its ability to prove properties of programs manipulating both heap and arrays which is considered a difficult task. Technically, we define an abstract domain that is parameterized by an abstract domain for tracking memory partitions (sets of memory locations) and by a numerical abstract domain for tracking relationships between cardinalities of the partitions. We describe algorithms to construct the transfer functions for the abstract domain in terms of the corresponding transfer functions of the parameterized abstract domains. A prototype of the framework was implemented and used to prove interesting properties of realistic programs, including programs that could not have been automatically analyzed before.

"Some Ways To Reduce the Space Dimension in Polyhedra Computations"(article)
Authors: Halbwachs, N., Merchat, D., Gonnord, L.
Date: 2006
Pages: 79-95
Convex polyhedra are often used to approximate sets of states of programs involving numerical variables. The manipulation of convex polyhedra relies on the so-called double description, consisting of viewing a polyhedron both as the set of solutions of a system of linear inequalities, and as the convex hull of a system of generators, i.e., a set of vertices and rays. The cost of these manipulations is highly dependent on the number of numerical variables, since the size of each representation can be exponential in the dimension of the space. In this paper, we investigate some ways for reducing the dimension: On one hand, when a polyhedron satisfies affine equations, these equations can obviously be used to eliminate some variables. On the other hand, when groups of variables are unrelated with each other, this means that the polyhedron is in fact a Cartesian product of polyhedra of lower dimensions. Detecting such Cartesian factoring is not very difficult, and we also adapt the operations to work on Cartesian products. Finally, we extend the applicability of Cartesian factoring by applying suitable variable changes, in order to maximize the factoring.
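
The detection step mentioned in the abstract amounts to grouping variables that co-occur in some constraint: if the groups partition the dimensions, the polyhedron is a Cartesian product of the corresponding lower-dimensional factors. A small illustrative union-find sketch (ours, not the authors' implementation), with each constraint represented only by the set of variables it mentions:

    // Group dimensions that share a constraint; distinct groups mean the
    // polyhedron factors as a Cartesian product of lower-dimensional parts.
    #include <cstdio>
    #include <numeric>
    #include <vector>

    struct UnionFind {
      std::vector<int> parent;
      explicit UnionFind(int n) : parent(n) {
        std::iota(parent.begin(), parent.end(), 0);
      }
      int find(int a) { return parent[a] == a ? a : parent[a] = find(parent[a]); }
      void unite(int a, int b) { parent[find(a)] = find(b); }
    };

    int main() {
      // x0 + x2 <= 1 mentions {0,2}; x1 <= 5 mentions {1}; x3 - x1 >= 0: {1,3}.
      const std::vector<std::vector<int>> constraints = {{0, 2}, {1}, {1, 3}};
      const int dims = 4;

      UnionFind uf(dims);
      for (const auto& c : constraints)
        for (std::size_t i = 1; i < c.size(); ++i)
          uf.unite(c[0], c[i]);

      // Here the factors are {x0, x2} and {x1, x3}.
      for (int v = 0; v < dims; ++v)
        std::printf("x%d -> factor %d\n", v, uf.find(v));
      return 0;
    }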

"Cartesian Factoring of Polyhedra in Linear Relation Analysis"(article)
Authors: Halbwachs, N., Merchat, D., Parent-Vigouroux, C.
Date: 2003
Pages: 355-365
Linear Relation Analysis [CH78] suffers from the cost of operations on convex polyhedra, which can be exponential in the number of involved variables. In order to reduce this cost, we propose to detect when a polyhedron is a Cartesian product of polyhedra of lower dimensions, i.e., when groups of variables are unrelated with each other. Classical operations are adapted to work on such factored polyhedra. Our implementation shows encouraging experimental results.

"Abstract Interpretation of PIC Programs through Logic Programming"(article)
Authors: Henriksen, K. S., Gallagher, J. P.
Date: 2006
Pages: 184-196
A logic based general approach to abstract interpretation of low-level machine programs is reported. It is based on modelling the behavior of the machine as a logic program. General purpose program analysis and transformation of logic programs, such as partial evaluation and convex hull analysis, are applied to the logic based model of the machine. A small PIC microcontroller is used as a case study. An emulator for this microcontroller is written in Prolog, and standard programming transformations and analysis techniques are used to specialise this emulator with respect to a given PIC program. The specialised emulator can now be further analysed to gain insight into the given program for the PIC microcontroller. The method describes a general framework for applying abstractions, illustrated here by linear constraints and convex hull analysis, to logic programs. Using these techniques on the specialised PIC emulator, it is possible to obtain constraints on, and linear relations between, data registers, enabling detection of, for instance, overflows, branch conditions and so on.

"A Logic Programming Based Approach to Applying Abstract Interpretation to Embedded Software"(thesis)
Authors: Henriksen, K. S.
Date: 2007-10
Abstract interpretation is a general framework for static program analysis. In recent years this framework has been used outside academia for verification of embedded and real-time systems. Airbus and the European Space Agency are examples of organisations that have successfully adapted this analysis framework for verification of critical components. Logic programming is a programming paradigm with a sound mathematical foundation. One of its characteristics is the separation of logic (the meaning of a program) and control (how it is executed); hence logic programming, and in particular its extension with constraints, is a language comparatively well suited for program analysis. In this thesis logic programming is used to analyse software developed for embedded systems. The particular embedded system is modeled as an emulator written as a constraint logic program. The emulator is specialised with respect to some object program in order to obtain a constraint logic program isomorphic to this object program. Applying abstract interpretation based analysers to the specialised emulator will provide analysis results that can directly be related back to the object program due to the isomorphism maintained between the object program and the specialised emulator. Two abstract interpretation based analysers for logic programs have been developed. The first is a convex polyhedron analyser for constraint logic programs implementing a set of widening techniques for improved precision of the analysis. The second analyser is a type analysis tool for logic programs that automatically derives a pre-interpretation from a regular type definition. Additionally, a framework for using a restricted form of logic programming, namely Datalog, to express and check program properties is described. At the end of the thesis it is shown how instrumenting the semantics of the emulator can be used to obtain, for instance, a fully automatic Worst Case Execution Time analysis by applying the convex polyhedron analyser to the instrumented and specialised emulator. The tools developed in this thesis have all been made available online for demonstration.
Published as Computer Science Research Report #117

"Static Analysis of Gated Data Dependence Graphs"(article)
Authors: Hymans, C., Upton, E.
Date: 2004
Pages: 197-211
Several authors have advocated the use of the gated data dependence graph as a compiler intermediate representation. If this representation is to gain acceptance, it is important to show that we may construct static analyses which operate directly on it. In this paper we present the first example of such an analysis, developed using the methodology of abstract interpretation. The analysis is shown to be sound with respect to a concrete semantics for the representation. Experimental results are presented which indicate that the analysis performs well in comparison to conventional techniques.

"Software Model Checking"(article)
Authors: Jhala, R., Majumdar, R.
Date: 2009
Pages: 1-54
We survey recent progress in software model checking.

"Apron: A Library of Numerical Abstract Domains for Static Analysis"(article)
Authors: Jeannet, B., Miné, A.
Date: 2009
Pages: 661-667
This article describes Apron, a freely available library dedicated to the static analysis of the numerical variables of programs by abstract interpretation. Its goal is threefold: provide analysis implementers with ready-to-use numerical abstractions under a unified API, encourage research in numerical abstract domains by providing a platform for integration and comparison, and provide teaching and demonstration tools to disseminate knowledge on abstract interpretation.

"On Control Signals for Multi-Dimensional Time"(anthos)
Authors: Kim, D., Gupta, G., Rajopadhye, S. V.
Date: 2007
Pages: 141-155
Affine control loops (ACLs) comprise an important class of compute- and data-intensive computations. The theoretical framework for the automatic parallelization of ACLs is well established. However, the hardware compilation of arbitrary ACLs is still in its infancy. An important component for an efficient hardware implementation is a control mechanism that informs each processing element (PE) which computation needs to be performed and when. We formulate this "control signal problem" in the context of compiling arbitrary ACLs parallelized with a multi-dimensional schedule into hardware. We characterize the logical time instants when PEs need a control signal indicating which particular computations need to be performed. Finally, we present an algorithm to compute the minimal set of logical time instants for these control signals.
Revised papers presented at the 19th International Workshop on Languages and Compilers for Parallel Computing (LCPC 2006), New Orleans, Louisiana, USA, November 2–4, 2006

"Automating Mimicry Attacks Using Static Binary Analysis"(article)
Authors: Kruegel, C., Kirda, E., Mutz, D., Robertson, W., Vigna, G.
Date: 2005
Pages: 161-176
Intrusion detection systems that monitor sequences of system calls have recently become more sophisticated in defining legitimate application behavior. In particular, additional information, such as the value of the program counter and the configuration of the program's call stack at each system call, has been used to achieve better characterization of program behavior. While there is common agreement that this additional information complicates the task for the attacker, it is less clear to what extent an intruder is constrained. In this paper, we present a novel technique to evade the extended detection features of state-of-the-art intrusion detection systems and reduce the task of the intruder to a traditional mimicry attack. Given a legitimate sequence of system calls, our technique allows the attacker to execute each system call in the correct execution context by obtaining and relinquishing the control of the application's execution flow through manipulation of code pointers. We have developed a static analysis tool for Intel x86 binaries that uses symbolic execution to automatically identify instructions that can be used to redirect control flow and to compute the necessary modifications to the environment of the process. We used our tool to successfully exploit three vulnerable programs and evade detection by existing state-of-the-art system call monitors. In addition, we analyzed three real-world applications to verify the general applicability of our techniques.

"Termination Analysis with Types Is More Accurate"(article)
Authors: Lagoon, V., Mesnard, F., Stuckey, P. J.
Date: 2003
Pages: 254-268
In this paper we show how we can use size and groundness analyses lifted to regular and (polymorphic) Hindley/Milner typed programs to determine more accurate termination of (type correct) programs. Type information for programs may be either inferred automatically or declared by the programmer. The analysis of the typed logic programs is able to completely reuse a framework for termination analysis of untyped logic programs by using abstract compilation of the type abstraction. We show that our typed termination analysis is uniformly more accurate than untyped termination analysis for regularly typed programs, and demonstrate how it is able to prove termination of programs which the untyped analysis cannot.

"SubPolyhedra: A (More) Scalable Approach to Infer Linear Inequalities"(article)
Authors: Laviron, V., Logozzo, F.
Date: 2009
Pages: 229-244
We introduce Subpolyhedra (SubPoly), a new numerical abstract domain to infer and propagate linear inequalities. SubPoly is as expressive as Polyhedra, but it drops some of the deductive power to achieve scalability. SubPoly is based on the insight that the reduced product of linear equalities and intervals produces powerful yet scalable analyses. Precision can be recovered using hints. Hints can be automatically generated or provided by the user in the form of annotations. We implemented SubPoly on top of Clousot, a generic abstract interpreter for .Net. Clousot with SubPoly analyzes very large and complex code bases in a few minutes. SubPoly can efficiently capture linear inequalities among hundreds of variables, a result well beyond state-of-the-art implementations of Polyhedra.

"Extending a CP Solver with Congruences as Domains for Program Verification"(article)
Authors: Leconte, M., Berstel, B.
Date: 2006
Pages: 22-33
Constraints generated for program verification tasks very often involve integer variables ranging over all machine-representable integer values. Thus, if propagation takes time linear in the size of the domains, it will not reach a fixpoint in practical time. Indeed, the propagation time needed to reduce the interval domains for equations as simple as $x = 2y + 1$ and $x = 2z$ is proportional to the size of the initial domains of the variables. To avoid this "slow convergence" phenomenon, we propose to enrich a Constraint Programming Solver (CP Solver) with congruence domains. This idea was introduced by [Granger, P.: Static analysis of arithmetic congruences. International Journal of Computer Math (1989) 165–199] in the abstract interpretation community, and we show how a CP Solver can benefit from it, for example by discovering immediately that $12x + |y| = 3$ and $4z + 7y = 0$ have no integer solution.
Available at http://www.irisa.fr/manifestations/2006/CSTVA06/.
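The slow-convergence example in this abstract can be made concrete with a minimal sketch of a congruence domain in the style of Granger (1989), cited above; the representation and names are illustrative assumptions, not the authors' solver. An element (m, r) denotes the integers congruent to r modulo m, so x = 2y + 1 abstracts to (2, 1), x = 2z abstracts to (2, 0), and their intersection is found empty in a single step, with no interval narrowing at all.

```python
# Sketch of a congruence domain: (m, r) denotes { r + m*k : k in Z }.
# Names and representation are illustrative assumptions.
from math import gcd

def meet(c1, c2):
    """Intersection of two congruences; None denotes the empty set."""
    (m1, r1), (m2, r2) = c1, c2
    g = gcd(m1, m2)
    if (r1 - r2) % g != 0:
        return None                      # incompatible residues: no solution
    lcm = m1 // g * m2
    # Step through one class until a common member is found (CRT-style);
    # m2 // g steps always suffice, since the result is unique mod lcm.
    for k in range(m2 // g):
        x = r1 + k * m1
        if (x - r2) % m2 == 0:
            return (lcm, x % lcm)

print(meet((2, 1), (2, 0)))   # None: x = 2y + 1 and x = 2z conflict at once
print(meet((4, 3), (6, 1)))   # (12, 7): x = 3 mod 4 and x = 1 mod 6
```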

"Romeo: A Parametric Model-Checker for Petri Nets with Stopwatches"(article)
Authors: Lime, D., Roux, O. H., Seidner, C., Traonouez, L.-M.
Date: 2009
Pages: 54-57
Last time we reported on Romeo, analyses with this tool were mostly based on translations to other tools. This new version provides an integrated TCTL model-checker and has gained in expressivity with the addition of parameters. Although other tools exist to compute the state space of stopwatch models, Romeo is the first that performs TCTL model-checking on stopwatch models. Moreover, it is the first tool that performs TCTL model-checking on timed parametric models. Indeed, Romeo now features efficient model-checking of time Petri nets using the Uppaal DBM Library, the model-checking of stopwatch Petri nets and parametric stopwatch Petri nets using the Parma Polyhedra Library, and a graphical editor and simulator of these models. Furthermore, its audience has increased, leading to several industrial contracts. This paper reports on these recent developments of Romeo.

"Pentagons: A Weakly Relational Abstract Domain for the Efficient Validation of Array Accesses"(article)
Authors: Logozzo, F., Fähndrich, M.
Date: 2008
Pages: 184-188
We introduce Pentagons (Pntg), a weakly relational numerical abstract domain useful for the validation of array accesses in byte-code and intermediate languages (IL). This abstract domain captures properties of the form $x \in [a, b] \wedge x < y$. It is more precise than the well-known Interval domain, but it is less precise than the Octagon domain. The goal of Pntg is to be a lightweight numerical domain useful for adaptive static analysis, where Pntg is used to quickly prove the safety of most array accesses, restricting the use of more precise (but also more expensive) domains to only a small fraction of the code. We implemented the Pntg abstract domain in Clousot, a generic abstract interpreter for .NET assemblies. Using it, we were able to validate 83% of array accesses in the core runtime library mscorlib.dll in less than 8 minutes.
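A minimal sketch of what a Pentagons element records, with assumed names and representation rather than Clousot's internals: a per-variable interval plus a set of variables known to be strictly larger, which together discharge the typical array-bounds check.

```python
# Sketch of a Pentagons-style element: intervals plus strict upper bounds.
# Names and representation are illustrative, not Clousot's data structures.

intervals = {"i": (0, 100), "len": (1, 100)}   # i in [0,100], len in [1,100]
strict_ub = {"i": {"len"}}                     # relational part: i < len

def access_safe(index, length):
    """a[index] is safe when 0 <= index and index < length are both known."""
    lo, _ = intervals[index]
    return lo >= 0 and length in strict_ub.get(index, set())

print(access_safe("i", "len"))   # True: lower bound and i < len suffice
```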

"An Evaluation of Two Recent Reachability Analysis Tools for Hybrid Systems"(article)
Authors: Makhlouf, I. B., Kowalewski, S.
Date: 2006
The hybrid systems community is still struggling to provide practically applicable verification tools. Recently, two new tools, PHAVer and Hsolver, were introduced which promise to be a further step in this direction. We evaluate and compare both tools with the help of several benchmark examples. The results show that both have their strengths and weaknesses, and that there still is no all-purpose reachability analysis tool for hybrid systems.

"Partially Disjunctive Heap Abstraction"(article)
Authors: Manevich, R., Sagiv, M., Ramalingam, G., Field, J.
Date: 2004
Pages: 265-279
One of the continuing challenges in abstract interpretation is the creation of abstractions that yield analyses that are both tractable and precise enough to prove interesting properties about real-world programs. One source of difficulty is the need to handle programs with different behaviors along different execution paths. Disjunctive (powerset) abstractions capture such distinctions in a natural way. However, in general, powerset abstractions increase the space and time by an exponential factor. Thus, powerset abstractions are generally perceived as being very costly. In this paper we partially address this challenge by presenting and empirically evaluating a new heap abstraction. The new heap abstraction works by merging shape descriptors according to a partial isomorphism similarity criterion, resulting in a partially disjunctive abstraction. We implemented this abstraction in TVLA, a generic system for implementing program analyses. We conducted an empirical evaluation of the new abstraction and compared it with the powerset heap abstraction. The experiments show that analyses based on the partially disjunctive abstraction are as precise as the ones based on the fully disjunctive abstraction. In terms of performance, analyses based on the partially disjunctive abstraction are often superior to analyses based on the fully disjunctive heap abstraction. The empirical results show considerable speedups, up to 2 orders of magnitude, enabling previously non-terminating analyses, such as verification of the Deutsch-Schorr-Waite marking algorithm, to terminate with no negative effect on the overall precision. Indeed, experience indicates that the partially disjunctive shape abstraction improves performance across all TVLA analyses uniformly, and in many cases is essential for making precise shape analysis feasible.

"cTI: A Constraint-Based Termination Inference Tool for ISO-Prolog"(article)
Authors: Mesnard, F., Bagnara, R.
Date: 2005
Pages: 243-257
We present cTI, the first system for universal left-termination inference of logic programs. Termination inference generalizes termination analysis and checking. Traditionally, a termination analyzer tries to prove that a given class of queries terminates. This class must be provided to the system, for instance by means of user annotations. Moreover, the analysis must be redone every time the class of queries of interest is updated. Termination inference, in contrast, requires neither user annotations nor recomputation. In this approach, terminating classes for all predicates are inferred at once. We describe the architecture of cTI and report an extensive experimental evaluation of the system covering many classical examples from the logic programming termination literature and several Prolog programs of respectable size and complexity.

"Exploring Multiple Execution Paths for Malware Analysis"(article)
Authors: Moser, A., Krügel, C., Kirda, E.
Date: 2007
Pages: 231-245
Malicious code (or malware) is defined as software that fulfills the deliberately harmful intent of an attacker. Malware analysis is the process of determining the behavior and purpose of a given malware sample (such as a virus, worm, or Trojan horse). This process is a necessary step to be able to develop effective detection techniques and removal tools. Currently, malware analysis is mostly a manual process that is tedious and time-intensive. To mitigate this problem, a number of analysis tools have been proposed that automatically extract the behavior of an unknown program by executing it in a restricted environment and recording the operating system calls that are invoked. The problem of dynamic analysis tools is that only a single program execution is observed. Unfortunately, however, it is possible that certain malicious actions are only triggered under specific circumstances (e.g., on a particular day, when a certain file is present, or when a certain command is received). In this paper, we propose a system that allows us to explore multiple execution paths and identify malicious actions that are executed only when certain conditions are met. This enables us to automatically extract a more complete view of the program under analysis and identify under which circumstances suspicious actions are carried out. Our experimental results demonstrate that many malware samples show different behavior depending on input read from the environment. Thus, by exploring multiple execution paths, we can obtain a more complete picture of their actions.

" Magic-Sets Transformation for the Analysis of Java Bytecode"(article)
Authors: Payet, E., Spoto, F.
Date: 2007
Pages: 452-467
Denotational static analysis of Java bytecode has a nice and clean compositional definition and an efficient implementation with binary decision diagrams. But it models only the functional, i.e., input/output, behaviour of a program $P$, which is not enough if one needs $P$'s internal behaviours, i.e., those from the input to some internal program points. We overcome this limitation with a technique used up to now for logic programs only. It adds new magic blocks of code to $P$, whose functional behaviours are the internal behaviours of $P$. We prove this transformation correct with an operational semantics. We define an equivalent denotational semantics, whose denotations for the magic blocks are hence the internal behaviours of $P$. We implement our transformation and instantiate it with abstract domains modelling sharing of two variables and non-cyclicity of variables. We get a static analyser for full Java bytecode that is faster and scales better than another operational pair-sharing analyser and a constraint-based pointer analyser.

"Cardinality Abstraction for Declarative Networking Applications"(article)
Authors: Navarro Pérez, J. A., Rybalchenko, A., Singh, A.
Date: 2009
Pages: 584-598
Declarative Networking is a recent, viable approach to make distributed programming easier, which is becoming increasingly popular in the systems and networking community. It offers the programmer a declarative, rule-based language, called P2, for writing distributed applications in an abstract, yet expressive way. This approach, however, imposes new challenges on analysis and verification methods when they are applied to P2 programs. Reasoning about P2 computations is beyond the scope of existing tools since it requires handling of program states defined in terms of collections of relations, which store the application data, together with multisets of tuples, which represent communication events in-flight. In this paper, we propose a cardinality abstraction technique that can be used to analyze and verify P2 programs. It keeps track of the size of relations (together with projections thereof) and multisets defining P2 states, and provides an appropriate treatment of declarative operations, e.g., indexing, unification, variable binding, and negation. Our cardinality abstraction-based verifier successfully proves critical safety properties of a P2 implementation of the Byzantine fault tolerance protocol Zyzzyva, which is a representative and complex declarative networking application.

"GRAPHITE: Polyhedral Analyses and Optimizations for GCC"(report)
Authors: Pop, S., Silber, G.-A., Cohen, A., Bastoul, C., Girbal, S., Vasilache, N.
Date: 2006
We present a plan to add loop nest optimizations in GCC based on polyhedral representations of loop nests. We advocate a static analysis approach based on a hierarchy of interchangeable abstractions with solvers that range from exact solvers such as OMEGA, to faster but less precise solvers based on coarser abstractions. The intermediate representation GRAPHITE (GIMPLE Represented as Polyhedra with Interchangeable Envelopes), built on GIMPLE and the natural loops, hosts the high-level loop transformations. We base this presentation on the WRaP-IT project developed in the Alchemy group at INRIA Futurs and Paris-Sud University, on the PIPS compiler developed at École des mines de Paris, and on joint work with several members of the static analysis and polyhedral compilation community in France. The main goal of this project is to bring more high-level loop optimizations to GCC: loop fusion, tiling, strip mining, etc. Thanks to the WRaP-IT experience, we know that polyhedral analyses and transformations are affordable in a production compiler. A second goal of this project is to experiment with compile-time reduction versus attainable precision when replacing operations on polyhedra with faster operations on more abstract domains. However, the use of too coarse a representation might also result in an over-approximated solution that cannot be used in subsequent computations. There exists a trade-off between the speed of the computation and the attainable precision that has not yet been analyzed for real-world programs.
Contribution to the GNU Compilers Collection Developers Summit 2006 (GCC Summit 06), Ottawa, Canada, June 28–30, 2006

"Inferring Disjunctive Postconditions"(anthos)
Authors: Popeea, C., Chin, W.-N.
Date: 2008
Pages: 331-345
Polyhedral analysis [9] is an abstract interpretation used for automatic discovery of invariant linear inequalities among numerical variables of a program. Convexity of this abstract domain allows efficient analysis but also loses precision via the convex-hull and widening operators. To selectively recover the loss of precision, sets of polyhedra (disjunctive elements) may be used to capture more precise invariants. However, a balance must be struck between precision and cost. We introduce the notion of affinity to characterize how closely related a pair of polyhedra is. Finding related elements in the polyhedron (base) domain allows the formulation of precise hull and widening operators lifted to the disjunctive (powerset extension of the) polyhedron domain. We have implemented a modular static analyzer based on the disjunctive polyhedral analysis where the relational domain and the proposed operators can progressively enhance precision at a reasonable cost.
Revised selected papers presented at the 11th Asian Computing Science Conference, Tokyo, Japan, December 6–8, 2006

"A General Computational Method for Robustness Analysis with Applications to Synthetic Gene Networks"(article)
Authors: Rizk, A., Batt, G., Fages, F., Soliman, S.
Date: 2009
Pages: i169-i178
Motivation: Robustness is the capacity of a system to maintain a function in the face of perturbations. It is essential for the correct functioning of natural and engineered biological systems. Robustness is generally defined in an ad hoc, problem-dependent manner, thus hampering the fruitful development of a theory of biological robustness, recently advocated by Kitano. Results: In this article, we propose a general definition of robustness that applies to any biological function expressible in temporal logic LTL (linear temporal logic), and to broad model classes and perturbation types. Moreover, we propose a computational approach and an implementation in BIOCHAM 2.8 for the automated estimation of the robustness of a given behavior with respect to a given set of perturbations. The applicability and biological relevance of our approach is demonstrated by testing and improving the robustness of the timed behavior of a synthetic transcriptional cascade that could be used as a biological timer for synthetic biology applications. Availability: Version 2.8 of BIOCHAM and the transcriptional cascade model are available at http://contraintes.inria.fr/BIOCHAM/
Paper accepted for presentation at the 2009 ISMB/ECCB Conference, Stockholm, Sweden, June 27–July 2, 2009. Available at http://bioinformatics.oxfordjournals.org/cgi/content/abstract/25/12/i169

"Program Analysis Using Symbolic Ranges"(article)
Authors: Sankaranarayanan, S., Ivancic, F., Gupta, A.
Date: 2007
Pages: 366-383
Interval analysis seeks static lower and upper bounds on the values of program variables. These bounds are useful, especially for inferring invariants to prove buffer overflow checks. In practice, however, intervals by themselves are often inadequate as invariants due to the lack of relational information among program variables. In this paper, we present a technique for deriving symbolic bounds on variable values. We study a restricted class of polyhedra whose constraints are stratified with respect to some variable ordering provided by the user, or chosen heuristically. We define a notion of normalization for such constraints and demonstrate polynomial time domain operations on the resulting domain of symbolic range constraints. The abstract domain is intended to complement widely used domains such as intervals and octagons for use in buffer overflow analysis. Finally, we study the impact of our analysis on commercial software using an overflow analyzer for the C language.
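A small sketch of the stratified structure described above, under an assumed variable ordering; the representation (a bound is a constant plus coefficients over earlier variables) is an illustrative assumption, not the paper's implementation. Because every bound only mentions variables earlier in the order, substitution always terminates.

```python
# Sketch: symbolic upper bounds stratified by the order [n, i, j].
# A bound is (constant, {earlier_var: coeff}); coefficients are assumed
# nonnegative here so that substituting upper bounds stays sound.

order = ["n", "i", "j"]
upper = {
    "n": (1000, {}),          # n <= 1000
    "i": (-1, {"n": 1}),      # i <= n - 1   (symbolic, relational)
    "j": (0, {"i": 1}),       # j <= i       (symbolic, relational)
}

def concretize(var):
    """Evaluate a symbolic bound to a number by walking down the order."""
    const, terms = upper[var]
    return const + sum(c * concretize(v) for v, c in terms.items())

for v in order:
    print(v, "<=", concretize(v))   # n <= 1000, i <= 999, j <= 999
```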

"Constraint-Based Linear-Relations Analysis"(article)
Authors: Sankaranarayanan, S., Sipma, H. B., Manna, Z.
Date: 2004
Pages: 53-68
Linear-relations analysis of transition systems discovers linear invariant relationships among the variables of the system. These relationships help establish important safety and liveness properties. Efficient techniques for the analysis of systems using polyhedra have been explored, leading to the development of successful tools like HyTech. However, existing techniques rely on the use of approximations such as widening and extrapolation in order to ensure termination. In an earlier paper, we demonstrated the use of Farkas' Lemma to provide a translation from the linear-relations analysis problem into a system of constraints on the unknown coefficients of a candidate invariant. However, since the constraints in question are non-linear, a naive application of the method does not scale. In this paper, we show that with some efficient simplifications and approximations to the quantifier elimination, not only does the method scale to higher dimensions, but it also enjoys performance advantages for some larger examples.

"Scalable Analysis of Linear Systems using Mathematical Programming"(article)
Authors: Sankaranarayanan, S., Sipma, H. B., Manna, Z.
Date: 2005
Pages: 25-41
We present a method for generating linear invariants for large systems. The method performs forward propagation in an abstract domain consisting of arbitrary polyhedra of a predefined fixed shape. The basic operations on the domain, like abstraction, intersection, join and inclusion tests, are all posed as linear optimization queries, which can be solved efficiently by existing LP solvers. The number and dimensionality of the LP queries are polynomial in the program dimensionality, size and the number of target invariants. The method generalizes similar analyses in the interval, octagon, and octahedra domains, without resorting to polyhedral manipulations. We demonstrate the performance of our method on some benchmark programs.
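The reduction of domain operations to linear optimization queries described in this abstract can be sketched with an off-the-shelf LP solver; the template, helper names, and use of scipy are illustrative assumptions, not the authors' implementation. Abstraction computes one LP per template row, and join is just the componentwise weakest bound.

```python
# Sketch: abstract {x : A x <= b} onto a fixed template, one LP per row.
# Template, names, and scipy usage are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

def abstract(A, b, template):
    """Tightest bound max t.x s.t. A x <= b per template row t (None if unbounded)."""
    bounds = []
    for t in template:
        # linprog minimizes, so maximize t.x by minimizing -t.x;
        # variables are free, not the solver's default nonnegative ones.
        res = linprog(-np.asarray(t, dtype=float), A_ub=A, b_ub=b,
                      bounds=[(None, None)] * len(t), method="highs")
        bounds.append(-res.fun if res.status == 0 else None)
    return bounds

def join(b1, b2):
    """Join of two template elements: componentwise weakest (largest) bound."""
    return [None if x is None or y is None else max(x, y)
            for x, y in zip(b1, b2)]

# States satisfying x + y <= 4, x - y <= 0, -x <= 0, abstracted onto
# an octagon-like template bounding x, y and x + y.
A = [[1, 1], [1, -1], [-1, 0]]
b = [4, 0, 0]
template = [[1, 0], [0, 1], [1, 1]]
print(abstract(A, b, template))   # [2.0, 4.0, 4.0]
```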

"Efficient Strongly Relational Polyhedral Analysis"(article)
Authors: Sankaranarayanan, S., Colón, M., Sipma, H. B., Manna, Z.
Date: 2006
Pages: 111-125
Polyhedral analysis infers invariant linear equalities and inequalities of imperative programs. However, the exponential complexity of polyhedral operations such as image computation and convex hull limits the applicability of polyhedral analysis. Weakly relational domains such as intervals and octagons address the scalability issue by considering polyhedra whose constraints are drawn from a restricted, user-specified class. On the other hand, these domains rely solely on candidate expressions provided by the user. Therefore, they often fail to produce strong invariants. We propose a polynomial-time approach to strongly relational analysis. We provide efficient implementations of the join and postcondition operations, achieving a trade-off between performance and accuracy. We have implemented a strongly relational polyhedral analyzer for a subset of the C language. Initial experimental results on benchmark examples are encouraging.

"Fixed Point Iteration for Computing the Time Elapse Operator"(article)
Authors: Sankaranarayanan, S., Sipma, H. B., Manna, Z.
Date: 2006
Pages: 537-551
We investigate techniques for automatically generating symbolic approximations to the time solution of a system of differential equations. This is an important primitive operation for the safety analysis of continuous and hybrid systems. In this paper we design a "time elapse" operator that computes a symbolic over-approximation of time solutions to a continuous system starting from a given initial region. Our approach is iterative over the cone of functions (drawn from a suitable universe) that are non-negative over the initial region. At each stage, we iteratively remove functions from the cone whose Lie derivatives do not lie inside the current iterate. If the iteration converges, the set of states defined by the final iterate is shown to contain all the time successors of the initial region. The convergence of the iteration can be forced using abstract interpretation operations such as widening and narrowing. We instantiate our technique to linear hybrid systems with piecewise-affine dynamics to compute polyhedral approximations to the time successors. Using our prototype implementation TimePass, we demonstrate the performance of our technique on benchmark examples.

"Static Analysis in Disjunctive Numerical Domains"(article)
Authors: Sankaranarayanan, S., Ivančić, F., Shlyakhter, I., Gupta, A.
Date: 2006
Pages: 3-17
The convexity of numerical domains such as polyhedra, octagons, intervals and linear equalities enables tractable analysis of software for buffer overflows, null pointer dereferences and floating point errors. However, convexity also causes the analysis to fail in many common cases. Powerset extensions can remedy this shortcoming by considering disjunctions of predicates. Unfortunately, analysis using powerset domains can be exponentially more expensive than analysis on the base domain. In this paper, we prove structural properties of fixed points computed in commonly used powerset extensions. We show that a fixed point computed on a powerset extension is also a fixed point in the base domain computed on an "elaboration" of the program's CFG structure. Using this insight, we build analysis algorithms that approach path-sensitive static analysis algorithms by performing the fixed point computation on the base domain while discovering an "elaboration" on the fly. Using restrictions on the nature of the elaborations, we design algorithms that scale polynomially in terms of the number of disjuncts. We have implemented a lightweight static analyzer for C programs with encouraging initial results.

"Controller Synthesis of Discrete Linear Plants Using Polyhedra"(report)
Authors: Slanina, M., Sankaranarayanan, S., Sipma, H. B., Manna, Z.
Date: 2007
We study techniques for synthesizing synchronous controllers for affine plants with disturbances, based on safety specifications. Our plants are modeled in terms of discrete linear systems whose variables are partitioned into system, control, and disturbance variables. We synthesize non-blocking controllers that satisfy a user-provided safety specification by means of a fixed point iteration over the control precondition state transformer. Using convex polyhedra to represent sets of states, we present both precise and approximate algorithms for computing control preconditions and discuss strategies for forcing convergence of the iteration. We present a technique for automatically deriving controllers from the result of the analysis, and demonstrate our approach on examples.

"Definition and Implementation of a Points-To Analysis for C-like Languages"(report)
Authors: Soffia, S.
Date: 2008
The points-to problem is the problem of determining the possible run-time targets of pointer variables and is usually considered part of the more general aliasing problem, which consists in establishing whether and when different expressions can refer to the same memory address. Aliasing information is essential to every tool that needs to reason about the semantics of programs. However, due to well-known undecidability results, for all interesting languages that admit aliasing, the exact solution of nontrivial aliasing problems is not generally computable. This work focuses on approximated solutions to this problem by presenting a store-based, flow-sensitive points-to analysis for applications in the field of automated software verification. In contrast to software testing procedures, which heuristically check the program against a finite set of executions, the methods considered in this work are static analyses, where the computed results are valid for all the possible executions of the analyzed program. We present a simplified programming language and its execution model; then an approximated execution model is developed using the ideas of abstract interpretation theory. Finally, the soundness of the approximation is formally proved. The aim of developing a realistic points-to analysis is pursued by presenting some extensions to the initial simplified model and discussing the correctness of their formulation. This work contains original contributions to the issue of points-to analysis, as it provides a formulation of a filter operation on the points-to abstract domain and a formal proof of the soundness of the defined abstract operations: these, as far as we know, are lacking from the previous literature.
Available from http://arxiv.org/

"Multi-Stage Construction of a Global Static Analyzer"(article)
Authors: Starynkevitch, B.
Date: 2007
Pages: 143-151
We describe ongoing work on global static analysis for GCC4 within the GlobalGCC European project, funded through the ITEA Programme. The aim of this work is to provide global (whole-program) static analysis, notably based upon abstract interpretation and knowledge-based techniques, within the GCC compiler, targeted at the analysis of medium-sized C, Fortran or C++ programs. This will facilitate the adoption of GCC in the area of safety-critical software development, by providing features found in a few expensive commercial tools (PolySpace, AbsInt) or research prototypes (Astree). In this perspective, the emphasis is on the quality of analysis, at the expense of much bigger compilation times, without sacrificing scalability. Such analysis can be used for several purposes: statically compute some interesting properties of the program at most control points (possibly reporting them to the user); provide clever, contextual warnings about possible hazards in the user program (null pointer dereferences, division by zero, conversion loss, out-of-bound array accesses, …) while avoiding too many false alarms; enable additional optimisations, like conditional contextual constant folding, C++ method call devirtualization, and other contextual optimizations. The compiler's rich program manipulation infrastructure facilitates the development of these advanced analysis capabilities. To facilitate the development of high-level semantic analyses, a domain-specific language has been developed, which is translated (through C) into dynamically loaded code. It uses the Parma Polyhedra Library (also used in the GRAPHITE project) for relational analysis on scalars and gives more expressivity for developing analysis algorithms. It permits multi-staged generation of the specific analysis tailored to the analyzed source code. Presenting this work at the 2007 GCC Summit will allow us to stress the importance of all outputs of the compiler, not only object code, and to expose the complementary contribution of static analyses and dynamic/instrumentation approaches like mudflap.

"SPHIN: A Model Checker for Reconfigurable Hybrid Systems Based on SPIN"(article)
Authors: Song, H., Compton, K. J., Rounds, W. C.
Date: 2006
Pages: 167-183
We present SPHIN, a model checker for reconfigurable hybrid systems based on the model checker SPIN. We observe that physical (analog) mobility can be modeled in the same way as logical (discrete) mobility is modeled in the $\pi$-calculus by means of channel name passing. We chose SPIN because it supports channel name passing and can model reconfigurations. We extend the syntax of PROMELA and the verification algorithms based on the expected semantics. We demonstrate the tool's capabilities by modeling and verifying a reconfigurable hybrid system.

"Parametric Model-Checking of Time Petri Nets with Stopwatches Using the State-Class Graph"(article)
Authors: Traonouez, L.-M., Lime, D., Roux, O. H.
Date: 2008
Pages: 280-294
In this paper, we propose a new framework for the parametric verification of time Petri nets with stopwatches controlled by inhibitor arcs. We first introduce an extension of time Petri nets with inhibitor arcs (ITPNs) with temporal parameters. Then, we define a symbolic representation of the parametric state space based on the classical state-class graph method. The parameters of the model are embedded into the firing domains of the classes, which are represented by convex polyhedra. Finally, we propose semi-algorithms for the parametric model-checking of a subset of parametric TCTL formulae on ITPNs. We can thus generate the set of parameter valuations that satisfy the formulae.

"GRAPHITE Two Years After: First Lessons Learned From Real-World Polyhedral Compilation"(article)
Authors: Trifunovic, K., Cohen, A., Edelsohn, D., Feng, L., Grosser, T., Jagasia, H., Ladelsky, R., Pop, S., Sjödin, J., Upadrasta, R.
Date: 2010
Pages: 4-19
Modern compilers are responsible for adapting the semantics of source programs into a form that makes efficient use of a highly complex, heterogeneous machine. This adaptation amounts to solving an optimization problem in a huge and unstructured search space, while predicting the performance outcome of complex sequences of program transformations. The polyhedral model of compilation is aimed at these challenges. Its geometrical, non-inductive semantics enables the construction of better-structured optimization problems and precise analytical models. Recent work demonstrated the scalability of the main polyhedral algorithms to real-world programs. Its integration into production compilers is under way, pioneered by the graphite branch of the GNU Compiler Collection (GCC). Two years after the effective beginning of the project, this paper reports on original questions and innovative solutions that arose during the design and implementation of GRAPHITE.

"Robust Branch-Cut-and-Price for the Capacitated Minimum Spanning Tree Problem over a Large Extended Formulation"(report)
Authors: Uchoa, E., Fukasawa, R., Lysgaard, J., Pessoa, A., Poggi de Aragão, M., Andrade, D.
Date: 2006
This paper presents a robust branch-cut-and-price algorithm for the Capacitated Minimum Spanning Tree Problem (CMST). The variables are associated to $q$-arbs, a structure that arises from a relaxation of the capacitated prize-collecting arborescence problem in order to make it solvable in pseudo-polynomial time. Traditional inequalities over the arc formulation, like Capacity Cuts, are also used. Moreover, a novel feature is introduced into this kind of algorithm: powerful new cuts expressed over a very large set of variables can be added, without increasing the complexity of the pricing subproblem or the size of the LPs that are actually solved. Computational results on benchmark instances from the OR-Library show very significant improvements over previous algorithms. Several open instances could be solved to optimality.

"Verifying Generalized Soundness for Workflow Nets"(article)
Authors: van Hee, K., Oanea, O., Sidorova, N., Voorhoeve, M.
Date: 2006
Pages: 231-244
We improve the decision procedure from [K. van Hee, N. Sidorova, and M. Voorhoeve. Generalized soundness of workflow nets is decidable. In Proc. of ICATPN'2004, volume 3099 of LNCS, pages 197–216, 2004] for the problem of generalized soundness for workflow nets: "Every marking reachable from an initial marking with $k$ tokens on the initial place terminates properly, i.e., it can reach a marking with $k$ tokens on the final place, for an arbitrary natural number $k$". Moreover, our new decision procedure returns a counterexample in case the workflow net is not generalized sound. We also report on experimental results obtained with the prototype we made and explain how the procedure can be used for the compositional verification of large workflows.

"Space Cost Analysis Using Sized Types"(thesis)
Authors: Vasconcelos, P. B.
Date: 2008-08
Programming resource-sensitive systems, such as real-time embedded systems, requires guaranteeing both the functional correctness of computations and also that time and space usage fit within constraints imposed by hardware limits or the environment. Functional programming languages have proved very good at meeting the former logical kind of guarantees but not the latter resource guarantees. This thesis contributes to demonstrating the applicability of functional programming in resource-sensitive systems with an automatic program analysis for obtaining guaranteed upper bounds on the dynamic space usage of functional programs. Our analysis is developed for a core subset of Hume, a domain-specific functional language targeting resource-sensitive systems (Hammond et al. 2007), and presented as a type and effect system that builds on previous sized type systems (Hughes et al. 1996, Chin and Khoo 2001) and effect systems for costs (Dornic et al. 1992, Reistad and Gifford 1994, Hughes and Pareto 1999). It extends previous approaches by using abstract interpretation techniques to automatically infer linear approximations of the sizes of recursive data types and the stack and heap costs of recursive functions. The correctness of the analysis is formally proved with respect to an operational semantics for the language, and an inference algorithm that automatically reconstructs size and cost bounds is presented. A prototype implementation of the analysis and operational semantics has been constructed and used to experimentally assess the quality of the cost bounds with some examples, including implementations of textbook functional programming algorithms and simplified embedded systems.

"Static Disassembly and Code Analysis"(anthos)
Authors: Vigna, G.
Date: 2007
The classification of an unknown binary program as malicious or benign requires two steps. In the first step, the stream of bytes that constitutes the program has to be transformed (or disassembled) into the corresponding sequence of machine instructions. In the second step, based on this machine code representation, static or dynamic code analysis techniques can be applied to determine the properties and function of the program. Both the disassembly and code analysis steps can be foiled by techniques that obfuscate the binary representation of a program. Thus, robust techniques are required that deliver reliable results under such adverse circumstances. In this chapter, we introduce a disassembly technique that can deal with obfuscated binaries. Also, we introduce a static code analysis approach that can identify high-level semantic properties of code that are difficult to conceal.

"Mixed Symbolic Representations for Model Checking Software Programs"(article)
Authors: Yang, Z., Wang, C., Gupta, A., Ivančić, F.
Date: 2006
Pages: 17-26
We present an efficient symbolic search algorithm for software model checking. The algorithm combines multiple symbolic representations to efficiently represent the transition relation and reachable states and uses a combination of decision procedures for Boolean and integer representations. Our main contributions include: (1) mixed symbolic representations to model C programs with rich data types and complex expressions; and (2) new symbolic search strategies and optimization techniques specific to sequential programs that can significantly improve the scalability of model checking algorithms. Our controlled experiments on real-world software programs show that the new symbolic search algorithm can achieve several orders-of-magnitude improvements over existing methods. The proposed techniques are extremely competitive in handling sequential models of non-trivial sizes, and also compare favorably to popular Boolean-level model checking algorithms based on BDDs and SAT.

"Model Checking Sequential Software Programs Via Mixed Symbolic Analysis"(article)
Authors: Yang, Z., Wang, C., Gupta, A., Ivančić, F.
Date: 2009
Pages: 1-26
We present an efficient symbolic search algorithm for software model checking. Our algorithms perform word-level reasoning by using a combination of decision procedures in Boolean and integer and real domains, and use novel symbolic search strategies optimized specifically for sequential programs to improve scalability. Experiments on real-world C programs show that the new symbolic search algorithms can achieve several orders-of-magnitude improvements over existing methods based on bit-level (Boolean) reasoning.
