
PPL Citations

Here are some papers that you may want to read for a full understanding of the Parma Polyhedra Library, the theoretical principles on which it is based, the algorithms and data structures it employs, and its history.

The corresponding BibTeX source is also available.

"Génération automatique de codes de transfert pour multiprocesseurs à mémoires locales"(thesis)
Authors: Ancourt, C.
Date: 1991-03
Parallel tasks generated by automatic parallelizers do not take advantage of supercomputer memory hierarchies. This thesis presents algorithms to transform a parallel task into an equivalent one that uses data in fast access memory. Algorithms to automatically generate code to move data between two different memory levels of a (super)computer are presented. These copy codes should move back and forth array elements that are accessed when an elementary processor executes an array reference located in a set of loops. This set of array elements is characterized by a set of integer points in $\mathcal{Z}^p$ that is not necessarily a convex polyhedron. In the case of data transfers from global memory to local memory, it is possible to copy a superset of accessed elements, for instance its convex hull. A trade-off has to be made between local memory space, transfer volume and loop bound complexity. To copy data back from local memory to global memory is more difficult because global memory consistency must be preserved. Each processor (or process) should only copy its own results to avoid errors and, secondarily, to decrease global memory traffic. The input of our main algorithm is an integer convex polyhedron defining the computation space and an affine function representing the index expressions. Its output is a set of nested loops containing a new array reference whose execution copies exactly the accessed elements. Each element is copied only once. Loop bound expressions use integer divisions to generate non-convex sets. For most practical programs this algorithm provides optimal code in number of data movements and control overhead. Associated with a dependence analysis phase, it can be used to generate data movements in distributed memory multiprocessors. When data partitioning in local memories is specified, it eliminates most of the execution guards used to compute only local values on each processor.

"Data-Flow Analysis for Constraint Logic-Based Languages"(thesis)
Authors: Bagnara, R.
Date: 1997-03
We aim at the development of precise, practical, and theoretically well-founded data-flow analyzers for constraint logic-based languages. The design and development of such an analyzer fostered a number of research problems that we had to address. A hierarchy of constraint systems is introduced that is suitable for designing and combining abstract domains. The rational construction of a generic domain for the structural analysis of CLP programs is presented. We also address the problem of the ``missing \emph{occur-check}'' in many implemented languages. We introduce a new family of domains, based on constraint propagation techniques, for the abstraction of the numerical leaves that occur in the terms of CLP languages. Despite the fact that groundness analysis for logic-based languages is a widely studied subject, a novel domain for groundness analysis is presented that outperforms the existing domains from several points of view. Finally, we present a bottom-up analysis technique for CLP languages that allows for the precise derivation of both call- and success-patterns preserving the connection between them.
Printed as Report TD-1/97

"A Hierarchy of Constraint Systems for Data-Flow Analysis of Constraint Logic-Based Languages"(article)
Authors: Bagnara, R.
Date: 1998
Pages: 119-155
Many interesting analyses for constraint logic-based languages are aimed at the detection of \emph{monotonic} properties, that is to say, properties that are preserved as the computation progresses. Our basic claim is that most, if not all, of these analyses can be described within a unified notion of constraint domains. We present a class of constraint systems that allows for a smooth integration within an appropriate framework for the definition of non-standard semantics of constraint logic-based languages. Such a framework is also presented and motivated. We then show how such domains can be built, as well as construction techniques that induce a hierarchy of domains with interesting properties. In particular, we propose a general methodology for domain combination with asynchronous interaction (i.e., the interaction is not necessarily synchronized with the domains' operations). By following this methodology, interesting combinations of domains can be expressed with all the semantic elegance of concurrent constraint programming languages.

"A Linear Domain for Analyzing the Distribution of Numerical Values"(report)
Authors: Bagnara, R., Dobson, K., Hill, P. M., Mundell, M., Zaffanella, E.
Date: 2005
This paper explores the abstract domain of \emph{grids}, a domain that is able to represent sets of equally spaced points and hyperplanes over an $n$-dimensional vector space. Such a domain is useful for the static analysis of the patterns of distribution of the values program variables can take. Besides the bare abstract domain, we present a complete set of operations on grids that includes all that is necessary to define the abstract semantics and the widening operators required to compute it in a finite number of steps. The definition of the domain and its operations exploit well-known techniques from linear algebra as well as a dual representation that allows, among other things, for a concise and efficient implementation.
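The ``sets of equally spaced points'' mentioned in the abstract can be pictured as conjunctions of congruence constraints. The following is a minimal illustrative sketch (not the representation or the algorithms of the paper; all names are hypothetical), assuming a purely non-relational grid where each dimension $i$ is constrained by $x_i \equiv r_i \pmod{m_i}$:

```python
from fractions import Fraction

def grid_contains(point, congruences):
    """Check whether a point lies on a non-relational grid.

    `congruences` maps each dimension index to a pair (r, m) meaning
    x_i == r (mod m); m == 0 encodes the plain equality x_i == r.
    """
    for i, (r, m) in congruences.items():
        x = Fraction(point[i])
        if m == 0:
            if x != r:
                return False
        elif (x - r) % m != 0:
            return False
    return True

# Points whose first coordinate is even and second is 1 modulo 3:
g = {0: (0, 2), 1: (1, 3)}
assert grid_contains((4, 7), g)      # 4 is even, 7 == 1 (mod 3)
assert not grid_contains((4, 8), g)  # 8 == 2 (mod 3)
```

Relational grids, which also capture congruences among linear combinations of variables (and the dual generator representation the paper relies on), are strictly more general than this sketch.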

"Grids: A Domain for Analyzing the Distribution of Numerical Values"(article)
Authors: Bagnara, R., Dobson, K., Hill, P. M., Mundell, M., Zaffanella, E.
Date: 2007
Pages: 219-235
This paper explores the abstract domain of \emph{grids}, a domain that is able to represent sets of equally spaced points and hyperplanes over an $n$-dimensional vector space. Such a domain is useful for the static analysis of the patterns of distribution of the values program variables can take. We present the domain, its representation and the basic operations on grids necessary to define the abstract semantics. We show how the definition of the domain and its operations exploit well-known techniques from linear algebra as well as a dual representation that allows, among other things, for a concise and efficient implementation.

"A Practical Tool for Analyzing the Distribution of Numerical Values"(misc)
Authors: Bagnara, R., Dobson, K., Hill, P. M., Mundell, M., Zaffanella, E.
Date: 2006
The abstract domain of grids (or lattices) is a domain that is able to represent sets of equally spaced points and hyperplanes over an n-dimensional vector space. Such a domain is useful for the static analysis of the patterns of distribution of the values that program variables can take. This paper explores how this domain may be used in program analysis, describing grid operations such as affine image, affine preimage and widenings needed by such an application. The paper also shows how any grid may be approximated by a less precise non-relational grid and describes how such an approximation can be computed. Illustrative examples show how the domain may be used in the analysis of programs containing simple assignment statements, while loops and recursive procedures.
Available at \url{http://www.comp.leeds.ac.uk/hill/Papers/papers.html}.

"Widening Operators for Weakly-Relational Numeric Abstractions"(misc)
Authors: Bagnara, R., Hill, P. M., Mazzi, E., Zaffanella, E.
Date: 2004
We discuss the divergence problems recently identified in some extrapolation operators for weakly-relational numeric domains. We identify the cause of the divergences and point out that resorting to more concrete, syntactic domains can be avoided by researching suitable algorithms for the elimination of redundant constraints in the chosen representation.
Extended abstract. Contribution to the \emph{International workshop on ``Numerical & Symbolic Abstract Domains''} (NSAD'05, Paris, January 21, 2005). Available at \url{http://arxiv.org/} and \url{http://bugseng.com/products/ppl/}.

"A New Encoding of Not Necessarily Closed Convex Polyhedra"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2002
Pages: 147-153
Published as TR Number CLIP4/02.0, Universidad Politécnica de Madrid, Facultad de Informática

"A New Encoding and Implementation of Not Necessarily Closed Convex Polyhedra"(report)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2002
Convex polyhedra, commonly employed for the analysis and verification of both hardware and software, may be defined either by a finite set of linear inequality constraints or by finite sets of generating points and rays of the polyhedron. Although most implementations of the polyhedral operations assume that the polyhedra are topologically closed (i.e., all the constraints defining them are non-strict), several analyzers and verifiers need to compute on a domain of convex polyhedra that are not necessarily closed (NNC). The usual approach to implementing NNC polyhedra is to embed them into closed polyhedra in a vector space having one extra dimension and reuse the tools and techniques already available for closed polyhedra. Previously, this embedding has been designed so that a constant number of constraints and a linear number of generators have to be added to the original NNC specification of the polyhedron. In this paper we explore an alternative approach: while still using an extra dimension to represent the NNC polyhedron by a closed polyhedron, the new embedding adds a linear number of constraints and a constant number of generators. We discuss the relative benefits of these two implementations and how the choice of representation can affect the efficiency of the polyhedral operations. As far as the issue of providing a non-redundant description of the NNC polyhedron is concerned, we generalize the results established in a previous paper so that they apply to both encodings.
Available at \url{http://www.cs.unipr.it/Publications/}
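The extra-dimension embedding described in the abstract can be pictured with a small sketch. This is purely illustrative (hypothetical names; not the paper's encoding nor the PPL's implementation): each strict inequality $a \cdot x > b$ becomes the non-strict $a \cdot x - \epsilon \geq b$ over a space with one added $\epsilon$ dimension, together with the closed constraints $\epsilon \geq 0$ and $\epsilon \leq 1$; a point belongs to the NNC polyhedron iff the closed system is satisfiable with $\epsilon > 0$.

```python
def embed(constraints):
    """Encode NNC constraints over x1..xn as closed constraints over (x1..xn, eps).

    Each input constraint is (coeffs, bound, strict), meaning coeffs.x >= bound,
    with strict=True for coeffs.x > bound.  Strict inequalities become
    coeffs.x - eps >= bound, and 0 <= eps <= 1 is added.
    """
    n = len(constraints[0][0])
    closed = [(list(coeffs) + [-1 if strict else 0], bound)
              for coeffs, bound, strict in constraints]
    closed.append(([0] * n + [1], 0))    # eps >= 0
    closed.append(([0] * n + [-1], -1))  # eps <= 1
    return closed

def nnc_contains(constraints, point):
    """A point lies in the NNC polyhedron iff the embedded closed system
    is satisfiable with eps > 0.  For a fixed point the feasible values of
    eps form an interval [0, slack], so checking slack > 0 suffices."""
    slack = 1.0  # eps is capped at 1 by the embedding
    for coeffs, bound, strict in constraints:
        value = sum(c * v for c, v in zip(coeffs, point)) - bound
        if strict:
            slack = min(slack, value)  # strict constraint: need eps <= value
        elif value < 0:
            return False
    return slack > 0

# The half-open interval (0, 1]: x > 0 together with x <= 1.
nnc = [([1], 0, True), ([-1], -1, False)]
assert nnc_contains(nnc, (1,)) and nnc_contains(nnc, (0.0001,))
assert not nnc_contains(nnc, (0,))
```

The point of the paper is which closed constraints and generators the embedding must add; the sketch only conveys why one extra dimension is enough to make strict inequalities representable by closed machinery.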

"Precise Widening Operators for Convex Polyhedra"(article)
Authors: Bagnara, R., Hill, P. M., Ricci, E., Zaffanella, E.
Date: 2005
Pages: 28-56
In the context of static analysis via abstract interpretation, convex polyhedra constitute the most used abstract domain among those capturing numerical relational information. Since the domain of convex polyhedra admits infinite ascending chains, it has to be used in conjunction with appropriate mechanisms for enforcing and accelerating the convergence of fixpoint computations. Widening operators provide a simple and general characterization for such mechanisms. For the domain of convex polyhedra, the original widening operator proposed by Cousot and Halbwachs amply deserves the name of \emph{standard widening} since most analysis and verification tools that employ convex polyhedra also employ that operator. Nonetheless, there is an unfulfilled demand for more precise widening operators. In this paper, after a formal introduction to the standard widening where we clarify some aspects that are often overlooked, we embark on the challenging task of improving on it. We present a framework for the systematic definition of new widening operators that are never less precise than a given widening. The framework is then instantiated on the domain of convex polyhedra so as to obtain a new widening operator that improves on the standard widening by combining several heuristics. A preliminary experimental evaluation has yielded promising results. We also suggest an improvement to the well-known widening delay technique that allows one to gain precision while preserving its overall simplicity.
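The role played by widening can be illustrated on the much simpler domain of intervals. The sketch below is the interval analogue of the idea behind the standard widening (hypothetical names; not the polyhedral operator itself): any bound of the first operand that is not stable in the second is dropped to infinity, so every ascending chain stabilizes in finitely many steps.

```python
import math

def widen_interval(a, b):
    """Interval analogue of the standard widening: keep only the bounds
    of `a` that are still satisfied by `b`; unstable bounds go to infinity."""
    (al, ah), (bl, bh) = a, b
    return (al if bl >= al else -math.inf,
            ah if bh <= ah else math.inf)

# Iterates of `x = 0; while ...: x += 1` yield [0,0], [0,1], [0,2], ...
x = widen_interval((0, 0), (0, 1))   # upper bound unstable
assert x == (0, math.inf)
assert widen_interval(x, (0, math.inf)) == x   # fixpoint reached
```

For polyhedra the same intuition applies to linear constraints instead of bounds, and the subtleties the paper clarifies arise because a polyhedron has many equivalent constraint representations.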

"Widening Operators for Weakly-Relational Numeric Abstractions"(report)
Authors: Bagnara, R., Hill, P. M., Mazzi, E., Zaffanella, E.
Date: 2005
We discuss the construction of proper widening operators on several weakly-relational numeric abstractions. Our proposal differs from previous ones in that we actually consider the semantic abstract domains, whose elements are \emph{geometric shapes}, instead of the (more concrete) syntactic abstract domains of constraint networks and matrices. Since the closure by entailment operator preserves geometric shapes, but not their syntactic expressions, our widenings are immune from the divergence issues that could be faced by the previous approaches when interleaving the applications of widening and closure. The new widenings, which are variations of the \emph{standard widening} for convex polyhedra defined by Cousot and Halbwachs, can be made as precise as the previous proposals working on the syntactic domains. The implementation of each new widening relies on the availability of an effective reduction procedure for the considered constraint description: we provide such an algorithm for the domain of \emph{octagonal shapes}.
Available at \url{http://www.cs.unipr.it/Publications/}

"Widening Operators for Weakly-Relational Numeric Abstractions"(article)
Authors: Bagnara, R., Hill, P. M., Mazzi, E., Zaffanella, E.
Date: 2005
Pages: 3-18
We discuss the construction of proper widening operators on several weakly-relational numeric abstractions. Our proposal differs from previous ones in that we actually consider the semantic abstract domains, whose elements are \emph{geometric shapes}, instead of the (more concrete) syntactic abstract domains of constraint networks and matrices. Since the closure by entailment operator preserves geometric shapes, but not their syntactic expressions, our widenings are immune from the divergence issues that could be faced by the previous approaches when interleaving the applications of widening and closure. The new widenings, which are variations of the \emph{standard widening} for convex polyhedra defined by Cousot and Halbwachs, can be made as precise as the previous proposals working on the syntactic domains. The implementation of each new widening relies on the availability of an effective reduction procedure for the considered constraint description: we provide such an algorithm for the domain of \emph{octagonal shapes}.

"The Automatic Synthesis of Linear Ranking Functions: The Complete Unabridged Version"(report)
Authors: Bagnara, R., Mesnard, F., Pescetti, A., Zaffanella, E.
Date: 2010
The classical technique for proving termination of a generic sequential computer program involves the synthesis of a \emph{ranking function} for each loop of the program. \emph{Linear} ranking functions are particularly interesting because many terminating loops admit one and algorithms exist to automatically synthesize one. In this paper we present two such algorithms: one based on work dated 1991 by Sohn and Van~Gelder; the other, due to Podelski and Rybalchenko, dated 2004. Remarkably, while the two algorithms will synthesize a linear ranking function under exactly the same set of conditions, the former is mostly unknown to the community of termination analysis and its general applicability has never been put forward before the present paper. In this paper we thoroughly justify both algorithms, we prove their correctness, we compare their worst-case complexity and experimentally evaluate their efficiency, and we present an open-source implementation of them that will make it very easy to include termination-analysis capabilities in automatic program verifiers.
Superseded by \cite{BagnaraMPZ12TR}.

"The Automatic Synthesis of Linear Ranking Functions: The Complete Unabridged Version"(misc)
Authors: Bagnara, R., Mesnard, F., Pescetti, A., Zaffanella, E.
Date: 2012
The classical technique for proving termination of a generic sequential computer program involves the synthesis of a \emph{ranking function} for each loop of the program. \emph{Linear} ranking functions are particularly interesting because many terminating loops admit one and algorithms exist to automatically synthesize one. In this paper we present two such algorithms: one based on work dated 1991 by Sohn and Van~Gelder; the other, due to Podelski and Rybalchenko, dated 2004. Remarkably, while the two algorithms will synthesize a linear ranking function under exactly the same set of conditions, the former is mostly unknown to the community of termination analysis and its general applicability has never been put forward before the present paper. In this paper we thoroughly justify both algorithms, we prove their correctness, we compare their worst-case complexity and experimentally evaluate their efficiency, and we present an open-source implementation of them that will make it very easy to include termination-analysis capabilities in automatic program verifiers.
Available at \url{http://arxiv.org/} and \url{http://bugseng.com/products/ppl/}. Improved version of \cite{BagnaraMPZ10TR}.
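As a toy illustration of what a linear ranking function certifies (a sampled check only, not either synthesis algorithm from the paper; all names are hypothetical): for the loop `while x > 0: x = x - 1`, the function f(x) = x is bounded below on the loop's states and strictly decreases across every transition, so the loop terminates.

```python
def is_ranking(f, states, step, guard, delta=1):
    """Sampled check that f ranks the loop `while guard(x): x = step(x)`:
    f is bounded below by 0 on the sampled loop states, and decreases
    by at least `delta` across every sampled transition."""
    for s in states:
        if guard(s):
            t = step(s)
            if f(s) < 0 or f(s) - f(t) < delta:
                return False
    return True

dec = lambda x: x - 1
pos = lambda x: x > 0
assert is_ranking(lambda x: x, range(100), dec, pos)       # f(x) = x works
assert not is_ranking(lambda x: -x, range(100), dec, pos)  # f(x) = -x does not
```

The algorithms the paper studies do the converse: given the loop as a system of linear constraints, they synthesize the coefficients of such an f (or report that no linear ranking function exists) via linear programming, without any sampling.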

"Possibly Not Closed Convex Polyhedra and the Parma Polyhedra Library"(article)
Authors: Bagnara, R., Ricci, E., Zaffanella, E., Hill, P. M.
Date: 2002
Pages: 213-229
The domain of convex polyhedra is employed in several systems for the analysis and verification of hardware and software components. Current applications span imperative, functional and logic languages, synchronous languages and synchronization protocols, real-time and hybrid systems. Since the seminal work of P.~Cousot and N.~Halbwachs, convex polyhedra have thus played an important role in the formal methods community and several critical tasks rely on their software implementations. Despite this, existing libraries for the manipulation of convex polyhedra are still research prototypes and suffer from limitations that make their usage problematic, especially in critical applications. Furthermore, there is inadequate support for polyhedra that are not necessarily closed (NNC), i.e., polyhedra that are described by systems of constraints where strict inequalities are allowed to occur. This paper presents the Parma Polyhedra Library, a new, robust and complete implementation of NNC convex polyhedra, concentrating on the distinctive features of the library and on the novel theoretical underpinnings.

"Possibly Not Closed Convex Polyhedra and the Parma Polyhedra Library"(report)
Authors: Bagnara, R., Ricci, E., Zaffanella, E., Hill, P. M.
Date: 2002
The domain of convex polyhedra is employed in several systems for the analysis and verification of hardware and software components. Current applications span imperative, functional and logic languages, synchronous languages and synchronization protocols, real-time and hybrid systems. Since the seminal work of P.~Cousot and N.~Halbwachs, convex polyhedra have thus played an important role in the formal methods community and several critical tasks rely on their software implementations. Despite this, existing libraries for the manipulation of convex polyhedra are still research prototypes and suffer from limitations that make their usage problematic, especially in critical applications. These limitations concern inaccuracies in the documentation of the underlying theory, code and interfaces; numeric overflow and underflow; use of not fully dynamic data-structures and poor mechanisms for error handling and recovery. In addition, there is inadequate support for polyhedra that are not necessarily closed (NNC), i.e., polyhedra that are described by systems of constraints where strict inequalities are allowed to occur. This paper presents the Parma Polyhedra Library, a new, robust and complete implementation of NNC convex polyhedra, concentrating on the distinctive features of the library and on the novel theoretical underpinnings.
See also \cite{BagnaraRZH02TRerrata}. Available at \url{http://www.cs.unipr.it/Publications/}

"Errata for Technical Report ``Quaderno 286''"(misc)
Authors: Bagnara, R., Ricci, E., Zaffanella, E., Hill, P. M.
Date: 2002
See \cite{BagnaraRZH02TR}.

"Precise Widening Operators for Convex Polyhedra"(article)
Authors: Bagnara, R., Hill, P. M., Ricci, E., Zaffanella, E.
Date: 2003
Pages: 337-354
Convex polyhedra constitute the most used abstract domain among those capturing numerical relational information. Since the domain of convex polyhedra admits infinite ascending chains, it has to be used in conjunction with appropriate mechanisms for enforcing and accelerating convergence of the fixpoint computation. Widening operators provide a simple and general characterization for such mechanisms. For the domain of convex polyhedra, the original widening operator proposed by Cousot and Halbwachs amply deserves the name of \emph{standard widening} since most analysis and verification tools that employ convex polyhedra also employ that operator. Nonetheless, there is an unfulfilled demand for more precise widening operators. In this paper, after a formal introduction to the standard widening where we clarify some aspects that are often overlooked, we embark on the challenging task of improving on it. We present a framework for the systematic definition of new and precise widening operators for convex polyhedra. The framework is then instantiated so as to obtain a new widening operator that combines several heuristics and uses the standard widening as a last resort so that it is never less precise. A preliminary experimental evaluation has yielded promising results.

"Precise Widening Operators for Convex Polyhedra"(report)
Authors: Bagnara, R., Hill, P. M., Ricci, E., Zaffanella, E.
Date: 2003
Convex polyhedra constitute the most used abstract domain among those capturing numerical relational information. Since the domain of convex polyhedra admits infinite ascending chains, it has to be used in conjunction with appropriate mechanisms for enforcing and accelerating convergence of the fixpoint computation. Widening operators provide a simple and general characterization for such mechanisms. For the domain of convex polyhedra, the original widening operator proposed by Cousot and Halbwachs amply deserves the name of \emph{standard widening} since most analysis and verification tools that employ convex polyhedra also employ that operator. Nonetheless, there is an unfulfilled demand for more precise widening operators. In this paper, after a formal introduction to the standard widening where we clarify some aspects that are often overlooked, we embark on the challenging task of improving on it. We present a framework for the systematic definition of new and precise widening operators for convex polyhedra. The framework is then instantiated so as to obtain a new widening operator that combines several heuristics and uses the standard widening as a last resort so that it is never less precise. A preliminary experimental evaluation has yielded promising results. We also suggest an improvement to the well-known widening delay technique that allows one to gain precision while preserving its overall simplicity.
Available at \url{http://www.cs.unipr.it/Publications/}

"A New Encoding and Implementation of Not Necessarily Closed Convex Polyhedra"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2003
Pages: 161-176
Convex polyhedra, commonly employed for the analysis and verification of both hardware and software, may be defined either by a finite set of linear inequality constraints or by finite sets of generating points and rays of the polyhedron. Although most implementations of the polyhedral operations assume that the polyhedra are topologically closed (i.e., all the constraints defining them are non-strict), several analyzers and verifiers need to compute on a domain of convex polyhedra that are not necessarily closed (NNC). The usual approach to implementing NNC polyhedra is to embed them into closed polyhedra in a vector space having one extra dimension and reuse the tools and techniques already available for closed polyhedra. Previously, this embedding has been designed so that a constant number of constraints and a linear number of generators have to be added to the original NNC specification of the polyhedron. In this paper we explore an alternative approach: while still using an extra dimension to represent the NNC polyhedron by a closed polyhedron, the new embedding adds a linear number of constraints and a constant number of generators. We discuss the relative benefits of these two implementations and how the choice of representation can affect the efficiency of the polyhedral operations. As far as the issue of providing a non-redundant description of the NNC polyhedron is concerned, we generalize the results established in a previous paper so that they apply to both encodings.
Published as TR Number DSSE-TR-2003-2, University of Southampton

"Widening Operators for Powerset Domains"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2003
Pages: 135-148
The \emph{finite powerset construction} upgrades an abstract domain by allowing for the representation of finite disjunctions of its elements. In this paper we define two generic widening operators for the finite powerset abstract domain. Both widenings are obtained by lifting any widening operator defined on the base-level abstract domain and are parametric with respect to the specification of a few additional operators. We illustrate the proposed techniques by instantiating our widenings on powersets of convex polyhedra, a domain for which no non-trivial widening operator was previously known.

"Widening Operators for Powerset Domains"(report)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2004
The \emph{finite powerset construction} upgrades an abstract domain by allowing for the representation of finite disjunctions of its elements. In this paper we define two generic widening operators for the finite powerset abstract domain. Both widenings are obtained by lifting any widening operator defined on the base-level abstract domain and are parametric with respect to the specification of a few additional operators. We illustrate the proposed techniques by instantiating our widenings on powersets of convex polyhedra, a domain for which no non-trivial widening operator was previously known.
Available at \url{http://www.cs.unipr.it/Publications/}

"Not Necessarily Closed Convex Polyhedra and the Double Description Method"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2005
Pages: 222-257
Since the seminal work of Cousot and Halbwachs, the domain of convex polyhedra has been employed in several systems for the analysis and verification of hardware and software components. Although most implementations of the polyhedral operations assume that the polyhedra are topologically closed (i.e., all the constraints defining them are non-strict), several analyzers and verifiers need to compute on a domain of convex polyhedra that are not necessarily closed (NNC). The usual approach to implementing NNC polyhedra is to embed them into closed polyhedra in a higher dimensional vector space and reuse the tools and techniques already available for closed polyhedra. In this work we highlight and discuss the issues underlying such an embedding for those implementations that are based on the \emph{double description} method, where a polyhedron may be described by a system of linear constraints or by a system of generating rays and points. Two major achievements are the definition of a theoretically clean, high-level user interface and the specification of an efficient procedure for removing redundancies from the descriptions of NNC polyhedra.

"Widening Operators for Powerset Domains"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2006
Pages: 449-466
The \emph{finite powerset construction} upgrades an abstract domain by allowing for the representation of finite disjunctions of its elements. While most of the operations on the finite powerset abstract domain are easily obtained by ``lifting'' the corresponding operations on the base-level domain, the problem of endowing finite powersets with a provably correct widening operator is still open. In this paper we define three generic widening methodologies for the finite powerset abstract domain. The widenings are obtained by lifting any widening operator defined on the base-level abstract domain and are parametric with respect to the specification of a few additional operators that allow all the flexibility required to tune the complexity/precision trade-off. As far as we know, this is the first time that the problem of deriving non-trivial, provably correct widening operators in a domain refinement is tackled successfully. We illustrate the proposed techniques by instantiating our widening methodologies on powersets of convex polyhedra, a domain for which no non-trivial widening operator was previously known.
In the printed version of this article, all the figures have been improperly printed (rendering them useless). See \cite{BagnaraHZ06STTTerratum}.
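One of the simplest certified lifts of a base-level widening to finite powersets is sketched below on the interval domain (hypothetical names; deliberately crude compared with the paper's methodologies, which keep disjuncts separate): collapse each powerset to the base-level upper bound of its elements and widen there. Termination is inherited from the base widening, at the cost of losing all disjunctive information.

```python
import math

def hull(intervals):
    """Base-level least upper bound of a finite set of intervals."""
    return (min(l for l, _ in intervals), max(h for _, h in intervals))

def widen_interval(a, b):
    """Standard-style interval widening: unstable bounds go to infinity."""
    (al, ah), (bl, bh) = a, b
    return (al if bl >= al else -math.inf, ah if bh <= ah else math.inf)

def widen_powerset(s1, s2):
    """Collapse-then-widen lift: sound and terminating, but disjunction-losing."""
    return {widen_interval(hull(s1), hull(s1 | s2))}

s1 = {(0, 1), (5, 6)}
s2 = {(0, 2), (5, 7)}
assert widen_powerset(s1, s2) == {(0, math.inf)}
```

The point of the paper is precisely to do better than this: to widen while preserving several disjuncts, which is where proving the termination guarantee becomes non-trivial.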

"Widening Operators for Powerset Domains"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2007
Pages: 413-414
Erratum to \cite{BagnaraHZ06STTT} containing all the figures properly printed.

"The Parma Polyhedra Library: Toward a Complete Set of Numerical Abstractions for the Analysis and Verification of Hardware and Software Systems"(report)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2006
Since its inception as a student project in 2001, initially just for the handling (as the name implies) of convex polyhedra, the \emph{Parma Polyhedra Library} has been continuously improved and extended by joining scrupulous research on the theoretical foundations of (possibly non-convex) numerical abstractions to a total adherence to the best available practices in software development. Even though it is still not fully mature and functionally complete, the Parma Polyhedra Library already offers a combination of functionality, reliability, usability and performance that is not matched by similar, freely available libraries. In this paper, we present the main features of the current version of the library, emphasizing those that distinguish it from other similar libraries and those that are important for applications in the field of analysis and verification of hardware and software systems.
Available at \url{http://www.cs.unipr.it/Publications/}. Also published as {\tt arXiv:cs.MS/0612085}, available from \url{http://arxiv.org/}.

"Applications of Polyhedral Computations to the Analysis and Verification of Hardware and Software Systems"(report)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2007
Convex polyhedra are the basis for several abstractions used in static analysis and computer-aided verification of complex and sometimes mission critical systems. For such applications, the identification of an appropriate complexity-precision trade-off is a particularly acute problem, so that the availability of a wide spectrum of alternative solutions is mandatory. We survey the range of applications of polyhedral computations in this area; give an overview of the different classes of polyhedra that may be adopted; outline the main polyhedral operations required by automatic analyzers and verifiers; and look at some possible combinations of polyhedra with other numerical abstractions that have the potential to improve the precision of the analysis. Areas where further theoretical investigations can result in important contributions are highlighted.
Available at \url{http://www.cs.unipr.it/Publications/}. Also published as {\tt arXiv:cs.CG/0701122}, available from \url{http://arxiv.org/}.

"An Improved Tight Closure Algorithm for Integer Octagonal Constraints"(report)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2007
Integer octagonal constraints (a.k.a.\\ \\emph\Unit Two Variables Per Inequality\ or \\emph\UTVPI integer constraints\) constitute an interesting class of constraints for the representation and solution of integer problems in the fields of constraint programming and formal analysis and verification of software and hardware systems, since they couple algorithms having polynomial complexity with a relatively good expressive power. The main algorithms required for the manipulation of such constraints are the satisfiability check and the computation of the inferential closure of a set of constraints. The latter is called \\emph\tight\ closure to mark the difference with the (incomplete) closure algorithm that does not exploit the integrality of the variables. In this paper we present and fully justify an $O(n^3)$ algorithm to compute the tight closure of a set of UTVPI integer constraints.
Available at \\url\{http://www.cs.unipr.it/Publications/\}. Also published as {\\tt arXiv:0705.4618v2 [cs.DS]}, available from \\url\{http://arxiv.org/\}.

"An Improved Tight Closure Algorithm for Integer Octagonal Constraints"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2008
Pages: 8-21
Integer octagonal constraints (a.k.a.\\ \\emph\Unit Two Variables Per Inequality\ or \\emph\UTVPI integer constraints\) constitute an interesting class of constraints for the representation and solution of integer problems in the fields of constraint programming and formal analysis and verification of software and hardware systems, since they couple algorithms having polynomial complexity with a relatively good expressive power. The main algorithms required for the manipulation of such constraints are the satisfiability check and the computation of the inferential closure of a set of constraints. The latter is called \\emph\tight\ closure to mark the difference with the (incomplete) closure algorithm that does not exploit the integrality of the variables. In this paper we present and fully justify an $O(n^3)$ algorithm to compute the tight closure of a set of UTVPI integer constraints.
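The tight closure scheme summarized in this abstract (shortest-path closure, tightening of the unary constraints, then a single strengthening pass) can be sketched as follows. This is an illustrative reconstruction, not the paper's pseudocode; the encoding of each variable $x_k$ as the signed pair $v_{2k} = +x_k$, $v_{2k+1} = -x_k$ is a standard convention assumed here.

```python
import math

def tight_closure(m):
    """Tight closure of integer octagonal constraints (sketch).

    m is a 2n x 2n matrix where m[i][j] bounds v_i - v_j <= m[i][j],
    with v_{2k} = +x_k and v_{2k+1} = -x_k; math.inf means "no
    constraint".  Returns the tightly closed matrix, or None if the
    constraints have no integer solution.
    """
    n = len(m)
    # 1. Shortest-path (Floyd-Warshall) closure.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if m[i][k] + m[k][j] < m[i][j]:
                    m[i][j] = m[i][k] + m[k][j]
    # Rational unsatisfiability shows up as a negative cycle.
    for i in range(n):
        if m[i][i] < 0:
            return None
        m[i][i] = 0
    bar = lambda i: i ^ 1  # index of the opposite-sign dimension
    # 2. Tightening: v_i - v_{bar(i)} <= c encodes 2x <= c (or -2x <= c),
    #    so c can soundly be rounded down to an even number.
    for i in range(n):
        if m[i][bar(i)] < math.inf:
            m[i][bar(i)] = 2 * math.floor(m[i][bar(i)] / 2)
    # Integer unsatisfiability check after tightening.
    for i in range(n):
        if m[i][bar(i)] + m[bar(i)][i] < 0:
            return None
    # 3. A single strengthening pass restores closure.
    for i in range(n):
        for j in range(n):
            if i != j:
                half = (m[i][bar(i)] + m[bar(j)][j]) / 2
                if half < m[i][j]:
                    m[i][j] = math.floor(half)
    return m

# Example: x + y <= 3 and x - y <= 0 entail 2x <= 3, which
# tightens to 2x <= 2 over the integers.
inf = math.inf
m = [[0, inf, 0, 3],
     [inf, 0, inf, inf],
     [inf, 3, 0, inf],
     [inf, 0, inf, 0]]
r = tight_closure(m)
```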

"The Parma Polyhedra Library: Toward a Complete Set of Numerical Abstractions for the Analysis and Verification of Hardware and Software Systems"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2008
Pages: 3-21
Since its inception as a student project in 2001, initially just for the handling (as the name implies) of convex polyhedra, the \\emph\Parma Polyhedra Library\ has been continuously improved and extended by joining scrupulous research on the theoretical foundations of (possibly non-convex) numerical abstractions to a total adherence to the best available practices in software development. Even though it is still not fully mature and functionally complete, the Parma Polyhedra Library already offers a combination of functionality, reliability, usability and performance that is not matched by similar, freely available libraries. In this paper, we present the main features of the current version of the library, emphasizing those that distinguish it from other similar libraries and those that are important for applications in the field of analysis and verification of hardware and software systems.

"Exact Join Detection for Convex Polyhedra and Other Numerical Abstractions"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2010
Pages: 453-473
Deciding whether the union of two convex polyhedra is itself a convex polyhedron is a basic problem in polyhedral computations, having important applications in the field of constrained control and in the synthesis, analysis, verification and optimization of hardware and software systems. In such application fields, though, general convex polyhedra are just one among many so-called \\emph\numerical abstractions\, which range from restricted families of (not necessarily closed) convex polyhedra to non-convex geometrical objects. We thus tackle the problem from an abstract point of view: for a wide range of numerical abstractions that can be modeled as bounded join-semilattices —that is, partial orders where any finite set of elements has a least upper bound—, we show necessary and sufficient conditions for the equivalence between the lattice-theoretic join and the set-theoretic union. For the case of closed convex polyhedra —which, as far as we know, is the only one already studied in the literature— we improve upon the state-of-the-art by providing a new algorithm with a better worst-case complexity. The results and algorithms presented for the other numerical abstractions are new to this paper. All the algorithms have been implemented, experimentally validated, and made available in the Parma Polyhedra Library.

"Applications of Polyhedral Computations to the Analysis and Verification of Hardware and Software Systems"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2009
Pages: 4672-4691
Convex polyhedra are the basis for several abstractions used in static analysis and computer-aided verification of complex and sometimes mission critical systems. For such applications, the identification of an appropriate complexity-precision trade-off is a particularly acute problem, so that the availability of a wide spectrum of alternative solutions is mandatory. We survey the range of applications of polyhedral computations in this area; give an overview of the different classes of polyhedra that may be adopted; outline the main polyhedral operations required by automatic analyzers and verifiers; and look at some possible combinations of polyhedra with other numerical abstractions that have the potential to improve the precision of the analysis. Areas where further theoretical investigations can result in important contributions are highlighted.

"Weakly-Relational Shapes for Numeric Abstractions: Improved Algorithms and Proofs of Correctness"(article)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2009
Pages: 279-323
Weakly-relational numeric constraints provide a compromise between complexity and expressivity that is adequate for several applications in the field of formal analysis and verification of software and hardware systems. We address the problems to be solved for the construction of full-fledged, efficient and provably correct abstract domains based on such constraints. We first propose to work with \\emph\semantic\ abstract domains, whose elements are \\emph\geometric shapes\, instead of the (more concrete) syntactic abstract domains of constraint networks and matrices on which the previous proposals are based. This allows us to solve, once and for all, the problem whereby \\emph\closure by entailment\, a crucial operation for the realization of such domains, seemed to impede the realization of proper widening operators. In our approach, the implementation of widenings relies on the availability of an effective reduction procedure for the considered constraint description: one for the domain of \\emph\bounded difference shapes\ already exists in the literature; we provide algorithms for the significantly more complex cases of rational and integer \\emph\octagonal shapes\. We also improve upon the state-of-the-art by presenting, along with their proof of correctness, closure by entailment algorithms of reduced complexity for domains based on rational and integer octagonal constraints. The consequences of implementing weakly-relational numerical domains with floating point numbers are also discussed.

"Exact Join Detection for Convex Polyhedra and Other Numerical Abstractions"(report)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2009
Deciding whether the union of two convex polyhedra is a convex polyhedron is a basic problem in polyhedral computation having important applications in the field of constrained control and in the synthesis, analysis, verification and optimization of hardware and software systems. In these application fields, though, general convex polyhedra are only one among many so-called \\emph\numerical abstractions\: these range from restricted families of (not necessarily closed) convex polyhedra to non-convex geometrical objects. We thus tackle the problem from an abstract point of view: for a wide range of numerical abstractions that can be modeled as bounded join-semilattices —that is, partial orders where any finite set of elements has a least upper bound—, we show necessary and sufficient conditions for the equivalence between the lattice-theoretic join and the set-theoretic union. For the case of closed convex polyhedra —which, as far as we know, is the only one already studied in the literature— we improve upon the state-of-the-art by providing a new algorithm with a better worst-case complexity. The results and algorithms presented for the other numerical abstractions are new to this paper. All the algorithms have been implemented, experimentally validated, and made available in the Parma Polyhedra Library.
Available at \\url\{http://www.cs.unipr.it/Publications/\}. A corrected and improved version (corrected an error in the statement of condition (3) of Theorem~3.6, typos corrected in statement and proof of Theorem~6.8) has been published in \\cite\{BagnaraHZ09TRb\}.

"Exact Join Detection for Convex Polyhedra and Other Numerical Abstractions"(misc)
Authors: Bagnara, R., Hill, P. M., Zaffanella, E.
Date: 2009
Deciding whether the union of two convex polyhedra is itself a convex polyhedron is a basic problem in polyhedral computations, having important applications in the field of constrained control and in the synthesis, analysis, verification and optimization of hardware and software systems. In such application fields, though, general convex polyhedra are just one among many so-called \\emph\numerical abstractions\, which range from restricted families of (not necessarily closed) convex polyhedra to non-convex geometrical objects. We thus tackle the problem from an abstract point of view: for a wide range of numerical abstractions that can be modeled as bounded join-semilattices —that is, partial orders where any finite set of elements has a least upper bound—, we show necessary and sufficient conditions for the equivalence between the lattice-theoretic join and the set-theoretic union. For the case of closed convex polyhedra —which, as far as we know, is the only one already studied in the literature— we improve upon the state-of-the-art by providing a new algorithm with a better worst-case complexity. The results and algorithms presented for the other numerical abstractions are new to this paper. All the algorithms have been implemented, experimentally validated, and made available in the Parma Polyhedra Library.
Available at \\url\{http://arxiv.org/\} and \\url\{http://bugseng.com/products/ppl/\}

"A Technique for Summarizing Data Access and Its Use in Parallelism Enhancing Transformations"(article)
Authors: Balasundaram, V., Kennedy, K.
Date: 1989
Pages: 41-53

"Convexity Recognition of the Union of Polyhedra"(report)
Authors: Bemporad, A., Fukuda, K., Torrisi, F. D.
Date: 2000
In this paper we consider the following basic problem in polyhedral computation: Given two polyhedra in $R^d$, $P$ and $Q$, decide whether their union is convex, and, if so, compute it. We consider the three natural specializations of the problem: (1) when the polyhedra are given by half-spaces (H-polyhedra), (2) when they are given by vertices and extreme rays (V-polyhedra), and (3) when both H- and V-polyhedral representations are available. Both the bounded (polytopes) and the unbounded case are considered. We show that the first two problems are polynomially solvable, and that the third problem is strongly-polynomially solvable.

"Convexity Recognition of the Union of Polyhedra"(article)
Authors: Bemporad, A., Fukuda, K., Torrisi, F. D.
Date: 2001
Pages: 141-154
In this paper we consider the following basic problem in polyhedral computation: Given two polyhedra in $R^d$, $P$ and $Q$, decide whether their union is convex, and, if so, compute it. We consider the three natural specializations of the problem: (1) when the polyhedra are given by halfspaces (H-polyhedra), (2) when they are given by vertices and extreme rays (V-polyhedra), and (3) when both H- and V-polyhedral representations are available. Both the bounded (polytopes) and the unbounded case are considered. We show that the first two problems are polynomially solvable, and that the third problem is strongly-polynomially solvable.

"Polyhedral Analysis for Synchronous Languages"(article)
Authors: Besson, F., Jensen, T. P., Talpin, J.-P.
Date: 1999
Pages: 51-68
We define an operational semantics for the \\textsc\Signal\ language and design an analysis which allows us to verify properties pertaining to the relation between values of the numeric and boolean variables of a reactive system. A distinguished feature of the analysis is that it is expressed and proved correct with respect to the source program rather than on an intermediate representation of the program. The analysis calculates a safe approximation to the set of reachable states by a symbolic fixed point computation in the domain of convex polyhedra using a novel widening operator based on the convex hull representation of polyhedra.

"Lessons Learned in Benchmarking — Floating Point Benchmarks: Can You Trust Them?"(article)
Authors: Bjørndalen, J. M., Anshus, O.
Date: 2005
Pages: 89-100
Benchmarks are important tools for understanding the implications of design choices for systems, and for studying increasingly complex hardware architectures and software systems. One of the assumptions for benchmarking within systems research seems to be that the execution time of floating point operations does not change much with different input values. We report on a problem where a textbook benchmark showed significant variation in execution time depending on the input values, and how a small fraction of \\emph\denormalized\ floating point values (a representation automatically used by the CPU to represent values close to zero) in the benchmark could lead to the wrong conclusions about the relative efficiency of PowerPC and Intel P4 machines. Furthermore, a parallel version of the same benchmark is demonstrated to incorrectly indicate scalability problems in the application or communication subsystem. There is significant overhead in handling these exceptions on-chip on modern Intel hardware, even if the program can continue uninterrupted. We have observed that the execution time of benchmarks can increase by up to two orders of magnitude. In one benchmark, 3.88% denormalized numbers in a matrix slowed down the benchmark by a factor of 3.83. We suggest some remedies and guidelines for avoiding the problem.

"Design and Implementation of a Special-Purpose Static Program Analyzer for Safety-Critical Real-Time Embedded Software"(anthos)
Authors: Blanchet, B., Cousot, P., Cousot, R., Feret, J., Mauborgne, L., Miné, A., Monniaux, D., Rival, X.
Date: 2002
Pages: 85-108
We report on a successful preliminary experience in the design and implementation of a special-purpose Abstract Interpretation based static program analyzer for the verification of safety critical embedded real-time software. The analyzer is both precise (zero false alarms in the considered experiment) and efficient (less than one minute of analysis for 10,000 lines of code). Even if it is based on a simple interval analysis, many features have been added to obtain the desired precision: expansion of small arrays, widening with several thresholds, loop unrolling, trace partitioning, relations between loop counters and other variables. The efficiency of the tool mainly comes from a clever representation of abstract environments based on balanced binary search trees.

"Model-Checking Concurrent Systems with Unbounded Integer Variables: Symbolic Representations, Approximations, and Experimental Results"(article)
Authors: Bultan, T., Gerber, R., Pugh, W.
Date: 1999
Pages: 747-789
Model checking is a powerful technique for analyzing large, finite-state systems. In an infinite state system, however, many basic properties are undecidable. In this article, we present a new symbolic model checker which conservatively evaluates safety and liveness properties on programs with unbounded integer variables. We use Presburger formulas to symbolically encode a program's transition system, as well as its model-checking computations. All fixpoint calculations are executed symbolically, and their convergence is guaranteed by using approximation techniques. We demonstrate the promise of this technology on some well-known infinite-state concurrency problems.

"Algorithm for Finding a General Formula for the Non-Negative Solutions of System of Linear Equations"(article)
Authors: Chernikova, N. V.
Date: 1964
Pages: 151-158

"Algorithm for Finding a General Formula for the Non-Negative Solutions of System of Linear Inequalities"(article)
Authors: Chernikova, N. V.
Date: 1965
Pages: 228-233
The present note proposes a computational scheme for finding a general formula for the non-negative solutions of a system of linear inequalities analogous to the scheme described in \\cite\Chernikova64\ for finding a general formula for the non-negative solutions of a system of linear equations.

"Algorithm for Discovering the Set of all Solutions of a Linear Programming Problem"(article)
Authors: Chernikova, N. V.
Date: 1968
Pages: 282-293
In this paper two versions of a canonical algorithm for discovering all the optimal solutions of a linear programming problem with the condition of non-negativeness of the variables are presented: the first for the case of canonical notation, the second for the standard notation.

"Static Determination of Dynamic Properties of Programs"(article)
Authors: Cousot, P., Cousot, R.
Date: 1976
Pages: 106-130
In high level languages, compile time type verifications are usually incomplete, and dynamic coherence checks must be inserted in object code. For example, in PASCAL one must dynamically verify that the values assigned to subrange type variables or index expressions lie between two bounds, or that pointers are not \\texttt\nil\, ... We present here a general algorithm allowing most of these certifications to be done at compile time.

"Systematic Design of Program Analysis Frameworks"(article)
Authors: Cousot, P., Cousot, R.
Date: 1979
Pages: 269-282
Semantic analysis of programs is essential in optimizing compilers and program verification systems. It encompasses data flow analysis, data type determination, generation of approximate invariant assertions, etc. Several recent papers (among others Cousot & Cousot[77a], Graham & Wegman[76], Kam & Ullman[76], Kildall[73], Rosen[78], Tarjan[76], Wegbreit[75]) have introduced abstract approaches to program analysis which are tantamount to the use of a \\emph\program analysis framework\ $(A, t, \\gamma)$ where $A$ is a lattice of (approximate) assertions, $t$ is an (approximate) predicate transformer and $\\gamma$ is an often implicit function specifying the meaning of the elements of $A$. This paper is devoted to the systematic and correct design of program analysis frameworks with respect to a formal semantics.

"Comparing the Galois Connection and Widening/Narrowing Approaches to Abstract Interpretation"(article)
Authors: Cousot, P., Cousot, R.
Date: 1992
Pages: 269-295
The use of infinite abstract domains with widening and narrowing for accelerating the convergence of abstract interpretations is shown to be more powerful than the Galois connection approach restricted to finite lattices (or lattices satisfying the chain condition).
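As a concrete illustration of the widening idea discussed in this abstract, here is a textbook interval-analysis sketch (not code from the paper): the widening operator jumps any unstable bound to infinity, so an infinite ascending chain converges in finitely many steps.

```python
import math

def widen(a, b):
    # Standard interval widening: a bound that grew becomes infinite.
    lo = a[0] if a[0] <= b[0] else -math.inf
    hi = a[1] if a[1] >= b[1] else math.inf
    return (lo, hi)

def hull(a, b):
    # Interval join (least upper bound).
    return (min(a[0], b[0]), max(a[1], b[1]))

# Abstractly execute:  x = 0; while x < 1000: x = x + 1
x = (0, 0)
while True:
    # One iteration of the loop body on states satisfying the guard.
    body = (x[0] + 1, min(x[1], 999) + 1)
    new = widen(x, hull(x, body))
    if new == x:
        break
    x = new
# Without widening the chain (0,0), (0,1), (0,2), ... needs 1000
# iterations; with it, the analysis stabilizes at (0, inf) at once.
```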

"Automatic Discovery of Linear Restraints Among Variables of a Program"(article)
Authors: Cousot, P., Halbwachs, N.
Date: 1978
Pages: 84-96
The model of abstract interpretation of programs developed by Cousot & Cousot [1976] and Cousot & Cousot [1977] is applied to the static determination of linear equality or inequality relations among variables of programs.

"Linear Programming and Extensions"(book)
Authors: Dantzig, G. B.
Date: 1963

"Parametric Integer Programming"(article)
Authors: Feautrier, P.
Date: 1988
Pages: 243-268
When analysing computer programs (especially numerical programs in which arrays are used extensively), one is often confronted with integer programming problems. These problems have three peculiarities: feasible points are ranked according to lexicographic order rather than the usual linear economic function; the feasible set depends on integer parameters; one is interested only in exact solutions. The difficulty is somewhat alleviated by the fact that problem sizes are usually quite small. In this paper we show that: the classical simplex algorithm has no difficulty in handling lexicographic ordering; the algorithm may be executed in symbolic mode, thus giving the solution of continuous parametric problems; the method may be extended to problems in integers. We prove that the resulting algorithm always terminates and give an estimate of its complexity.

"PIP/PipLib: A Solver for Parametric Integer Programming Problems"(reference)
Authors: Feautrier, P., Collard, J.-F., Bastoul, C.
Date: 2007-07
This manual is for PIP and PipLib version 1.4.0, a software tool that solves Parametric Integer Programming problems. That is, PIP finds the lexicographic minimum of the set of integer points which lie inside a convex polyhedron, when that polyhedron depends linearly on one or more integral parameters.
Distributed with {PIP/PipLib} 1.4.0

"Polyhedral Computation FAQ"(misc)
Authors: Fukuda, K.
Date: 1998
This is an FAQ to answer some basic questions arising from certain geometric computations in general dimensional (mostly Euclidean) space. The main areas to be covered are the convex hull computation of a finite point set, the vertex enumeration for a convex polytope, the computation of Voronoi diagram and Delaunay triangulation, in $R^d$. We illustrate typical solution processes with small examples and publicly available codes such as cdd+ and lrs.
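The first problem mentioned in this FAQ, the convex hull of a finite point set, can be illustrated in two dimensions with Andrew's monotone chain algorithm; this is a standard textbook sketch, not code from the FAQ or from cdd+/lrs.

```python
def convex_hull(points):
    """2-D convex hull via Andrew's monotone chain.

    Returns the hull vertices in counter-clockwise order, starting
    from the lexicographically smallest point; collinear points are
    dropped.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # Positive if o->a->b makes a left (counter-clockwise) turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build the lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build the upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Each list ends with the other's starting point; drop duplicates.
    return lower[:-1] + upper[:-1]

# The interior point (1, 1) is discarded; the square's corners remain.
h = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])
```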
Swiss Federal Institute of Technology, Lausanne and Zurich, Switzerland, available at \\url\{http://www.ifor.math.ethz.ch/~fukuda/polyfaq/polyfaq.html\}

"Double Description Method Revisited"(article)
Authors: Fukuda, K., Prodon, A.
Date: 1996
Pages: 91-111
The double description method is a simple and useful algorithm for enumerating all extreme rays of a general polyhedral cone in $R^d$, despite the fact that we can hardly state any interesting theorems on its time and space complexities. In this paper, we reinvestigate this method, introduce some new ideas for efficient implementations, and show some empirical results indicating its practicality in solving highly degenerate problems.

"\\tt polymake: A Framework for Analyzing Convex Polytopes"(anthos)
Authors: Gawrilow, E., Joswig, M.
Date: 2000
Pages: 43-74
\\tt polymake is a software tool designed for the algorithmic treatment of polytopes and polyhedra. We give an overview of the functionality as well as of the structure of the system. This paper can be seen as a first approximation to a \\tt polymake handbook. The tutorial starts with the very basics and ends with a few \\tt polymake applications to research problems. Then we present the main features of the system, including the interfaces to other software products.

"\\tt polymake: An Approach to Modular Software Design in Computational Geometry"(article)
Authors: Gawrilow, E., Joswig, M.
Date: 2001
Pages: 222-231
\\tt polymake is a software package designed for the study of the combinatorics and the geometry of convex polytopes and polyhedra. It offers access to a wide variety of algorithms and tools within a common framework. As a key design feature, it allows the functionality of a great variety of other software packages to be incorporated in a modular way.

"Numeric Domains with Summarized Dimensions"(article)
Authors: Gopan, D., DiMaio, F., Dor, N., Reps, T. W., Sagiv, M.
Date: 2004
Pages: 512-529
We introduce a systematic approach to designing summarizing abstract numeric domains from existing numeric domains. Summarizing domains use summary dimensions to represent potentially unbounded collections of numeric objects. Such domains are of benefit to analyses that verify properties of systems with an unbounded number of numeric objects, such as shape analysis, or systems in which the number of numeric objects is bounded, but large.

"Static Analysis of Linear Congruence Equalities among Variables of a Program"(article)
Authors: Granger, P.
Date: 1991
Pages: 169-192
In this paper, a new kind of static (or semantic) analysis is defined: congruence analysis, which is conceived to discover the properties of the following type: ``the integer valued variable $X$ is congruent to $c$ modulo $m$'', where $c$ and $m$ are automatically determined integers. This analysis is then related to an algebraic framework and wholly characterized. Moreover, we show by example how it can be useful for automatic vectorization. Finally, we present some extensions of it, namely its combination with the analysis of bounds, and also some analyses defined when the modulus of congruences is given \\emph\a priori\.

"Static Analyses of Congruence Properties on Rational Numbers (Extended Abstract)"(article)
Authors: Granger, P.
Date: 1997
Pages: 278-292
We present several new static analysis frameworks applying to rational numbers, and more precisely, designed for discovering congruence properties satisfied by rational (or real) variables of programs. Two of them deal with additive congruence properties and generalize linear equation analysis [M. Karr, \\emph\Affine Relationships among Variables of a Program\, Acta Informatica, 6:133–151, 1976] and congruence analysis on integer numbers [P. Granger, \\emph\Static Analysis of Arithmetical Congruences\, International Journal of Computer Mathematics, 30:165–190, 1989], [P. Granger, \\emph\Static Analysis of Linear Congruence Equalities among Variables of a Program\, TAPSOFT'91: Proceedings of the International Joint Conference on Theory and Practice of Software Development, Volume 1: Colloquium on Trees in Algebra and Programming (CAAP'91), Lecture Notes in Computer Science, 493, pp. 169–192]. The others are based on multiplicative congruence properties in the set of positive rational numbers. Among other potential applications, we exemplify the interest of all these analyses for optimizing the representation of rational or real valued variables.

"A Practical Steepest-Edge Simplex Algorithm"(article)
Authors: Goldfarb, D., Reid, J. K.
Date: 1977
Pages: 361-371
It is shown that suitable recurrences may be used in order to implement in practice the steepest-edge simplex linear programming algorithm. In this algorithm each iteration is along an edge of the polytope of feasible solutions on which the objective function decreases most rapidly with respect to distance in the space of all the variables. Results of computer comparisons on medium-scale problems indicate that the resulting algorithm requires fewer iterations but about the same overall time as the algorithm of Harris [8], which may be regarded as approximating the steepest-edge algorithm. Both show a worthwhile advantage over the standard algorithm.

"Détermination Automatique de Relations Linéaires Vérifiées par les Variables d'un Programme"(thesis)
Authors: Halbwachs, N.
Date: 1979-03

"Delay Analysis in Synchronous Programs"(article)
Authors: Halbwachs, N.
Date: 1993
Pages: 333-346
Linear relation analysis [CH78, Hal79] was proposed long ago as an abstract interpretation that permits the discovery of linear relations invariantly satisfied by the variables of a program. Here, we propose to apply this general method to variables used to count delays in synchronous programs. The ``regular'' behavior of these counters makes the results of the analysis especially precise. These results can be applied to code optimization and to the verification of real-time properties of programs.

"Verification of Linear Hybrid Systems by Means of Convex Approximations"(article)
Authors: Halbwachs, N., Proy, Y.-E., Raymond, P.
Date: 1994
Pages: 223-237
We present a new application of abstract interpretation by means of convex polyhedra to a class of hybrid systems, i.e., systems involving both discrete and continuous variables. The result is an efficient automatic tool for approximate, but conservative, verification of reachability properties of these systems.

"POLyhedra INtegrated Environment"(reference)
Authors: Halbwachs, N., Kerbrat, A., Proy, Y.-E.
Date: 1995-09
Documentation taken from the source code.

"Verification of Real-Time Systems using Linear Relation Analysis"(article)
Authors: Halbwachs, N., Proy, Y.-E., Roumanoff, P.
Date: 1997
Pages: 157-185
Linear Relation Analysis [11] is an abstract interpretation devoted to the automatic discovery of invariant linear inequalities among numerical variables of a program. In this paper, we apply such an analysis to the verification of quantitative time properties of two kinds of systems: synchronous programs and linear hybrid systems.

"Cylindric Algebras: Part I"(book)
Authors: Henkin, L., Monk, J. D., Tarski, A.
Date: 1971

"A Note on Abstract Interpretation Strategies for Hybrid Automata"(article)
Authors: Henzinger, T. A., Ho, P.-H.
Date: 1995
Pages: 252-264
We report on several abstract interpretation strategies that are designed to improve the performance of \\sc HyTech, a symbolic model checker for linear hybrid systems. We (1) simultaneously compute the target region from different directions, (2) conservatively approximate the target region by dropping constraints, and (3) iteratively refine the approximation until sufficient precision is obtained. We consider the standard abstract convex-hull operator and a novel abstract extrapolation operator.

"Some Lessons from the \\sc HyTech Experience"(article)
Authors: Henzinger, T. A., Preussig, J., Wong-Toi, H.
Date: 2001
Pages: 2887-2892
We provide an overview of the current status of the tool \\sc HyTech, and reflect on some of the lessons learned from our experiences with the tool. HyTech is a symbolic model checker for mixed discrete-continuous systems that are modeled as automata with piecewise-constant polyhedral differential inclusions. The use of a formal input language and automated procedures for state-space traversal lay the foundation for formally verifying properties of hybrid dynamical systems. We describe some recent experiences analyzing three hybrid systems. We point out the successes and limitations of the tool. The analysis procedure has been extended in a number of ways to address some of the tool's shortcomings. We evaluate these extensions, and conclude with some desiderata for verification tools for hybrid systems.

"Exact Dependence Analysis Using Data Access Descriptors"(report)
Authors: Huelsbergen, L., Hahn, D., Larus, J.
Date: 1990
\\emph\Data Access Descriptors\ provide a method for summarizing and representing the portion of an array accessed by a program statement. A Data Access Descriptor does not, however, indicate if its characterization is exact or conservative, nor does it record the temporal order of accesses. Exactness is necessary to expose maximal parallelism. Temporal information is necessary to calculate \\emph\direction vectors\ for inter-loop dependences. This paper presents an extension to basic Data Access Descriptors that identifies exact representations. We illustrate the value of extended Data Access Descriptors by showing how to calculate information typically provided by direction vectors and by refining potential conflicts between statements with \\emph\array kill\ information.

"Beyond Finite Domains"(article)
Authors: Jaffar, J., Maher, M. J., Stuckey, P. J., Yap, R. H. C.
Date: 1994
Pages: 86-94

"Generating All Vertices of a Polyhedron Is Hard"(article)
Authors: Khachiyan, L., Boros, E., Borys, K., Elbassioni, K., Gurvich, V.
Date: 2006
We show that generating all negative cycles of a weighted graph is a hard enumeration problem, in both the directed and undirected cases. More precisely, given a family of such cycles, it is an NP-complete problem to decide whether this family can be extended or there are no other negative cycles in the graph, implying that negative cycles cannot be generated in polynomial output time, unless P=NP. As a corollary, we answer in the negative two well-known generating problems from linear programming: (i) given an (infeasible) system of linear inequalities, generating all minimal infeasible subsystems is hard; yet, for generating maximal feasible subsystems the complexity remains open; (ii) given a (feasible) system of linear inequalities, generating all vertices of the corresponding polyhedron is hard; yet, in the case of bounded polyhedra the complexity remains open.
Invited contribution. To appear.

"The Modulo Interval: A Simple and Practical Representation for Program Analysis"(article)
Authors: Nakanishi, T., Joe, K., Polychronopoulos, C. D., Fukuda, A.
Date: 1999
Pages: 91-96
In this paper, the modulo interval, an extension of the traditional interval on real numbers, and its useful mathematical properties are presented as a representation for program analysis. With only two additional parameters beyond the interval on real numbers, namely the modulus and the residue, the modulo interval can represent information on programs having cyclicity, such as loop indices and array subscripts, at reasonable complexity and with greater accuracy. Well-defined arithmetic and set operations on the modulo interval make the implementation of compilers simple and reliable. Moreover, the application of the modulo interval to program analysis for parallelizing compilers is discussed in this paper.
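The idea described in this abstract can be sketched roughly as follows; the class and operation names are ours, purely for illustration, and do not follow the paper's actual definitions:

```python
# Illustrative sketch of a modulo interval: the set of integers
# {x : lo <= x <= hi and x mod m == r}.  An ordinary interval is the
# special case m == 1, r == 0.
class ModuloInterval:
    def __init__(self, lo, hi, mod, res):
        self.lo, self.hi, self.mod, self.res = lo, hi, mod, res

    def __contains__(self, x):
        return self.lo <= x <= self.hi and x % self.mod == self.res

    def concretize(self):
        # Enumerate the represented set (finite bounds assumed).
        return [x for x in range(self.lo, self.hi + 1)
                if x % self.mod == self.res]

# Even loop indices 0, 2, ..., 8 are captured exactly, which a plain
# interval [0, 8] cannot do:
evens = ModuloInterval(0, 8, 2, 0)
```

A plain interval abstraction of the same set would include the odd indices as well; the modulus/residue pair is what recovers the cyclicity.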

"Modulo Interval Arithmetic and Its Application to Program Analysis"(article)
Authors: Nakanishi, T., Fukuda, A.
Date: 2001
Pages: 829-837

"Convex Polyhedra Library"(reference)
Authors: Jeannet, B.
Date: 2002-03
Documentation of the "New Polka" library.

"Solvability and Consistency for Linear Equations and Inequalities"(article)
Authors: Kuhn, H. W.
Date: 1956
Pages: 217-232

"A note on Chernikova's Algorithm"(report)
Authors: Le Verge, H.
Date: 1992
This paper describes an implementation of Chernikova's algorithm for finding an irredundant set of vertices for a given polyhedron defined by a set of linear inequalities and equations. This algorithm can also be used for the dual problem: given a set of extremal rays and vertices, find the associated irredundant set of facet-supporting hyperplanes. The method is an extension of Chernikova's initial algorithm (for the non-negative domain), and is mainly based on the polyhedral cone duality principle. A new enhancement for extremal ray detection is presented, together with its effects on a class of polyhedra.

"Loop Nest Synthesis Using the Polyhedral Library"(report)
Authors: Le Verge, H., Van Dongen, V., Wilde, D. K.
Date: 1994
A new method to synthesize loop nests, given a polyhedral domain, the context domain, and the loop nesting order, is described. The method is based on functions in the IRISA polyhedral library.

"Parameterized Polyhedra and Their Vertices"(article)
Authors: Loechner, V., Wilde, D. K.
Date: 1997
Pages: 525-549
Algorithms specified for parametrically sized problems are more general purpose and more reusable than algorithms for fixed sized problems. For this reason, there is a need for representing and symbolically analyzing linearly parameterized algorithms. An important class of parallel algorithms can be described as systems of parameterized affine recurrence equations (PAREs). In this representation, linearly parameterized polyhedra are used to describe the domains of variables. This paper describes an algorithm which computes the set of parameterized vertices of a polyhedron, given its representation as a system of parameterized inequalities. This provides an important tool for the symbolic analysis of the parameterized domains used to define variables and computation domains in PAREs. A library of operations on parameterized polyhedra based on the Polyhedral Library has been written in C and is freely distributed.

"\\it PolyLib\\/ : A Library for Manipulating Parameterized Polyhedra"(misc)
Authors: Loechner, V.
Date: 1999-03
Declares itself to be a continuation of [Wilde93th].

"Array Operations Abstraction Using Semantic Analysis of Trapezoid Congruences"(article)
Authors: Masdupuy, F.
Date: 1992
Pages: 226-235
With the growing use of vector supercomputers, efficient and accurate data structure analyses are needed. What we propose in this paper is to use the quite general framework of Cousot's abstract interpretation for the particular analysis of multi-dimensional array indexes. Since such indexes are integer tuples, a relational integer analysis is first required. This analysis results from a combination of existing interval-based and congruence-based analyses. Two orthogonal problems are directly concerned with the results of such an analysis: parallelization/vectorization, through dependence analysis, and the data locality problem, used for array storage management. After introducing the analysis algorithm, this paper describes on a complete example how to use it in order to optimize array storage.

"Array Indices Relational Semantic Analysis Using Rational Cosets and Trapezoids"(thesis)
Authors: Masdupuy, F.
Date: 1993-12
Semantic analysis of program numerical variables consists in statically and automatically discovering properties verified at execution time. Different sets of properties (equality, inequality and congruence relations) have already been studied. This thesis proposes a generalization of some of these classes of properties. More specifically, abstract interpretation is used to design, on the one hand, a set of properties generalizing intervals and cosets on $\\mathbb{Z}$ and, on the other hand, a generalization of trapezoids and linear congruence equation systems on $\\mathbb{Z}^n$. A rational abstraction of these properties is defined to get safe approximations, with a polynomial complexity in the number of the considered variables, of the integer property operators. These analyses, generally more precise than the combination of the analyses they derive from, allow the kind of properties (inequality or congruence relations) leading to relevant information for the considered program to be chosen dynamically. The described relational analysis corresponds to numerous patterns encountered in the field of scientific computation. It is very well adapted to the analysis of array index variables and also to the abstract description of integer arrays.

"A New Numerical Abstract Domain Based on Difference-Bound Matrices"(article)
Authors: Miné, A.
Date: 2001
Pages: 155-172
This paper presents a new numerical domain for static analysis by abstract interpretation. This domain allows us to represent invariants of the form $x - y \\leq c$ and $\\pm x \\leq c$, where $x$ and $y$ are variable values and $c$ is an integer or real constant. Abstract elements are represented by Difference-Bound Matrices, widely used by model-checkers, but we had to design new operators to meet the needs of abstract interpretation. The result is a complete lattice of infinite height featuring widening, narrowing and common transfer functions. We focus on giving an efficient representation and graph-based algorithms (with a cost at most cubic in the number of variables), and claim that this domain always performs more precisely than the well-known interval domain. To illustrate the precision/cost tradeoff of this domain, we have implemented simple abstract interpreters for toy imperative and parallel languages which allowed us to prove some non-trivial algorithms correct.
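The shortest-path-closure idea underlying Difference-Bound Matrices can be sketched as follows; this is an illustration of the general technique, not the operators defined in the paper:

```python
# Minimal sketch of a Difference-Bound Matrix over variables
# x0 .. x{n-1}: entry m[i][j] records the bound x_j - x_i <= m[i][j]
# (infinity meaning "no constraint").  The Floyd-Warshall closure
# tightens every entry to the best bound implied by the others.
INF = float('inf')

def closure(m):
    n = len(m)
    c = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # A path i -> k -> j gives the bound c[i][k] + c[k][j]
                # on x_j - x_i; keep the tighter of the two.
                c[i][j] = min(c[i][j], c[i][k] + c[k][j])
    return c

# x1 - x0 <= 1 and x2 - x1 <= 2 together imply x2 - x0 <= 3:
m = [[0,   1,   INF],
     [INF, 0,   2],
     [INF, INF, 0]]
```

A negative entry on the diagonal after closure would signal an unsatisfiable system, which is how emptiness tests on such domains are typically phrased.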

"The Octagon Abstract Domain"(article)
Authors: Miné, A.
Date: 2001
Pages: 310-319
This article presents a new numerical abstract domain for static analysis by abstract interpretation. It extends our previously proposed DBM-based numerical abstract domain and allows us to represent invariants of the form ($\\pm x \\pm y \\leq c$), where $x$ and $y$ are program variables and $c$ is a real constant. We focus on giving an efficient representation based on Difference-Bound Matrices—$\\mathcal{O}(n^2)$ memory cost, where $n$ is the number of variables—and graph-based algorithms for all common abstract operators—$\\mathcal{O}(n^3)$ time cost. This includes a normal form algorithm to test equivalence of representation and a widening operator to compute least fixpoint approximations.

"A Few Graph-Based Relational Numerical Abstract Domains"(article)
Authors: Miné, A.
Date: 2002
Pages: 117-132
This article presents the systematic design of a class of relational numerical abstract domains from non-relational ones. Constructed domains represent sets of invariants of the form $(v_j - v_i \\in C)$, where $v_j$ and $v_i$ are two variables, and $C$ lives in an abstraction of $\\mathcal{P}(\\mathbb{Z})$, $\\mathcal{P}(\\mathbb{Q})$, or $\\mathcal{P}(\\mathbb{R})$. We will call this family of domains weakly relational domains. The underlying concept allowing this construction is an extension of potential graphs and shortest-path closure algorithms in exotic-like algebras. Example constructions are given in order to retrieve well-known domains as well as new ones. Such domains can then be used in the Abstract Interpretation framework in order to design various static analyses. A major benefit of this construction is its modularity, allowing new abstract domains to be quickly implemented from existing ones.

"Relational Abstract Domains for the Detection of Floating-Point Run-Time Errors"(article)
Authors: Miné, A.
Date: 2004
Pages: 3-17
We present a new idea to adapt relational abstract domains to the analysis of IEEE 754-compliant floating-point numbers in order to statically detect, through Abstract Interpretation-based static analyses, potential floating-point run-time exceptions such as overflows or invalid operations. In order to take the non-linearity of rounding into account, expressions are modeled as linear forms with interval coefficients. We show how to extend already existing numerical abstract domains, such as the octagon abstract domain, to efficiently abstract transfer functions based on interval linear forms. We discuss specific fixpoint stabilization techniques and give some experimental results.

"Weakly Relational Numerical Abstract Domains"(thesis)
Authors: Miné, A.
Date: 2005-03
The goal of this thesis is to design techniques related to the automatic analysis of computer programs. One major application is the creation of tools to discover bugs before they actually happen, an important goal in a time when critical yet complex tasks are performed by computers. We will work in the Abstract Interpretation framework, a theory of sound approximations of program semantics. We will focus, in particular, on numerical abstract domains that specialise in the automatic discovery of properties of the numerical variables of programs. In this thesis, we introduce new numerical abstract domains: the zone abstract domain (that can discover invariants of the form $X - Y \\leq c$), the zone congruence domain ($X \\equiv Y + a [b]$), and the octagon domain ($\\pm X \\pm Y \\leq c$), among others. These domains rely on the classical notions of potential graphs, difference bound matrices, and algorithms for the shortest-path closure computation. They lie, in terms of cost and precision, between non-relational domains (such as the interval domain), that are very imprecise, and classical relational domains (such as the polyhedron domain), that are very costly. We will call them "weakly relational". We also introduce some methods to apply relational domains to the analysis of floating-point numbers, which was previously only possible using imprecise, non-relational, domains. Finally, we introduce the so-called linearisation and symbolic constant propagation generic symbolic methods to enhance the precision of any numerical domain, for only a slight increase in cost. The techniques presented in this thesis have been integrated within Astrée, an analyser for critical embedded avionic software, and were instrumental in proving the absence of run-time errors in the fly-by-wire software used in Airbus A340 and A380 planes. Experimental results show the usability of our methods on real-life applications.

"The Double Description Method"(anthos)
Authors: Motzkin, T. S., Raiffa, H., Thompson, G. L., Thrall, R. M.
Date: 1953
Pages: 51-73
The purpose of this paper is to present a computational method for the determination of the value and of all solutions of a two-person zero-sum game with a finite number of pure strategies, and for the solution of general finite systems of linear inequalities and corresponding maximization problems.

"Fast Decision Algorithms based on Union and Find"(article)
Authors: Nelson, G., Oppen, D. C.
Date: 1977
Pages: 114-119
The journal version of this paper is [NelsonO80]

"Fast Decision Procedures Based on Congruence Closure"(article)
Authors: Nelson, G., Oppen, D. C.
Date: 1980
Pages: 356-364
The notion of the congruence closure of a relation on a graph is defined and several algorithms for computing it are surveyed. A simple proof is given that the congruence closure algorithm provides a decision procedure for the quantifier-free theory of equality. A decision procedure is then given for the quantifier-free theory of LISP list structure based on the congruence closure algorithm. Both decision procedures determine the satisfiability of a conjunction of literals of length n in average time $O(n \\log n)$ using the fastest known congruence closure algorithm. It is also shown that if the axiomatization of the theory of list structure is changed slightly, the problem of determining the satisfiability of a conjunction of literals becomes NP-complete. The decision procedures have been implemented in the authors' simplifier for the Stanford Pascal Verifier.
An earlier version of this paper is [NelsonO77]

"Integer and Combinatorial Optimization"(book)
Authors: Nemhauser, G. L., Wolsey, L. A.
Date: 1988

"A Library for Z-Polyhedral Operations"(report)
Authors: Nookala, S. P. K., Risset, T.
Date: 2000
Polyhedra are commonly used for representing iteration domains of loop nests with unit stride: the iteration domain of a loop is associated with the set of integer points contained in a polyhedron. $\\mathcal{Z}$-polyhedra are a natural extension of polyhedra, in the sense that they represent iteration domains of loop nests with non-unit stride (they are polyhedra intersected with integral lattices). The polyhedral library (Polylib), developed for computing on polyhedra, is now widely used in the automatic parallelization research community. This report describes the implementation of the extension of Polylib to $\\mathcal{Z}$-polyhedra. We describe the algorithms used for computing on lattices and $\\mathcal{Z}$-polyhedra, and we provide technical documentation for the $\\mathcal{Z}$-polyhedral library (data structures, functions available).

"Combinatorial Optimization: Algorithms and Complexity"(book)
Authors: Papadimitriou, C. H., Steiglitz, K.
Date: 1998

"Two Easy Theories whose Combination is Hard"(manuscript)
Authors: Pratt, V. R.
Date: 1977-09
We restrict attention to the validity problem for unquantified disjunctions of literals (possibly negated atomic formulae) over the domain of integers, or what is just as good, the satisfiability problem for unquantified conjunctions. When $=$ is the only predicate symbol and all function symbols are left uninterpreted, or when $\\geq$ is the only predicate symbol (taking its standard interpretation on the integers) and the only terms are variables and integers, then satisfiability is decidable in polynomial time. However when $\\geq$ and uninterpreted function symbols are allowed to appear together, satisfiability becomes an NP-complete problem. This combination of the two theories can arise for example when reasoning about arrays (the uninterpreted function symbols) and subscript manipulation (where $\\geq$ arises in considering subscript bounds). These results are unaffected by the presence of successor, which also arises commonly in reasoning about subscript manipulation.
Memo sent to Nelson and Oppen concerning a preprint of their paper [NelsonO77]

"Generation of Efficient Nested Loops from Polyhedra"(article)
Authors: Quilleré, F., Rajopadhye, S. V., Wilde, D.
Date: 2000
Pages: 469-498
Automatic parallelization in the polyhedral model is based on affine transformations from an original computation domain (iteration space) to a target space-time domain, often with a different transformation for each variable. Code generation is an often ignored step in this process that has a significant impact on the quality of the final code. It involves making a trade-off between code size and control code simplification/optimization. Previous methods of doing code generation are based on loop splitting; however, they have non-optimal behavior when working on parameterized programs. We present a general parameterized method for code generation based on dual representation of polyhedra. Our algorithm uses a simple recursion on the dimensions of the domains, and enables fine control over the trade-off between code size and control overhead.

"On Manipulating Z-Polyhedra"(report)
Authors: Quinton, P., Rajopadhye, S., Risset, T.
Date: 1996-07
We address the problem of computation upon Z-polyhedra, which are intersections of polyhedra and integral lattices. We introduce a canonical representation for Z-polyhedra which allows comparisons and transformations of Z-polyhedra to be performed with the help of a computational kernel on polyhedra. This contribution is a step towards the manipulation of images of polyhedra by affine functions, and has applications in the domain of automatic parallelization and parallel VLSI synthesis.

"On Manipulating Z-Polyhedra Using a Canonic Representation"(article)
Authors: Quinton, P., Rajopadhye, S., Risset, T.
Date: 1997
Pages: 181-194
Z-Polyhedra are intersections of polyhedra and integral lattices. They arise in the domain of automatic parallelization and VLSI array synthesis. In this paper, we address the problem of computation on Z-polyhedra. We introduce a canonical representation for Z-polyhedra which allows one to perform comparisons and transformations of Z-polyhedra with the help of a computational kernel on polyhedra.

"Intermediate-representation Recovery from Low-level Code"(article)
Authors: Reps, T. W., Balakrishnan, G., Lim, J.
Date: 2006
Pages: 100-111
The goal of our work is to create tools that an analyst can use to understand the workings of COTS components, plugins, mobile code, and DLLs, as well as memory snapshots of worms and virus-infected code. This paper describes how static analysis provides techniques that can be used to recover intermediate representations that are similar to those that can be created for a program written in a high-level language.

"Rappresentazione e manipolazione di poliedri convessi per l'analisi e la verifica di programmi"(thesis)
Authors: Ricci, E.
Date: 2002-07
In Italian

"Executable Analysis using Abstract Interpretation with Circular Linear Progressions"(article)
Authors: Sen, R., Srikant, Y. N.
Date: 2007
Pages: 39-48
We propose a new abstract domain for static analysis of executable code. Concrete states are abstracted using Circular Linear Progressions (CLPs). CLPs model computations using a finite word length as is seen in any real life processor. The finite abstraction allows handling overflow scenarios in a natural and straight-forward manner. Abstract transfer functions have been defined for a wide range of operations which makes this domain easily applicable for analyzing code for a wide range of ISAs. CLPs combine the scalability of interval domains with the discreteness of linear congruence domains. We also present a novel, lightweight method to track linear equality relations between static objects that is used by the analysis to improve precision. The analysis is efficient, the total space and time overhead being quadratic in the number of static objects being tracked.

"Executable Analysis with Circular Linear Progressions"(report)
Authors: Sen, R., Srikant, Y. N.
Date: 2007
We propose a new abstract domain for static analysis of executable code. Concrete state is abstracted using Circular Linear Progressions (CLPs). CLPs model computations using a finite word length as is seen in any real life processor. The finite abstraction allows handling overflow scenarios in a natural and straight-forward manner. Abstract transfer functions have been defined for a wide range of operations which makes this domain easily applicable for analysing code for a wide range of ISAs. CLPs combine the scalability of interval domains with the discreteness of linear congruence domains. We also present a novel, lightweight method to track linear equality relations between static objects that is used by the analysis to improve precision. The analysis is efficient, the total space and time overhead being quadratic in the number of static objects being tracked.

"Deciding Linear Inequalities by Computing Loop Residues"(article)
Authors: Shostak, R. E.
Date: 1981
Pages: 769-779
V. R. Pratt has shown that the real and integer feasibility of sets of linear inequalities of the form $x \\leq y + c$ can be decided quickly by examining the loops in certain graphs. Pratt's method is generalized, first to real feasibility of inequalities in two variables and arbitrary coefficients, and ultimately to real feasibility of arbitrary sets of linear inequalities. The method is well suited to applications in program verification.
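The graph criterion this abstract refers to can be sketched as follows; the Bellman-Ford formulation and all names are ours, an illustration of Pratt's observation rather than Shostak's generalized procedure:

```python
# A system of constraints x <= y + c is encoded as a graph with an
# edge y -> x of weight c.  It is feasible over the reals exactly
# when no cycle ("loop") has negative total weight; repeated
# relaxation detects such a cycle (Bellman-Ford with an implicit
# zero-weight super-source, i.e. dist initialized to 0 everywhere).
def feasible(n, constraints):
    # constraints: list of (x, y, c) meaning x <= y + c, with
    # variables numbered 0 .. n-1.
    dist = [0] * n
    for _ in range(n):
        for x, y, c in constraints:
            if dist[y] + c < dist[x]:
                dist[x] = dist[y] + c
    # If any bound can still improve, a negative cycle exists and
    # the system is infeasible.
    return all(dist[y] + c >= dist[x] for x, y, c in constraints)

# x0 <= x1 - 1 together with x1 <= x0 + 2 is satisfiable, while
# x0 <= x1 - 1 with x1 <= x0 - 2 forms a loop of weight -3.
```

Summing the constants around any cycle gives a bound of the form `0 <= (cycle weight)`, which is why only negative loops matter.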

"Theory of Linear and Integer Programming"(book)
Authors: Schrijver, A.
Date: 1999

"Taming the Wrapping of Integer Arithmetic"(article)
Authors: Simon, A., King, A.
Date: 2007
Pages: 121-136
Variables in programs are usually confined to a fixed number of bits and results that require more bits are truncated. Due to the use of 32-bit and 64-bit variables, inadvertent overflows are rare. However, a sound static analysis must reason about overflowing calculations and conversions between unsigned and signed integers; the latter remaining a common source of subtle programming errors. Rather than polluting an analysis with the low-level details of modelling two's complement wrapping behaviour, this paper presents a computationally light-weight solution based on polyhedral analysis which eliminates the need to check for wrapping when evaluating most (particularly linear) assignments.

"Subsumption and Indexing in Constraint Query Languages with Linear Arithmetic Constraints"(article)
Authors: Srivastava, D.
Date: 1993
Pages: 315-343
Bottom-up evaluation of a program-query pair in a constraint query language (CQL) starts with the facts in the database and repeatedly applies the rules of the program, in iterations, to compute new facts, until we have reached a fixpoint. Checking if a fixpoint has been reached amounts to checking if any "new" facts were computed in an iteration. Such a check also enhances efficiency in that subsumed facts can be discarded, and not be used to make any further derivations in subsequent iterations, if we use Semi-naive evaluation. We show that the problem of subsumption in CQLs with linear arithmetic constraints is co-NP-complete, and present a deterministic algorithm, based on the divide and conquer strategy, for this problem. We also identify polynomial-time sufficient conditions for subsumption and non-subsumption in CQLs with linear arithmetic constraints. We adapt indexing strategies from spatial databases for efficiently indexing facts in such a CQL: such indexing is crucial for performance in the presence of large databases. Based on a recent algorithm by Lassez and Lassez [LL] for quantifier elimination, we present an incremental version of the algorithm to check for subsumption in CQLs with linear arithmetic constraints.

"Convexity and Optimization in Finite Dimensions I"(book)
Authors: Stoer, J., Witzgall, C.
Date: 1970

"Hacker's Delight"(book)
Authors: Warren, H. S., Jr.
Date: 2003

"A Library for Doing Polyhedral Operations"(thesis)
Authors: Wilde, D. K.
Date: 1993-12
Polyhedra are geometric representations of linear systems of equations and inequalities. Since polyhedra are used to represent the iteration domains of nested loop programs, procedures for operating on polyhedra are useful for doing loop transformations and other program restructuring transformations which are needed in parallelizing compilers. Thus a need for a library of polyhedral operations has recently been recognized in the parallelizing compiler community. Polyhedra are also used in the definition of domains of variables in systems of affine recurrence equations (SARE). Alpha is a language which is based on the SARE formalism in which all variables are declared over finite unions of polyhedra. This report describes a library of polyhedral functions which was developed to support the Alpha language environment, and which is general enough to satisfy the needs of researchers doing parallelizing compilers. This report describes the data structures used to represent domains, gives motivations for the major design decisions, and presents the algorithms used for doing polyhedral operations. This library has been written and tested, and has been in use since the beginning of 1993 by research facilities in Europe and Canada. The library is freely distributed by ftp.
Also published as IRISA Publication interne 785, Rennes, France, 1993

"Elementare Theorie der konvexen Polyeder"(article)
Authors: Weyl, H.
Date: 1935
Pages: 290-306
English translation in [Weyl50]

"The Elementary Theory of Convex Polyhedra"(anthos)
Authors: Weyl, H.
Date: 1950
Pages: 3-18
Translated from [Weyl35] by H. W. Kuhn
