Contributed Talks
Crystals are highly structured periodic materials, defined by a three-dimensional unit cell that periodically tiles an infinite three-dimensional space. This tiling allows for many functionally identical unit cells, most obviously those that are the same up to translation. To explore the space of possible unit cells within a discrete unit space, it is necessary to capture these symmetries.
In one dimension, these symmetries can be easily captured by representing the unit cell as a necklace, the lexicographically minimal representative of a cyclic string. Motivated by this application to crystals, we study the generalisation of necklaces to higher dimensions. This talk will focus on generalising several key results on one-dimensional necklaces to the multi-dimensional case: the classical problem of enumeration, algorithms for the efficient generation of all necklaces, and polynomial-time algorithms for the two inverse operations of ranking and unranking necklaces.
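As a small illustration of the one-dimensional notion (a naive sketch, not the algorithms discussed in the talk), the necklace representative of a cyclic string can be computed by taking the lexicographically least of all its rotations:

```haskell
-- The necklace representative of a cyclic string: the lexicographically
-- least among all rotations. Quadratic for clarity; Booth's algorithm
-- achieves the same in linear time.
necklace :: Ord a => [a] -> [a]
necklace [] = []
necklace xs = minimum [rotate k xs | k <- [0 .. length xs - 1]]
  where
    rotate k ys = drop k ys ++ take k ys
```

For example, `necklace "baab"` yields `"aabb"`, since the rotations of `"baab"` are `"baab"`, `"aabb"`, `"abba"` and `"bbaa"`.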
Higher-order logic is the natural language for many areas of mathematics. As such, it would be useful to have strong automation for higher-order reasoning. Unfortunately, automation for higher-order logic has lagged behind that for first-order logic. The Vampire theorem prover, developed in Manchester, along with other leading first-order theorem provers, is based on the superposition calculus. Superposition is essentially a brute-force search through the set of all conclusions from given axioms. However, it adds powerful simplification techniques that allow the deletion of redundant conclusions, thus keeping the search space manageable. For many years it was an open question whether superposition could be extended to higher-order logic. In this presentation, I provide details of recent research extending superposition to higher-order logic. This has been done in two ways, one of which has been implemented in Vampire.
Amongst the standard formal models of computation, there are no bit-level machine models for parallelism and concurrency that permit computer simulations in tractable amounts of time and space, for the investigation not just of trivial programming constructs but also of more complex high-level programs. Such a machine would provide a basis for investigating processes running on a basic device, rather than in a formalism abstracted from hardware, without introducing biases from the particulars of higher-level architectures. The α-Ram family of deterministic machines provides not only simple semantics and neutral machine platforms for language design, but also opportunities for developing specialized and more general-purpose architectures. Physical constraints can be incrementally introduced into the design process in a least restrictive order, thereby reducing bias towards pre-conceived architectural types.
Game comonads, introduced for the pebble game by Abramsky, Dawar and Wang (LiCS 2017), and expanded to EF and bisimulation games by Abramsky and Shah (CSL 2018), offer a compositional perspective on the vast landscape of Spoiler-Duplicator games used in finite model theory and descriptive complexity. These constructions exhibit surprising connections between games and elegantly link the games in question with well-known algorithms for constraint satisfaction and graph isomorphism, and related combinatorial parameters such as tree-width.
In my talk, I will review this new direction in finite model theory and sketch some upcoming work by me and Anuj Dawar on a new family of game comonads which extends these surprising connections to logics with generalised quantifiers, formally linking work of Hella (1990) and Hairgora & Luosto (2014) and introducing a new family of generalised tree-width parameters.
We study the problem of finding "fair" stable matchings in the Stable Marriage problem with Incomplete Lists (SMI). For an instance $I$ of SMI there may be many stable matchings, providing significantly different outcomes for the sets of men and women. We introduce two new notions of fairness in SMI. Firstly, a regret-equal stable matching minimises the difference in ranks of a worst-off man and a worst-off woman, among all stable matchings. Secondly, a min-regret sum stable matching minimises the sum of ranks of a worst-off man and a worst-off woman, among all stable matchings. We present two new efficient algorithms to find stable matchings of these types. Additionally, we discuss experiments that compare several types of fair optimal stable matchings and show that our algorithm to find a regret-equal stable matching produces matchings that are competitive with respect to other fairness objectives.
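To make the two objectives concrete, here is a minimal sketch (the rank representation is an illustrative assumption, not the authors' code) that scores a given stable matching from the multisets of ranks that matched men and women assign their partners:

```haskell
-- Assumed, nonempty inputs: the ranks each matched man gives his partner,
-- and likewise for the women (lower rank = more preferred).
regret :: [Int] -> Int
regret = maximum               -- rank of the worst-off agent on one side

-- Objective minimised by a regret-equal stable matching.
regretEqualScore :: [Int] -> [Int] -> Int
regretEqualScore menRanks womenRanks =
  abs (regret menRanks - regret womenRanks)

-- Objective minimised by a min-regret sum stable matching.
minRegretSumScore :: [Int] -> [Int] -> Int
minRegretSumScore menRanks womenRanks =
  regret menRanks + regret womenRanks
```

Each objective is then minimised over all stable matchings of the instance.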
Is there an algorithm that takes a game in normal form as input, and outputs a Nash equilibrium? If the payoffs are integers, the answer is yes, and a lot of work has been done on its computational complexity. If the payoffs are permitted to be real numbers, the answer is no, for continuity reasons. It is worthwhile to investigate the precise degree of non-computability (the Weihrauch degree), since knowing the degree entails what other approaches are available (e.g., is there a randomized algorithm with positive success chance?). The two-player case has already been fully classified, but the multiplayer case remains open and is addressed here. Our approach involves classifying the degree of finding roots of polynomials, and lifting this to systems of polynomial inequalities via cylindrical algebraic decomposition. This is joint work with Arno Pauly.
Machine Learning (ML) refers to algorithms that use statistical techniques to give computers the ability to learn from data. ML has been successfully applied in a wide variety of domains over the last decade. Our hypothesis is that ML could be used to improve the implementation of symbolic computation algorithms: to steer them away from their worst-case complexity.
We report the results of EPSRC Project EP/R019622/1 which has considered the problem of selecting the variable ordering for a cylindrical algebraic decomposition (CAD). CAD is a key algorithm in real algebraic geometry and computational logic, used e.g. for quantifier elimination. We have experimented with different ML models, implemented techniques for feature generation from polynomial sets, proposed an improved measure of accuracy for such problems and used this to improve cross-validation hyper-parameter selection. These results have been published in the conferences CICM 2019, SC-Square 2019 and MACIS 2019.
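As a loose illustration of what feature generation from polynomial sets might look like (the features below are invented for exposition; the project's actual feature set is described in the cited papers), one can compute simple per-variable statistics over the input polynomials:

```haskell
-- A monomial is a coefficient with one exponent per variable (all
-- exponent vectors assumed the same length); a polynomial is a list
-- of monomials; the CAD input is a list of polynomials.
type Monomial = (Rational, [Int])
type Poly     = [Monomial]

-- Feature: maximum degree of variable i across the polynomial set.
maxDegree :: Int -> [Poly] -> Int
maxDegree i ps = maximum (0 : [ exps !! i | p <- ps, (_, exps) <- p ])

-- Feature: proportion of monomials in which variable i occurs at all.
occurrence :: Int -> [Poly] -> Double
occurrence i ps = fromIntegral (length hits) / fromIntegral (max 1 (length ms))
  where
    ms   = [ exps | p <- ps, (_, exps) <- p ]
    hits = [ e | e <- ms, e !! i > 0 ]
```

Vectors of such statistics, one per candidate variable, are the kind of input an ML model can use to predict a good CAD variable ordering.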
Solving is a recurring problem in computer science, which is why there are strong off-the-shelf universal solving engines such as SAT, SMT or even FOL solvers. These solving engines, however, are quite hard to use: they lack proper embeddings into programming languages, and their expressive power does not suffice to express general recursion. There are languages with solving capabilities, such as PROLOG or Curry, but their solving engines are rudimentary compared to state-of-the-art SMT solvers. This research aims to create a programming language with SAT-like solving capabilities. A formalism is introduced that could furthermore be used to perform complexity analysis on solvers, or that might even be used to automatically synthesize a solving engine.
Functions with side-effects (e.g. mutable state, exceptions) introduce difficulties in reasoning about program semantics. Functional programmers are well acquainted with representing effectful computation as pure code that uses monads. Wadler and Thiemann have shown that there is a correspondence between monads and effect systems, a kind of static analysis used for computational effects. More recent research has looked into applications of the more general concept of graded monads.
In this talk I will introduce graded monads and focus on how they are convenient for representing different kinds of static analysis. A part of this talk will be based on my recent work with Alan Mycroft and Dominic Orchard.
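As a rough sketch of the concept (one common formulation, not necessarily the one used in the talk), here is a monad graded by the additive monoid of type-level naturals, where the grade statically tracks a count of primitive effects:

```haskell
{-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
import Data.Kind (Type)
import GHC.TypeLits (Nat, type (+))

-- A monad graded by (Nat, +, 0): return sits at the unit grade 0,
-- and bind adds the grades of the two computations.
class Graded (m :: Nat -> Type -> Type) where
  gret  :: a -> m 0 a
  gbind :: m i a -> (a -> m j b) -> m (i + j) b

-- A toy instance: the grade is a phantom counting primitive "ticks",
-- so a computation's type records how many effects it performs.
newtype Counted (n :: Nat) a = Counted a

instance Graded Counted where
  gret = Counted
  gbind (Counted x) f = let Counted y = f x in Counted y

tick :: Counted 1 ()
tick = Counted ()

-- The grade 2 = 1 + 1 is computed by the type checker.
twoTicks :: Counted 2 ()
twoTicks = tick `gbind` \_ -> tick
```

An ordinary monad is the special case where the grading monoid is trivial; richer monoids let the types carry the results of an effect analysis.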
Voevodsky's Univalent Foundations (UF), otherwise known as Homotopy Type Theory, is a modern foundation for mathematics, based on Martin-Löf Type Theory. By default, it is a constructive system and, without adding resizing axioms, it is predicative.
In this talk we will explain how to develop basic domain theory constructively and predicatively in UF. In particular, we will show how to define the Scott model of the programming language PCF and prove fundamental properties such as soundness and computational adequacy.
We will highlight (1) our constructive treatment, (2) important features of UF and (3) predicativity concerns. To illustrate (1): we use the Escardó-Knapp lifting monad to constructively account for the non-termination in PCF. A prime example of (2) in our development is the propositional truncation.
If time permits, we will report on ongoing work, such as our treatment of continuous and algebraic domains.
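The Escardó-Knapp lifting is a type-theoretic construction with no direct Haskell counterpart, but its programming-language cousin, Capretta's delay monad, conveys the idea of accounting for non-termination constructively; a minimal sketch:

```haskell
-- Capretta's delay monad: a computation either returns now, or makes
-- one more observable step. Partiality becomes data, not a side effect.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure          = Now
  Now f   <*> d = fmap f d
  Later f <*> d = Later (f <*> d)

instance Monad Delay where
  Now a   >>= f = f a
  Later d >>= f = Later (d >>= f)

-- The diverging computation: the "bottom" element interpreting
-- non-terminating programs in a lifted domain.
bottom :: Delay a
bottom = Later bottom

-- Constructively, a Delay can only be observed to a finite depth.
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a)   = Just a
runFor 0 (Later _) = Nothing
runFor n (Later d) = runFor (n - 1) d
```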
Property testers are probabilistic algorithms that aim for constant running time while providing accuracy guarantees; they are therefore highly relevant for approximately solving hard problems on the very large instances encountered in big-data applications.
In this talk we consider the bounded-degree model of property testing. While there are NP-hard problems known to have no constant-query-complexity testers (3-SAT, 3-colourability), there is a class of NP-hard problems for which constant-query-complexity property testers do exist. This follows from a result of Newman and Sohler in 2013, which implies that all properties of bounded-degree planar graphs are testable, several of which are NP-hard (e.g. Hamiltonicity).
We recently discovered that for Hamiltonicity and dominating set there is no constant-query-complexity tester, using the property-testing equivalent of polynomial reductions, namely local reductions. In general, however, it is not clear how NP-hardness relates to property testing in the bounded-degree model.
The SAT problem for 2-CNFs is a well-known special case where an NP-complete problem can be solved in linear time. We are interested in minimally unsatisfiable 2-CNFs (2-MUs), and we present a complete classification up to isomorphism (renaming of variables, and flipping of literals) of all 2-MUs.
The bulk of the work is in establishing a link to weak double cycles (WDCs), a nice class of digraphs: the implication digraphs of 2-MUs in the main cases are WDCs, and each such WDC has exactly one skew-symmetry (the map which transforms the digraph back into a 2-CNF). Thus the isomorphism problem for 2-MUs reduces to the isomorphism problem for WDCs, which are essentially one big cycle of small cycles, so their isomorphisms are understood via the symmetries of the cycle (the dihedral group).
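For readers unfamiliar with the construction, the implication digraph of a 2-CNF is textbook material (the WDC analysis above is the talk's contribution): each clause contributes two contrapositive arcs, and negating every literal reverses all arcs, which is the skew-symmetry in question. A minimal sketch:

```haskell
-- Literals: positive or negative occurrences of variables.
data Lit = Pos Int | Neg Int deriving (Eq, Show)

neg :: Lit -> Lit
neg (Pos v) = Neg v
neg (Neg v) = Pos v

-- Each clause (a OR b) contributes the arcs  not-a -> b  and  not-b -> a.
-- The vertex map  l <-> not-l  reverses every arc so produced.
implicationDigraph :: [(Lit, Lit)] -> [(Lit, Lit)]
implicationDigraph clauses =
  concat [ [(neg a, b), (neg b, a)] | (a, b) <- clauses ]
```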
In the Product-Mix Auction proposed by Paul Klemperer, bidders express their strong substitutes buying preferences in a novel bidding language by submitting a list of positive and negative `dot bids'. While it has been shown that all strong substitutes buying preferences can be uniquely expressed using positive and negative bids, there exists no mechanism in the literature that assists bidders in compiling their bid lists.
Assuming access to a demand correspondence oracle, we provide an algorithm that computes the unique list of bids corresponding to a bidder's buying preferences. In the special case where buying preferences can be expressed using positive bids only, we have an efficient algorithm that learns the bids in linear time. We also show super-polynomial lower bounds on the query complexity of computing the unique list of bids in the general case where bids may be positive and negative.
Graph Coloring is a well-studied NP-complete problem in Theoretical Computer Science. In this talk, we will discuss parameterized complexity approaches to the Pre-coloring Extension and List Coloring problems. Golovach, Paulusma and Song (Inf. Comput. 2014) asked to determine the parameterized complexity of the following problems parameterized by k: (1) Given a graph G, a clique modulator D (a clique modulator is a set of vertices whose removal results in a clique) of size at most k for G, and a list L(v) of colors for every vertex v of G, decide whether G has a proper list coloring; (2) Given a graph G, a clique modulator D of size k for G, and a pre-coloring \lambda_P: X \to Q for X \subseteq V(G), decide whether \lambda_P can be extended to a proper coloring of G using colors from Q. For Problem 1, we design an O*(2^k)-time randomized algorithm, and for Problem 2, we obtain a kernel with at most 3k vertices. Banik et al. (IWOCA 2019) proved that the following problem is fixed-parameter tractable and asked whether it admits a polynomial kernel: Given a graph G, an integer k, and a list L(v) of exactly (n - k) colors for every vertex v of G, decide whether there is a proper list coloring for G. We obtain a kernel with O(k^2) vertices and colors and a compression to a variation of this problem with O(k) vertices and O(k^2) colors.
This talk is based on joint work with Gregory Gutin, Sebastian Ordyniak, and Magnus Wahlström.
Bitcoin is a decentralised digital currency that serves as an alternative to existing transaction systems based on an external central authority for security. Although Bitcoin has many desirable properties, one of its fundamental shortcomings is its inability to process transactions at high rates. To address this challenge, many subsequent protocols either modify the rules of block acceptance and reward, or alter the graphical structure of the public ledger to a directed acyclic graph (DAG).
We introduce a general framework for ledger growth in a large class of DAG-based implementations. By assuming honest miner behaviour, we explore how different DAG-based protocols perform in terms of fairness (whether the block reward of a miner is proportional to their hash power) as well as efficiency (what proportion of transactions a ledger validates over time).
Our results demonstrate fundamental structural limits on how well DAG-based ledger protocols cope with a high transaction load.
In this talk we consider two possible formalisations of the three-dimensional Stable Roommates problem (3D-SR). In both, players specify preference lists over their peers, and the task is to partition the players into sets of size 3 based on their preferences.
We consider two methods of generalising preference lists over individuals into preferences over sets. The first method, B-preferences, compares sets according to the ‘best’ player in each; the second, W-preferences, according to the ‘worst’. The decision problem in each case asks whether we can partition the players into sets of three such that no three players would all prefer to be in a set together rather than remain in their current triples.
We name the corresponding two problems 3D-SR-B and 3D-SR-W respectively, and we show that each problem is NP-complete. This contrasts with the known polynomial-time solvability of the problems (known as Hedonic Games) if we allow coalition sets of any size.
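To make the blocking condition concrete, here is a minimal sketch (the rank representation and the strict-preference reading of "blocking" are illustrative assumptions, not the paper's formal definitions):

```haskell
-- Players are Ints; rank p q is the position of q in p's preference
-- list (lower is better). B-preferences value a set by its best-ranked
-- other member, W-preferences by its worst-ranked other member.
bValue, wValue :: (Int -> Int -> Int) -> Int -> [Int] -> Int
bValue rank p others = minimum [rank p q | q <- others, q /= p]
wValue rank p others = maximum [rank p q | q <- others, q /= p]

-- Does the triple {a,b,c} block the current partition under
-- B-preferences?  'current p' gives the triple p currently belongs to.
blocksB :: (Int -> Int -> Int) -> (Int -> [Int]) -> (Int, Int, Int) -> Bool
blocksB rank current (a, b, c) = all prefersNew trio
  where
    trio = [a, b, c]
    prefersNew p = bValue rank p trio < bValue rank p (current p)
```

A partition is stable when no triple of players satisfies this blocking predicate.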
At BCTCS 2006, I suggested the creation of an online repository of semantic definitions. The main idea was to define individual abstract constructs independently, and to specify programming languages by translation to combinations of abstract constructs.
Since 2011, the PLanCompS project [1] has been developing a component-based framework for semantics. A beta-release of a semantics repository is available [2], together with Haskell-based tools [3] for executing abstract constructs (and hence programs via translation). Cliff Jones has also developed a digital library of historical semantic descriptions [4].
In this talk, I will present the foundations of component-based semantics, and look at how far the PLanCompS project has come towards establishing semantics online.
[1] PLanCompS: Programming Language Components and Specifications. Project website. http://plancomps.org/
[2] Component-based semantics. Beta-release. https://plancomps.github.io/CBS-beta/
[3] Funcon-Tools. Haskell package for generating funcon interpreters. https://hackage.haskell.org/package/funcons-tools/docs/Funcons-Tools.html
[4] Semantic descriptions library. http://plancomps.org/semantic-descriptions-library/
Two-player win/lose games played on finite graphs are a central tool for verification. A common question is whether a winning strategy can be implemented by a finite automaton. As we consider quantitative objectives and heterogeneous systems, we want to move to multiplayer multi-objective games, and now ask for Nash equilibria realized by finite automata. I will present some transfer results that show under what conditions finite-memory determinacy of the two-player win/lose case yields finite-memory Nash equilibria in the multiplayer multi-outcome case. This is joint work with Stephane Le Roux, and based on the article https://doi.org/10.1016/j.ic.2018.02.024
In this talk I will present Intuitionistic Fixed Point logic (IFP) and its extensions that lie in the foundation of a new approach to program extraction.
After discussing some examples, I will outline the main challenges of the soundness proof, which come from the optimisations required to include a fair amount of classical logic and to obtain garbage-free programs.
Stable matching problems arise when agents with preferences must be matched in pairs, while avoiding any pair of agents who are not currently matched together but would both prefer to be matched to each other than to their current partners. Where agents can have incomplete preferences with ties, finding a largest stable matching is NP-hard, but such matchings are required in real-world applications. Preprocessing removes entries from agents' preference lists that do not affect any stable matching, which in turn reduces the time required to find largest stable matchings. We present new theory that allows the removal of more such entries, and then extend our new work to more general settings where agents may have integral capacities. We also present new algorithms that take advantage of this new theory, and perform computational experiments that show considerable improvements.
We present our progress towards fully automated symbolic reasoning about UML state machines. We do this in the context of institution theory and the Heterogeneous Framework (Hets), which allow the principled reuse of results and tools between different logics.
In particular, we integrate a language for simple UML state machines into Hets and show how Hets can be used to apply the automated theorem prover SPASS to state machine properties.
This is part of a larger effort to provide institution based tool-support and integrated semantics for UML diagrams.
This talk is dedicated to the memory of Erik Palmgren (1963–2019).
We revisit the article by Palmgren giving an embedding of iterated inductive definitions into Martin-Löf Type Theory, and explain in what sense it provides an early substantial solution to a revised Hilbert's program.
Palmgren's result did not provide a sharp lower bound. We present a restricted version of Martin-Löf Type Theory with W-types and one universe, for which Palmgren's embedding still works and for which his lower bound is sharp. We give a proof sketch for the sharpness of this bound.
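For orientation, the W-type mentioned above is the type of well-founded trees; its standard formation and introduction rules in Martin-Löf Type Theory are:

```latex
% W-type formation and introduction (standard rules):
\frac{A : \mathcal{U} \qquad B : A \to \mathcal{U}}
     {(\mathsf{W}\,x{:}A)\,B(x) : \mathcal{U}}
\qquad
\frac{a : A \qquad f : B(a) \to (\mathsf{W}\,x{:}A)\,B(x)}
     {\mathsf{sup}(a,f) : (\mathsf{W}\,x{:}A)\,B(x)}
```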
Quantiles, such as the median or percentiles, provide concise and useful information about the distribution of a collection of items, drawn from a totally ordered universe. We study data structures, called quantile summaries, which keep track of all quantiles, up to an error of at most $\varepsilon$. That is, an $\varepsilon$-approximate quantile summary first processes a stream of items and then, given any quantile query $0\le \phi\le 1$, returns an item from the stream which is a $\phi'$-quantile for some $\phi' = \phi \pm \varepsilon$. We focus on comparison-based quantile summaries that can only compare two items and are otherwise completely oblivious of the universe.
The best such deterministic quantile summary to date, due to Greenwald and Khanna (SIGMOD '01), stores at most $O((1/\varepsilon) \cdot \log(\varepsilon N))$ items, where $N$ is the number of items in the stream. We prove that this space bound is optimal by showing a matching lower bound.
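For intuition, the interface of a quantile summary can be sketched with a deliberately naive implementation that stores the entire stream; the whole point of Greenwald-Khanna is to answer the same queries, within $\varepsilon$, in far less space:

```haskell
import Data.List (sort)

-- A naive, space-profligate quantile summary: keep every item.
-- It answers phi-quantile queries exactly, hence within any eps.
data Summary a = Summary [a]

insert :: Ord a => a -> Summary a -> Summary a
insert x (Summary xs) = Summary (x : xs)

-- Return an item at (approximately) the phi-quantile of the stream.
query :: Ord a => Double -> Summary a -> a
query phi (Summary xs) = sorted !! idx
  where
    sorted = sort xs
    n      = length xs
    idx    = min (n - 1) (max 0 (floor (phi * fromIntegral n)))
```

A comparison-based summary in the sense above may use only the `Ord` operations here, never the items' actual values.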
The European Train Control System (ETCS) is a state-of-the-art railway control system, based on the communication between trains and interlockings via a Radio Block Centre (RBC). We are investigating the use of model-based testing in performing the quality control of RBC implementations in development. Using a formal model of ETCS based on a real-world implementation of an RBC, we can establish whether the comparison with a proven safety-verified model can in turn offer the same assurances to the real-world implementation under test, or whether issues in the implementation can be detected before heavy testing. Identifying the elements necessary for an abstracted yet sufficiently accurate model is key, and further models could be improved with additional details, or by using alternative modelling tools, while we aim to examine the advantages of using a formal proof alongside testing. In this talk we will discuss an instantiation of an ETCS model in Real-Time Maude based on data from an industry implementation, and a comparison between its outcomes and those of industrial simulations. There will also be a brief look at possible tools to be implemented for the modelling, and an outlook for the project going forward. The research is done in collaboration with our industry partner Siemens Rail Automation. The talk is based on joint work with Markus Roggenbach and Monika Seisenberger, Swansea University.
While we know numeral systems such as decimal and binary very well, there is a plethora of alternative ways to encode numbers. We look at some unusual systems and see what they have to offer as search space encodings. Characteristics such as range of value representability and unusual arithmetic laws are reviewed. Results from use in genetic algorithm encoding are discussed.
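As one example of the kind of alternative system surveyed (whether this particular one features in the talk is not stated here), base $-2$ ("negabinary") represents every integer, positive or negative, with digits 0 and 1 and no sign bit; a minimal conversion sketch:

```haskell
-- Convert an integer to its negabinary digits, most significant first.
-- Digit weights are 1, -2, 4, -8, ...; every integer has a unique form.
toNegabinary :: Integer -> [Integer]
toNegabinary 0 = [0]
toNegabinary n = reverse (go n)
  where
    go 0 = []
    go m = r : go q
      where
        (q0, r0) = m `quotRem` (-2)
        -- Force the remainder into {0, 1}, adjusting the quotient.
        (q, r)   = if r0 < 0 then (q0 + 1, r0 + 2) else (q0, r0)
```

For example, `toNegabinary (-3)` yields `[1,1,0,1]`, since $1\cdot(-8) + 1\cdot 4 + 0\cdot(-2) + 1\cdot 1 = -3$.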