
Proofs, Complexity and Verification

Programme January 2005 - December 2005

talks

28-11-05 Norbert Preining (Università di Siena): Many-valued logics - a short introduction and case study

The talk will start with a short introduction to the history and development of many-valued logics, from Aristotle to recent developments. The second part of the talk will, as a case study, discuss the relation between two non-classical logics: the logic of linearly ordered Kripke frames and Gödel logics. The talk is intended for a broad audience; no special knowledge beyond basic mathematics and classical logic is necessary to follow it.
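
As background for the case study (a standard definition from the literature, not part of the abstract): in Gödel logics, truth values range over a closed subset of $[0,1]$ containing $0$ and $1$, and the connectives are interpreted by

  \[ v(A \wedge B) = \min(v(A), v(B)), \qquad v(A \vee B) = \max(v(A), v(B)), \qquad v(A \to B) = \begin{cases} 1 & \text{if } v(A) \le v(B) \\ v(B) & \text{otherwise.} \end{cases} \]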

10-11-05 Roger Hindley: A New Old Typability Algorithm
03-11-05 Will Harwood: Approximant Collapse, Part II
28-10-05 Will Harwood: Approximant Collapse

Bisimilarity, a canonical notion of equivalence between processes, is defined co-inductively, but may be approached, and even reached, by its inductively defined approximants. We take a simple process algebra, the Basic Parallel Processes, and investigate the level at which an approximation becomes the thing itself.
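
For orientation (a standard definition, not part of the abstract): over a labelled transition system with state set $S$, the approximants of bisimilarity $\sim$ are defined by transfinite recursion,

  \[ \sim_0 \;=\; S \times S, \qquad \sim_\lambda \;=\; \bigcap_{\alpha < \lambda} \sim_\alpha \text{ for limit } \lambda, \]
  \[ p \sim_{\alpha+1} q \;\Longleftrightarrow\; \text{every } p \stackrel{a}{\to} p' \text{ is matched by some } q \stackrel{a}{\to} q' \text{ with } p' \sim_\alpha q', \text{ and symmetrically,} \]

and $\sim \;=\; \bigcap_\alpha \sim_\alpha$; the question is thus the least ordinal $\alpha$ at which $\sim_\alpha$ coincides with $\sim$ for BPP.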

08-07-05 Workshop on Better-Quasiorderings (BQOs)

Speakers:

  • Adrian Mathias, Réunion
  • Thomas Forster, Cambridge

27-06-05

Joint PCV and SSRS seminar:
Markus Michelbrink: Appetisers for category theory, Part III

30-06-05

Joint PCV and SASS workshop:
Moldable programming day

Presentations from the University of Bergen, Norway,
by Magne Haveraaen, Adis Hodzic, Anya Helene Bagge, Karl Trygve Kalleberg

  • 10:00 Introduction, Moldable signatures
  • 11:00 Using Programs as Institution models
  • 12:00-14:00 Lunch break
  • 14:00 Program transformations for programming
  • 15:00 Weaves for moldable software

24-06-05 Monika Seisenberger and Ulrich Berger: Program extraction from proofs, Part II

We explain the logical foundations of optimized program extraction and demonstrate the actual extraction of the normalization program from Tait's proof in the Minlog system. The talk will be self-contained.

23-06-05

Joint PCV and SSRS seminar:
Markus Michelbrink: Appetisers for category theory, Part II

20-06-05

Joint PCV and SSRS seminar:
Markus Michelbrink: Appetisers for category theory

This talk was originally intended as a gentle introduction to category theory, occasioned by José Fiadeiro's forthcoming visit. However, during my preparations two things became clear to me:

  1. The topic is too big to give a proper introduction in a one-hour talk (even for the most basic concepts: category, functor, natural transformation, limits, colimits; not to mention adjunctions, monads, algebras, ...).
  2. The more I prepared the talk, the more the material grew, and the more I realised how much I would enjoy presenting some of this material to an interested audience in a systematic way. I therefore decided to give just some motivation for why it is worthwhile to engage with category theory (possibly coinduction and/or adjoints of the pullback functor vs. quantifiers, as displayed below), and to offer a more detailed and elaborate tutorial for interested people during the following week.
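
The quantifiers-as-adjoints motivation mentioned above is, in Lawvere's formulation (a standard fact, added here only for orientation): for a function $f : X \to Y$, substitution along $f$ maps predicates on $Y$ to predicates on $X$, and this pullback functor $f^*$ has both a left and a right adjoint,

  \[ \exists_f \;\dashv\; f^* \;\dashv\; \forall_f, \]

so existential and universal quantification along $f$ arise as the two adjoints of substitution.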

26-05-05 Ulrich Berger: Program extraction from proofs

I present a nontrivial example of an efficient program extracted from a proof. The example concerns strong normalization for the simply typed lambda calculus via Tait's computability predicates. The program machine-extracted from the formal proof turns out to be `normalization by evaluation', which is known to be a very elegant and efficient method for computing the normal form of a lambda term. In the talk I will also briefly explain the background of the method of program extraction from proofs, and highlight some of the main problems and pitfalls in this area in general and in this case study in particular.

This is joint work with Pierre Letouzey and Helmut Schwichtenberg (LMU Munich).
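
The extracted program itself lives in the Minlog system; purely for orientation, here is a minimal, hypothetical Haskell sketch of normalisation by evaluation for untyped lambda terms in de Bruijn notation (the names Term, Value and normalize are ours, not Minlog's, and the sketch terminates only on terms that have a normal form):

  -- Terms with de Bruijn indices; values with embedded Haskell functions.
  data Term    = Var Int | Lam Term | App Term Term deriving Show
  data Value   = VLam (Value -> Value) | VNeutral Neutral
  data Neutral = NVar Int | NApp Neutral Value      -- de Bruijn levels

  -- Evaluation: interpret a term in an environment of values.
  eval :: [Value] -> Term -> Value
  eval env (Var i)   = env !! i
  eval env (Lam b)   = VLam (\v -> eval (v : env) b)
  eval env (App f a) = apply (eval env f) (eval env a)

  apply :: Value -> Value -> Value
  apply (VLam f)     v = f v
  apply (VNeutral n) v = VNeutral (NApp n v)

  -- Reification: read a value back as a normal-form term;
  -- d counts the binders passed so far.
  quote :: Int -> Value -> Term
  quote d (VLam f)     = Lam (quote (d + 1) (f (VNeutral (NVar d))))
  quote d (VNeutral n) = quoteNe d n

  quoteNe :: Int -> Neutral -> Term
  quoteNe d (NVar k)   = Var (d - k - 1)            -- level to index
  quoteNe d (NApp n v) = App (quoteNe d n) (quote d v)

  normalize :: Term -> Term
  normalize = quote 0 . eval []

For example, normalize (App (Lam (Var 0)) (Lam (Var 0))) evaluates to Lam (Var 0).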

11-05-05 Oliver Kullmann: On a theoretical foundation of heuristics for branching, Part II
06-05-05 Oliver Kullmann: On a theoretical foundation of heuristics for branching

I consider algorithms which have to solve a certain problem P, and do so by creating new problems P_1, ..., P_k (of the same kind) out of P, solving the P_i, and combining the solutions of P_1, ..., P_k into a solution for P.

(This is like the divide-and-conquer strategy, but here the subproblems can have any size, and the typical run will need exponential time.)

I assume that the combination step is cheap, so we ignore its cost. What concerns us is the choice of the "branching" (P_1, ..., P_k), for which I assume a list of candidates is given (which may have different lengths k).

The theory I'm envisaging is a theory about this choice.

A main stipulation is that of a "distance" d(P, P_i), a positive real number which measures how much "easier" P_i is than P. One part of the theory is concerned with the choice of d.

For my talk (perhaps 2 talks would be better) I want to assume that d is given. Then each possible branching (P_1, ..., P_k) yields a "branching tuple" (d(P, P_1), ..., d(P, P_k)), a k-dimensional vector of positive real numbers.

The next stipulation is that of a "projection" rho, which maps branching tuples to positive real numbers; the branching tuple with the "lowest cost" (as measured by rho) is considered the best.

The main question for my talk is the choice of rho. Given, for example, the three branching tuples

  1. (2, 2)
  2. (1, 3)
  3. (1.5, 4, 5.5)
which one should we choose?

I will present an axiomatic framework for projections, from which it follows that "in general" there is exactly one solution, one "generic projection". In the above example this generic projection says that number 3 is slightly better than number 1, which in turn is much better than number 2.
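
The abstract does not spell out the generic projection, but the verdict above is consistent with the tau-function from the branching-heuristics literature, which maps a branching tuple $(t_1, ..., t_k)$ to the unique $x > 1$ with $\sum_i x^{-t_i} = 1$ (smaller is better). A hypothetical Haskell sketch, assuming this reading:

  -- tau ts: the unique x > 1 with sum [x ** (-t) | t <- ts] == 1,
  -- found by bisection; assumes all t > 0 and at least two entries.
  tau :: [Double] -> Double
  tau ts = go 1 hi (100 :: Int)
    where
      f x = sum [x ** negate t | t <- ts] - 1             -- decreasing for x > 1
      hi  = fromIntegral (length ts) ** (1 / minimum ts)  -- f hi <= 0
      go a b 0 = (a + b) / 2
      go a b n
        | f m > 0   = go m b (n - 1)                      -- root lies to the right
        | otherwise = go a m (n - 1)
        where m = (a + b) / 2

On the example: tau [2,2] ≈ 1.414, tau [1,3] ≈ 1.466 and tau [1.5,4,5.5] ≈ 1.412, so number 3 narrowly beats number 1 and number 2 is clearly worst, matching the verdict above.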

15-04-05 Markus Michelbrink: Interfaces as games, programs as strategies, Part II
11-04-05 Markus Michelbrink: Interfaces as games, programs as strategies

Peter Hancock and Anton Setzer developed the notion of interface to introduce interactive programming into dependent type theory. We generalise their notion and obtain an even simpler definition of interfaces. With this definition the relationship between interfaces and games becomes apparent: from a game-theoretical point of view, interfaces are nothing other than special games, and programs are strategies for these games. There is an obvious notion of refinement, which coincides exactly with the intuition of what a refinement should do. Interfaces together with the refinement relation form a complete lattice, in the sense that every family of interfaces has a least common refinement, and there is an interface which every member of the family refines and which itself refines every other interface with this property. A program/strategy on a refinement of an interface gives a strategy on the interface.

We can define several operators on interfaces: tensor, par, choice, bang, et cetera. Every notion has a dual obtained by interchanging the viewpoints of player and opponent. Lollipop is defined as usual in terms of tensor and negation. We define the notion of a fair strategy on $A \multimap B$: essentially a pair of a strategy and a proof that the strategy eventually plays in both games. By slightly restricting the notion of interface we are able to define a composition of fair strategies on $A \multimap B$ and $B \multimap C$; the resulting strategy is again fair. We can define fair strategies on $A \multimap A$ and $A \otimes (A \multimap B) \multimap B$; these are versions of copy-cat strategies. Identifying strategies up to some kind of behavioural equivalence, we expect to obtain a linear category, although we have not yet checked all the details. All results so far can be carried out in intensional Martin-Löf type theory and are verified in the theorem prover Agda. There are numerous links to other areas, including refinement calculus, linear logic, game semantics, formal topology and process algebra; we have not yet explored all of these relationships, but intend to do so in the future.
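
In dependent type theory an interface in the Hancock-Setzer sense is a set of commands together with a command-indexed family of responses. Haskell cannot express that dependency, but the following non-dependent sketch (with names of our own invention) conveys the reading of programs as strategies:

  -- Player issues commands of type c; Opponent answers with r.
  -- (In type theory the response type depends on the command.)
  -- A program is a strategy: stop with a result, or issue a command
  -- and be prepared to continue for every possible response.
  data Program c r a
    = Return a
    | Issue c (r -> Program c r a)

  -- A toy interface: a single mutable Int cell.
  data Cmd  = Get | Put Int
  type Resp = Int                 -- answer to Get; Put's answer unused

  -- A strategy that increments the stored value.
  incr :: Program Cmd Resp ()
  incr = Issue Get (\n -> Issue (Put (n + 1)) (\_ -> Return ()))

  -- One concrete Opponent: run a strategy against an actual store.
  run :: Int -> Program Cmd Resp a -> (a, Int)
  run s (Return x)        = (x, s)
  run s (Issue Get k)     = run s (k s)
  run s (Issue (Put n) k) = run n (k 0)

For example, run 5 incr yields ((), 6).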

05-04-05 Stefano Berardi: Semantics of Backtracking
18-03-05 Anton Setzer: Ordinal systems, Part III
11-03-05 Anton Setzer: Ordinal systems, Part II
04-03-05 Anton Setzer: Ordinal systems

The development of ordinal notation systems is the art of counting as far as possible. Ordinal notation systems can be used to measure the consistency strength of strong theories and to separate them. They make it possible to compare theories coming from completely different paradigms which cannot be interpreted in each other. By showing the well-foundedness of an ordinal notation system one exhausts the strength of a theory, using, sometimes for the first time, the full power available in the theory.

Usually ordinal notation systems are presented in a very concrete way. This is a good approach for weak ordinal notation systems, say up to the ordinal $\epsilon_0$ (the strength of Peano Arithmetic) or $\Gamma_0$ (sometimes considered the limit of predicativity). It is not difficult to introduce ordinal notation systems of this strength, and many researchers are able to count that far. The area of ordinals beyond $\Gamma_0$ has, however, been accessible to only very few researchers. We believe that, in order to make these stronger ordinal notation systems accessible to a wider audience, it is necessary to develop notations which reveal more of the underlying structure.
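
As a reference point (a standard fact, not specific to this talk): the notation system of strength $\epsilon_0$ rests on Cantor normal form. Every ordinal $0 < \alpha < \epsilon_0$ has a unique representation

  \[ \alpha = \omega^{\alpha_1} + \cdots + \omega^{\alpha_n}, \qquad \alpha > \alpha_1 \ge \cdots \ge \alpha_n, \]

so such ordinals can be coded as finite terms built from $0$, $+$ and $\omega^{(\cdot)}$ and compared syntactically.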

In this talk we present an approach to developing ordinal notation systems up to the strength of $ID_1$, the theory of one inductive definition, which lies far beyond $\Gamma_0$. Our presentation gives the ordinal notation systems more structure. There will be two orderings: the usual one, and one corresponding to the order in which elements are introduced. There is also a function giving the components of an ordinal notation, together with some conditions which need to be fulfilled. Then, using the fact that if a set is well-ordered, so are the set of pairs over it and the set of descending sequences in it, we will be able to set up machinery which generates the ordinals of this system in order, and therefore shows, in a hopefully intuitive way, that it is well-ordered.

In a continuation of this talk we will count further, up to one recursively Mahlo ordinal, which reaches the limit of currently published predicatively acceptable extensions of Martin-Löf type theory.

01-03-05 Arnold Beckmann: Proofs ARE programs, Part II
23-02-05 Arnold Beckmann: Proofs ARE programs

In this talk I will briefly discuss how proofs in certain subtheories of Peano Arithmetic (known as Bounded Arithmetic) can be viewed as programs. One concern will be the complexity of the resulting program, which corresponds to the strength of the theory (we will deal with algorithms/theories related to the complexity class P).
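
The slogan behind the title is the Curry-Howard correspondence. As a minimal illustration (ours, and not specific to Bounded Arithmetic), a proof of a propositional formula is literally a well-typed functional program:

  -- Propositions as types: implication is the function arrow,
  -- conjunction is pairing.  Each term below is a proof of its type.

  -- A -> (B -> A)
  weaken :: a -> b -> a
  weaken x _ = x

  -- (A -> B) -> (B -> C) -> (A -> C), i.e. cut as composition
  cut :: (a -> b) -> (b -> c) -> (a -> c)
  cut f g = g . f

  -- A /\ B -> A
  projL :: (a, b) -> a
  projL (x, _) = x

The complexity-theoretic refinement in the talk then asks which programs arise from proofs in a given bounded theory.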

03-02-05 Initial meeting

We discussed the format of the seminar and agreed that it should be a high-level research seminar in which we can dip directly into our research to present and discuss our latest ideas. We also fixed a provisional title for the seminar: Proofs, Complexity and Verification.

The participants were: Anton Setzer, Monika Seisenberger, Markus Roggenbach, Peter Mosses, Faron Moller, Markus Michelbrink, Oliver Kullmann, Magne Haveraaen, Ulrich Berger, Arnold Beckmann.

Markus Michelbrink: Equality in Martin-Löf Type Theory

organisation

coordination: Markus Michelbrink
webpage: Arnold Beckmann


Last modified: Mon Jan 23 17:27:18 GMT 2006