Plenary Talks
-
Andrew Brooke-Taylor, University of Leeds, UK
Title: Categorifying Borel reducibility
Abstract: The framework of Borel reducibility has had great success in showing that various classification programmes cannot be completed, in areas such as ergodic theory and C*-algebras. However, this framework ignores a crucial feature of many classification programmes: namely, that they are expected to be functorial. I will talk about joint work in progress with Filippo Calderoni, introducing a categorified version of Borel reducibility and noting some differences between its consequences and those of the original framework.
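As background for readers outside set theory (this note and the display are not part of the abstract): the classical notion being categorified is Borel reducibility between equivalence relations on standard Borel spaces, which can be stated as follows.

```latex
% Standard definition (background, not from the talk): for equivalence
% relations E on X and F on Y (X, Y standard Borel spaces),
\[
E \leq_B F
\quad\Longleftrightarrow\quad
\exists\, f : X \to Y \text{ Borel such that }
\forall x_1, x_2 \in X:\;
x_1 \mathrel{E} x_2 \iff f(x_1) \mathrel{F} f(x_2).
\]
```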
-
Vassileios Koutavas, Trinity College Dublin, Ireland
Title: Combining Game Semantics and Bisimulation Techniques for the Verification of Program Equivalence
Abstract: Equivalence is a key concept in programming language semantics: it can concisely express notions such as modularity and abstraction, and it exposes all the intentional and unintentional ways that different parts of a program can affect each other. There is a long history of theoretical techniques for proving equivalence in programming language semantics and related areas, and more recently equivalence has found applications in the verification of cryptographic protocols, compiler correctness, and regression verification. In this talk we will overview the history of equivalence in programming language semantics, focusing primarily on the techniques of game semantics and bisimulation. We will present a new approach that combines these two theoretical techniques and leads to a practical tool for verifying equivalence in a programming language with functions and local state. This is a challenging setting, first studied in the semantics of ALGOL, and one that is common in modern programming languages: verifying the equivalence of even simple program expressions requires the exploration of infinite behaviours. Being able to reason within a verification tool about many cases of infinite behaviour, while avoiding false positives and false negatives, has been the main success of this approach.
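To give a flavour of the problem (an illustrative sketch only, not an example from the talk), here are two stateful programs, written here in Python, that are observationally equivalent although their local state evolves differently; a verifier must rule out distinguishing clients among infinitely many possible interactions.

```python
# Illustrative sketch (not from the talk): two programs with local state
# that no client can tell apart, despite different internal representations.

def make_counter_a():
    state = {"n": 0}
    def tick():
        state["n"] += 1
        return state["n"]
    return tick

def make_counter_b():
    state = {"n": 0}          # counts downwards internally...
    def tick():
        state["n"] -= 1
        return -state["n"]    # ...but exposes the same observable values
    return tick

# Any client that only calls the returned function observes identical behaviour:
a, b = make_counter_a(), make_counter_b()
assert [a(), a(), a()] == [b(), b(), b()]  # both yield [1, 2, 3]
```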
-
Thanases Pheidas, University of Crete, Greece
Title: Definability in Number Theory, Algebra and Geometry and Hilbert’s Tenth Problem
Abstract: We survey developments on questions of decidability (and the related questions of definability) for the existential (and first-order) theories of rings of common use, e.g. the rational integers, the rational numbers, and rings of polynomials and rational functions. The starting point of our questions is “Hilbert’s Tenth Problem”.
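For readers unfamiliar with the problem (background, not part of the abstract): Hilbert's Tenth Problem asked for an algorithm deciding whether a given polynomial equation with integer coefficients has an integer solution; by the MRDP theorem (Matiyasevich 1970, building on Davis, Putnam, and Robinson), no such algorithm exists.

```latex
% Background statement (not from the abstract):
\[
\{\, p \in \mathbb{Z}[x_1,\dots,x_n] \;:\;
\exists\, a_1,\dots,a_n \in \mathbb{Z}\ \ p(a_1,\dots,a_n) = 0 \,\}
\quad\text{is undecidable.}
\]
```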
-
Linda Brown Westrick, Penn State University, USA
Title: Borel sets and reverse mathematics
The talk was cancelled.
Tutorials
-
Alex Kavvos, University of Bristol, UK
Title: Type Theory and Homotopy
Abstract: This is a whirlwind tour of Martin-Löf's Type Theory. We explain some of the basic ideas underlying type-theoretic approaches to the foundations of mathematics, introduce the rules of MLTT, and present a few examples. We also sketch the recently discovered connections to homotopy theory, discuss the relationship with computer-assisted formalised mathematics, and point the audience to recent advances in the field.
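As a taste of the formalisation side (an illustrative sketch, not material from the tutorial), here is how symmetry of equality is proved by path induction in Lean 4, whose type theory descends from MLTT; under the homotopy interpretation, a proof p : a = b is a path from a to b, and the result is its reversal.

```lean
-- Illustrative sketch (not from the tutorial), Lean 4 syntax.
theorem my_symm {A : Type} {a b : A} (p : a = b) : b = a := by
  cases p  -- path induction: it suffices to handle the reflexivity case
  rfl      -- in that case the goal is a = a
```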
-
Nikos Leonardos, University of Athens, Greece
Title: Bitcoin's backbone algorithm and the consensus problem
Abstract: The purpose of the tutorial is to study Bitcoin's protocol as a distributed algorithm. We will begin by discussing the core of the protocol in a simple model that is static and synchronous. We will prove two of its fundamental properties, which we call common prefix and chain quality. Subsequently, we will discuss the Consensus problem and show that Bitcoin solves an interesting variant of this fundamental problem. Finally, we will discuss attributes of the Bitcoin protocol that allow it to be secure in more realistic models (partially synchronous model, variable difficulty, timestamp rules).
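As a toy illustration of the mechanism the tutorial analyses (a sketch only; the target and encoding here are made up for this note, and the real protocol differs in many details), proof of work amounts to searching for a nonce that drives a block's hash below a difficulty target, and honest parties adopt the longest valid chain they have seen.

```python
# Toy proof-of-work sketch (illustrative only; parameters are hypothetical).
import hashlib

TARGET = 2 ** 240  # hypothetical difficulty target (smaller = harder)

def pow_mine(prev_hash: str, data: str) -> tuple[int, str]:
    """Search for a nonce making the block hash fall below TARGET."""
    nonce = 0
    while True:
        h = hashlib.sha256(f"{prev_hash}|{data}|{nonce}".encode()).hexdigest()
        if int(h, 16) < TARGET:
            return nonce, h
        nonce += 1

def choose_chain(chains):
    """Longest-chain rule: adopt the longest chain seen so far."""
    return max(chains, key=len)

nonce, block_hash = pow_mine("genesis", "tx1;tx2")
print(nonce, block_hash)
```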
-
Stathis Zachos, National Technical University of Athens, Greece
Title: Introduction to Computational Complexity
Abstract: Complexity classes, Randomness, Interactivity, PCP, Counting. How to deal with NP-completeness, Approximation algorithms, Search Complexity, Parameterized Complexity, Quantum Complexity, Counting revisited (#P, #PE, TotP), Nonuniform Circuit Families, Descriptive Complexity.
Special Sessions
Computer Science:
-
Bruno Bauwens, HSE University, Russia
Title: The algorithmic information distance
Abstract: The algorithmic information distance is a metric on bit-strings. In the machine-learning literature, it is defined, for bit-strings x and y, as d(x,y) = max{K(x|y), K(y|x)} + c for some large c, where K(x|y) denotes the minimal length of a program that on input y produces x on an optimal prefix-free machine. In some sense, this measure exploits all types of computable structure that can be found between two bit-strings. This optimality property can be formalized in a way similar to how Kolmogorov complexity is optimal among measures of information content. The measure has inspired a few interesting machine-learning algorithms, and we will briefly discuss some recent ones. The definition given above satisfies the triangle inequality, but it is rather technical. Therefore, the following characterization is usually mentioned: d(x,y) = min {|p| : U(p,x) = y and U(p,y) = x} + O(log |xy|). In fact, this minimum was, historically, the first definition. Note that in this minimum one considers programs p that are bidirectional: the program should simultaneously map x to y and map y to x. In the definition max{K(x|y), K(y|x)}, by contrast, the conditional complexities only consider one-directional programs that either map y to x (for K(x|y)) or x to y (for K(y|x)).
Can we improve the precision from O(log |xy|) to O(1)? It is known that for plain complexity this is indeed true, but the triangle inequality does not hold for plain complexity. For a long time it was an open question whether the O(1) precision holds for prefix-free machines. It has recently been claimed in several places that the answer is yes, but those proofs are wrong: I have recently given a counterexample to the equality with O(1) precision. This is remarkable, because many equalities that hold with O(log |xy|) precision for plain complexity can be transformed into inequalities that hold with O(1) precision for prefix-free complexity; a famous example is the Kolmogorov-Levin formula K(x,y) = K(x) + K(y | x, K(x)). For the first time, we have found an important (in)equality that holds with better precision for plain complexity. Even more mysteriously, the equality does hold with O(1) precision whenever d(x,y) > 6 log |xy| and x and y are large, and the proof of this result is rather complex.
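Since K is uncomputable, the machine-learning applications mentioned above work with computable approximations; a well-known one (due to Cilibrasi and Vitányi, used here purely as an illustration, not as part of the talk) replaces K with the output length of a real compressor, giving the normalized compression distance.

```python
# Illustrative approximation (not from the talk): normalized compression
# distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# where C(s) is the compressed length of s, a computable stand-in for K.
import zlib

def clen(s: bytes) -> int:
    """Compressed length of s."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Strings sharing structure come out closer than unrelated ones:
print(ncd(b"abracadabra" * 20, b"abracadabra" * 19 + b"zzz"))  # small
print(ncd(b"abracadabra" * 20, bytes(range(256)) * 2))         # larger
```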
-
Juan Garay, Texas A&M University, USA
Title: Cryptography in the Blockchain Era
Abstract: The advent of blockchain protocols such as Bitcoin has ignited much excitement, not only for their realization of novel financial instruments, but also for offering alternative solutions to classical problems in fault-tolerant distributed computing and cryptographic protocols. Underlying many such protocols is a primitive known as "proof of work" (PoW), which for over 20 years has been liberally applied in the cryptography and security literature to a variety of settings, including spam mitigation, Sybil attacks, and denial-of-service protection; its role in the design of blockchain-based protocols, however, is arguably its most impactful application. At a high level, PoWs enable such protocols by slowing message passing for all parties indiscriminately, thus generating opportunities for honest parties' views to converge, under the assumption that their aggregate computational power exceeds that of the adversary. This talk comprises two parts. First, despite the evolution of our understanding of the PoW primitive, pinning down the exact properties sufficient to prove the security of Bitcoin and related protocols has been elusive. We identify such properties, and then construct protocols whose security can be reduced to them in the standard model, assuming a common reference string (CRS -- cf. a "genesis" block); all previous protocols rely on the "random oracle" methodology. Second, regarding the realizability of two important problems in the area of cryptographic protocols -- Byzantine agreement (BA) and secure multiparty computation (MPC) -- we show how PoW-based blockchains make it possible to realize them even in the presence of a minority of corrupted parties (i.e., t < n/2, where t is the number of corrupted parties and n is their total number), as long as the majority of the computational resources remain in honest hands; "classically" (i.e., without PoWs), protocols can only tolerate up to t < n/3 corruptions in the absence of a private trusted setup, such as a public-key infrastructure. We resolve this apparent contradiction with a new paradigm we call "Resource-Restricted Cryptography."
The bulk of this talk is based on joint work with Marshall Ball, Aggelos Kiayias, Rafail Ostrovsky, Giorgos Panagiotakos and Vassilis Zikas.
-
Andrew Lewis-Pye, London School of Economics, UK
Title: Chained Fever — achieving optimal synchronisation for Hotstuff
Abstract: This will be a talk of two halves, to accommodate different backgrounds in the audience. First, I'll give an easy introduction to consensus protocols, focussing on the classical 'permissioned' approach rather than Bitcoin, so that there should not be too much overlap with Leonardos' tutorial. Then I'll describe an improvement on the state of the art in consensus protocols: a modification of 'Hotstuff' requiring only O(n) messages per 'view change', and hence O(n^2) message complexity to produce a confirmed block of transactions in the worst case, since a sequence of faulty leaders can force up to O(n) view changes. This improves on the previous best of O(n^3) message complexity for optimistically responsive protocols.
-
Vassilis Zikas, Purdue University, USA & University of Edinburgh, UK
Title: From Blockchain to Global-scale Trustworthy Infrastructure
Abstract: The wide adoption of global computer networks, such as the Internet, creates immense opportunities but also challenges the traditional centralized trust model. The idea of giving control of widely used critical infrastructure to its users is becoming ever more popular. Blockchain and Distributed Ledger Technology (DLT) promise to bring these decentralization ideas to reality and to disrupt traditional strongholds of trust in the financial, digital, biomedical, and manufacturing sectors, as well as in governance. In this talk I will discuss blockchain, from its current structure to its vast potential for future applications. The talk will discuss novel design choices that go into blockchain-based DLT, how these choices critically impact the security of the resulting solutions, and how they address implementation and deployment challenges. It will also preview the potential of reputation-based blockchains to enhance the trustworthiness of decentralized worldwide systems.
Philosophical Logic:
-
Michael Glanzberg, Rutgers University, USA
Title: Truth and Quantification (joint work with Lorenzo Rossi)
Abstract: Theories of self-applicable truth have been motivated in two main ways. First, if truth-conditions provide the meaning of (many kinds of) natural language expressions, then self-applicable truth is instrumental in developing the semantics of natural languages. Second, a self-applicable truth predicate is required to express generalizations that would not otherwise be expressible in natural languages. In order to fulfill its semantic and expressive role, we argue, the truth predicate has to be studied in its interaction with constructs that are actually found in natural languages and that extend beyond first-order logic: modals, indicative conditionals, arbitrary quantifiers, and more. Here, we focus on truth and quantification. We develop a Kripkean theory of self-applicable truth for the language of Generalized Quantifier Theory. More precisely, we show how to interpret a self-applicable truth predicate for the full class of type <1,1> (and type <1>) quantifiers found in natural languages. As a result, we can model sentences that are not expressible in theories of truth for first-order languages (such as 'Most of what Jane's said is true', or 'Infinitely many theorems of T are untrue', and several others), thus expanding the scope of existing approaches to truth, both as a semantic and as an expressive device.
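For orientation (background only; the talk's Kripkean construction is not reproduced here), a type <1,1> quantifier in Generalized Quantifier Theory denotes a relation between two subsets of the domain; for instance, on a finite domain:

```latex
% Standard semantics of "most" (background, not from the abstract):
\[
\mathrm{Most}(A, B) \text{ holds} \quad\text{iff}\quad |A \cap B| > |A \setminus B|,
\]
% so 'Most of what Jane's said is true' holds iff the true sentences Jane
% has asserted outnumber the untrue ones.
```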
-
Volker Halbach, University of Oxford, UK
Title: Axiomatic Theories of Truth: A survey
Abstract: Axiomatic theories of truth are obtained by adding a unary predicate T to the language of arithmetic, or to another language in which syntax theory is usually developed. Axioms for T are then conjoined with the axioms of arithmetic (or of the other syntax theory). The liar and other paradoxes impose limits on which truth axioms can consistently be added. I survey some systems that have been seen as promising and interesting. There are several motivations for investigating axiomatic theories of truth. They are of philosophical interest not only because they give us a better understanding of the concept of truth, but also because, for instance, they can be used to reduce away commitment to second-order objects and to express generalizations over sentences. They are also of purely mathematical interest: proof theorists have analyzed their strength and used them as intermediary systems for the analysis of subsystems of second-order arithmetic, and also of set-theoretic systems; and some axiomatic theories of truth have intriguing properties studied by model theorists.
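Two typical examples of such truth axioms (an illustration added here, not part of the abstract; corner quotes denote Gödel codes, and the compositional axiom is stated schematically, with the quantifiers ranging over sentences of the base language):

```latex
% Tarski biconditional, for each T-free sentence phi:
\[
T(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
\]
% A compositional axiom for conjunction (schematically):
\[
\forall \varphi\, \forall \psi\,
\bigl( T(\ulcorner \varphi \wedge \psi \urcorner)
  \leftrightarrow T(\ulcorner \varphi \urcorner) \wedge T(\ulcorner \psi \urcorner) \bigr)
\]
```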
-
Elia Zardini, University of Lisbon, Portugal & HSE University, Russia
Title: The Final Cut
Abstract: In a series of works, P. Cobreros, P. Égré, D. Ripley and R. van Rooij have proposed a nontransitive system (call it ‘K3LP’) as a basis for a solution to the semantic paradoxes. I critically consider that proposal at three levels. At the level of the background logic, I present a conception of classical logic on which K3LP fails to vindicate classical logic not only in terms of structural principles, but also in terms of operational ones. At the level of the theory of truth, I raise a cluster of philosophical difficulties for a K3LP-based system of naive truth, all variously related to the fact that such a system proves things that would already seem, by themselves, jointly repugnant, even in the absence of transitivity. At the level of the theory of validity, I consider an extension of the K3LP-based system of naive validity that is supposed to certify that validity in that system does not fall short of naive validity, argue that such an extension is untenable in that its nontriviality depends on the inadmissibility of a certain irresistible instance of transitivity (whence the advertised “final cut”), and conclude on this basis that the K3LP-based system of naive validity cannot coherently be accepted either. At all these levels, a crucial role is played by certain metaentailments and by the extra strength they afford over the corresponding entailments: on the one hand, such strength derives from considerations that would seem just as compelling in a general nontransitive framework, but, on the other hand, such strength wreaks havoc in the particular setting of K3LP. Keywords: classical logic; naive truth; naive validity; nontransitive logics.