About

Machine learning has revolutionised computer science and AI: deep neural networks have been shown to match human ability in various tasks, and solutions based on machine learning are being deployed in real-world systems, from automation and self-driving cars to security and banking. Undoubtedly, the potential benefits of AI systems are immense and wide-ranging. At the same time, recent accidents involving machine learning systems have caught the public’s attention, and as a result researchers are beginning to question their safety. Traditionally, safety assurance methodologies have been the realm of formal methods, understood broadly as the rigorous, mathematical underpinning of software and hardware systems. They are rooted in logic and reasoning, and aim to provide guarantees that a system behaves correctly, which is necessary in safety-critical contexts. Such guarantees can be provided automatically for conventional software and hardware systems using verification technologies such as model checking or theorem proving. However, machine learning offers no such guarantees, and the reasoning techniques necessary to justify the safety of its autonomous decisions are in their infancy.

The Summit on Machine Learning Meets Formal Methods will bring together academic and industrial leaders who will discuss the benefits and risks of machine learning solutions. The overall aim is to identify promising future directions for research and innovation of interest to The Alan Turing Institute and UK research councils and government agencies, which will be summarised in a written report that will be made public.

Registration for this event is now closed


Confirmed Speakers

  • Programming from examples: PL meets ML 
  • Sumit Gulwani (Microsoft, USA)
  • Abstract
    Programming by Examples (PBE) involves synthesizing intended programs in an underlying domain-specific language from example-based specifications. PBE systems are already revolutionizing the application domain of data wrangling and are set to significantly impact several other domains including code refactoring. There are three key components in a PBE system. (i) A search algorithm that can efficiently search for programs that are consistent with the examples provided by the user. We leverage a divide-and-conquer-based deductive search paradigm to inductively reduce the problem of synthesizing a program expression of a certain kind that satisfies a given specification into sub-problems that refer to sub-expressions or sub-specifications. (ii) Program ranking techniques to pick an intended program from among the many that satisfy the examples provided by the user. We leverage features of the program structure as well as of the outputs generated by the program on test inputs. (iii) User interaction models to facilitate usability and debuggability. We leverage active-learning techniques based on clustering inputs and synthesizing multiple programs. Each of these three PBE components leverages both formal methods and heuristics. We make the case for synthesizing these heuristics from training data using appropriate machine learning methods. This not only leads to better heuristics, but also enables easier development, maintenance, and even personalization of a PBE system.
  • Biography
    Sumit Gulwani is a research manager at Microsoft, leading the PROSE research and engineering team that develops APIs for program synthesis and incorporates them into real products. He is the inventor of the Flash Fill feature in Microsoft Excel used by hundreds of millions of people. He has published 110+ papers in top-tier conferences/journals across multiple computer science areas, and delivered 40+ keynotes/invited talks at various forums. He was awarded the ACM SIGPLAN Robin Milner Young Researcher Award in 2014 for his pioneering contributions to end-user programming and intelligent tutoring systems. He obtained his PhD in Computer Science from UC Berkeley in 2005, and was awarded the ACM SIGPLAN Outstanding Doctoral Dissertation Award. He obtained his BTech in Computer Science and Engineering from IIT Kanpur in 2000, and was awarded the President’s Gold Medal.
  • View slides
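
As a rough illustration of the idea of synthesizing programs from input-output examples, the sketch below enumerates operator pipelines over a tiny, made-up string-transformation DSL and keeps those consistent with the user's examples; the DSL, its operators, and the brute-force search are assumptions for illustration only, not the deductive divide-and-conquer search or learned ranking used in PROSE.

```python
# Toy programming-by-examples sketch: search a tiny hypothetical DSL of string
# operations for pipelines consistent with the user's input-output examples.
# Real PBE systems replace this brute-force loop with deductive search and
# learned ranking functions.
from itertools import product

PRIMITIVES = {
    "lower":      str.lower,
    "upper":      str.upper,
    "strip":      str.strip,
    "first_word": lambda s: s.split()[0] if s.split() else "",
    "initial":    lambda s: s[:1],
}

def run(program, text):
    for op in program:
        text = PRIMITIVES[op](text)
    return text

def synthesize(examples, max_len=2):
    """Return all operator pipelines (up to max_len) consistent with the examples."""
    consistent = [
        program
        for length in range(1, max_len + 1)
        for program in product(PRIMITIVES, repeat=length)
        if all(run(program, inp) == out for inp, out in examples)
    ]
    # A real PBE system would rank candidates; here we simply prefer shorter ones.
    return sorted(consistent, key=len)

if __name__ == "__main__":
    examples = [("Ada Lovelace", "ada"), ("Alan Turing", "alan")]
    print(synthesize(examples))   # e.g. [('lower', 'first_word'), ('first_word', 'lower')]
```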

  • Influence-directed explanations for machine learning systems 
  • Anupam Datta (Carnegie Mellon University, USA)
  • Abstract
    Machine learning systems are often inscrutable black boxes: they are effective predictors but lack human-understandable explanations. I will describe a new paradigm, influence-directed explanations, that addresses this problem. Influence-directed explanations shed light on the inner workings of black-box machine learning systems by identifying components that causally influence system behavior and by providing human-understandable interpretations of the concepts represented by these components. I will describe instances of this paradigm that are model-agnostic and instances that are specific to deep neural networks. Influence-directed explanations serve as a foundation for reasoning about fairness and privacy of machine learning models. Our initial exploration suggests that formal methods for analysis of probabilistic systems have an important role to play in the efficient generation of influence-directed explanations. Joint work with colleagues at Carnegie Mellon University.
  • Biography
    Anupam Datta is Professor of Electrical and Computer Engineering and (by courtesy) Computer Science at Carnegie Mellon University. He is Director of the Accountable Systems Lab. His research focuses on enabling real-world complex systems to be accountable for their behavior, especially as they pertain to privacy, fairness, and security. His work has helped create foundations and tools for accountable big data systems and cryptographic protocols. Specific examples include a privacy compliance tool chain deployed at Microsoft, the AdFisher tool that discovered gender bias in the targeting of job-related ads in online behavioral advertising, and Protocol Composition Logic, which was applied to prove security properties of a number of real-world security protocols. Datta obtained Ph.D. (2005) and M.S. (2002) degrees from Stanford University and a B.Tech. (2000) from IIT Kharagpur, all in Computer Science.
  • View slides
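
The following sketch is only a generic, model-agnostic illustration of measuring input influence by finite differences on a made-up scoring function; it is not the causal influence machinery described in the abstract, which also accounts for correlated features and sets of features.

```python
# Minimal sketch: perturb each input feature of a (hypothetical) model in turn
# and record how much the output moves. This finite-difference score is a crude
# stand-in for the influence measures discussed in the talk.

def influence(model, x, eps=1e-3):
    """Approximate the per-feature influence of input x on model(x)."""
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps
        scores.append((model(perturbed) - base) / eps)
    return scores

if __name__ == "__main__":
    # Hypothetical credit-scoring model: income and debt matter, postcode should not.
    def credit_score(features):
        income, debt, postcode = features
        return 2.0 * income - 1.5 * debt + 0.0 * postcode

    print(influence(credit_score, [50.0, 10.0, 7.0]))   # roughly [2.0, -1.5, 0.0]
```
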
  • Provably beneficial artificial intelligence 
  • Stuart Russell (University of California at Berkeley, USA), Turing Speaker
  • Abstract
    I will briefly survey recent and expected developments in AI and their implications. Beyond these, one must expect that AI capabilities will eventually exceed those of humans across a range of real-world decision-making scenarios. Should this be a cause for concern, as Elon Musk, Stephen Hawking, and others have suggested? And, if so, what can we do about it? While some in the mainstream AI community dismiss the issue, I will argue instead that a fundamental reorientation of the field is required. Instead of building systems that optimize arbitrary objectives, we need to learn how to build systems that will, in fact, be beneficial for us. I will show that it is useful to imbue systems with explicit uncertainty concerning the true objectives of the humans they are designed to help, as well as the ability to learn more about those objectives from observation of human behaviour.
  • Biography
    Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He is a recipient of the IJCAI Computers and Thought Award and from 2012 to 2014 held the Chaire Blaise Pascal in Paris. He is a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book “Artificial Intelligence: A Modern Approach” (with Peter Norvig) is the standard text in AI, used in over 1300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence, with an emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.
  • View slides
  • Towards Robust and Explainable Artificial Intelligence
  • Pushmeet Kohli (DeepMind, UK)
  • Abstract
    Deep learning has led to rapid progress in the field of machine learning and artificial intelligence, leading to dramatically improved solutions to many challenging problems such as image understanding, speech recognition, and automatic game playing. Despite these remarkable successes, researchers have observed some intriguing and troubling aspects of the behaviour of these models. A case in point is the presence of adversarial examples, which make learning-based systems fail in unexpected ways. Such behaviour, and the difficulty of interpreting the behaviour of neural networks, is a serious hindrance to the deployment of these models for safety-critical applications. In this talk, I will review the challenges in developing models that are robust and explainable and discuss the opportunities for collaboration between the formal methods and machine learning communities.
  • Biography
    Pushmeet is a principal scientist and team leader at DeepMind. His research revolves around Intelligent Systems and Computational Sciences. His current research interests include Safe and Verified AI, Probabilistic Programming, and Interpretable and Verifiable Knowledge Representations of Deep Models. In terms of application domains, he is interested in goal-directed conversation agents, machine learning systems for healthcare, and 3D reconstruction and tracking for augmented and virtual reality. Pushmeet’s papers have appeared in conferences in the fields of machine learning, computer vision, game theory and human-computer interaction, and have won awards at CVPR, WWW, CHI, ECCV, ISMAR, TVX and ICVGIP. His research has also been the subject of a number of articles in popular media outlets such as Forbes, Wired, BBC, New Scientist and MIT Technology Review. Pushmeet is on the editorial boards of PAMI, IJCV and CVIU.
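
For readers unfamiliar with the adversarial examples mentioned in the abstract, here is a minimal sketch of one classic attack, the fast gradient sign method, applied to a made-up logistic-regression "model"; the weights, input, and perturbation budget are illustrative assumptions, and real attacks target deep networks via backpropagation.

```python
# Fast gradient sign method (FGSM) on a toy logistic-regression classifier:
# a small, worst-case perturbation of the input flips the model's decision.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # made-up model weights
b = 0.1

def predict(x):
    """Probability of class 1 under the toy model."""
    return 1 / (1 + np.exp(-(w @ x + b)))

def fgsm(x, y_true, eps):
    # For logistic regression with cross-entropy loss, the gradient of the loss
    # with respect to the input is (p - y) * w.
    grad = (predict(x) - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([2.0, 0.5, 1.0])          # classified as class 1 (p ~ 0.83)
x_adv = fgsm(x, y_true=1.0, eps=0.8)
print(predict(x), predict(x_adv))      # the perturbed input falls well below 0.5
```
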
  • The realities of applied AI
  • Alison Lowndes (NVIDIA, USA)
  • Abstract
    Artificial Intelligence will improve productivity across a broad range of applications and industries. The benefit to humanity will be substantial, but there are important lessons to learn and steps to take. NVIDIA works closely with researchers, enterprises and startups worldwide, helping them both get started and solve problems. This talk will address some of the issues to be considered and explain why software and community are key to success with AI.
  • Biography
    Artificial Intelligence Developer Relations – EMEA, NVIDIA
    After spending her first year with NVIDIA as a Deep Learning Solutions Architect, Alison is now responsible for NVIDIA’s Artificial Intelligence Developer Relations in the EMEA region. She is a mature graduate in Artificial Intelligence, combining technical and theoretical computer science with a physics background and over 20 years of experience in international project management, entrepreneurial activities and the internet. She consults on a wide range of AI applications, including planetary defence with NASA and ESA, and continues to manage the worldwide community of AI and ML researchers, staying abreast of the state of the art across all areas of research. She also travels around the globe to advise, teach and evangelize NVIDIA’s platform.
    Twitter: @AlisonBLowndes.
  • View slides
  • Safety verification for deep neural networks with provable guarantees
  • Marta Kwiatkowska (University of Oxford, UK)
  • Abstract
    Deep neural networks have achieved impressive experimental results in image classification, but can surprisingly be unstable with respect to adversarial perturbations, that is, minimal changes to the input image that cause the network to misclassify it. With potential applications including perception modules and end-to-end controllers for self-driving cars, this raises concerns about their safety. This lecture will describe progress in developing automated verification techniques for deep neural networks to ensure safety of their classification decisions with respect to image manipulations, for example scratches or changes to camera angle or lighting conditions, that should not affect the classification. The techniques exploit Lipschitz continuity of the networks and aim to approximate, for a given set of inputs, the reachable set of network outputs in terms of lower and upper bounds, in an anytime manner, with provable guarantees. We develop novel algorithms based on games and global optimisation, and evaluate them on state-of-the-art networks.
  • Biography
    Marta Kwiatkowska is Professor of Computing Systems and Fellow of Trinity College, University of Oxford. Kwiatkowska has made fundamental contributions to the theory and practice of model checking for probabilistic systems, focusing on automated techniques for verification and synthesis from quantitative specifications. She led the development of the PRISM model checker (www.prismmodelchecker.org), the leading software tool in the area and winner of the HVC Award 2016. Probabilistic model checking has been adopted in many diverse fields, including distributed computing, wireless networks, security, robotics, game theory, systems biology, DNA computing and nanotechnology, with genuine flaws found and corrected in real-world protocols. Kwiatkowska was awarded an honorary doctorate from KTH Royal Institute of Technology in Stockholm in 2014 and the Royal Society Milner Medal in 2018. Her recent work was supported by the ERC Advanced Grant VERIWARE “From software verification to ‘everyware’ verification” and the EPSRC Programme Grant on Mobile Autonomy. She is a Fellow of the ACM and Member of Academia Europaea.
  • View slides
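
The following is only a schematic, one-dimensional illustration of how Lipschitz continuity yields anytime lower and upper bounds on the maximum of a function over an input region, in the spirit of the reachability bounds described in the abstract; the function f and constant L below are made up, and the actual algorithms in the talk work on high-dimensional networks using game-based and global-optimisation techniques.

```python
# Anytime bounds via Lipschitz continuity: the value of f anywhere in a box is
# within L * radius of its value at the box centre, so splitting boxes tightens
# a sound upper bound on max f, while the evaluated centres give a lower bound.
import math

def f(x):                    # toy stand-in for a scalar network output
    return math.sin(3 * x) + 0.5 * x

L = 3.5                      # |f'(x)| = |3*cos(3x) + 0.5| <= 3.5

def bound_max(lo, hi, splits):
    """Lower/upper bounds on max f over [lo, hi]; the gap shrinks with more splits."""
    boxes = [(lo, hi)]
    for _ in range(splits):  # each pass halves every box
        boxes = [(a, (a + b) / 2) for a, b in boxes] + \
                [((a + b) / 2, b) for a, b in boxes]
    lower = max(f((a + b) / 2) for a, b in boxes)                     # witnessed value
    upper = max(f((a + b) / 2) + L * (b - a) / 2 for a, b in boxes)   # sound bound
    return lower, upper

for splits in range(1, 6):
    print(splits, bound_max(0.0, 2.0, splits))
```
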
  • Machine learning and logic: Fast and slow thinking
  • Moshe Vardi (Rice University, USA)
  • Abstract
    There is a recent perception that computer science is undergoing a Kuhnian paradigm shift, with CS as a model-driven science being replaced by CS as a data-driven science. I will argue that, in general, new scientific theories refine old scientific theories rather than replace them. Thus, data-driven CS and model-driven CS complement each other, just as fast thinking and slow thinking complement each other in human thinking, as explicated by Daniel Kahneman. I will then use automated vehicles as an example: in recent years, car makers and tech companies have been racing to be the first to market, yet in this rush there has been little discussion of how to obtain scalable standardization of safety assurance, without which this technology will never be commercially deployable. Such assurance requires formal methods, and combining machine learning with logic is the challenge of the day.
  • Biography
    Moshe Y. Vardi is the Karen Ostrum George Distinguished Service Professor in Computational Engineering and Director of the Ken Kennedy Institute for Information Technology at Rice University. He is the recipient of three IBM Outstanding Innovation Awards, the ACM SIGACT Gödel Prize, the ACM Kanellakis Award, the ACM SIGMOD Codd Award, the Blaise Pascal Medal, the IEEE Computer Society Goode Award, the EATCS Distinguished Achievements Award, and the Southeastern Universities Research Association’s Distinguished Scientist Award. He is the author and co-author of over 500 papers, as well as two books: Reasoning about Knowledge and Finite Model Theory and Its Applications. He is a Fellow of the Association for Computing Machinery, the American Association for Artificial Intelligence, the American Association for the Advancement of Science, the European Association for Theoretical Computer Science, the Institute of Electrical and Electronics Engineers, and the Society for Industrial and Applied Mathematics. He is a member of the US National Academy of Engineering and the National Academy of Sciences, the American Academy of Arts and Sciences, the European Academy of Sciences, and Academia Europaea. He holds honorary doctorates from Saarland University in Germany, Orléans University in France, UFRGS in Brazil, and the University of Liège in Belgium. He is currently a Senior Editor of the Communications of the ACM, after having served for a decade as Editor-in-Chief.
  • View slides

  • AI2: AI Safety and Robustness with Abstract Interpretation 
  • Martin Vechev (ETH Zurich, Switzerland, and DeepCode.ai)
  • Abstract
    I will present AI2 (ai2.ethz.ch), a new approach for ensuring safety and robustness of AI systems based on abstract interpretation. AI2 is based on symbolic over-approximation and can automatically reason about safety properties (e.g., robustness) of realistic deep neural networks (e.g., convolutional). In the talk I will discuss how to soundly and precisely approximate the behavior of fully-connected layers with rectified linear unit activations (ReLU), convolutional layers with ReLU, and max pooling layers, allowing AI2 to scale to larger networks. I will also present a new abstraction which allows a trade-off between scalability and precision. Finally, I will discuss several open research challenges in applying symbolic methods to AI systems.
  • Biography
    Martin Vechev is an Associate Professor at ETH Zurich, where he leads the SRL Lab. Prior to ETH, he was a researcher at the IBM T.J. Watson Research Center, USA. He obtained his PhD from Cambridge University, UK. His current research work spans a range of AI topics including: systems that combine programming languages and machine learning to solve tasks not previously possible, foundations and applications of probabilistic programming, and making AI systems more secure, reliable and interpretable. He is the recipient of various awards including an ERC Starting Grant, SIGPLAN and CACM Research Highlights, several faculty awards (Facebook, Google), an Atanasoff prize for outstanding young researchers awarded by the President of Bulgaria, and others. He is also a co-founder of Deepcode.ai, a start-up building the next-generation of AI-based programming tools.
  • View slides
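
As a small illustration of the abstract-interpretation idea in the abstract above, the sketch below propagates an input box through one fully-connected ReLU layer using the interval domain; the weights and input bounds are made up, and AI2 itself uses richer abstract domains (such as zonotopes) and also handles convolutional and max-pooling layers.

```python
# Interval abstract interpretation of one affine + ReLU layer: every concrete
# input inside the box [x_lo, x_hi] is guaranteed to produce an output inside
# the computed box [lo, hi].
import numpy as np

def affine_interval(W, b, lo, hi):
    """Sound interval bounds for W @ x + b when lo <= x <= hi (elementwise)."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_interval(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Made-up 2-input, 2-output layer and an input box (e.g. a perturbed input).
W = np.array([[1.0, -1.0], [0.5, 2.0]])
b = np.array([0.15, -0.1])
x_lo, x_hi = np.array([0.0, 0.2]), np.array([0.1, 0.3])

lo, hi = relu_interval(*affine_interval(W, b, x_lo, x_hi))
print(lo, hi)   # ReLU clips the first output's negative part to zero
```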

  • Ethically Aligned AI Systems 
  • Francesca Rossi (IBM and University of Padova, Italy), Turing Speaker
  • Abstract
    AI systems are increasingly acting in the same environment as humans, in areas as diverse as driving, assistive technology, and health care. Humans and machines will often need to work together and agree on common decisions. This requires alignment around some common moral values and ethical principles. Humans will more readily accept and trust machines that behave at least as ethically as other humans in the same environment. This would also make it easier for machines to explain their behavior in terms understandable by humans. Moreover, shared moral values and ethical principles will help in collective human-machine decision making, where consensus or compromise is needed. In this talk I will present some challenges in designing ethically aligned AI systems that humans can trust, and describe technical solutions for such challenges.
  • Biography
    Francesca Rossi is the AI Ethics Global Leader and a Distinguished Research Staff Member at IBM Research AI. She is also a professor of computer science at the University of Padova, Italy, currently on leave.

    Her research interests focus on artificial intelligence; specifically, they include constraint reasoning, preferences, multi-agent systems, computational social choice, and collective decision making. She is also interested in ethical issues in the development and behaviour of AI systems, in particular decision support systems for group decision making. She has published over 190 scientific articles in journals, conference proceedings, and book chapters. She has co-authored a book and edited 17 volumes, including conference proceedings, collections of contributions, special issues of journals, and the Handbook of Constraint Programming.

    She is a AAAI fellow, a EurAI fellow, and a 2015 Radcliffe fellow. She has been president of IJCAI and an executive councillor of AAAI. She is Editor-in-Chief of the Journal of AI Research (JAIR) and a member of the editorial boards of Constraints, Artificial Intelligence, AMAI, and KAIS. She co-chairs the AAAI committee on AI and ethics and is a member of the scientific advisory boards of the Future of Life Institute and of the Leverhulme Centre for the Future of Intelligence. She is on the executive committee of the IEEE global initiative on ethical considerations in the development of autonomous and intelligent systems, and belongs to the World Economic Forum Global Council on AI and Robotics. She is a member of the board of directors of the Partnership on AI, where she represents IBM as one of the founding partners.

  • Deep Learning Methods for Large Scale Theorem Proving 
  • Christian Szegedy (Google, USA)
  • Abstract
    Machine learning (with hand-engineered features) has become an enabling factor for reasoning in large theories. I will give an overview of recent work utilizing deep neural networks for both premise selection and guiding automated theorem provers. I will also highlight connections and recent efforts on utilizing natural language corpora for automated reasoning.
  • Biography
    Christian Szegedy is an artificial intelligence researcher at Google. His interests include deep-learning-based computer vision: he is the designer of the Inception computer vision architecture and a co-inventor of the MultiBox object detector and the DeepPose human pose estimator. He is also known for his work on adversarial examples and batch normalization. His most current research direction is focused on auto-formalizing mathematics by utilizing the latest AI methods at large scale.

  • Program Fairness – a Formal Methods Perspective
  • Aditya Nori (Microsoft Research Cambridge, UK)
  • Abstract
    With the range and sensitivity of algorithmic decisions expanding at breakneck speed, it is imperative that we seriously investigate fairness and bias in decision-making programs. First, we show that a number of recently proposed formal definitions of fairness can be encoded as probabilistic program properties. Second, with the goal of enabling rigorous reasoning about fairness, we design a novel technique for verifying probabilistic properties that admits a wide class of decision-making programs. Third, we present the first verification tool for automatically certifying that a program meets a given fairness property. We evaluate our tool on a range of decision-making programs. Our evaluation demonstrates the tool’s ability to verify fairness for a range of different programs, which we show are out of reach for state-of-the-art program analysis techniques.
  • Biography
    Aditya Nori is a principal researcher and member of the InnerEye team at MSR Cambridge. His research interests are in the design and analysis of reliable intelligent systems. Over the past few years, he has explored various synergies between programming languages, formal methods and machine learning. These include the use of machine learning in program verification and specification inference, and probabilistic programming. He has built a number of programmer productivity tools, including the Yogi software model checker that shipped with the Static Driver Verifier toolkit for Microsoft Windows, and the R2 probabilistic programming system. He is also a winner of ACM SIGSOFT Distinguished Paper awards at FSE 2006 and 2015. He is currently working on precise and robust medical image analysis for cancer treatment.
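
To give a flavour of the kind of probabilistic fairness property mentioned in the abstract, the sketch below estimates a group-fairness ratio for a made-up decision program and population model by Monte Carlo sampling; both models and the 0.8 threshold are illustrative assumptions, and the verification tool described in the talk certifies such properties symbolically rather than by sampling.

```python
# Estimate a group-fairness property, P(hire | minority) / P(hire | majority),
# for a hypothetical decision program over a hypothetical population model.
import random

def population():
    """Made-up probabilistic population model."""
    minority = random.random() < 0.3
    experience = random.gauss(10, 3) + (-2 if minority else 0)
    return minority, experience

def hire(experience):
    """Made-up decision-making program under analysis."""
    return experience > 9

def estimate_fairness_ratio(samples=200_000):
    hired = {True: 0, False: 0}
    seen = {True: 0, False: 0}
    for _ in range(samples):
        minority, experience = population()
        seen[minority] += 1
        hired[minority] += hire(experience)
    return (hired[True] / seen[True]) / (hired[False] / seen[False])

ratio = estimate_fairness_ratio()
print(ratio, "fair" if ratio > 0.8 else "possibly unfair")
```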

  • Safe Reinforcement Learning via Formal Methods
  • André Platzer (Carnegie Mellon University, USA)
  • Abstract
    Formal verification provides a high degree of confidence in safe system operation, but only if reality matches the verified model. Although a good model will be accurate most of the time, even the best models are incomplete. This is especially true in Cyber-Physical Systems because high-fidelity physical models of systems are expensive to develop and often intractable to verify. Conversely, reinforcement learning-based controllers are lauded for their flexibility in unmodeled environments, but do not provide guarantees of safe operation.
    This talk presents joint work with Nathan Fulton on provably safe learning that provides the best of both worlds: the exploration and optimization capabilities of learning along with the safety guarantees of formal verification. The main insight is that formal verification combined with verified runtime monitoring can ensure the safety of a learning agent. Verification results are preserved whenever learning agents limit exploration within the confines of verified control choices, as long as observed reality comports with the model used for off-line verification. When a model violation is detected, the agent abandons efficiency and instead attempts to learn a control strategy that guides the agent to a modeled portion of the state space.
  • Biography
    André Platzer’s research develops the logical foundations of cyber-physical systems to characterize their fundamental principles and to show how we can design computers that are guaranteed to interact correctly with the physical world. The solution to this challenge is the key to enabling computer assistance that we can bet our lives on. He pursues this challenge with the principled design of programming languages with logics that can provide proofs as correctness guarantees.
  • View slides
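
The sketch below is a toy rendering of the sandboxing idea in the abstract: a learning agent proposes actions, a runtime monitor derived from a simple stopping-distance argument admits only provably safe ones, and a known-safe fallback (full braking) is used otherwise. The dynamics, action set and safety condition are made-up assumptions; the talk's approach obtains the monitor from models verified in differential dynamic logic.

```python
# Shielded exploration: execute the learner's proposed action only if the
# safety monitor shows the car can still stop before the obstacle; otherwise
# fall back to full braking, which preserves the safety invariant.
import random

DT, MAX_BRAKE = 1.0, 2.0

def safe(pos, vel, accel, obstacle):
    """After applying `accel` for one step, can the car still stop in time?"""
    new_vel = max(vel + accel * DT, 0.0)
    new_pos = pos + new_vel * DT
    stopping_distance = new_vel ** 2 / (2 * MAX_BRAKE)
    return new_pos + stopping_distance < obstacle

def learner_action():
    return random.choice([-2.0, 0.0, 1.0])   # stand-in for an RL policy

pos, vel, obstacle = 0.0, 0.0, 20.0
for step in range(15):
    proposed = learner_action()
    accel = proposed if safe(pos, vel, proposed, obstacle) else -MAX_BRAKE
    vel = max(vel + accel * DT, 0.0)
    pos += vel * DT
    print(f"step {step:2d}: accel={accel:+.1f}  pos={pos:5.2f}  vel={vel:4.2f}")

assert pos < obstacle   # the monitor keeps every visited state safe
```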

  • What Just Happened in AI
  • Adnan Darwiche (University of California, Los Angeles, USA)
  • Abstract
    In this talk I will discuss my perspective on recent developments in artificial intelligence, which have been triggered by the successes of deep learning. The discussion will be based on a position paper that will appear in CACM with the title “Human-Level Intelligence or Animal-Like Abilities?” My presentation will hinge on a distinction between model-based and function-based approaches to AI, and will address a spectrum of issues and considerations, while placing current developments in a historical perspective. I will also share thoughts on how best to capitalize on current progress as we move forward, touching on both educational and research needs.
  • Biography
    Adnan Darwiche is a professor and chairman of the computer science department at UCLA. He directs the automated reasoning group, which focuses on probabilistic and logical reasoning and their applications to machine learning (http://reasoning.cs.ucla.edu/). Professor Darwiche is a AAAI Fellow, a former Editor-in-Chief of the Journal of Artificial Intelligence Research (JAIR), and author of “Modeling and Reasoning with Bayesian Networks” (Cambridge University Press). His group recently released a YouTube channel that features lectures, tutorials and talks.

Organisers


Programme

The programme is now available on EasyChair – view here.


Location

The Summit will take place in one of the main FLoC venues in the centre of Oxford – look out for updates in the coming weeks.
