VERification, TEsting, Concurrency, Security / Semantics (VERTECS²) is a research group in the Department of Computer Science and Engineering at the Indian Institute of Technology Delhi. We work in the areas of formal verification, concurrency, programming languages, and security. Our group currently has three core faculty researchers and four Ph.D. students.
Faculty Members
Affiliated Faculty
Ph.D. Students
M.Tech. Students
Graduated Students
Shibashis Guha, currently at the Hebrew University of Jerusalem.
Topic | Schedule | Discussion Lead | References / Annotated Notes
Linear Temporal Logic | 13 Apr 2017 | Divyanshu | Principles of Model Checking (Chapter 5)
NetKAT | TBA | Arun Shankar | A Formal System for the Verification of Networks
Dependent Types in Databases | TBA | Chandrika |
Compactness Theorem & Lindström Theorem | TBA | Divyanshu |
Martin-Löf Type Theory | TBA | Chandrika |
Galois Connection | TBA | Chandrika |
Regular Properties | Mar 2017 | Divyanshu | Principles of Model Checking (Chapter 4)
Linear Time Properties | Feb 2017 | Sanjana | Principles of Model Checking (Chapter 3)
Modeling Concurrent Systems | Jan 2017 | Sanjana | Principles of Model Checking (Chapter 2)
Coinduction & Coalgebra | Sept-Dec 2016 | Divyanshu | Universal Coalgebra
Partial Order Reduction | 26 & 30 May 2016 | Chinmay |
A Decentralized Model for Information Flow Control | 4 July 2015 | Chandrika | Paper
Certification of Programs for Secure Information Flow | 26 June 2015 | Chandrika | Paper
A Lattice Model of Secure Information Flow | 19 June 2015 | Chandrika | Paper
Succinct Representation of Concurrent Trace Sets | 22 May 2015 | Dr. Subodh Sharma |
Nominal Modal Logic | Covered | Divyanshu |
Testing Equivalences | Covered | Divyanshu | Observation Equivalences as Testing Equivalences
KAT | Covered | Prof. S. Arun Kumar |
Date | Presenter | Presentation Title | Abstract

17/08/2016 | Prof. Sorav Bansal | Program Synthesis from Polymorphic Refinement Types | We present a method for synthesizing recursive functions that provably satisfy a given specification in the form of a polymorphic refinement type. We observe that such specifications are particularly suitable for program synthesis for two reasons. First, they offer a unique combination of expressive power and decidability, which enables automatic verification, and hence synthesis, of non-trivial programs. Second, a type-based specification for a program can often be effectively decomposed into independent specifications for its components, causing the synthesizer to consider fewer component combinations and leading to a combinatorial reduction in the size of the search space. At the core of our synthesis procedure is a new algorithm for refinement type checking, which supports specification decomposition. We have evaluated our prototype implementation on a large set of synthesis problems and found that it exceeds the state of the art in terms of both scalability and usability. The tool was able to synthesize more complex programs than those reported in prior work (several sorting algorithms and operations on balanced search trees), as well as most of the benchmarks tackled by existing synthesizers, often starting from a more concise and intuitive user input.
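The abstract's core idea, using a specification to prune the space of candidate programs, can be caricatured in a few lines. This is only an illustrative sketch, not the Synquid algorithm (which checks refinement types statically rather than by testing); the names `satisfies_spec` and `synthesize` are hypothetical.

```python
# Illustrative sketch: a refinement-style spec as an executable predicate
# that rejects candidates from an enumerated search space.
# (Synquid does this statically via refinement type checking.)

candidates = [
    lambda x, y: x,
    lambda x, y: y,
    lambda x, y: x + y,
    lambda x, y: x if x >= y else y,
]

def satisfies_spec(f, samples):
    """Refinement for 'max': the result r satisfies r >= x, r >= y,
    and r is one of the inputs."""
    for x, y in samples:
        r = f(x, y)
        if not (r >= x and r >= y and r in (x, y)):
            return False
    return True

def synthesize(candidates, samples):
    """Return the first candidate consistent with the specification."""
    for f in candidates:
        if satisfies_spec(f, samples):
            return f
    return None

samples = [(0, 1), (3, 2), (-5, -5)]
best = synthesize(candidates, samples)
print(best(3, 7))  # the surviving candidate behaves like max
```

The point of the paper is that a rich specification (here, the three conjuncts) eliminates most candidates early, shrinking the search combinatorially.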

24/08/2016 | Prof. Subodh Sharma | Statistical Similarity of Binaries | We address the problem of finding similar procedures in stripped binaries. We present a new statistical approach for measuring the similarity between two procedures. Our notion of similarity allows us to find similar code even when it has been compiled using different compilers, or has been modified. The main idea is to use similarity by composition: decompose the code into smaller comparable fragments, define semantic similarity between fragments, and use statistical reasoning to lift fragment similarity into similarity between procedures. We have implemented our approach in a tool called Esh, and applied it to find various prominent vulnerabilities across compilers and versions, including Heartbleed [3], Shellshock [5] and Venom [7]. We show that Esh produces high-accuracy results, with few to no false positives, a crucial factor in the scenario of vulnerability search in stripped binaries.
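The decompose-then-lift pipeline can be sketched with a toy fragment notion. This is not how Esh measures similarity (it uses semantic, not syntactic, fragment comparison and statistical lifting); the k-gram decomposition and Jaccard score below are stand-ins chosen for brevity.

```python
# Illustrative sketch of "similarity by composition": decompose two
# procedures into small fragments (here, k-grams of instruction tokens),
# then lift fragment overlap into a procedure-level score.

def fragments(tokens, k=3):
    """All contiguous k-token fragments of a procedure."""
    return {tuple(tokens[i:i + k]) for i in range(len(tokens) - k + 1)}

def similarity(proc_a, proc_b, k=3):
    """Jaccard index over the two procedures' fragment sets."""
    fa, fb = fragments(proc_a, k), fragments(proc_b, k)
    if not fa or not fb:
        return 0.0
    return len(fa & fb) / len(fa | fb)

a = "mov load add store ret".split()
b = "mov load add add store ret".split()   # slightly modified variant of a
c = "push call pop ret nop xor".split()    # unrelated procedure

print(round(similarity(a, b), 2))  # high overlap with the variant
print(round(similarity(a, c), 2))  # no overlap with unrelated code
```

Even this crude version shows the shape of the argument: a modified variant shares many fragments with the original, while unrelated code shares none.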

31/08/2016 | Deepak Ravi | Liquid Types | We present Logically Qualified Data Types, abbreviated to Liquid Types, a system that combines Hindley-Milner type inference with predicate abstraction to automatically infer dependent types precise enough to prove a variety of safety properties. Liquid types allow programmers to reap many of the benefits of dependent types, namely static verification of critical properties and the elimination of expensive run-time checks, without the heavy price of manual annotation. We have implemented liquid type inference in DSOLVE, which takes as input an OCAML program and a set of logical qualifiers and infers dependent types for the expressions in the OCAML program. To demonstrate the utility of our approach, we describe experiments using DSOLVE to statically verify the safety of array accesses on a set of OCAML benchmarks that were previously annotated with dependent types as part of the DML project. We show that when used in conjunction with a fixed set of array bounds checking qualifiers, DSOLVE reduces the amount of manual annotation required for proving safety from 31% of program text to under 1%.
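The central idea, a refinement as a conjunction of qualifiers drawn from a fixed set, can be illustrated concretely. DSOLVE infers such conjunctions statically over an OCaml program; the sketch below only checks which array-bounds qualifiers hold on observed values, and all names in it are hypothetical.

```python
# Illustrative sketch of the liquid-types idea: a dependent type is a base
# type refined by the conjunction of logical qualifiers (from a fixed set)
# that can be shown to hold. Here the qualifiers are array-bounds checks
# on an index variable v used to access an array a.

QUALIFIERS = {
    "0 <= v":      lambda v, a: 0 <= v,
    "v < len(a)":  lambda v, a: v < len(a),
    "v <= len(a)": lambda v, a: v <= len(a),
}

def infer_refinement(values, a):
    """Keep exactly the qualifiers satisfied by every observed value of v."""
    return sorted(q for q, pred in QUALIFIERS.items()
                  if all(pred(v, a) for v in values))

arr = [10, 20, 30]
indices = [0, 1, 2]  # every index used to access arr
print(infer_refinement(indices, arr))
```

When the inferred conjunction includes `0 <= v` and `v < len(a)`, every access `a[v]` is provably in bounds, which is precisely the run-time check the paper eliminates.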

07/09/2016 | Manjeet Dahiya | Types from Data: Making Structured Data First-Class Citizens in F# | Most modern applications interact with external services and access data in structured formats such as XML, JSON and CSV. Static type systems do not understand such formats, often making data access more cumbersome. Should we give up and leave the messy world of external data to dynamic typing and run-time checks? Of course not! We present F# Data, a library that integrates external structured data into F#. As most real-world data does not come with an explicit schema, we develop a shape inference algorithm that infers a shape from representative sample documents. We then integrate the inferred shape into the F# type system using type providers. We formalize the process and prove a relative type soundness theorem. Our library significantly reduces the amount of data access code and it provides additional safety guarantees when contrasted with the widely used weakly typed techniques.
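The kind of shape inference the abstract describes can be sketched in miniature: walk representative sample documents, record the types seen for each field, and mark a field optional when some sample lacks it. This is only a toy version of the paper's algorithm (which handles nesting, heterogeneous lists, and integration via F# type providers); `infer_shape` and the sample data are hypothetical.

```python
# Toy shape inference from sample documents: infer a field -> type map
# from JSON-like records, marking a field optional when it is missing
# from some sample (F# Data's algorithm is far more complete).

def infer_shape(samples):
    shape = {}
    for doc in samples:
        for key, value in doc.items():
            shape.setdefault(key, set()).add(type(value).__name__)
    n = len(samples)
    return {
        key: {"types": sorted(types),
              "optional": sum(key in doc for doc in samples) < n}
        for key, types in shape.items()
    }

samples = [
    {"name": "Ada", "age": 36},
    {"name": "Alan", "age": 41, "email": "alan@example.org"},
]
shape = infer_shape(samples)
print(shape["email"]["optional"])  # True: only some samples carry it
```

A type provider then surfaces such an inferred shape as ordinary static types, so that accessing a mandatory field type-checks while an optional one must be handled explicitly.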
