Click here to access the Proceedings of ICISS'18

Springer has made the conference proceedings freely accessible for a period of 4 weeks following the conference.
ICISS 2018 Schedule

PDF Copy

There will be four keynote speeches and two tutorials during the conference. We are happy to announce the keynote and tutorial speakers.

Atul Prakash

Prof. Atul Prakash,
Department of EECS,
University of Michigan-Ann Arbor

Atul Prakash is a Professor in the EECS Department at the University of Michigan, where he has served on the faculty since 1989. He earned a Ph.D. from the University of California, Berkeley, in 1989 and a B.Tech. from IIT Delhi in 1982. His research interests are in security and privacy, cyber-physical systems, computer-supported cooperative work, and distributed systems. Most recently, he has been working on security for the Internet of Things and on the security and privacy of financial websites.

Atul Prakash's talk is titled Robust Physical-World Attacks on Deep Learning Visual Classifiers and Detectors. Recent studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, which result from small-magnitude perturbations added to the input. Given that emerging physical systems use DNNs in safety-critical situations such as autonomous driving, adversarial examples could mislead these systems and cause dangerous situations. It was unclear, however, whether these attacks could be effective against real-world objects, with some researchers finding that such attacks fail to translate to the physical world in practice.

The talk will report on findings for generating such adversarial examples that can be physically realized using techniques such as stickers placed on real-world traffic signs. With a perturbation in the form of only black and white stickers, we modified real stop signs, causing targeted misclassification in over 80% of the video frames obtained from a moving vehicle (a field test) against two state-of-the-art image classifiers, LISA-CNN and GTSRB-CNN. Recent results suggest that object detectors, such as YOLO, are also susceptible to physical perturbation attacks. The talk will discuss some of the implications of this work for the design of robust classifiers and detectors for safety-critical applications.
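
To make the notion of a small-magnitude perturbation concrete, here is a minimal Python (PyTorch) sketch of the classic fast gradient sign method (FGSM). This is a generic digital attack, not the physical sticker attack from the talk, and the model, input, and label below are toy stand-ins.

import torch
import torch.nn as nn

# Toy stand-in classifier; any differentiable model works the same way.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 3, 32, 32, requires_grad=True)  # placeholder "image"
y = torch.tensor([3])                             # placeholder true label

# Gradient of the loss with respect to the input pixels.
loss_fn(model(x), y).backward()

eps = 0.03  # L-infinity perturbation budget
x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()  # one-step attack

# On a trained model, the adversarial input typically flips the prediction.
print("clean prediction:", model(x).argmax(1).item())
print("adversarial prediction:", model(x_adv).argmax(1).item())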


Sriram Rajamani

Sriram Rajamani, 
Managing Director and Distinguished Scientist, 
Microsoft Research India

Sriram Rajamani is a Distinguished Scientist and the Managing Director of Microsoft Research India. His research interests are in designing, building, and analyzing computer systems in a principled manner. Over the years he has worked on various topics, including hardware and software verification, type systems, language design, distributed systems, security and privacy, cloud security, and probabilistic programming.

Together with Tom Ball, he received the 2011 CAV Award for “contributions to software model checking, specifically the development of the SLAM/SDV software model checker that successfully demonstrated computer-aided verification techniques on real programs.” Sriram was elected an ACM Fellow in 2015 for contributions to software analysis and defect detection. Recently, he was elected a Fellow of the Indian National Academy of Engineering.

Sriram holds a Ph.D. from UC Berkeley, an M.S. from the University of Virginia, and a B.Eng. from the College of Engineering, Guindy, all with a specialization in Computer Science. He was General Chair of POPL 2015 in India and Program Co-Chair of CAV 2005. He co-founded the Mysore Park Series and the ISEC conference series in India, and served on the CACM editorial board until recently.

Sriram Rajamani's talk is titled Specifying and Checking Data Use Policies. Cloud computing has changed the goals of security and privacy research. The primary concerns have shifted to protecting data in terms of not only who gets to access it, but also how it is used. While the former can be specified using access-control logics, the latter is a relatively new and unexplored topic.

The talk will describe a language called Legalese, which we designed to specify data use policies in cloud services. Legalese uses propositional logic together with type-state to specify constraints on the use, retention, and combination of data. Next, the talk will describe a notion called Information Release Confinement (IRC), which can be used to specify that data does not leave a region except through specific channels such as API calls. IRC has been used to specify and verify the confidentiality of cloud services that use Intel SGX enclaves. Finally, the talk will speculate on combining these two approaches to specify and check stateful data-use policies in cloud services.
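
As a rough illustration of the type-state idea (not actual Legalese syntax, which the talk covers), the following Python sketch tracks the state of a data item and rejects operation sequences that violate a toy policy; the operation names and the policy itself are invented for this example.

# Toy policy: data may be "store"d only after being "scrub"bed, and
# combined data may be stored but never "share"d.
ALLOWED_NEXT = {
    "raw":      {"scrub", "combine"},
    "scrubbed": {"store", "combine"},
    "combined": {"store"},            # sharing combined data is forbidden
}

TRANSITION = {"scrub": "scrubbed", "combine": "combined", "store": None, "share": None}

def check(trace, state="raw"):
    """Return True iff every operation in the trace respects the policy."""
    for op in trace:
        if op not in ALLOWED_NEXT.get(state, set()):
            return False
        state = TRANSITION[op] or state  # terminal ops keep the current state
    return True

print(check(["scrub", "store"]))    # True: stored only after scrubbing
print(check(["store"]))             # False: stored raw data
print(check(["combine", "share"]))  # False: shared combined data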

Slides are available: Click here


Prateek Saxena

Prateek Saxena, 
Assistant Professor, Computer Science Division, 
National University of Singapore

Prateek Saxena is an Assistant Professor in the Computer Science Department at the National University of Singapore. He works on computer security and its intersection with formal methods and programming languages. His current research projects are on cryptocurrencies, trusted computing, binary analysis, and web security. He received his Ph.D. in Computer Science from the University of California, Berkeley, in 2012, and visited Microsoft Research Redmond during the summer of 2015.

Prateek Saxena's talk is titled On the Security of Blockchain Consensus Protocols. Blockchain protocols, which originated with Bitcoin, allow a large network of computers to agree on the state of a shared ledger. Applications utilizing blockchains embrace a semantics of immutability: once something is committed to the blockchain, it cannot be reversed without extensive effort from a majority of the computers connected to it. These protocols embody the vision of a global “consensus computer” to which arbitrary machines with no pre-established identities can connect to offer their computational resources (in return for a fee), without dependence on any centralized authority. Despite this openness, the computational infrastructure strives to offer failure resistance against arbitrarily malicious actors. Security is at the heart of these protocols and of the applications built on them, as they now support an economy valued at several hundred billion dollars.

Theoretical frameworks should guide the construction of practical systems, and the last decade of work on designing blockchain protocols highlights the importance of this interplay. The talk distills the essence of the problem of designing secure blockchain consensus protocols, which strive toward lower latency and better scalability, and presents key results that have surfaced in the last decade, offering a retrospective view of how consensus protocols have evolved. It will examine a central question: is Bitcoin's original consensus protocol (often called Nakamoto consensus) secure, and if so, under which conditions? There have been many folklore claims, for instance, that Nakamoto consensus is categorically secure up to 50% adversarial power, beyond which “51% attacks” violate its guarantees. Careful analysis, however, has dispelled many such claims, and the quest for more scalable and secure consensus protocols has ensued. The talk will review some of these construction paradigms and open problems, focusing mostly on protocols designed to operate in the open or permissionless setting, which limits adversaries by computational power only.
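
As background for the discussion, here is a minimal Python sketch of the two mechanisms at the core of Nakamoto consensus: hash-based proof of work and the longest-chain fork-choice rule. The difficulty value and block structure are simplified stand-ins; real protocols add transaction validation, difficulty adjustment, and network rules.

import hashlib

DIFFICULTY = 2 ** 240  # hashes below this 256-bit threshold count as valid proofs of work

def mine(prev_hash, payload):
    """Grind nonces until the block's hash falls below the difficulty threshold."""
    nonce = 0
    while True:
        h = hashlib.sha256(f"{prev_hash}|{payload}|{nonce}".encode()).hexdigest()
        if int(h, 16) < DIFFICULTY:
            return {"prev": prev_hash, "payload": payload, "nonce": nonce, "hash": h}
        nonce += 1

def fork_choice(chains):
    """Longest-chain rule: adopt the fork with the most blocks (a stand-in
    for most accumulated work, since difficulty is constant here)."""
    return max(chains, key=len)

genesis = {"prev": "0" * 64, "payload": "genesis", "nonce": 0, "hash": "0" * 64}
chain = [genesis]
for i in (1, 2):
    chain.append(mine(chain[-1]["hash"], f"block {i}"))

print(len(chain), "blocks; tip:", chain[-1]["hash"][:16])
print(fork_choice([chain, chain[:2]]) is chain)  # the longer fork wins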

Slides are available: Click here


V.S. Subrahmanian

V.S. Subrahmanian,
Professor, Department of Computer Science,
Dartmouth

Subrahmanian joins Dartmouth from the University of Maryland, where he built a career over three decades as a professor in the Department of Computer Science and the University of Maryland Institute for Advanced Computer Studies, which he directed from 2003 to 2010; he was also founding co-director of the Laboratory for Computational Cultural Dynamics and founding director of the Center for Digital International Government. He is a fellow of the American Association for the Advancement of Science and the Association for the Advancement of Artificial Intelligence, and his research has been funded by the Defense Advanced Research Projects Agency (DARPA) and the U.S. Army and Navy, among others.

A prolific scholar, Subrahmanian is author or coauthor of 140 peer-reviewed journal papers. In 2015, he published The Global Cyber-Vulnerability Report, an analysis of more than 20 billion reports generated by 4 million computers in 44 countries, and led a team that won DARPA’s Twitter Influence Bot Detection Challenge, a competition to develop means of identifying and eliminating propaganda-spreading automated social media accounts. His work has been featured in national media, including PBS’s Nova, NBC Nightly News, The New York Times, The Washington Post, and The Wall Street Journal, among other outlets. He received his doctorate from Syracuse University, and was awarded a National Science Foundation National Young Investigator Award and the Distinguished Young Scientist Award from the Maryland Science Center/Maryland Academy of Science.

V.S. Subrahmanian's talk is titled Bots, Socks, and Vandals: An Overview of Malicious Actors on the Web. Online social networks and e-commerce platforms are increasingly targeted by malicious actors with a wide variety of goals. Bots on Twitter may seek to illicitly influence opinion. Sock-puppet accounts on online discussion forums (e.g., discussion threads on online news articles) may help push certain points of view. Vandals on Wikipedia may seek to inject false material into otherwise legitimate pages. Review fraud in online forums may illicitly promote a product or destroy a competing product's reputation. The bulk of this talk will focus on identifying review fraud on online e-commerce platforms such as Amazon, eBay, and Flipkart. Because an increase of one star in a product rating can, on average, lead to a 5–9% increase in revenues, vendors have strong incentives to generate fake reviews.

The talk will present both an unsupervised model and a supervised model to identify users who generate fake reviews. We show that our framework, called REV2, achieves high performance in real-world experiments. In addition, a report of 150 suspected review-fraud accounts on Flipkart was independently evaluated by Flipkart's anti-fraud team, who confirmed that 127 of the predictions were correct.
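
For intuition, here is a simplified Python sketch of the mutually recursive scores at the heart of REV2: user fairness, product goodness, and rating reliability, each defined in terms of the others and iterated to a fixed point. The update weights, toy ratings, and the omission of the paper's smoothing and prior terms are simplifications made for this example.

# ratings: (user, product, score), with scores rescaled to [-1, 1]
ratings = [("u1", "p1", 1.0), ("u2", "p1", 0.8), ("spam", "p1", -1.0),
           ("u1", "p2", -0.6), ("spam", "p2", 1.0)]

users = {u for u, _, _ in ratings}
products = {p for _, p, _ in ratings}
F = {u: 1.0 for u in users}               # fairness of each user, in [0, 1]
G = {p: 0.0 for p in products}            # goodness of each product, in [-1, 1]
R = {(u, p): 1.0 for u, p, _ in ratings}  # reliability of each rating, in [0, 1]

for _ in range(20):  # iterate the mutually recursive updates to a fixed point
    for p in products:
        rs = [(u, s) for u, q, s in ratings if q == p]
        G[p] = sum(R[u, p] * s for u, s in rs) / sum(R[u, p] for u, _ in rs)
    for u, p, s in ratings:
        # A rating is reliable if its author is fair and it agrees with goodness.
        R[u, p] = 0.5 * F[u] + 0.5 * (1 - abs(s - G[p]) / 2)
    for u in users:
        mine = [q for v, q, _ in ratings if v == u]
        F[u] = sum(R[u, q] for q in mine) / len(mine)

print(sorted(F.items()))  # the "spam" user ends up with the lowest fairness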

Sockpuppet accounts, multiple accounts operated by a single individual or corporate “puppetmaster”, are also a popular mechanism for inappropriately swaying opinion on online platforms. For instance, social “botnets” commonly use multiple “sock” accounts to implement coordinated bots, and sockpuppet accounts are also commonly used by trolls. The talk will report on recent work on the characteristics and properties of sockpuppet accounts through a study of data from the Disqus platform, which powers discussion threads and forums on a host of news and other websites. Sockpuppets are often used in such contexts to artificially boost an opinion or to generate controversy. The talk will also briefly discuss the use of bots in real-world influence campaigns, along with methods to detect them.

Finally, the speaker will discuss the problem of vandals on Wikipedia. Although prior research has addressed automated detection of individual acts of vandalism, the talk describes VEWS, a Wikipedia Vandal Early Warning System that seeks to detect vandals as early as possible, preferably before they commit any acts of vandalism. The talk will show that VEWS outperforms prior work, and that when combined with prior work it predicts vandals with very high accuracy. The talk will conclude with a discussion of the different types of malicious actors on the web.

Slides are available: Click here



Nishanth Chandran

Nishanth Chandran,
Researcher, Microsoft Research,
India

Nishanth Chandran is a Researcher at Microsoft Research, India. His research interests are in problems related to cryptography, cloud security and distributed algorithms. Prior to joining MSRI, Nishanth was a Researcher at AT&T Labs, and before that he was a Post-doctoral Researcher at MSR Redmond.

Nishanth is a recipient of the 2010 Chorafas Award for exceptional achievements in research, and his work has received coverage in venues such as Nature and MIT Technology Review. He has published several papers in top computer science conferences and journals, including Crypto, Eurocrypt, CCS, TCC, STOC, FOCS, SIAM Journal on Computing, and the Journal of the ACM. His work on position-based cryptography was selected as one of the top three works and invited to QIP 2011 as a plenary talk. Nishanth has served on the technical program committees of many of the top cryptography conferences on several occasions. He holds 2 US patents and has 4 pending US patent applications. He received his Ph.D. and M.S. in Computer Science from UCLA and his B.E. in Computer Science and Engineering from Anna University, Chennai.

Nishanth Chandran and Divya Gupta will present a tutorial on Secure Multiparty Computation. The tutorial will survey the fascinating area of secure two-party and multi-party computation, which allows parties to compute functions on their joint data without revealing their private inputs. The speakers will present some of the cryptographic techniques in the area and highlight current research problems and directions.
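
To give a flavor of the techniques surveyed, here is a minimal Python sketch of one classic building block, additive secret sharing over a prime field, which lets parties compute a sum of private inputs without revealing them. The modulus, party count, and the sum functionality are choices made for this example, not material from the tutorial itself.

import random

P = 2 ** 61 - 1  # a prime modulus for the field of shares

def share(secret, n=3):
    """Split a secret into n random additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two parties secret-share their inputs; each share-holder adds its shares
# locally, so the sum is computed without either input being revealed.
a_shares, b_shares = share(42), share(99)
sum_shares = [(x + y) % P for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 141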

Slides are available: Click here



Divya Gupta

Divya Gupta,
Researcher, Microsoft Research,
India

Divya Gupta is a Researcher at Microsoft Research India. Before joining MSR, she was a postdoc at UC Berkeley, hosted by Sanjam Garg. She completed her Ph.D. at the University of California, Los Angeles, advised by Amit Sahai, and earned her B.Tech. and M.Tech. from the Indian Institute of Technology, Delhi. Her research interests are in cryptography, security, and theoretical computer science.

Together with Nishanth Chandran, Divya Gupta will present the tutorial on Secure Multiparty Computation described above.

Slides are available: Click here



Somesh Jha

Somesh Jha,
Professor, Computer Sciences Department,
University of Wisconsin-Madison

Somesh Jha holds the Sheldon B. Lubar Chair Professorship in Computer Sciences at the University of Wisconsin-Madison. He earned his Ph.D. in 1996 from Carnegie Mellon University and his B.Tech. in 1985 from IIT Delhi. His main areas of interest are security, privacy, formal methods, and software engineering. He has recently been working on security and privacy issues in machine-learning systems. He is the recipient of numerous awards, including the 2015 Computer-Aided Verification (CAV) Award for his development of counterexample-guided abstraction refinement, and numerous best-paper awards at the premier venues in security and software engineering. He is a Fellow of both the ACM and the IEEE and has published over 150 articles in highly refereed conferences and prominent journals.

Somesh Jha will present a tutorial on Adversarial Machine Learning. Fueled by massive amounts of data, models produced by machine-learning (ML) algorithms, especially deep neural networks, are being used in diverse domains where trustworthiness is a concern, including automotive systems, finance, health care, natural language processing, and malware detection. Of particular concern is the use of ML algorithms in cyber-physical systems (CPS), such as self-driving cars and aviation, where an adversary can cause serious consequences. The tutorial will address the following question: what happens to machine-learning algorithms in the presence of a malicious adversary? This area of machine learning is called adversarial ML (AML), and interest in it has grown dramatically in recent years. The tutorial will survey recent results in the field and close with some open problems.
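
As a taste of the defensive side of AML, here is a minimal Python (PyTorch) sketch of adversarial training, which augments each training step with adversarial examples generated on the fly; the model, data, and single-step attack below are toy stand-ins.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, y, eps=0.1):
    """One-step attack used to generate training-time adversarial examples."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(8, 1, 28, 28)        # placeholder batch of "images"
y = torch.randint(0, 10, (8,))      # placeholder labels

for _ in range(3):                  # a few illustrative training steps
    x_adv = fgsm(x, y)
    # Train on both clean and adversarial versions of the batch.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final combined loss:", float(loss))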

Slides are available: Click here