Bengaluru Crypto Day, Edition 4

Indian Institute of Science

1st July, 2024

Register here

Welcome to the 4th edition of Bengaluru Crypto Day! We will have a day full of exciting talks on topics in cryptography, presented by leading researchers.

Speakers


Venue

CSA Seminar hall, room #112


Schedule

Time Speaker Title
09:00 - 09:30 Welcome

Research presentations

09:30 - 10:30 Manoj
CASE: A New Frontier in Public-Key Authenticated Encryption

We introduce a new cryptographic primitive, called Completely Anonymous Signed Encryption (CASE). CASE is a public-key authenticated encryption primitive that offers anonymity for senders as well as receivers. A “case-packet” should appear, without a (decryption) key for opening it, to be a black box that reveals no information at all about its contents. To fully decase a case-packet, so that the message is retrieved and authenticated, a verification key is also required. Defining security for this primitive is subtle. We present a relatively simple Chosen Objects Attack (COA) security definition. Validating this definition, we show that it implies a comprehensive indistinguishability-preservation definition in the real-ideal paradigm. To obtain the latter definition, we extend the Cryptographic Agents framework of [AAP15, APY16] to allow maliciously created objects. We also provide a novel and practical construction for COA-secure CASE under standard assumptions in public-key cryptography, in the standard model. We believe CASE can be a staple in future cryptographic libraries, thanks to its robust security guarantees and efficient instantiations based on standard assumptions. Based on joint work with Shashank Agrawal, Shweta Agrawal, Rajeev Raghunath, and Jayesh Singla, which appeared at TCC 2023.
10:30 - 11:30 Sayak
Differential Privacy in Learning from Preference Feedback

Learning from preference feedback has recently gained considerable traction as a promising approach to align generative models with human interests. Instead of relying on numerical rewards, the generative models are trained using reinforcement learning with human feedback (RLHF). These approaches first solicit feedback from human labellers, typically in the form of pairwise comparisons between two possible actions, then estimate a reward model using these comparisons, and finally employ a policy based on the estimated reward model. An adversarial attack on any step of the above pipeline might reveal private and sensitive information about the human labellers. In this work, we adopt the notion of label differential privacy (DP) and focus on the problem of reward estimation from preference-based feedback while protecting the privacy of each individual labeller. Specifically, we consider the parametric Bradley-Terry-Luce (BTL) model for such pairwise comparison feedback involving a latent reward parameter, and within a standard minimax estimation framework, we provide tight upper and lower bounds on the error in estimating the parameter under the constraint of DP.
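For readers unfamiliar with the models named in the abstract above, here is a minimal illustrative sketch in plain Python: the BTL comparison probability is a logistic function of the reward gap, and label DP is realised here via randomized response, one standard label-DP mechanism (chosen for illustration; not necessarily the mechanism analysed in the talk). All function names are hypothetical.

```python
import math
import random

def btl_prob(r_i, r_j):
    """BTL model: probability that item i is preferred over item j,
    given latent rewards r_i and r_j (logistic in the reward gap)."""
    return 1.0 / (1.0 + math.exp(-(r_i - r_j)))

def randomized_response(label, epsilon):
    """Label DP via randomized response: keep the binary comparison
    label with probability e^eps / (1 + e^eps), otherwise flip it."""
    keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return label if random.random() < keep else 1 - label

# A labeller compares two actions with latent rewards 1.0 and 0.0:
p = btl_prob(1.0, 0.0)                  # sigmoid(1) ≈ 0.731
label = 1 if random.random() < p else 0  # the labeller's raw answer
private_label = randomized_response(label, epsilon=1.0)
```

The reward-estimation problem in the talk is then to recover the latent rewards from many such privatized labels, which is what the minimax upper and lower bounds quantify.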
11:30 - 11:45 Tea/Coffee break
11:45 - 12:30 Divya
Practical Secure Machine Learning

With the rise of data silos, it is becoming increasingly important to enable private data collaboration, i.e., securely computing on data owned by different entities without any exchange or sharing of data in the clear. While, in theory, secure multiparty computation (MPC) enables this scenario with strong formal security guarantees, its general application suffers from many challenges, namely performance, scalability, and ease of use. In my talk, I will primarily focus on computations occurring in collaborative machine learning, namely ML inference, training, and validation. Over the last decade, the crypto and security communities have worked hard to address these challenges for secure machine learning. In fact, one of our recent works shows that secure inference has reached a tipping point: for certain model classes, secure inference has only a small overhead over cleartext inference. In another work, we improve the latency and scalability of secure transformer inference by more than an order of magnitude and enable secure inference of the GPT-2 model in 1.6 seconds. My talk will discuss these recent developments, how we got there, and what problems remain.
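As a toy illustration of the additive secret sharing that underlies many MPC protocols (a minimal sketch in plain Python with hypothetical names; real secure-inference systems involve far more machinery):

```python
import random

P = 2**61 - 1  # a prime modulus; all arithmetic is in the field Z_P

def share(x, n=3):
    """Additively secret-share x among n parties: any n-1 shares look
    uniformly random, but all n shares sum to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares mod P."""
    return sum(shares) % P

# Parties can add two shared secrets locally, share by share,
# without ever seeing either secret in the clear:
a_shares = share(20)
b_shares = share(22)
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 42
```

Addition is "free" in this scheme because it is purely local; multiplication of shared values is where MPC protocols pay in communication, which is one source of the performance challenges the abstract mentions.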
12:30 - 14:00 Lunch Break

Interactive sessions

14:00 - 15:30 Session-1: Round robin interaction with researchers
15:30 - 16:30 Session-2: AMA with researchers/speakers
16:30 - 17:00 High Tea

Registration

Register here


Organizers

Bhavana Kanukurthi

(IISc Bengaluru) [Email: bhavana at iisc dot ac dot in]

Chaya Ganesh

(IISc Bengaluru) [Email: chaya at iisc dot ac dot in]

Dhinakaran Vinayagamurthy

(IBM India Research Lab) [Email: dvinaya1 at in dot ibm dot com]

Nishanth Chandran

(Microsoft Research India) [Email: nichandr at microsoft dot com]

Sikhar Patranabis

(IBM India Research Lab) [Email: sikhar.patranabis at ibm dot com]

How to reach