Assistant Professor
IIT Bombay
Talk Title: Old dog, Old tricks, New show: Fast 1st order methods for training Kernel Machines.
Abstract: Kernel Machines are a classical family of models in Machine Learning that overcome several limitations of Neural Networks. These models have regained popularity following some landmark results showing their equivalence to Neural Networks. We propose a state-of-the-art algorithm, EigenPro, based on gradient descent in the RKHS. This algorithm is much faster and requires less memory than previous attempts, and it enables training large-scale Kernel Machines on large datasets.
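For readers new to the setup, here is a minimal sketch of plain (unpreconditioned) gradient descent for kernel least squares, the baseline that EigenPro accelerates with a spectral preconditioner and stochastic sampling; the RBF kernel, toy data, and step size are illustrative assumptions, not the talk's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - z_j||^2)
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy regression problem
X = rng.normal(size=(200, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

K = rbf_kernel(X, X)
a = np.zeros(200)                          # coefficients of f = sum_i a_i k(x_i, .)
lr = 1.0 / np.linalg.eigvalsh(K).max()     # stable step size from the top eigenvalue

for _ in range(500):
    a -= lr * (K @ a - y)                  # functional gradient step in the RKHS

print(np.abs(K @ a - y).mean())            # training residual shrinks with iterations
```

The step size is bounded by the top kernel eigenvalue, which is exactly the bottleneck that EigenPro-style preconditioning is designed to remove.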
Bio: Parthe Pandit is a Thakur Family Chair Assistant Professor at the Center for Machine Intelligence and Data Science (C-MInDS) at IIT Bombay. He was a Simons postdoctoral fellow at UC San Diego. He obtained his PhD from UCLA, and his undergraduate education from IIT Bombay. He has received the AI2050 Early Career Fellowship from Schmidt Sciences in 2024, and the Jack K Wolf Student Paper Award at ISIT 2019.
Assistant Professor
IIT Goa
Talk Title: Scalable Simulation for Performance Modeling and Design Exploration
Abstract: Simulation plays a key role in systems research. This session will begin with a gentle introduction to Discrete-Event Simulation (DES), covering event-driven and cycle-based approaches and explaining why parallelizing DES is essential yet non-trivial. I will then introduce SiTAR, a parallel simulation framework that we have developed over several years, outline its key ideas, modeling language, and runtime, and share results on the modeling and parallel simulation of multicore and memory subsystem models for design exploration. The talk will conclude with an overview of our ongoing work on automated model generation and hybrid (discrete-continuous) simulation.
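As background for the gentle introduction promised above, here is a minimal event-driven DES kernel: a time-ordered event queue plus a loop that repeatedly pops the earliest event and runs its handler. The class and handler names are illustrative; SiTAR's actual modeling language and parallel runtime are far richer.

```python
import heapq

class Simulator:
    def __init__(self):
        self.now, self._queue, self._seq = 0.0, [], 0

    def schedule(self, delay, handler, *args):
        self._seq += 1   # tie-breaker keeps same-time events deterministic
        heapq.heappush(self._queue, (self.now + delay, self._seq, handler, args))

    def run(self, until=float("inf")):
        # Pop the earliest pending event and execute it, advancing time
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler, args = heapq.heappop(self._queue)
            handler(*args)

sim = Simulator()

def ping(i):
    print(f"t={sim.now:.1f}: event {i}")
    if i < 3:
        sim.schedule(1.0, ping, i + 1)   # events can schedule future events

sim.schedule(0.0, ping, 0)
sim.run()
```

Parallelizing this loop is what makes parallel DES hard: events at different logical processes must not be executed out of timestamp order, which forces synchronization.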
Bio: Neha Karanjkar is an Assistant Professor of Computer Science and Engineering at IIT Goa. She received her M.Tech. and Ph.D. in Electrical Engineering from IIT Bombay. Before joining IIT Goa, she was a postdoctoral fellow at the Robert Bosch Centre for Cyber-Physical Systems at IISc Bangalore and a research scientist at IIT Bombay, where she worked on the development of the indigenous 'Ajit' microprocessor. Her research focuses on modeling, simulation, and optimization of discrete event systems, with applications in computer systems design, networks, and digital twins of industrial and manufacturing processes. She develops frameworks for parallel discrete event simulation and for hybrid discrete-continuous system simulation. She serves on the ACM India Education Committee and is a senior member of IEEE.
Associate Professor
IIT Guwahati
Talk Title: Machine Learning for Electronic Design Automation
Abstract: The increasing complexity of integrated circuit (IC) design has made traditional Electronic Design Automation (EDA) approaches computationally intensive and time-consuming. Recent advances in Machine Learning (ML) are transforming the EDA landscape by enabling data-driven optimization, faster design closure, and improved accuracy in prediction-based tasks. This talk will provide an overview of how ML techniques are being integrated into various stages of the VLSI design flow, such as synthesis, placement, routing, and verification. Emphasis will be placed on key challenges, successful use cases, and future research directions where ML can significantly enhance design productivity and innovation in EDA.
Bio: Dr. Chandan Karfa is currently an Associate Professor in the Department of Computer Science and Engineering, IIT Guwahati, where he has been working since August 2016. Prior to that, he worked for five years as a Sr. R&D Engineer at Synopsys (India) Pvt. Ltd. He visited New York University during the summer of 2019. He obtained his MS and PhD degrees in Computer Science and Engineering from IIT Kharagpur. His research interests include electronic design automation (EDA), formal verification, high-level synthesis, hardware security, and ML for EDA. He has published more than eighty research papers in reputed international journals and conferences. He has received the Google India Research Award 2024 and the Google Silicon Research Award 2024, the Qualcomm Faculty Award in 2021, the TechnoInventor Award from the India Electronics and Semiconductor Association in 2014, the Innovative Student Projects Award from the Indian National Academy of Engineering in 2008 and 2013, Best Paper Awards at ADCOM 2007 and I-CARE 2013, best paper nominations at ASIANHOST 2020, CASES 2024, and VLSI 2025, and the Microsoft Research India PhD Fellowship in 2008. He is a senior member of IEEE.
Assistant Professor
IIT Jodhpur
Talk Title: Reimagining Computing through Systems that Drive AI and AI that Shapes Systems
Abstract: Artificial intelligence has not only transformed applications but has also redefined the design principles of modern computing systems. This presentation explores the emerging co-evolution of systems that drive AI and AI that shapes systems. It first discusses system-level innovations in AI accelerators, including dataflow architectures, memory hierarchies, and interconnect optimizations that enable scalable and energy-efficient model execution. In parallel, it examines the use of AI techniques to enhance system intelligence, with particular emphasis on learning-guided cache management and adaptive memory control. By integrating predictive models into runtime and architectural decisions, computing platforms can dynamically adapt to diverse workloads and access patterns, leading to more efficient and intelligent system behavior.
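To make "learning-guided cache management" concrete, here is a toy sketch in which the eviction victim is the block whose predicted next reuse is furthest away, with a per-block moving average of observed reuse distances standing in for the learned predictor. This is a didactic stand-in under assumed names, not the speaker's design.

```python
class PredictiveCache:
    def __init__(self, capacity):
        self.capacity, self.t = capacity, 0
        self.resident = set()
        self.last, self.ema = {}, {}      # last access time, reuse-distance EMA

    def access(self, block):
        self.t += 1
        hit = block in self.resident
        if block in self.last:            # online update of the reuse predictor
            gap = self.t - self.last[block]
            self.ema[block] = 0.5 * self.ema.get(block, gap) + 0.5 * gap
        self.last[block] = self.t
        if not hit:
            if len(self.resident) >= self.capacity:
                # Evict the block whose predicted next reuse is furthest away
                victim = max(self.resident,
                             key=lambda b: self.last[b] + self.ema.get(b, float("inf")))
                self.resident.remove(victim)
            self.resident.add(block)
        return hit

cache = PredictiveCache(capacity=2)
for b in "ABACAB":
    print(b, "hit" if cache.access(b) else "miss")
```

Real proposals replace the moving average with richer learned models, but the structural idea of feeding a predictor into the eviction decision is the same.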
Bio: Palash Das received his B.Tech. degree in CSE from the West Bengal University of Technology and an M.E. degree in CSE from the Indian Institute of Engineering Science and Technology, Shibpur. He completed his Ph.D. at IIT Guwahati in April 2022. Presently, he is an Assistant Professor in the Department of CSE, IIT Jodhpur. His research interests include Near-Memory Processing, Hardware Design, AI/ML accelerators, and Emerging Memory Technologies, under the broad domain of Computer Architecture.
Professor
TBD
Talk Title: Efficient Stochastic Machine Learning at the Edge
Abstract: In this talk, I will present some hardware/software work my group has done in the area of stochastic computing (SC) based machine learning acceleration. I will discuss the suitability of SC for this workload and how to deal with its inherent approximate nature, and briefly cover a few chip prototypes in which we leverage both logic and in-memory implementations of SC-based accelerators for dense as well as sparse compute.
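For context, the core trick of stochastic computing is that arithmetic becomes trivial logic on random bitstreams: a single AND gate multiplies two values encoded as independent Bernoulli streams. A minimal illustration (the stream length and values are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000                          # bitstream length: accuracy ~ O(1/sqrt(N))

def to_stream(p, n=N):
    # Encode a value p in [0, 1] as a random bitstream with P(bit = 1) = p
    return rng.random(n) < p

a, b = 0.6, 0.7
product = (to_stream(a) & to_stream(b)).mean()   # AND of independent streams
print(product)                                    # ~ 0.42 = a * b
```

The approximation error shrinks only as the square root of the stream length, which is the "inherent approximate nature" the talk refers to.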
Bio: Puneet Gupta received the B.Tech. degree in electrical engineering from the Indian Institute of Technology Delhi, New Delhi, India, in 2000, and the Ph.D. degree from the University of California at San Diego, San Diego, CA, USA, in 2007. He is currently a Faculty Member with the Electrical and Computer Engineering Department, University of California at Los Angeles. He co-founded Blaze DFM Inc., Sunnyvale, CA, USA, in 2004 and served as its Product Architect until 2007. He has authored over 200 papers, 18 U.S. patents, a book, and two book chapters in the areas of system-technology co-optimization as well as emerging computing architectures for machine learning. Dr. Gupta is an IEEE Fellow and was a recipient of the NSF CAREER Award, the ACM/SIGDA Outstanding New Faculty Award, the SRC Inventor Recognition Award, and the IBM Faculty Award. He has led the multi-university IMPACT+ Center, which focused on future semiconductor technologies. He currently leads the System Benchmarking theme within the SRC CHIMES JUMP 2.0 center.
Professor
IIT Delhi
Talk Title: Toward Faithful and Human-Interpretable Explanations for Graph Neural Networks
Abstract: Understanding why Graph Neural Networks (GNNs) make certain predictions remains a central challenge. Existing explainability methods, though insightful, often produce complex or large explanations that are difficult for humans to interpret, and they primarily focus on local reasoning around individual predictions. Yet, a GNN learns global reasoning patterns that govern its behavior across data. This motivates our broader vision: to design explainability algorithms that are both faithful to the model's reasoning and interpretable to humans. We first addressed this through GraphTrail, the first global, post-hoc GNN explainer that represents model behavior as Boolean formulae over subgraph-level concepts discovered using Shapley values, offering a symbolic understanding of GNN reasoning. However, graph datasets are inherently multi-modal, combining topology with rich node and edge attributes, and a graph with n nodes admits up to 2^n subgraphs, making the isolation of neural reasoning patterns combinatorially prohibitive. Consequently, GraphTrail is limited to small, labeled graphs. Our latest work, GNNXemplar, overcomes these challenges. Inspired by Exemplar Theory in cognitive science, it identifies representative nodes (exemplars) in the embedding space and derives interpretable natural language rules for their neighborhoods using large language models.
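For readers unfamiliar with Shapley values, the sketch below computes them exactly for a toy three-player "explanation importance" game. In GraphTrail the players would be subgraph-level concepts and v would measure the model's behavior when a subset of concepts is present; the value function here is made up for illustration, and the exponential sum over subsets is precisely why the 2^n growth above bites.

```python
from itertools import combinations
from math import factorial

players = ("triangle", "star", "chain")
vals = {frozenset(): 0.0,
        frozenset({"triangle"}): 0.6, frozenset({"star"}): 0.3,
        frozenset({"chain"}): 0.1,
        frozenset({"triangle", "star"}): 0.8,
        frozenset({"triangle", "chain"}): 0.7,
        frozenset({"star", "chain"}): 0.4,
        frozenset(players): 1.0}
v = lambda S: vals[frozenset(S)]

def shapley(i):
    # phi_i = sum over coalitions S not containing i of
    #         |S|! (n-|S|-1)! / n! * (v(S + i) - v(S))
    n, others, total = len(players), [p for p in players if p != i], 0.0
    for r in range(n):
        for S in combinations(others, r):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += w * (v(S + (i,)) - v(S))
    return total

for p in players:
    print(p, round(shapley(p), 3))   # importances sum to v(all) - v(empty)
```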
Bio: Sayan Ranu holds the joint positions of Nick McKeown Chair Professor at the Department of Computer Science and Engineering and the Yardi School of AI at IIT Delhi. His research interests span the broad area of machine learning and data mining for graphs. Sayan has received several awards and recognitions including the ACM India Early Career Researcher Award (Honorable Mention) 2024, the Indian National Science Academy (INSA) Associate Fellowship 2023, the Mrs. Veena Arora Early Career Faculty Research Award of IIT Delhi 2023, Associate of the Indian Academy of Sciences (IASc) 2020, the Teaching Excellence Award 2025, and the Most Reproducible Paper Award at SIGMOD 2018. Sayan regularly serves on the program committees and review panels of prestigious conferences and journals and has received Outstanding Reviewer Awards at WSDM 2021, VLDB 2022, ICLR 2024, and KDD 2025. Sayan has been granted 5 US patents.
Assistant Professor
IISc, Bangalore
Talk Title: On Stopping Times of Power-one Sequential Tests: Tight Lower and Upper Bounds
Abstract: Sequential hypothesis testing is a fundamental tool in statistics and machine learning. It enables decisions to be made from streaming data while controlling errors. But what is the minimum number of samples that you need to make such decisions with high confidence? In this talk, we will see two tight lower bounds on the stopping times of power-one sequential tests that guarantee small false positive rates and eventual detection under the alternative. These results extend the classical works (Wald, Farrell) to modern, fully nonparametric composite settings, using an information-theoretic quantity, which we call KL-inf, the minimal KL divergence between the null and alternative sets. We will also see sufficient conditions for designing tests that match these lower bounds. Given past work, these upper and lower bounds are unsurprising in their form; our main contribution is the generality in which they hold, for example, not requiring reference measures or compactness of the classes.
Bio: Shubhada Agrawal is an Assistant Professor in the Department of Electrical Communication Engineering (ECE) at IISc, Bangalore. She received her Ph.D. in Computer and Systems Science from TIFR, Mumbai, and her undergraduate degree in Mathematics and Computing from IIT Delhi. She was a postdoc in the Department of ISyE at Georgia Tech and in the Department of Statistics & Data Science at Carnegie Mellon University. Her research interests broadly lie in applied probability and sequential decision-making, including sequential hypothesis testing, multi-armed bandits, and reinforcement learning.
Associate Professor
IIT Kanpur
Talk Title: Statistical Inference for Stochastic Gradient Descent
Abstract: The stochastic gradient descent (SGD) algorithm is used for parameter estimation, particularly for massive datasets and online learning. Inference in SGD has been a generally neglected problem and has only recently started to get some attention. I will first introduce SGD for relatively simple statistical models and explain the limiting behavior of Averaged SGD. Then, I will present a memory-reduced batch-means estimator of the limiting covariance matrix that is both consistent and amenable to finite-sample corrections. Further, I will discuss the practical usability of error covariance matrices for problems where SGD is relevant, and present ongoing challenges in this area.
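As a concrete toy version of the pipeline described above, the sketch below runs SGD on a linear model, forms the Polyak-Ruppert average, and applies a plain (non-memory-reduced) batch-means estimator to the iterate path; the model, step-size schedule, and batch count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear regression y = x @ theta* + noise, fit by one-pass SGD
d, n, nbatch = 2, 100_000, 100
theta_star = np.array([1.0, -2.0])
theta = np.zeros(d)
path = np.empty((n, d))

for t in range(1, n + 1):
    x = rng.normal(size=d)
    y = x @ theta_star + rng.normal()
    theta -= 0.5 * t ** -0.51 * (x @ theta - y) * x   # Robbins-Monro step
    path[t - 1] = theta

avg = path.mean(axis=0)                               # Polyak-Ruppert average
means = path.reshape(nbatch, -1, d).mean(axis=1)      # batch means of iterates
bm = (n // nbatch) * np.cov(means.T)                  # batch-means covariance
print(avg, np.sqrt(np.diag(bm) / n))                  # estimate and std errors
```

Note that the plain estimator stores the whole iterate path; the memory-reduced variant in the talk is designed to avoid exactly that cost.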
Bio: I am an Associate Professor in the Department of Mathematics and Statistics at the Indian Institute of Technology, Kanpur. Previously, I was an NSF Postdoctoral Fellow with Prof. Gareth Roberts at the University of Warwick. I received my PhD from the University of Minnesota, Twin Cities, working with Prof. Galin Jones. My research interests include Markov chain Monte Carlo and output analysis for stochastic simulation. Recently, I have also been interested in stochastic optimization algorithms.
Associate Professor
IIT Patna
Talk Title: Harnessing Generative Intelligence for Healthcare: Models, Methods and Evaluations
Abstract: Generative AI, especially Large Language Models (LLMs) and Multimodal Language Models (MLMs), is creating exciting opportunities in healthcare. However, real-world use is still challenging due to the need for models that are compact, personalized, safe, and capable of handling multiple languages and data types. This work tackles these challenges in three main directions: building specialized models, developing advanced methods for key healthcare tasks, and creating strong evaluation benchmarks. First, we build a small, domain-specific language model for veterinary medicine, a field that is often overlooked. This model is trained from scratch with proper pretraining, fine-tuning, and safety alignment. Second, we design models for summarizing medical inputs that include both text and images, focusing on low-resource and code-mixed languages to help healthcare professionals better understand complex patient data. Finally, we introduce new benchmarks to evaluate model performance in medical settings, including a) M3Retrieve, a large multimodal retrieval benchmark across 5 domains and 16 medical fields, and b) a multilingual trust benchmark that covers 15 languages and 18 detailed tasks. Together, these efforts aim to make generative AI more practical, reliable, and inclusive for healthcare use.
Bio: Dr. Sriparna Saha is currently an Associate Professor in the Department of Computer Science and Engineering, Indian Institute of Technology Patna, India. Her current research interests include natural language processing, generative AI, multiobjective optimization, and biomedical information extraction. She regularly publishes papers in reputed conferences and journals. She is the recipient of the Fulbright-Nehru Academic and Professional Excellence Fellowship 2025, the Humboldt Research Fellowship, the Google India Women in Engineering Award 2008, INSA Young Associate 2024, INSA Distinguished Lecture Fellow 2025, the NASI Young Scientist Platinum Jubilee Award 2016, the SERB Women in Excellence Award 2018, and the Pattern Recognition Letters Associate Editor Award 2023. She has received several best paper awards at prestigious venues. For more information, please visit https://www.iitp.ac.in/~sriparna/.
Performance Engineer
AMD
Talk Title: Theory and Practice: Performance Engineering Beyond Steady State
Abstract: Performance models assume steady state and infinite resources. Real systems don't. Learn how performance engineers diagnose where theory breaks, design experiments to expose the gaps, and use that understanding to solve hard problems. We'll explore the principles through real debugging examples.
Bio: I am a Performance Engineer at AMD (formerly Xilinx), where I have worked for five years on FPGA-SoC components. My role involves modeling and simulating future NoC-memory subsystems and analyzing their performance. Previously, I spent seven years at Nvidia, specializing in instrumentation and diagnostic software for high-performance computing platforms. I hold a Bachelor of Technology in Computer Science and Engineering from MNNIT Allahabad.
Distinguished Technologist
HP Inc.
Talk Title: On Device Agents with Small Action Models (SAMs)
Abstract: The term Large Action Model (LAM) is typically used in the context of specific agentic workflows leveraging Large Language Models (LLMs) designed to accomplish tasks by interacting with tools/APIs, device controls, web pages, or user interfaces. Since such actions are often local and frequent, there is a compelling case for Small Action Models (SAMs) to unlock on-device, private, low-power, low-latency agents. However, small models pose significant challenges since they are often less reliable. This talk discusses the challenges in building high-quality, reliable on-device action models and presents an approach to building SAMs that are capable of complex query orchestration.
Bio: Niranjan Damera Venkata is currently an AI/ML Distinguished Technologist with the AI Lab at HP Inc., where he leads AI research in the area of efficient on-device large language models for HP AI PCs. His prior experience includes stints with the Future Technologies and Experiences (FT&E) group, the Digital Transformation Office, and HP Labs, working on the application of AI to areas such as agents, customer behaviour modelling, service operations, automated publishing, and computational displays. Niranjan has led AI/ML teams that have delivered several AI/ML solutions from concept to production. He is also the recipient of two ACM SIGWEB best paper awards. He holds a PhD degree in Electrical Engineering from the University of Texas at Austin and an MS in Management Science and Engineering from Stanford University.
Assistant Professor
IIT Dharwad
Talk Title: Adaptive and Efficient Deep Neural Network Inference on Heterogeneous Edge Platforms
Abstract: Deploying deep neural networks (DNNs) on edge devices presents unique challenges due to limited computational resources, diverse hardware architectures, and dynamic runtime conditions. This talk presents a comprehensive overview of our research efforts aimed at making DNN inference on heterogeneous edge platforms more efficient, adaptive, and responsive to real-world constraints.
Bio: Dr. Gayathri Ananthanarayanan is currently an Assistant Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Dharwad, Karnataka. She obtained her Ph.D. from the Indian Institute of Technology Delhi, and subsequently worked as a Postdoctoral Fellow at the School of Computing, National University of Singapore. Her research interests broadly span computer architecture and embedded systems. At present, her work focuses on hardware performance analysis and Edge AI, with an emphasis on hardware-software co-design techniques for the efficient deployment and execution of AI applications on mobile and embedded edge devices.
Assistant Professor
IIT Madras
Talk Title: Efficient Solutions for Machine Learning at the Edge
Abstract: The rapid growth of edge devices has created a dynamic, AI-powered data ecosystem with significant potential for societal advancement. However, privacy concerns restrict data sharing across multiple owners, hindering the full potential of AI. Furthermore, edge devices often have substantial resource constraints and heterogeneity, severely restricting their ability to handle large models. This talk will present innovative solutions to overcome these challenges and enable efficient and privacy-preserving ML in diverse edge settings. As a key highlight, we will address the following question: how can we enable federated learning of a large global model when each edge device can only train a small local model?
Bio: Dr. Saurav Prakash is an Assistant Professor in the Department of Electrical Engineering (EE) at IIT Madras and affiliated with the Center for Responsible AI (CeRAI). He received his BTech in EE from IIT Kanpur in 2016 and completed his PhD in EE at the University of Southern California (USC) in 2022. Afterwards, he was a postdoctoral researcher at the University of Illinois Urbana-Champaign (UIUC) for a couple of years before joining IIT Madras. His research interests span Information and Coding Theory, Machine Unlearning, Federated Learning, and Hyperbolic Geometry. Among his accolades, he is an ANRF PM Early Career Research Grant fellow, was an Institute for Genomic Biology (IGB) postdoctoral fellow at UIUC, received the Qualcomm Innovation Fellowship in 2021, and was one of the Viterbi-India Fellows in summer 2015.
Assistant Professor
IIT Delhi
Talk Title: From Guess to Guarantee: Fusing AI and AR for Automated Synthesis
Abstract: We increasingly entrust large parts of our daily lives to computer systems that are growing ever more complex. Developing scalable and trustworthy methods for designing, building, and verifying these systems is therefore crucial. In this talk, I will focus on automated synthesis, a technique that uses formal specifications to automatically generate systems, such as functions, programs, or circuits, that provably satisfy their requirements. Can we leverage AI, particularly machine learning and large language models, to propose candidate programs or functions? Yes. But can these candidates be guaranteed correct? Can we verify them, or even repair them, so that they provably meet the intended specifications? This talk will revolve around these questions: how AI and Automated Reasoning can be effectively fused to synthesize reliable systems.
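The guess-then-verify loop sketched in the abstract is classically realized as counterexample-guided inductive synthesis (CEGIS). Below is a minimal, self-contained instance using the Z3 SMT solver; the linear template, specification, and domain are toy assumptions, and in the talk's setting the "guess" step would be an ML model or LLM rather than a solver.

```python
from z3 import Int, Solver, Not, sat

x, a, b = Int("x"), Int("a"), Int("b")

def spec(av, bv, xv):
    # Specification: f(x) = a*x + b must satisfy f(x) > 2*x on 0 <= x <= 10
    return av * xv + bv > 2 * xv

counterexamples = [0]                     # seed input
while True:
    # Guess: propose (a, b) consistent with every counterexample seen so far
    synth = Solver()
    for cx in counterexamples:
        synth.add(spec(a, b, cx))
    assert synth.check() == sat           # the template admits a candidate
    m = synth.model()
    av = m.eval(a, model_completion=True).as_long()
    bv = m.eval(b, model_completion=True).as_long()

    # Verify: search the domain for an input where this candidate fails
    verify = Solver()
    verify.add(x >= 0, x <= 10, Not(spec(av, bv, x)))
    if verify.check() != sat:
        break                             # provably correct on the whole domain
    counterexamples.append(verify.model()[x].as_long())

print(f"synthesized f(x) = {av}*x + {bv}")
```

Each failed verification feeds a counterexample back into the guesser, so the loop converges to a candidate that the verifier can no longer refute, which is exactly the "from guess to guarantee" arc of the title.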
Bio: Priyanka Golia is an Assistant Professor in the Department of Computer Science and Engineering at IIT Delhi. Prior to that, she was a faculty member at the CISPA Helmholtz Center for Information Security. Her research focuses on Artificial Intelligence and Automated Reasoning.
Assistant Professor
IIT Madras
Talk Title: Towards practical and provable privacy preservation in the age of foundation models
Abstract: While foundation models and LLMs unlock new and unprecedented AI capabilities, they come with a substantially increased risk of memorising, regurgitating, and leaking privacy-sensitive data. Differential privacy, now a well-established standard for privacy protection, provides a principled solution to prevent such leakage, but is often computationally expensive (for good performance). I'll present some of our work on developing efficient and scalable algorithms for AI inference and fine-tuning to make differential privacy practical in the era of foundation models. First, I will describe our method for generating artificial text with LLMs that is statistically similar to real data while preserving privacy. Our algorithm reduces the computational overhead of differential privacy from roughly 100-1000x in prior work to about 4x, making deployment feasible at scale. Next, I will discuss fine-tuning with differential privacy, where we build on a recent approach that injects correlated Gaussian noise across stochastic gradient steps. Our variant reduces the time complexity from quadratic to nearly linear, while maintaining comparable accuracy and privacy guarantees. I will conclude with a brief outlook on ongoing and future directions.
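To fix ideas, here is a minimal DP-SGD-style update: per-example gradient clipping followed by Gaussian noise. The clip norm, noise multiplier, learning rate, and toy least-squares loss are all illustrative assumptions; the correlated-noise approach discussed in the talk replaces the i.i.d. noise drawn below with noise that is correlated across steps.

```python
import numpy as np

rng = np.random.default_rng(0)
C, sigma, lr = 1.0, 1.0, 0.1     # clip norm, noise multiplier, step size

def dp_step(theta, X, y):
    grads = (X @ theta - y)[:, None] * X                 # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / C)           # clip each to norm <= C
    noisy_sum = grads.sum(axis=0) + sigma * C * rng.normal(size=theta.shape)
    return theta - lr * noisy_sum / len(X)

theta = np.zeros(3)
X, y = rng.normal(size=(64, 3)), rng.normal(size=64)
for _ in range(200):
    theta = dp_step(theta, X, y)
print(theta)
```

Clipping bounds each example's influence, and the noise scale is calibrated to that bound; this per-example bookkeeping is the main source of the computational overhead the talk aims to reduce.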
Bio: Krishna Pillutla is an assistant professor and the Narayanan Family Foundation Fellow at the Wadhwani School of Data Science & AI at IIT Madras. He received his PhD from the University of Washington, his MS from Carnegie Mellon University, and his BTech from IIT Bombay, and has spent time at Google's and Meta's AI research labs. He received the AI2050 Early Career Fellowship from Schmidt Sciences (2025), the NeurIPS Outstanding Paper Award (2021), and a J.P. Morgan PhD Fellowship (2019-20).
Assistant Professor
IIT Gandhinagar
Talk Title: Federated Learning, Robustness and Incentives
Abstract: Federated Learning (FL) enables collaborative model training without centralizing data, but its distributed nature introduces new challenges. In this talk, I will discuss two key aspects: robustness and incentives. Robustness focuses on ensuring reliable learning under data heterogeneity and adversarial updates. Incentive design addresses how to motivate self-interested or strategic clients to contribute high-quality data and computation truthfully. I will outline recent approaches to each and reflect on how integrating the two can lead to more stable, fair, and trustworthy federated systems.
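As one classical robustness device of the kind discussed here, coordinate-wise median aggregation limits how far any single poisoned update can drag the global model; the sketch below is a textbook baseline with synthetic data, not the speaker's own method.

```python
import numpy as np

rng = np.random.default_rng(0)

def robust_aggregate(updates):
    # Coordinate-wise median over client updates: a single extreme update
    # cannot move the aggregate arbitrarily far, unlike the plain mean.
    return np.median(np.stack(updates), axis=0)

honest = [np.ones(4) + 0.1 * rng.normal(size=4) for _ in range(8)]
byzantine = [np.full(4, 100.0)]                       # one adversarial client
print(robust_aggregate(honest + byzantine))           # stays near the honest updates
print(np.mean(np.stack(honest + byzantine), axis=0))  # the mean is dragged away
```

Incentive design then asks the complementary question: how to reward clients so they submit the honest updates in the first place.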
Bio: Manisha Padala is an Assistant Professor of Computer Science at the Indian Institute of Technology Gandhinagar. She received her B.Tech. in Electrical Engineering from IIT Jodhpur in 2017 and her Ph.D. from IIIT Hyderabad in 2023, during which she interned at Google and Adobe. Her research group works on problems at the intersection of machine learning and fairness, focusing on fairness measures, fair and private classifiers, and fairness in federated settings. The group also applies ideas from mechanism design and deep learning to design systems that are resistant to manipulation while ensuring efficiency and fairness in contexts of auctions, crowdfunding, and resource allocation.
TBD
Talk Title: Efficient Serving of Large Language Models at Scale
Abstract: Serving Large Language Models (LLMs) at scale involves managing a wide range of workloads with varying performance requirements. Applications such as interactive chatbots demand low latency, while tasks like document generation can tolerate longer response times. This talk presents work from the M365 Research group on optimizing LLM inference infrastructure to efficiently balance these diverse needs. We introduce a dynamic framework that combines short-term request routing with long-term resource management, including GPU scaling and model placement, guided by optimization techniques and a fairness-aware scheduler.
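As a deliberately simplified picture of the short-term routing layer, the sketch below sends each request to the replica with the least outstanding work, measured in tokens (join-shortest-queue); all names are illustrative assumptions, and the actual framework additionally handles GPU scaling, model placement, and fairness-aware scheduling.

```python
class Router:
    def __init__(self, n_replicas):
        self.outstanding = [0] * n_replicas     # queued tokens per replica

    def route(self, request_tokens):
        # Join-shortest-queue: pick the least-loaded replica
        r = min(range(len(self.outstanding)), key=self.outstanding.__getitem__)
        self.outstanding[r] += request_tokens
        return r

    def complete(self, replica, request_tokens):
        self.outstanding[replica] -= request_tokens

router = Router(n_replicas=3)
for tokens in [1200, 300, 800, 50]:
    print(f"request of {tokens} tokens -> replica {router.route(tokens)}")
```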
Bio: Anjaly is a Senior Researcher at M365 Research, leading applied research at the intersection of efficiency and reliability in cloud services. Anjaly works on data-driven optimizations to ensure continuous availability of cloud services and improve the efficiency of cloud infrastructure running diverse workloads, including the newly emerged Large Language Model workloads. Before joining Microsoft, Anjaly was a postdoctoral fellow with the Computational and Information Sciences Directorate at the U.S. Army Research Laboratory from September 2019 to August 2021, where she was also part of the Director's Strategic Initiative program spanning theory, modeling, and experimentation. Anjaly completed graduate studies in the Department of Aerospace Engineering at the Indian Institute of Science, Bangalore, and has authored more than 25 publications in artificial intelligence, control systems, and optimization.
Professor
IISc, Bangalore
Talk Title: Scalable and Interactive 3D Ocean Data Visualization
Abstract: Oceanographers struggle with the scalability required for visual analysis of massive and multivariate ocean model output for tasks like event tracking and phenomenon identification. Our research addresses this challenge by introducing a novel methodology centered on two key innovations: integrating specialized domain-specific analysis modules directly as efficient ParaView filters, and developing parallel solutions that leverage the computing resources available to the analyst. This approach culminated in the development of pyParaOcean, an extensible and easily deployable visualization system that leverages ParaView's parallel processing capabilities. We demonstrate the utility of this research with a Bay of Bengal case study and present scaling studies that confirm the high efficiency of the system.
Bio: Vijay Natarajan is a Professor in the Department of Computer Science and Automation at the Indian Institute of Science, Bangalore. He received his Ph.D. in computer science from Duke University and holds a B.E. degree in computer science and an M.Sc. degree in mathematics from BITS Pilani. His research interests include scientific visualization, computational geometry, and computational topology. In his current work, he is developing topological methods for time-varying and multi-field data visualization and studying applications in biology, material science, and climate science.
Associate Professor
IISc, Bangalore
Talk Title: From Radiance Fields to Gaussian Splatting: Learning 3D Scenes from Sparse Inputs
Abstract: Neural Radiance Fields (NeRFs) offer remarkable scene reconstructions through differentiable volumetric rendering. Yet, their reliance on dense image capture and heavy optimization limits practical deployment. This talk will first introduce the foundations of NeRFs (their scene representation, rendering pipeline, and learning dynamics) and then discuss advances that extend NeRFs to sparse-input settings. I will also introduce 3D Gaussian Splatting, a recent point-based alternative enabling real-time rendering, and highlight our recent work on sparse-input 3D Gaussian Splatting.
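For orientation, the differentiable volumetric rendering at the heart of NeRFs composites a learned density sigma and view-dependent color c along each camera ray r(t) = o + t d (standard notation from the NeRF literature):

```latex
C(\mathbf{r}) \;=\; \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t),\mathbf{d})\,dt,
\qquad
T(t) \;=\; \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right),
```

where T is the accumulated transmittance along the ray. 3D Gaussian Splatting replaces this per-ray integral with rasterized alpha-blending of anisotropic 3D Gaussians, which is what enables real-time rendering.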
Bio: Rajiv Soundararajan received the B.E. degree in Electrical and Electronics Engineering from the Birla Institute of Technology and Science (BITS), Pilani, India in 2006. He received the M.S. and Ph.D. degrees in Electrical and Computer Engineering from The University of Texas at Austin, USA in 2008 and 2012, respectively. Between 2012 and 2015, he was with Qualcomm Research India, Bangalore. He is currently an Associate Professor at the Indian Institute of Science, Bangalore. He received the 2016 IEEE Circuits and Systems for Video Technology Best Paper Award and the 2017 IEEE Signal Processing Letters Best Paper Award. He also received a Technology and Engineering Emmy Award from the National Academy of Television Arts & Sciences in 2021 for the "Development of Perceptual Metrics for Video Encoding Optimization". His research interests are broadly in image and video signal processing, computer vision, machine learning, and information theory.