Research


Whole Slide Image (WSI) in Digital Pathology
With a Focus on Generative AI and Graph Neural Networks

Whole slide imaging (WSI) has reshaped digital pathology by enabling detailed diagnosis from high-resolution scans of tissue samples. However, the extreme resolution of a WSI makes it difficult to process with traditional deep-learning architectures. Graph Neural Networks (GNNs) offer an alternative because of their capability to model relational data and capture spatial hierarchies. Research in this area focuses on developing methods to convert WSI data into graph representations, where nodes correspond to tissue regions and edges represent spatial relationships. The resulting GNN models can be applied to tasks such as tissue classification, cancer sub-typing, and grading.
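A minimal NumPy sketch of this patch-to-graph conversion. The function name, the k-nearest-neighbour construction, and the toy patch layout are illustrative assumptions, not a fixed pipeline; in practice, node features would come from a patch encoder rather than random vectors.

```python
import numpy as np

def patches_to_graph(coords, feats, k=4):
    # Nodes are tissue patches; edges connect each patch to its k nearest
    # neighbours in slide coordinates, capturing spatial adjacency.
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(dist, np.inf)          # no self-edges
    edges = set()
    for i in range(len(coords)):
        for j in np.argsort(dist[i])[:k]:
            edges.add((min(i, j), max(i, j)))   # undirected, deduplicated
    return feats, sorted(edges)

# toy slide: 6 patches on a 2x3 grid, each with a 3-dim feature vector
coords = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [2, 0], [2, 1]], float)
feats = np.random.rand(6, 3)
nodes, edges = patches_to_graph(coords, feats, k=2)
```

The node features and edge list produced here are exactly what graph libraries such as PyTorch Geometric or DGL expect as input to a GNN.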

Adversarial and Out-of-Distribution Robustness
With Application to Healthcare

Adversarial robustness has become a critical area of research in machine learning, particularly for Graph Neural Networks (GNNs), which are increasingly used for tasks involving relational data. GNNs, despite their powerful capabilities in capturing graph structures and dependencies, are vulnerable to adversarial attacks. These attacks involve deliberate modifications to the graph data, such as altering node features or edges, which can significantly degrade the performance of GNN models. Research in this field aims to understand the nature of these adversarial vulnerabilities and to develop robust GNN architectures that can withstand such attacks. Techniques explored include adversarial training, robust optimization, and the development of defense mechanisms specifically tailored for graph data. Studies also investigate the theoretical underpinnings of adversarial robustness in GNNs, providing insights into the trade-offs between model complexity and robustness. 
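To make the structural-attack idea concrete, here is a hedged NumPy sketch of a brute-force edge-flip attack on a single linear GCN layer: it greedily toggles the edge whose change most shifts a target node's output. Function names and the toy graph are assumptions for illustration; real attacks (e.g. gradient-based methods) scale far better than this exhaustive search.

```python
import numpy as np

def gcn_layer(A, X, W):
    # One linear GCN layer: D^{-1/2} (A + I) D^{-1/2} X W
    A_hat = A + np.eye(len(A))
    d_inv = np.diag(A_hat.sum(1) ** -0.5)
    return d_inv @ A_hat @ d_inv @ X @ W

def greedy_edge_attack(A, X, W, target, n_flips=1):
    # Toggle the edge whose flip most changes the target node's output;
    # this is the deliberate structural perturbation described above.
    A = A.copy()
    base = gcn_layer(A, X, W)[target]
    for _ in range(n_flips):
        best, best_delta = None, -1.0
        for i in range(len(A)):
            for j in range(i + 1, len(A)):
                A_try = A.copy()
                A_try[i, j] = A_try[j, i] = 1 - A_try[i, j]
                delta = np.abs(gcn_layer(A_try, X, W)[target] - base).sum()
                if delta > best_delta:
                    best, best_delta = (i, j), delta
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]
    return A

# toy graph: 4 nodes on a path, 3-dim features, 2-dim output
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
X = rng.standard_normal((4, 3))
W = rng.standard_normal((3, 2))
A_adv = greedy_edge_attack(A, X, W, target=0)
```

Adversarial training then amounts to including such perturbed graphs in the training set so the model learns to be insensitive to them.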

Specific Focus Area


Utilization of Graph Explainability: To ensure out-of-distribution (OOD) and adversarial robustness, the distribution of training graphs needs to be expanded. One way to expand the distribution is to perturb the graph structure or the node features. That, however, requires a scheme for identifying the important subgraphs and nodes. A key focus of our research is harnessing explainability methods to identify those important subgraphs and the nodes within them.
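The idea of explainability-guided perturbation can be sketched as follows. This is a minimal illustration under strong assumptions: a single linear GCN layer, a contribution-magnitude score standing in for a full explainer (e.g. GNNExplainer-style methods), and hypothetical function names.

```python
import numpy as np

def node_saliency(A, X, W, target):
    # Contribution of each node to the target's output under one linear
    # GCN layer -- a simple stand-in for a graph explainability method.
    A_hat = A + np.eye(len(A))
    d_inv = np.diag(A_hat.sum(1) ** -0.5)
    P = d_inv @ A_hat @ d_inv            # propagation matrix
    contrib = P[target][:, None] * (X @ W)
    return np.abs(contrib).sum(axis=1)   # one importance score per node

def saliency_guided_perturb(A, X, W, target, eps=0.1):
    # Perturb only the most important node's features, expanding the
    # graph distribution where it matters most.
    scores = node_saliency(A, X, W, target)
    top = int(np.argmax(scores))
    X_pert = X.copy()
    X_pert[top] += eps
    return X_pert, top

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
X = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 2))
X_pert, top = saliency_guided_perturb(A, X, W, target=0)
```

Restricting perturbations to high-importance nodes keeps the augmented graphs informative rather than uniformly noisy.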

Published Works

Ongoing Works

Smart Grid
With a Focus on Real-Time Decision Making, Online Learning, Learning under Uncertainty, and Event Diagnosis

Learning in the presence of missing data and uncertainty is of paramount importance in smart grid systems due to the complex and dynamic nature of modern power networks. Smart grids rely on vast amounts of data collected from diverse sources such as sensors, smart meters, and communication networks to optimize energy distribution, enhance reliability, and ensure efficient operation. However, these data sources often encounter issues like sensor failures, communication dropouts, and inaccurate readings, leading to missing or uncertain data. Robust learning algorithms capable of handling incomplete information are essential to maintain accurate predictions, effective control strategies, and timely decision-making processes.
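One simple way to handle incomplete sensor data, sketched below, is to impute missing readings while also exposing a missingness mask to the learner. This is an illustrative assumption about preprocessing, not the method used in any particular grid deployment; the function name and toy telemetry are hypothetical.

```python
import numpy as np

def impute_with_mask(X):
    # Fill missing sensor readings (NaN) with the per-sensor mean and
    # append a binary missingness mask, so a downstream learner can
    # condition on *which* readings were absent, not just their values.
    mask = np.isnan(X).astype(float)
    col_mean = np.nanmean(X, axis=0)
    X_filled = np.where(np.isnan(X), col_mean, X)
    return np.hstack([X_filled, mask])

# toy telemetry: 3 time steps x 2 sensors, one dropped reading
readings = np.array([[1.0, np.nan],
                     [3.0, 4.0],
                     [5.0, 6.0]])
augmented = impute_with_mask(readings)
```

Keeping the mask alongside the imputed values lets an online learner distinguish a genuinely average reading from a communication dropout.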