Abstract: Reconstructing cellular dynamics from sparsely sampled single-cell sequencing data is a major challenge in biology. Classical dynamical models, despite their superior interpretability and predictive power for perturbation analysis, face challenges due to the curse of dimensionality and insufficient observations. Can we revitalize such models in the era of single-cell data science by taking advantage of artificial intelligence?
In this talk, I will introduce our recent efforts to dynamically integrate sampled cell-state distributions through generative AI, highlighting exciting opportunities in both algorithm development and theoretical innovation. I will begin by presenting a framework that employs flow-based generative models to uncover the underlying dynamics (i.e., partial differential equations) of scRNA-seq data, and demonstrate a dimensionless solver capable of inferring continuous cell-state transitions, as well as proliferation and apoptosis, from real datasets.
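To make the underlying idea concrete, here is a minimal toy sketch (not the actual solver described in the talk): a population of cells is represented as weighted particles, a velocity field transports them through state space (continuous cell-state transitions), and a growth rate reweights them over time (proliferation/apoptosis). The `velocity` and `growth` functions below are hypothetical stand-ins for what a flow-based generative model would learn from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity(x, t):
    # Hypothetical drift: cells relax toward an attractor state at (1, 1).
    return np.array([1.0, 1.0]) - x

def growth(x, t):
    # Hypothetical net growth rate: zero at the attractor, net loss
    # (apoptosis-like) for cells far from it.
    return -0.5 * np.linalg.norm(x - 1.0, axis=1)

x = rng.normal(0.0, 0.3, size=(500, 2))  # initial cell states (n cells, 2 genes)
w = np.ones(500)                         # per-cell mass weights
dt, T = 0.01, 1.0

# Forward-Euler transport of the weighted particle ensemble:
# states follow dx/dt = velocity, masses follow dw/dt = growth * w.
for step in range(int(T / dt)):
    t = step * dt
    x = x + dt * velocity(x, t)
    w = w * np.exp(dt * growth(x, t))
```

After one time unit the population mean has moved most of the way toward the attractor, and the weights record which regions of state space lost mass, which is the kind of joint transition-plus-population-size signal the talk's framework extracts from real snapshots.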
For spatial transcriptomics, we have further extended this framework by developing stVCR, which addresses the critical challenge of aligning snapshots collected from (1) different biological replicates and (2) distinct temporal stages. By aligning spatial coordinates across transcriptomic snapshots, stVCR enables interpretable reconstruction and simulation of cell differentiation, growth, and migration in physical space, effectively generating a "video" of tissue development from limited static "images." This approach will be illustrated through applications in axolotl brain regeneration and 3D Drosophila embryo development.
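The spatial-alignment step can be illustrated with its simplest rigid-registration building block (this is only an illustrative component, not stVCR itself, which handles far more general settings): given two snapshots of the same tissue whose coordinates differ by an unknown rotation and translation, orthogonal Procrustes analysis recovers the transform from an SVD.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "replicate 1" spot coordinates, and "replicate 2" obtained by
# an unknown rotation + translation (standing in for a misaligned snapshot).
ref = rng.normal(size=(200, 2))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
moved = ref @ R_true.T + np.array([2.0, -1.0])

# Orthogonal Procrustes: center both point sets, then read the optimal
# rotation off the SVD of the cross-covariance matrix.
a = ref - ref.mean(axis=0)
b = moved - moved.mean(axis=0)
U, _, Vt = np.linalg.svd(a.T @ b)
R = (U @ Vt).T                      # estimated rotation

# Map replicate 1 into replicate 2's frame.
aligned = a @ R.T + moved.mean(axis=0)
```

With noise-free synthetic data the recovered transform reproduces `moved` up to numerical precision; real snapshots additionally require matching which spot corresponds to which, which is where optimal-transport-style formulations enter.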
To further infer stochastic dynamics from static data, we explore a regularized unbalanced optimal transport (RUOT) formulation and its theoretical connections to the Schrödinger Bridge and diffusion models. I will also introduce a generative deep-learning solver designed for this problem, with applications in single-cell analysis.
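The stochastic, unbalanced setting can likewise be sketched with a toy weighted-particle simulation: cell states follow a stochastic differential equation (Euler–Maruyama discretization below), while a growth rate lets total population mass change over time, which is the "unbalanced" ingredient that distinguishes RUOT from classical optimal transport. The drift and growth functions here are hypothetical placeholders for quantities a generative solver would learn.

```python
import numpy as np

rng = np.random.default_rng(2)

n, d = 1000, 1
x = rng.normal(0.0, 0.5, size=(n, d))   # initial cell states
logw = np.zeros(n)                      # log mass weight per particle

sigma, dt, T = 0.5, 0.01, 1.0
drift = lambda x: -x                    # hypothetical drift (OU-like relaxation)
g = lambda x: 0.2 * np.ones(len(x))     # hypothetical constant net growth rate

# Euler-Maruyama for dX = drift dt + sigma dW, with mass evolving as
# d(log w) = g dt, so total mass is free to grow or shrink.
for _ in range(int(T / dt)):
    x = x + dt * drift(x) + sigma * np.sqrt(dt) * rng.normal(size=x.shape)
    logw = logw + dt * g(x)

relative_mass = np.exp(logw).mean()     # population mass relative to t = 0
```

Setting `g` to zero recovers a mass-conserving diffusion, i.e. the Schrödinger-bridge regime mentioned above; a nonzero `g` is what the unbalanced formulation adds.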