I am a physicist and a machine learning researcher. I am currently a third-year PhD student at MIT and IAIFI, advised by Max Tegmark. My research interests lie at the intersection of artificial intelligence (AI) and physics (and science more generally):
1. Physics of AI. Understanding AI from physical principles: "AI as simple as physics";
2. Physics for AI. Physics-inspired AI: "AI as natural as physics";
3. AI for physics. Boosting physics with AI: "AI as powerful as physicists".
Serving the ultimate goal of building a better world with AI + Physics, I am interested in a broad range of topics, including but not limited to discovering physical laws, physics-inspired generative models, machine learning theory, and mechanistic interpretability. I have formed close collaborations not only with physicists (condensed matter/high energy/quantum computation), but also with computer scientists, biologists, neuroscientists, and climate scientists, because I appreciate the merits of interdisciplinary collaboration. I give talks at many venues, and my work has been covered by top media outlets. I publish papers in both top physics journals and AI conferences. I serve as a reviewer for IEEE, Physical Review, NeurIPS, ICLR, etc. I co-organized the AI4Science workshop at NeurIPS 2021 and ICML 2022.
Before my PhD, I interned at Microsoft Research Asia. Before that, I obtained my B.S. from the School of Physics at Peking University. Before that, my memory is sealed in my hometown, Wuhan, China.
My work has received wide public attention and has been featured on social media, in the news, and on podcasts.
Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks (Ziming Liu*, Mikail Khona*, Ila R. Fiete, Max Tegmark)
Comment: To examine whether it is possible to grow brain-like anatomical modularity, we apply a recent machine learning method, brain-inspired modular training (BIMT), to a network being trained to solve a set of compositional cognitive tasks. We find that functional and anatomical clustering emerge together, such that functionally similar neurons also become spatially localized and interconnected.
Grokking as Compression: A Nonlinear Complexity Perspective (Ziming Liu*, Ziqian Zhong*, Max Tegmark)
Comment: We attribute grokking, the phenomenon where generalization is much delayed after memorization, to compression. We define linear mapping number (LMN) to measure network complexity, which is a generalized version of linear region number for ReLU networks. LMN can nicely characterize neural network compression before generalization.
A Neural Scaling Law from Lottery Ticket Ensembling (Ziming Liu, Max Tegmark)
Comment: Neural scaling laws (NSL) refer to the phenomenon where model performance improves with scale. We propose a mechanism for neural scaling laws based on lottery ticket ensembling, and use it to explain Chinchilla scaling laws.
Scientific discovery in the age of artificial intelligence (Wang et al.)
Comment: A review article on AI for Science.
The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks (Ziqian Zhong*, Ziming Liu*, Max Tegmark, Jacob Andreas)
Comment: Some networks trained to perform modular addition implement a familiar Clock algorithm; others implement a previously undescribed, less intuitive, but comprehensible procedure we term the Pizza algorithm, or a variety of even more complex procedures.
Restart Sampling for Improving Generative Processes (Yilun Xu, Mingyang Deng, Xiang Chen, Yonglong Tian, Ziming Liu, Tommi Jaakkola)
Comment: We propose a novel sampling algorithm called Restart in order to better balance discretization errors and contraction. The sampling method alternates between adding substantial noise in additional forward steps and strictly following a backward ODE. Empirically, the Restart sampler surpasses previous SDE and ODE samplers in both speed and accuracy.
Discovering New Interpretable Conservation Laws as Sparse Invariants (Ziming Liu, Patrick Obin Sturm, Saketh Bharadwaj, Sam Silva, Max Tegmark)
Comment: We propose the Sparse Invariant Detector (SID), an algorithm that auto-discovers conservation laws from differential equations. For two examples in fluid mechanics and atmospheric chemistry, SID discovers 14 and 3 conserved quantities, respectively, where only 12 and 2 were previously known to domain experts.
GenPhys: From Physical Processes to Generative Models (Ziming Liu, Di Luo, Yilun Xu, Tommi Jaakkola, Max Tegmark)
Comment: We introduce GenPhys, which can convert any smooth physical process into a generative model.
The Quantization Model of Neural Scaling (Eric Michaud, Ziming Liu, Uzay Girit and Max Tegmark)
Comment: We propose the Quantization Model of neural scaling laws, explaining both the observed power law dropoff of loss with model and data size, and also the sudden emergence of new capabilities with scale.
Poisson Flow Generative Models (Yilun Xu*, Ziming Liu*, Max Tegmark and Tommi Jaakkola)
Comment: We propose a new generative model called Poisson Flow Generative Models (PFGM), inspired by high-dimensional electromagnetism! The model achieves SOTA performance (in terms of both quality and speed) within the flow family.
Omnigrok: Grokking Beyond Algorithmic Data (Ziming Liu, Eric J. Michaud and Max Tegmark)
Comment: We aim to understand grokking from neural loss landscapes, and successfully induce grokking beyond algorithmic datasets.
Towards Understanding Grokking: An Effective Theory of Representation Learning (Ziming Liu, Ouail Kitouni, Niklas Nolte, Eric J. Michaud, Max Tegmark, Mike Williams)
Comment: We aim to understand grokking from the perspective of effective theories and phase transitions of representation learning.
Second Order Ensemble Langevin Method for Sampling and Inverse Problems (Ziming Liu, Yixuan Wang and Andrew Stuart)
Comment: We propose a sampling method based on an ensemble approximation of second order Langevin dynamics.
Physics-augmented Learning: A new paradigm beyond physics-informed learning (Ziming Liu, Yunyue Chen, Yuanqi Du and Max Tegmark)
Comment: We propose a learning framework that unifies the already successful physics-informed learning paradigm with a novel paradigm called physics-augmented learning.
Machine-Learning Non-Conservative Dynamics for New-Physics Detection (Ziming Liu, Bohan Wang, Meng Qi, Wei Chen, Max Tegmark and Tie-Yan Liu)
Comment: We present the Neural New-Physics Detector (NNPhD), a machine learning algorithm for decomposing conservative and non-conservative forces. NNPhD is a natural extension of the Lagrangian Neural Network.
AI Poincaré: Machine Learning Conservation Laws from Trajectories. (Ziming Liu and Max Tegmark)
Comment: We present AI Poincaré, a machine learning algorithm for auto-discovering conserved quantities using trajectory data from unknown dynamical systems. We released our code on PyPI; you can install the aipoincare package by running pip install aipoincare.
Schrödinger PCA: You Only Need Variances for Eigenmodes (Ziming Liu, Sitian Qian, Yixuan Wang, Yuxuan Yan and Tianyi Yang)
Comment: We make an intriguing connection between quantum mechanics and principal component analysis.
Quantum-Inspired Hamiltonian Monte Carlo for Bayesian Sampling (Ziming Liu and Zheng Zhang)
Comment: What happens when quantum mechanics meets Hamiltonian Monte Carlo? The quantum mass achieves better sampling results on spiky and multi-modal distributions.
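For readers unfamiliar with the baseline, here is a minimal sketch of classical Hamiltonian Monte Carlo sampling a 1D standard Gaussian. This illustrates only the textbook algorithm that the paper builds on; the quantum-inspired variant modifies the mass (kinetic-energy) term, which is not shown here, and all names below are illustrative.

```python
import numpy as np

# Textbook HMC sketch: sample from N(0, 1) via leapfrog + Metropolis.
def neg_log_prob(x):
    return 0.5 * x ** 2          # -log p(x) for N(0, 1), up to a constant

def grad_neg_log_prob(x):
    return x

def hmc_sample(n_samples=2000, step_size=0.2, n_leapfrog=20, seed=0):
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(n_samples):
        p = rng.normal()                     # resample momentum (unit mass)
        x_new, p_new = x, p
        # leapfrog integration of Hamiltonian dynamics
        p_new -= 0.5 * step_size * grad_neg_log_prob(x_new)
        for _ in range(n_leapfrog - 1):
            x_new += step_size * p_new
            p_new -= step_size * grad_neg_log_prob(x_new)
        x_new += step_size * p_new
        p_new -= 0.5 * step_size * grad_neg_log_prob(x_new)
        # Metropolis accept/reject on the total energy
        h_old = neg_log_prob(x) + 0.5 * p ** 2
        h_new = neg_log_prob(x_new) + 0.5 * p_new ** 2
        if rng.random() < np.exp(h_old - h_new):
            x = x_new
        samples.append(x)
    return np.array(samples)

samples = hmc_sample()
# samples.mean() should be near 0 and samples.std() near 1 for N(0, 1)
```

The quantum-inspired method keeps this overall structure but replaces the fixed unit mass with a position-dependent "quantum" mass, which helps on spiky and multi-modal targets.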
Applications of deep learning to relativistic hydrodynamics (Hengfeng Huang, Bowen Xiao, Ziming Liu, Zeming Wu, Yadong Mu and Huichao Song)
Robustness of principal component analysis on harmonic flow in heavy ion collisions (Ziming Liu, Arabinda Behera, Huichao Song, Jiangyong Jia)
Principal Component Analysis of collective flow in Relativistic Heavy-Ion Collisions (Ziming Liu, Wenbin Zhao, Huichao Song)
I give talks on various topics in many places, including but not limited to CMU, MIT, TikTok, Peking University, Westlake University, and Swarma, as well as all kinds of journal clubs. Check out my slides/videos below.
I write blogs on WordPress. My blogs document my quest for a "physics of intelligence", i.e., "simplifying intelligence".
Ziming Liu. All rights reserved. Design: HTML5 UP