Ziming Liu (CV)

刘子鸣
AI & Physics researcher
PhD student
@MIT and IAIFI

Email: zmliu@mit.edu

About Me

I am a physicist and a machine learning researcher. I am currently a third-year PhD student at MIT and IAIFI, advised by Max Tegmark. My research interests lie at the intersection of artificial intelligence (AI) and physics (and science in general):

  1. Physics of AI. Understanding AI from physical principles: "AI as simple as physics";
  2. Physics for AI. Physics-inspired AI: "AI as natural as physics";
  3. AI for physics. Boosting physics with AI: "AI as powerful as physicists".

Serving the ultimate goal of building a better world using AI + Physics, I am interested in a broad range of topics, including but not limited to discovering physical laws, physics-inspired generative models, machine learning theory, and mechanistic interpretability. I have formed close collaborations not only with physicists (condensed matter/high energy/quantum computation) but also with computer scientists, biologists, neuroscientists, and climate scientists, because I appreciate the merits of interdisciplinary collaboration. I give talks at many venues, my work has been covered by top media outlets, and I publish papers in both top physics journals and AI conferences. I serve as a reviewer for IEEE, Physical Review, NeurIPS, ICLR, etc. I co-organized the AI4Science workshop at NeurIPS 2021 and ICML 2022.

Before my PhD, I interned at Microsoft Research Asia. Before that, I obtained my B.S. from the School of Physics at Peking University. Before that, my memory is sealed in my hometown, Wuhan, China.

News

My work has received wide public attention and has been featured in social media, news outlets, and podcasts.


It's my greatest pleasure to be on the Cognitive Revolution Podcast!

Podcast

Our hidden symmetry paper is covered by New Scientist!

News

A Nature review paper acknowledges my contributions to AI for Physics!

Paper

Our physics-inspired generative models are covered by Quanta Magazine!

Quanta article

Our PFGM++ work is covered by MIT News!

MIT News

Recent Publications

Growing Brains: Co-emergence of Anatomical and Functional Modularity in Recurrent Neural Networks (Ziming Liu*, Mikail Khona*, Ila R. Fiete, Max Tegmark)
Comment: To examine whether it is possible to grow brain-like anatomical modularity, we apply a recent machine learning method, brain-inspired modular training (BIMT), to a network being trained to solve a set of compositional cognitive tasks. We find that functional and anatomical clustering emerge together, such that functionally similar neurons also become spatially localized and interconnected.

arXiv

Grokking as Compression: A Nonlinear Complexity Perspective (Ziming Liu*, Ziqian Zhong*, Max Tegmark)
Comment: We attribute grokking, the phenomenon where generalization is much delayed after memorization, to compression. We define linear mapping number (LMN) to measure network complexity, which is a generalized version of linear region number for ReLU networks. LMN can nicely characterize neural network compression before generalization.

arXiv
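
For intuition: the linear region number that LMN generalizes can be estimated for a tiny ReLU network by counting distinct activation patterns along an input sweep. A minimal numpy sketch (my illustration of region counting, not the paper's LMN definition):

    import numpy as np

    rng = np.random.default_rng(0)

    # A tiny one-hidden-layer ReLU network with random weights.
    W1, b1 = rng.normal(size=(16, 1)), rng.normal(size=16)

    def activation_pattern(x):
        """On/off pattern of the hidden ReLU units at input x."""
        return tuple((W1 @ np.atleast_1d(x) + b1 > 0).astype(int))

    # Each distinct pattern along the sweep corresponds to one linear piece.
    xs = np.linspace(-3, 3, 10_000)
    n_regions = len({activation_pattern(x) for x in xs})
    print(f"approximate number of linear regions on [-3, 3]: {n_regions}")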

A Neural Scaling Law from Lottery Ticket Ensembling (Ziming Liu, Max Tegmark)
Comment: Neural scaling laws (NSL) refer to the phenomenon where model performance improves with scale. We propose a mechanism for neural scaling laws based on lottery ticket ensembling and use it to explain the Chinchilla scaling law.

arXiv
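
For context, the Chinchilla scaling law mentioned above (Hoffmann et al., 2022) fits the loss with a parametric form in model size N and data size D:

    \[ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \]

where E is an irreducible loss term and the fitted exponents \(\alpha, \beta\) are both roughly 0.3.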

Scientific discovery in the age of artificial intelligence (Wang et al.)
Comment: A review article on AI for Science.

Nature

The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks (Ziqian Zhong*, Ziming Liu*, Max Tegmark, Jacob Andreas)
Comment: Some networks trained to perform modular addition implement a familiar Clock algorithm; others implement a previously undescribed, less intuitive, but comprehensible procedure we term the Pizza algorithm, or a variety of even more complex procedures.

arXiv
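
A toy numpy illustration of the Clock algorithm's angle arithmetic (my sketch of the algorithm, not the trained network): embed each residue as a point on a circle, add angles via complex multiplication, and read the sum back off the circle.

    import numpy as np

    p, k = 59, 1  # modulus and an (assumed) Fourier frequency

    def embed(a):
        """Map residue a to a 'clock hand' on the unit circle."""
        theta = 2 * np.pi * k * a / p
        return complex(np.cos(theta), np.sin(theta))

    def clock_add(a, b):
        """Multiply the two hands (adds angles), then decode the residue."""
        angle = np.angle(embed(a) * embed(b)) % (2 * np.pi)
        return round(angle * p / (2 * np.pi * k)) % p

    assert all(clock_add(a, b) == (a + b) % p
               for a in range(p) for b in range(p))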

Restart Sampling for Improving Generative Processes (Yilun Xu, Mingyang Deng, Xiang Chen, Yonglong Tian, Ziming Liu, Tommi Jaakkola)
Comment: We propose a novel sampling algorithm called Restart to better balance discretization errors and contraction. The method alternates between adding substantial noise in additional forward steps and strictly following a backward ODE. Empirically, the Restart sampler surpasses previous SDE and ODE samplers in both speed and accuracy.

arXiv
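
A schematic of the Restart loop described above, with hypothetical helper names (ode_solve, sigma); the real schedules and solver are in the paper's code:

    import numpy as np

    def restart_sampling(x, ode_solve, sigma, t_min, t_max, n_restarts, rng):
        """Sketch of Restart: x is assumed to have already reached t_min via
        an initial backward-ODE pass; ode_solve(x, t_start, t_end) integrates
        the backward ODE and sigma(t) gives the noise scale at time t."""
        for _ in range(n_restarts):
            # Forward jump t_min -> t_max: re-inject substantial fresh noise.
            extra = np.sqrt(sigma(t_max) ** 2 - sigma(t_min) ** 2)
            x = x + extra * rng.standard_normal(x.shape)
            # Backward: strictly follow the deterministic ODE down to t_min.
            x = ode_solve(x, t_max, t_min)
        return x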

Discovering New Interpretable Conservation Laws as Sparse Invariants (Ziming Liu, Patrick Obin Sturm, Saketh Bharadwaj, Sam Silva, Max Tegmark)
Comment: We propose the Sparse Invariant Detector (SID), an algorithm that auto-discovers conservation laws from differential equations. For two examples in fluid mechanics and atmospheric chemistry, SID discovers 14 and 3 conserved quantities, respectively, where only 12 and 2 were previously known to domain experts.

arXiv
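
Schematically: for dynamics \(\dot{x} = f(x)\), a conserved quantity H must satisfy

    \[ \frac{dH}{dt} = \nabla H(x) \cdot f(x) = 0, \]

and an SID-style search (as I read the abstract) looks for sparse coefficients c in a library ansatz \(H(x) = \sum_i c_i \theta_i(x)\) that satisfy this identity.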

Seeing is Believing: Brain-Inspired Modular Training for Mechanistic Interpretability (Ziming Liu, Eric Gan and Max Tegmark)
Comment: We introduce Brain-Inspired Modular Training (BIMT), a method for making neural networks more modular and interpretable.

arXiv code
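
The core idea, paraphrased in a few lines (a sketch of the regularizer, not the released implementation): assign neurons coordinates in space and penalize each weight by its magnitude times the length of the connection it realizes, so long-range wiring gets pruned and local modules can form.

    import torch

    def bimt_penalty(weight, pos_in, pos_out, lam=1e-3):
        """Distance-weighted L1 penalty for one layer (sketch).
        weight:  (n_out, n_in) weight matrix
        pos_in:  (n_in, 2)  coordinates assigned to input neurons
        pos_out: (n_out, 2) coordinates assigned to output neurons
        """
        dist = torch.cdist(pos_out, pos_in)  # (n_out, n_in) pairwise distances
        return lam * (weight.abs() * dist).sum()

Added to the task loss, this term trades accuracy against total wiring length.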

GenPhys: From Physical Processes to Generative Models (Ziming Liu, Di Luo, Yilun Xu, Tommi Jaakkola, Max Tegmark)
Comment: We introduce GenPhys, which can convert any smooth physical process into a generative model.

arXiv

PFGM++: Unlocking the Potential of Physics-Inspired Generative Models (Yilun Xu, Ziming Liu, Yonglong Tian, Shangyuan Tong, Max Tegmark, Tommi Jaakkola)
Comment: We introduce PFGM++, which unifies diffusion models and Poisson Flow Generative Models (PFGM).

arXiv code

The Quantization Model of Neural Scaling (Eric Michaud, Ziming Liu, Uzay Girit and Max Tegmark)
Comment: We propose the Quantization Model of neural scaling laws, explaining both the observed power law dropoff of loss with model and data size, and also the sudden emergence of new capabilities with scale.

arXiv
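
Schematically, on my reading of the abstract: if discrete skills ("quanta") are used with Zipf-like frequencies and models learn them in frequency order, a power law in the loss falls out:

    \[ p_k \propto k^{-(\alpha+1)}, \qquad L(n) \approx \sum_{k > n} p_k \propto n^{-\alpha}, \]

where n is the number of quanta learned at a given scale and each missing quantum is assumed to cost a constant loss.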

Precision Machine Learning (Eric Michaud, Ziming Liu and Max Tegmark)
Comment: In this paper, we consider what becomes involved when you care about the difference between approximating a function with error 0.001 versus error 0.0000000000000001.

arXiv code
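
One concrete constraint at that scale: float32 arithmetic cannot even represent relative errors near 1e-16, so high-precision fitting forces float64 or better. A quick check:

    import numpy as np

    # Machine epsilon: the smallest relative spacing of each float type.
    print(np.finfo(np.float32).eps)  # ~1.19e-07, far above a 1e-16 target
    print(np.finfo(np.float64).eps)  # ~2.22e-16, barely at a 1e-16 target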

Poisson Flow Generative Models (Yilun Xu*, Ziming Liu*, Max Tegmark and Tommi Jaakkola)
Comment: We propose a new generative model called Poisson Flow Generative Models (PFGM), inspired by high-dimensional electromagnetism! The model achieves SOTA performance (in terms of both quality and speed) within the flow family.

arXiv NeurIPS blog code
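
The electrostatic picture, schematically (paraphrasing; details in the paper): data points act as charges in an augmented space, and samples are transported along the resulting high-dimensional Poisson field

    \[ E(\tilde{x}) \propto \int \frac{\tilde{x} - \tilde{y}}{\lVert \tilde{x} - \tilde{y} \rVert^{N}} \, p(\tilde{y}) \, d\tilde{y}, \]

the gradient of the Green's function of the N-dimensional Poisson equation sourced by the data distribution.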

Omnigrok: Grokking Beyond Algorithmic Data (Ziming Liu, Eric J. Michaud and Max Tegmark)
Comment: We aim to understand grokking from the perspective of neural loss landscapes, and we successfully induce grokking beyond algorithmic datasets.

arXiv

Towards Understanding Grokking: An Effective Theory of Representation Learning (Ziming Liu, Ouail Kitouni, Niklas Nolte, Eric J. Michaud, Max Tegmark, Mike Williams)
Comment: We aim to understand grokking from the perspective of effective theories and phase transitions of representation learning.

arXiv NeurIPS 2022 Oral

Second Order Ensemble Langevin Method for Sampling and Inverse Problems (Ziming Liu, Yixuan Wang and Andrew Stuart)
Comment: We propose a sampling method based on an ensemble approximation of second order Langevin dynamics.

arXiv
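
For reference, second-order (underdamped) Langevin dynamics augments the state q with a momentum p:

    \[ dq = p \, dt, \qquad dp = -\nabla U(q) \, dt - \gamma p \, dt + \sqrt{2\gamma} \, dW_t, \]

whose invariant measure is proportional to \(e^{-U(q) - \lVert p \rVert^2 / 2}\); the paper's ensemble approximation of these dynamics is the part beyond this standard sketch.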

AI Poincare 2.0: Machine Learning Conservation Laws from Differential Equations (Ziming Liu, Varun Madhavan and Max Tegmark)
Comment: We present AI Poincaré 2.0, a machine learning algorithm for auto-discovering conserved quantities from the differential equations of dynamical systems.

arXiv PRE

Physics-augmented Learning: A new paradigm beyond physics-informed learning (Ziming Liu, Yunyue Chen, Yuanqi Du and Max Tegmark)
Comment: We propose a learning framework that unifies the already successful physics-informed learning paradigm with a novel paradigm called physics-augmented learning.

arXiv

Machine Learning Hidden Symmetries (Ziming Liu and Max Tegmark)
Comment: We present a method that searches for hidden symmetries revealed by coordinate transformations parameterized by neural networks.

arXiv PRL code

Machine-Learning Non-Conservative Dynamics for New-Physics Detection (Ziming Liu, Bohan Wang, Meng Qi, Wei Chen, Max Tegmark and Tie-Yan Liu)
Comment: We present the Neural New-Physics Detector (NNPhD), a machine learning algorithm for decomposing forces into conservative and non-conservative components. NNPhD is a natural extension of Lagrangian Neural Networks.

arXiv PRE code
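
The decomposition in question, schematically:

    \[ F(q, \dot{q}) = \underbrace{-\nabla V(q)}_{\text{conservative}} + \underbrace{F_{\mathrm{nc}}(q, \dot{q})}_{\text{non-conservative}}, \]

where a significantly nonzero learned \(F_{\mathrm{nc}}\) flags dynamics beyond the assumed conservative model.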

AI Poincaré: Machine Learning Conservation Laws from Trajectories (Ziming Liu and Max Tegmark)
Comment: We present AI Poincaré, a machine learning algorithm for auto-discovering conserved quantities using trajectory data from unknown dynamical systems. We released our code on PyPI; you can install the aipoincare package by typing pip install aipoincare.

arXiv PRL PyPI code GitHub code

Schrodinger PCA: You Only Need Variances for Eigenmodes (Ziming Liu, Sitian Qian, Yixuan Wang, Yuxuan Yan and Tianyi Yang)
Comment: We make an intriguing connection between quantum mechanics and principal component analysis.

arXiv PRE GitHub code YouTube video

Quantum-Inspired Hamiltonian Monte Carlo for Bayesian Sampling (Ziming Liu and Zheng Zhang)
Comment: What happens when quantum mechanics meets Hamiltonian Monte Carlo? The quantum mass achieves better sampling results on spiky and multi-modal distributions.

arXiv GitHub code
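
A hedged sketch of the idea as I understand it: run standard leapfrog HMC, but resample the mass from a distribution at every step rather than fixing it. The log-normal mass law and the omitted Metropolis correction are illustrative simplifications, not the paper's exact scheme.

    import numpy as np

    def qhmc_step(q, grad_U, rng, n_leapfrog=20, eps=0.05):
        """One HMC step with a freshly resampled random ('quantum') mass."""
        m = np.exp(rng.normal())                        # random mass
        p = rng.normal(scale=np.sqrt(m), size=q.shape)  # momentum ~ N(0, m)
        q = q.copy()
        for _ in range(n_leapfrog):                     # leapfrog integration
            p -= 0.5 * eps * grad_U(q)
            q += eps * p / m
            p -= 0.5 * eps * grad_U(q)
        return q  # Metropolis accept/reject omitted for brevity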

Applications of deep learning to relativistic hydrodynamics (Hengfeng Huang, Bowen Xiao, Ziming Liu, Zeming Wu, Yadong Mu and Huichao Song)

PRR

Robustness of principal component analysis on harmonic flow in heavy ion collisions (Ziming Liu, Arabinda Behera, Huichao Song, Jiangyong Jia)

PRC

Principal Component Analysis of collective flow in Relativistic Heavy-Ion Collisions (Ziming Liu, Wenbin Zhao, Huichao Song)

EPJC

Highlighted Talks

I give talks on various topics at many venues, including but not limited to CMU, MIT, TikTok, Peking University, Westlake University, Swarma, and all kinds of journal clubs. Check out my slides/videos below.


What Does a Good Machine Learning Theory Look Like?

Slides

Understanding Grokking from Perspectives of Physics

Slides

From Physics to Generative Models

Slides, Video

Understanding Neural Scaling Laws in LLM and Beyond

Slides

Intelligence from Hunger

Slides

AI for Discoveries in Physics

Slides

How Can Human Scientists Survive in the Time of AI?

Slides

Blog

I write blogs on WordPress. My blog documents my quest for "physics of intelligence" and "simplifying intelligence".

Ziming Liu. All rights reserved.