Self-Introduction

My name is Jiawei Fang (方家卫).

My vision is to build a closed-loop embodied system—one in which machines continuously sense, interpret, generate, and refine their interactions with the physical world. To make such a closed-loop system possible, my research is driven by two tightly coupled questions: (1) How can machines perceive and reason about physical intelligence from real-world sensory interaction in an accurate and imperceptible way? (2) How can such physically grounded intelligence be further used to autonomously design and evolve embodied systems, including mechanical devices and robots?

I received my bachelor’s degree from Xiamen University, where I was fortunate to work with Prof. Shihui Guo and Prof. Yipeng Qin for four years. Previously, I was a research intern in the Computer Science Department at Carnegie Mellon University and a visiting scholar in the Mechanical Engineering Department at UC Berkeley, both mentored by Prof. Lining Yao, and a research intern at the University of Washington, mentored by Prof. Yiyue Luo. I am now working at CSAIL, MIT, mentored by Prof. Wojciech Matusik.

If you are interested in any aspect of my work, I am always open to discussions and collaborations. Feel free to reach out to me at jiaweif[at]stu.xmu.edu.cn

Research Experience

Massachusetts Institute of Technology
CDFG, CSAIL (2025.5 – present)
Research Assistant | Advisor: Prof. Wojciech Matusik

University of Washington, Seattle
Wearable Intelligence Lab, ECE (2024.7 – 2024.9)
Research Intern | Advisor: Prof. Yiyue Luo

University of California, Berkeley
Morphing Matter Lab, ME (2024.1 – 2025.5)
Visiting Scholar | Advisor: Prof. Lining Yao

Carnegie Mellon University
Morphing Matter Lab, HCII (2023.4 – 2024.5)
Research Intern | Advisor: Prof. Lining Yao

Project 1: Acquiring Physical Intelligence Through Imperceptible Wearable Sensing

Project 2: Designing and Evolving Embodied Systems

Garment Inertial Denoiser (GID)

Jiawei Fang, Ruonan Zheng, Xiaoxia Gao, Shifan Jiang, Anjun Chen, Qi Ye, Shihui Guo. Garment Inertial Denoiser (GID): Endowing Accurate Motion Capture via Loose IMU Denoiser.

Wearable inertial motion capture (MoCap) provides a portable, occlusion-free, and privacy-preserving alternative to camera-based systems, but its accuracy depends on tightly attached sensors—an intrusive and uncomfortable requirement for daily use. Embedding IMUs into loose-fitting garments is a desirable alternative, yet sensor–body displacement introduces severe, structured, and location-dependent corruption that breaks standard inertial pipelines. We propose GID (Garment Inertial Denoiser), a lightweight, plug-and-play Transformer that factorizes loose-wear MoCap into three stages: (i) location-specific denoising, (ii) adaptive cross-wear fusion, and (iii) general pose prediction. GID uses a location-aware expert architecture, where a shared spatio-temporal backbone models global motion while per-IMU expert heads specialize in local garment dynamics, and a lightweight fusion module ensures cross-part consistency. This inductive bias enables stable training and effective learning from limited paired loose–tight IMU data. We also introduce GarMoCap, a combined public and newly collected dataset covering diverse users, motions, and garments. Experiments show that GID enables accurate, real-time denoising from single-user training and generalizes across unseen users, motions, and garment types—consistently improving state-of-the-art inertial MoCap methods when used as a drop-in module.
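The three-stage factorization described above can be sketched in a few lines. This is only an illustrative outline with hypothetical names (`denoise_per_location`, `fuse`, `predict_pose`, and the stub functions inside them are mine, not GID's actual API); the real system uses a shared spatio-temporal Transformer backbone with per-IMU expert heads, whereas the stubs below are trivial placeholders that just show how data flows through the pipeline.

```python
# Illustrative sketch of the three-stage loose-wear MoCap pipeline:
# (i) location-specific denoising -> (ii) cross-wear fusion ->
# (iii) general pose prediction. All components are placeholders.

def denoise_per_location(imu_streams, expert_heads):
    # Stage (i): each IMU location is handled by its own expert head,
    # specializing in that garment region's local dynamics.
    return [head(stream) for head, stream in zip(expert_heads, imu_streams)]

def fuse(denoised_streams):
    # Stage (ii): adaptive cross-wear fusion; here simplified to a
    # per-timestep average to keep cross-part signals consistent.
    n = len(denoised_streams)
    return [sum(vals) / n for vals in zip(*denoised_streams)]

def predict_pose(fused_stream):
    # Stage (iii): a general pose predictor maps the fused, denoised
    # signal to body pose; a scaling stub stands in for the regressor.
    return [2.0 * v for v in fused_stream]

def gid_pipeline(imu_streams, expert_heads):
    # Plug-and-play: the denoiser sits in front of any pose predictor.
    return predict_pose(fuse(denoise_per_location(imu_streams, expert_heads)))
```

The key design point the sketch reflects is the inductive bias: per-location experts handle the structured, location-dependent corruption from sensor–body displacement, while the shared downstream stages keep the model small enough to train from limited paired loose–tight IMU data.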

