Self-Introduction

My name is Jiawei Fang (方家卫).

My vision is to build a closed-loop embodied system—one in which machines continuously sense, interpret, generate, and refine their interactions with the physical world. To make such a closed-loop system possible, my research is driven by two tightly coupled questions: (1) How can machines perceive and reason about physical intelligence from real-world sensory interaction in an accurate and imperceptible way? (2) How can such physically grounded intelligence be further used to autonomously design and evolve embodied systems, including mechanical devices and robots?

I received my bachelor’s degree from Xiamen University, where I was fortunate to work with Prof. Shihui Guo and Prof. Yipeng Qin for four years. Previously, I was a research intern in the Computer Science Department at Carnegie Mellon University and a visiting scholar in the Mechanical Engineering Department at UC Berkeley, both mentored by Prof. Lining Yao, and a research intern at the University of Washington, mentored by Prof. Yiyue Luo. I am now working at CSAIL, MIT, mentored by Prof. Wojciech Matusik.

If you are interested in any aspect of my work, I am always open to discussions and collaborations. Feel free to reach out to me at jiaweif[at]stu.xmu.edu.cn.

Research Experience

Massachusetts Institute of Technology
CDFG, CSAIL (2025.5 – present)
Research Assistant | Advisor: Prof. Wojciech Matusik

University of Washington, Seattle
Wearable Intelligence Lab, ECE (2024.7 – 2024.9)
Research Intern | Advisor: Prof. Yiyue Luo

University of California, Berkeley
Morphing Matter Lab, ME (2024.1 – 2025.5)
Visiting Scholar | Advisor: Prof. Lining Yao

Carnegie Mellon University
Morphing Matter Lab, HCII (2023.4 – 2024.5)
Research Intern | Advisor: Prof. Lining Yao

Project 1: Acquiring Physical Intelligence Through Imperceptible Wearable Sensing

Project 2: Designing and Evolving Embodied Systems

RoboMoRe

RoboMoRe: LLM-based Robot Co-design via Joint Optimization of Morphology and Reward

Jiawei Fang, Yuxuan Sun, Chengtian Ma, Qiuyu Lu, Lining Yao

Under review at ICLR; preprint available on arXiv.

Robot co-design, the joint optimization of morphology and control policy, remains a longstanding challenge in the robotics community. Existing approaches often converge to suboptimal designs because they rely on fixed reward functions, which fail to capture the diverse motion modes suited to different morphologies. We propose RoboMoRe, a large language model (LLM)-driven framework that integrates morphology and reward shaping for co-optimization within the robot design loop. RoboMoRe adopts a dual-stage strategy: in the coarse stage, an LLM-based Diversity Reflection mechanism generates diverse, high-quality morphology–reward pairs, and Morphology Screening filters out unpromising candidates to explore the design space efficiently; in the fine stage, top candidates are iteratively refined through alternating LLM-guided updates to both reward and morphology. This process enables RoboMoRe to discover efficient morphologies and their corresponding motion behaviors through joint optimization. Results across eight representative tasks demonstrate that, without any task-specific prompting or predefined reward and morphology templates, RoboMoRe significantly outperforms human-engineered designs and competing methods. Additional experiments demonstrate the robustness of RoboMoRe on manipulation and free-form design tasks.
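The dual-stage loop described above can be sketched in a few lines. This is a minimal illustrative sketch only, not the paper's implementation: all names (`propose_diverse_pairs`, `evaluate`, the toy morphology and reward parameters) are hypothetical stand-ins, and the LLM proposal and simulator rollout steps are replaced with simple placeholder functions.

```python
# Minimal sketch of a coarse-to-fine morphology–reward co-design loop,
# in the spirit of RoboMoRe. All names and parameters are hypothetical;
# real LLM calls and simulator rollouts are stubbed out.
import random


def propose_diverse_pairs(n):
    """Stand-in for the LLM 'Diversity Reflection' step:
    propose n candidate (morphology, reward) pairs."""
    return [
        ({"leg_len": random.uniform(0.5, 1.5)},
         {"speed_weight": random.uniform(0.1, 1.0)})
        for _ in range(n)
    ]


def evaluate(morphology, reward):
    """Stand-in fitness; a simulator rollout would go here."""
    return morphology["leg_len"] * reward["speed_weight"]


def co_design(n_candidates=8, top_k=2, fine_iters=3):
    # Coarse stage: generate diverse pairs, then screen out
    # unpromising morphologies to narrow the design space.
    pairs = propose_diverse_pairs(n_candidates)
    survivors = sorted(pairs, key=lambda p: evaluate(*p), reverse=True)[:top_k]

    # Fine stage: alternate (stubbed) LLM-guided updates to the
    # reward and the morphology for each surviving candidate.
    best, best_score = None, float("-inf")
    for morph, rew in survivors:
        for _ in range(fine_iters):
            rew = {"speed_weight": min(1.0, rew["speed_weight"] * 1.1)}
            morph = {"leg_len": min(1.5, morph["leg_len"] * 1.05)}
        score = evaluate(morph, rew)
        if score > best_score:
            best, best_score = (morph, rew), score
    return best, best_score
```

In the actual framework, the proposal and refinement steps would be LLM calls conditioned on past evaluations, and `evaluate` would train or roll out a policy under the candidate reward; the sketch only shows the control flow of the two stages.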

