I was an MSc student in the Stanford Vision and Learning Lab, working with Prof. Jiajun Wu and Prof. Fei-Fei Li.
I previously received dual B.S. degrees in Mechanical Engineering from Shanghai Jiao Tong University and Purdue University, where I was fortunate to be advised by Prof. Karthik Ramani on Human-Computer Interaction. (Fun fact: Prof. Ramani was a Ph.D. student of Mark's!)
I support diversity, equity, and inclusion. If you would like to chat with me regarding research, career plans, or anything else, feel free to reach out! I would be happy to support people from underrepresented groups in the STEM research community, and I hope my expertise can help you.
[Jan 2026] "IMPASTO: Integrating Model-Based Planning with Learned Dynamics Models for Robotic Oil Painting Reproduction" has been accepted to ICRA'26.
[Jan 2026] "Multimodal Sensing for Robot-Assisted Sub-Tissue Feature Detection in Physiotherapy Palpation" has been accepted to the Design of Medical Devices Conference (DMD) 2026.
[Jul 2025] "TypeTele: Releasing Dexterity in Teleoperation by Dexterous Manipulation Types" has been accepted to CoRL'25.
[Jun 2025] "SLIM: A Symmetric, Low-Inertia Manipulator for Constrained, Contact-Rich Spaces" has been accepted to RA-L.
[Jun 2025] "TacCap: A Wearable FBG-Based Tactile Sensor for Seamless Human-to-Robot Skill Transfer" has been accepted to IROS'25.
[Apr 2025] "Whisker-Inspired Tactile Sensing: A Sim2Real Approach for Precise Underwater Contact Tracking" has been accepted to RA-L.
I have been working on designing, fabricating, and understanding tactile sensors and the rich information they provide.
I'm also broadly interested in AI and robotics, including but not limited to perception, planning, control, hardware design, and human-centered AI.
The goal of my research is to build agents that achieve human-level learning and adapt to novel, challenging scenarios by leveraging multisensory information, including vision, audio, and touch.
We present a compact multimodal sensor that integrates high-resolution vision-based tactile imaging with a 6-axis force-torque sensor for robust subsurface feature detection and controlled robotic palpation.
We present IMPASTO, a robotic oil-painting system that integrates learned pixel dynamics models with model-based planning for high-fidelity reproduction of oil paintings.
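To give a feel for the general recipe of planning with a learned dynamics model, here is a minimal random-shooting MPC sketch. It is not the IMPASTO implementation: `dynamics` and `cost` are hypothetical stand-ins for a learned stroke dynamics model and an image-space loss against the target painting.

```python
# Minimal sketch: random-shooting model-predictive control with a learned
# one-step dynamics model. Illustrative only, NOT the IMPASTO system.
import numpy as np

def dynamics(state, action):
    # Hypothetical learned model: predicts the next canvas state after a stroke.
    return state + action  # placeholder for a trained network

def cost(state, target):
    # Hypothetical cost: distance between predicted canvas and target painting.
    return np.sum((state - target) ** 2)

def plan_action(state, target, horizon=5, n_samples=256, rng=None):
    """Sample candidate action sequences, roll each out through the learned
    model, and return the first action of the lowest-cost sequence."""
    if rng is None:
        rng = np.random.default_rng()
    best_cost, best_action = np.inf, None
    for _ in range(n_samples):
        seq = rng.normal(size=(horizon,) + state.shape)
        s = state
        for a in seq:
            s = dynamics(s, a)  # simulate the sequence with the learned model
        c = cost(s, target)
        if c < best_cost:
            best_cost, best_action = c, seq[0]
    return best_action  # execute, observe, and replan at the next step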
We propose TypeTele, a type-guided dexterous teleoperation system that introduces dexterous manipulation types into teleoperation, enabling dexterous hands to perform actions unconstrained by human motion patterns.
We introduce SLIM, a symmetric, low-inertia robotic manipulator with a bidirectional hand and integrated wrist designed for safe and effective operation in constrained, contact-rich environments.
We introduce a benchmark suite of 10 tasks for multisensory object-centric learning, and a dataset including the multisensory measurements for 100 real-world household objects.
We build a system that allows users to select regions of interest (ROIs) in a scanned point cloud or sketch in mid-air to enable interactive VR experiences.