Yiqin Li

Robot Learning Researcher

I work on robot learning, embodied AI, and dexterous manipulation. I completed my MEng in Computing (AI & Machine Learning) at Imperial College London with First-class Honours, where my thesis studied language-guided, closed-loop imitation learning for tabletop manipulation.

Currently a Research Assistant at Shanghai Jiao Tong University, I work on Vision-Language-Action (VLA) benchmarking and dexterous arm-hand manipulation pipelines. I am broadly interested in scalable VLA training, embodied agents, and contact-rich manipulation.


current work

My current work spans VLA evaluation, dexterous robot data pipelines, language-conditioned control, and agent-based robot systems.

VLA Evaluation and Benchmarking

Tracking model architectures, training data sources, and post-training trends across the VLA landscape to clarify where current systems scale well and where they still depend on human supervision.

Dexterous Arm-Hand Pipelines

Building teleoperation, data collection, and evaluation infrastructure for real-robot dexterous manipulation, with a focus on arm-hand coordination and contact-rich tasks.

Language-Conditioned Manipulation

Designing closed-loop language-guided control systems that decompose tasks, recover from failures, and connect natural-language commands to manipulation policies.

Agentic Robotics Systems

Exploring how LLM/VLM agents can reason over robot state, select procedures, and reduce human supervision in embodied systems.


projects

Dexterous-Hand VLA Pipeline

Real Robot Pipeline · Teleoperation · 22-DoF Arm-Hand Coordination

At Shanghai Jiao Tong University, I am building the teleoperation, data collection, and evaluation stack for a LinkerHand O6 + xArm6 platform. This work grounds the real-robot side of my current interests, especially contact-rich data collection, dexterous action spaces, and deployment-time evaluation.

Language-Guided Imitation Learning

MEng Thesis · Language-Conditioned Manipulation · Closed-Loop Recovery

My MEng thesis built a closed-loop system in which an LLM planner decomposes natural-language commands into a sub-task DAG and a behaviour-tree controller orchestrates perception, policy execution, and automatic failure recovery. The system achieved 100% success on pick-and-place tasks and a 2.4× improvement over a code-as-policy baseline.


Evo-SOTA — VLA Benchmark Leaderboard

Benchmarking Infrastructure · VLA Landscape Analysis · Open Source

I built and maintain an open-source leaderboard tracking 136+ VLA models across six benchmarks, including LIBERO, CALVIN, Meta-World, and RoboChallenge. This project helps me systematically compare architectures, training data sources, and post-training strategies across the current VLA landscape.

RoboClaw — Embodied AI Assistant

Embodied Agents · ROS2 · Autonomous Data Collection

RoboClaw is an open-source framework for embodied AI assistance with natural-language robot control, procedure selection, and ROS2 execution across simulation and real robots. It gave me direct experience with agent orchestration and autonomous data-collection loops, which continue to shape how I think about agent-based robotics systems.


background

2025 — Present

Research Assistant · Shanghai Jiao Tong University

VLA benchmarking (Evo-SOTA) and dexterous-hand manipulation research. Building teleoperation, training, and evaluation pipelines for real-robot arm-hand coordination.

2025

Applied Scientist Intern · Thomson Reuters Labs

Developed agentic capabilities for the Westlaw Deep Research Agent, including multi-agent workflow orchestration. Benchmarked and fine-tuned small language models for guardrail classification.

2021 — 2025

MEng Computing (AI & ML) · Imperial College London

First-class Honours. Thesis on multimodal imitation learning for tabletop manipulation. Coursework in deep learning, computer vision, reinforcement learning, NLP, and robotics.

2024

NLP Researcher Intern · Huawei

Built LLM-based pipelines for automated knowledge-graph construction on large-scale GPU clusters. Acknowledged in a Findings of ACL 2025 paper.


beyond the lab

Outside research, I spend most of my time climbing. I've summited Kilimanjaro, played on Imperial College's Men's 1st Basketball team, and share my home with three cats.
