LEARNING ADAPTIVE NEURAL TELEOPERATION FOR HUMANOID ROBOTS: FROM INVERSE KINEMATICS TO END-TO-END CONTROL

Authors

  • Sanjar Atamuradov, Georgia Institute of Technology, Atlanta, GA (satamuradov3@gatech.edu)

Keywords:

teleoperation, humanoid robots, reinforcement learning, VR control, neural networks, sim-to-real

Abstract

Virtual reality (VR) teleoperation has emerged as a promising approach for controlling humanoid robots in complex manipulation tasks. However, traditional teleoperation systems rely on inverse kinematics (IK) solvers and hand-tuned PD controllers, which struggle to handle external forces, adapt to different users, and produce natural motions under dynamic conditions. In this work, we propose a learning-based neural teleoperation framework that replaces the conventional IK+PD pipeline with learned policies trained via reinforcement learning. Our approach learns to directly map VR controller inputs to robot joint commands while implicitly handling force disturbances, producing smooth trajectories, and adapting to user preferences. We train our policies in simulation using demonstrations collected from IK-based teleoperation as initialization, then fine-tune them with force randomization and trajectory smoothness rewards. Experiments on the Unitree G1 humanoid robot demonstrate that our learned policies achieve 34% lower tracking error, 45% smoother motions, and superior force adaptation compared to the IK baseline, while maintaining real-time performance (50 Hz control frequency). We validate our approach on manipulation tasks including object pick-and-place, door opening, and bimanual coordination. These results suggest that learning-based approaches can significantly improve the naturalness and robustness of humanoid teleoperation systems.
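The abstract describes a policy that maps VR controller observations directly to joint targets and is fine-tuned with a trajectory-smoothness reward. The sketch below illustrates that shape of pipeline only; the network size, observation/action dimensions, and the second-difference (jerk) form of the smoothness penalty are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

# Hypothetical dimensions: a 13-D VR input (two 6-DoF controller poses plus a
# gripper channel) and 29 joint targets. Both numbers are illustrative; the
# paper does not specify them here.
OBS_DIM = 13 + 29      # VR controller input concatenated with current joint positions
ACT_DIM = 29           # commanded joint targets

rng = np.random.default_rng(0)

class TeleopPolicy:
    """A two-layer MLP mapping observations to joint targets (sketch)."""
    def __init__(self, hidden=128):
        self.w1 = rng.normal(0.0, 0.1, (OBS_DIM, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, ACT_DIM))
        self.b2 = np.zeros(ACT_DIM)

    def act(self, obs):
        h = np.tanh(obs @ self.w1 + self.b1)
        return np.tanh(h @ self.w2 + self.b2)  # joint targets bounded in [-1, 1]

def smoothness_penalty(actions, weight=0.1):
    """Penalize second differences of the action sequence: one plausible form
    of a trajectory-smoothness reward term (assumption, not the paper's)."""
    a = np.asarray(actions)
    if len(a) < 3:
        return 0.0
    jerk = a[2:] - 2.0 * a[1:-1] + a[:-2]   # discrete second difference
    return -weight * float(np.mean(jerk ** 2))

policy = TeleopPolicy()
# Roll out one second of commands at the 50 Hz control rate cited in the abstract.
trajectory = [policy.act(rng.normal(size=OBS_DIM)) for _ in range(50)]
reward_term = smoothness_penalty(trajectory)   # non-positive by construction
```

In training, a term like `smoothness_penalty` would be added to the tracking reward so that the policy trades raw tracking accuracy against jerk, which is one way to obtain the smoother motions the abstract reports.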




Published

2025-11-07