We present a comprehensive teleoperation framework for electric vehicle (EV) battery cell handling, integrating haptic feedback, extended reality (XR) visualisation, and Task-Parameterised Gaussian Mixture Regression (TP-GMR) for adaptive, real-time trajectory generation. The system enables seamless switching between manual and autonomous operation through a variable-autonomy mechanism, while Control Barrier Functions (CBFs) enforce spatial safety constraints. A lightweight intent-prediction module anticipates user deviation and precomputes corrective trajectories, reducing response time from 2.0 seconds to under 1 millisecond. The framework is implemented on an industrial KUKA robotic manipulator and validated in structured and real-world EV battery disassembly scenarios. Results show that combining XR and haptic feedback reduces task completion time by up to 48% and path deviation by 32% compared with manual teleoperation without assistance. Predictive replanning improves the continuity of force feedback and reduces unnecessary user motion. The integration of XR-based spatial computing, learning from demonstration, and real-time control enables safe, precise, and efficient manipulation in high-risk environments. This work demonstrates a scalable human-in-the-loop solution for battery recycling and other semi-structured tasks where full automation is impractical. The proposed system significantly improves operator performance while maintaining safety and flexibility, marking a meaningful advance in collaborative field robotics.
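The CBF safety layer described above can be illustrated with a minimal sketch. Assume a single-axis velocity command and a barrier h(x) = d_max − x that must remain non-negative; the limit `d_max`, gain `alpha`, and function name are illustrative assumptions, not the paper's implementation:

```python
def cbf_filter(x, u_des, d_max=1.0, alpha=5.0):
    """Illustrative 1-D control-barrier-function filter.

    Safety set: h(x) = d_max - x >= 0 (stay below the limit d_max).
    CBF condition: h_dot >= -alpha * h. With x_dot = u this gives
    -u >= -alpha * h, i.e. u <= alpha * h, so the desired command
    u_des is clamped only when it would violate the barrier.
    """
    h = d_max - x          # distance to the constraint boundary
    u_max = alpha * h      # largest velocity the CBF condition allows
    return min(u_des, u_max)

# Far from the limit the command passes through unchanged;
# near the limit it is attenuated toward zero.
safe_far = cbf_filter(x=0.0, u_des=0.3)   # unchanged: 0.3
safe_near = cbf_filter(x=0.9, u_des=2.0)  # clamped to alpha*h = 0.5
```

A full implementation would pose the same condition as a quadratic program over the robot's joint or Cartesian velocities, but the clamping logic is the same idea per constraint.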

Ali Aflakian


This paper fuses ideas from Reinforcement Learning (RL), Learning from Demonstration (LfD), and Ensemble Learning into a single paradigm. Knowledge from a mixture of control algorithms (experts) is used to constrain the action space of the agent, enabling faster RL refinement of a control policy by avoiding unnecessary exploratory actions. The domain-specific knowledge of each expert is exploited, yet the resulting policy is robust against errors of individual experts, since it is refined by an RL reward function without copying any particular demonstration. Our method has the potential to supplement existing RLfD methods when multiple algorithmic approaches are available to serve as experts. We illustrate the method on a Visual Servoing (VS) task, in which a 7-DoF robot arm is controlled to maintain a desired pose relative to a target object. We explore four methods for bounding the actions of the RL agent during training: using a hypercube with a modified loss function, using the convex hull of the expert actions with a modified loss function, ignoring actions outside the convex hull, and projecting actions onto the convex hull. We compare the training progress of each method with and without the expert demonstrators. Our experiments show that using the convex hull with a modified loss function significantly improves training progress. Furthermore, we demonstrate faster VS error convergence while maintaining higher manipulability of the arm, compared to classical image-based VS, position-based VS, and hybrid-decoupled VS.
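Of the bounding strategies listed, the hypercube variant is the simplest to sketch: clamp each action dimension to the range spanned by the expert demonstrations. The array shapes and function name below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def hypercube_clip(action, expert_actions):
    """Bound an RL action to the axis-aligned hypercube spanned by
    expert actions (each row of expert_actions is one expert's action).

    This is the cheapest bounding scheme: per-dimension min/max limits.
    Unlike the convex-hull variants, it ignores correlations between
    action dimensions, so it admits some actions no expert proposed.
    """
    lo = expert_actions.min(axis=0)   # per-dimension lower bound
    hi = expert_actions.max(axis=0)   # per-dimension upper bound
    return np.clip(action, lo, hi)

# Two experts propose 2-D actions; the agent's action is clipped
# into the box they span.
experts = np.array([[0.0, -1.0],
                    [1.0,  1.0]])
clipped = hypercube_clip(np.array([2.0, 0.5]), experts)  # -> [1.0, 0.5]
```

The convex-hull variants replace the box with the hull of the expert actions, trading this simple clip for a membership test or a projection (a small quadratic program over convex combinations of the expert actions).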