Mr Bukeikhan Omarali


Now: Research Associate at Imperial College London

EECS, PhD Supervisor: Dr Ildar Farkhatdinov
Queen Mary University of London


Robot teleoperation, Virtual Reality, Cognitive load, Haptic feedback


Remotely controlled robots rely heavily on human supervisory control. The quality of telerobotic task performance depends on the human operator's experience, cognitive state, and the availability of the feedback information needed to support decisions. Colour and depth cameras and range sensors are commonly used to create a visual and spatial representation of the remote environment for the human operator. Additionally, in some applications force and tactile sensing data are required to provide the operator with haptic feedback from the robot's end-effector.

We propose to integrate visual, spatial and haptic feedback in an interactive VR environment to improve the efficiency of remote robot control. However, directly merging the sensory modalities may increase the operator's cognitive load and therefore degrade performance. We therefore propose an intelligent feedback adaptation system that automatically adjusts the representation of the feedback information to provide better situation awareness (a better field of view for visual feedback, relevant tactile information, and vision-based spatial maps). Such a system will need to learn from previous experience (recorded sensing and actuation data from the robot and the operator) to form an internal dynamic task model, which will also be updated on the fly based on the actual task-specific operation parameters.

The developed system will comprise a mobile robotic manipulation platform equipped with RGB-D cameras; a VR interface and an application that merges visual, spatial and haptic feedback from the robot with a human body-movement tracker to control the robotic system; and an algorithmic component implementing cognitive adaptation of the VR interface with respect to the actual user and task state. This research will use methods from robotics, computer vision, haptic and visual rendering for VR, and ergonomics.
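To illustrate the core idea of cognitive adaptation, the sketch below shows one very simplified form such a feedback-adaptation policy could take: a scalar cognitive-load estimate in [0, 1] is mapped to rendering parameters for the VR interface (field of view, haptic gain, spatial-map detail). All names, parameter ranges, and the linear mapping are hypothetical illustrations, not the actual system described above.

```python
from dataclasses import dataclass

@dataclass
class FeedbackConfig:
    """Rendering parameters for the VR interface (names are hypothetical)."""
    fov_deg: float      # camera field of view presented to the operator
    haptic_gain: float  # scaling of force feedback from the end-effector
    map_detail: int     # level of detail of the vision-based spatial map

def adapt_feedback(cognitive_load: float) -> FeedbackConfig:
    """Map an estimated cognitive load in [0, 1] to feedback settings.

    High load -> widen the field of view, attenuate haptic forces, and
    simplify the spatial map so the operator is not overwhelmed; low
    load -> richer, more detailed feedback. A real system would learn
    this mapping from recorded operator and robot data.
    """
    load = min(max(cognitive_load, 0.0), 1.0)  # clamp to [0, 1]
    return FeedbackConfig(
        fov_deg=60.0 + 60.0 * load,           # 60 deg (focused) .. 120 deg (wide)
        haptic_gain=1.0 - 0.7 * load,         # damp forces under high load
        map_detail=int(round(3 - 2 * load)),  # 3 detail levels .. 1
    )

# Example: a relaxed operator gets detailed, full-strength feedback,
# while an overloaded one gets a wide, simplified view.
print(adapt_feedback(0.0))
print(adapt_feedback(1.0))
```

In the proposed system this static mapping would be replaced by the learned, task-specific model updated on the fly; the sketch only shows where such a policy sits between load estimation and VR rendering.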