Human Cooperative Wheelchair with Brain Machine Interaction Based on Shared Control Strategy

Abstract:

In this paper, a human-machine shared control strategy is proposed for the navigation control of a wheelchair, combining a brain-machine control mode and an autonomous control mode. In the brain-machine control mode, unlike the traditional four-direction control signals, a novel brain-machine interface using steady-state visual evoked potentials (SSVEP) is presented, which utilizes only two brain signals to produce a polar polynomial trajectory (PPT). The produced trajectory is continuous in curvature and does not violate the dynamic constraints of the wheelchair. In the autonomous control mode, the synthesis of an angle-based potential field (APF) and a vision-based SLAM (simultaneous localization and mapping) technique is proposed to guide the robot around obstacles. Extensive experiments have been conducted to test the developed shared control wheelchair in several scenarios with a number of volunteers, and the results have verified the effectiveness of the proposed shared control scheme.
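To make the idea of a polar polynomial trajectory concrete, the following is a minimal Python sketch of one way to generate a polar-coordinate polynomial path with boundary conditions on the radius and its slope. The cubic form, the boundary conditions, and the function names are illustrative assumptions, not the exact PPT formulation used in the paper.

```python
import numpy as np

def polar_polynomial_path(r0, rf, theta_f, n_samples=100):
    """Sketch of a cubic polar polynomial r(theta) = a0 + a1*t + a2*t^2 + a3*t^3
    with zero radial slope at both ends, sampled into Cartesian points."""
    # Boundary conditions: r(0) = r0, r'(0) = 0, r(theta_f) = rf, r'(theta_f) = 0
    A = np.array([
        [1.0, 0.0,     0.0,          0.0],
        [0.0, 1.0,     0.0,          0.0],
        [1.0, theta_f, theta_f**2,   theta_f**3],
        [0.0, 1.0,     2 * theta_f,  3 * theta_f**2],
    ])
    b = np.array([r0, 0.0, rf, 0.0])
    a = np.linalg.solve(A, b)                      # polynomial coefficients

    theta = np.linspace(0.0, theta_f, n_samples)
    r = a[0] + a[1] * theta + a[2] * theta**2 + a[3] * theta**3
    # Convert the polar samples to Cartesian (x, y) waypoints
    return np.column_stack((r * np.cos(theta), r * np.sin(theta)))

# Example: a turn spanning 60 degrees, radius growing from 1.0 m to 2.0 m
path = polar_polynomial_path(r0=1.0, rf=2.0, theta_f=np.radians(60))
```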

EXISTING  SYSTEM:

Given the large number of people with different kinds of disabilities, brain-machine control has many applications and shows great potential. However, electroencephalogram (EEG) signals have low spatial resolution and provide only a broad and very noisy overview of the ongoing brain activity [8]. Two typical ways to enhance noisy EEG activity patterns are decoder design and training method optimization, and many BMI (brain-machine interface) studies focus on feature extraction and pattern classification. The features related to a specific mental task can be expressed in the time domain, frequency domain, and spatial domain [9]. The most commonly used algorithms for EEG feature extraction are common spatial patterns (CSP) [10], independent component analysis (ICA), power spectrum analysis, and wavelet analysis. A number of machine learning algorithms are applied in brain-computer interface (BCI) systems as EEG decoders, such as linear discriminant analysis [11], the multilayer perceptron (MLP) [12], learning vector quantization (LVQ) [13], neural networks [14], the support vector machine [15], and the Bayesian framework [16]. Training method optimization is another approach that can improve the effective spatial resolution. The common procedure of decoder training is to record EEG patterns from the user before the BCI is used and to train the pattern recognition algorithms, which are then applied in real time, while feedback on the quality of detection is provided to the user. In [10], moving a computer cursor in two dimensions (2D) using an external visual stimulus scene is one of the most significant results in this field. The operator can control cursor movement using EEG signals by mapping mu- or beta-rhythm changes to left-hand, right-hand, or foot-movement imagery [9]. In [17], a hybrid BCI that uses the motor-imagery-based mu rhythm and the P300 potential was described to control a brain-actuated simulated or real wheelchair, where the kinematic properties of the wheelchair were not considered. However, a more sophisticated control strategy should be developed to accomplish control tasks at a more complex level, because most external robotic actuators (mechanical prostheses, exoskeleton manipulators, and mobile manipulators) have more degrees of freedom (DOFs) and are subject to various holonomic and nonholonomic constraints.
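As an illustration of the feature-extraction and decoding techniques cited above, the following Python sketch combines CSP spatial filtering with an LDA classifier on pre-epoched EEG trials. The array shapes, the number of filter pairs, and the variable names are assumptions made for illustration only; this is not the pipeline of any particular reference.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common spatial patterns; trials_* have shape (n_trials, n_channels, n_samples)."""
    cov = lambda trials: np.mean(
        [t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    # Generalized eigendecomposition of Ca against the composite covariance
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    picks = np.concatenate((order[:n_pairs], order[-n_pairs:]))
    return eigvecs[:, picks].T                      # (2*n_pairs, n_channels)

def csp_features(trials, W):
    """Log-variance of the spatially filtered trials, one row per trial."""
    return np.array([np.log(np.var(W @ t, axis=1)) for t in trials])

# Hypothetical usage with pre-epoched EEG trials and binary labels:
# W = csp_filters(trials[labels == 0], trials[labels == 1])
# clf = LinearDiscriminantAnalysis().fit(csp_features(trials, W), labels)
```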

PROPOSED  SYSTEM:

This paper proposes a shared control strategy for noninvasive brain-actuated robotic wheelchairs, which combines a brain-machine control mode and an autonomous control mode. In the brain-machine control mode, different from previous works using four control signals, a novel brain-machine interface (BMI) is proposed based on the steady-state visual evoked potential (SSVEP), which utilizes only two brain states to produce smooth polar polynomial paths and velocity profiles satisfying the dynamic constraints of the wheelchair; the produced path is continuous in curvature. The SSVEP processing involves component selection, spatial filtering, and classification of EEG signals, and the canonical correlation analysis (CCA) algorithm is used to classify the two brain states, as sketched below. In the autonomous control mode, the synthesis of an angle-based potential field and vision-based SLAM is proposed to guide the robot around obstacles. Experiments performed by a number of able-bodied volunteers using the two control modes have verified the effectiveness of the proposed shared control scheme.
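To illustrate CCA-based classification of the two SSVEP states, the sketch below correlates a multichannel EEG epoch with sinusoidal reference signals at each candidate stimulus frequency and selects the best match. The stimulus frequencies, sampling rate, and number of harmonics are hypothetical values, since they are not specified here.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_classify(eeg, fs, stim_freqs, n_harmonics=2):
    """Pick the stimulus frequency whose sinusoidal references correlate most
    strongly with the EEG epoch (eeg shape: n_samples x n_channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in stim_freqs:
        # Reference set: sin/cos at the fundamental frequency and its harmonics
        refs = np.column_stack(
            [fn(2 * np.pi * f * (h + 1) * t)
             for h in range(n_harmonics) for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1).fit(eeg, refs)
        u, v = cca.transform(eeg, refs)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return int(np.argmax(scores)), scores

# Hypothetical two-state example: 10 Hz vs 15 Hz flicker, 250 Hz sampling rate
# state, rho = ssvep_cca_classify(epoch, fs=250, stim_freqs=[10.0, 15.0])
```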

CONCLUSIONS:

In this paper, a shared control strategy has been developed for human-cooperative wheelchairs, which employs a novel BMI and autonomous navigation. The BMI is based on SSVEP and utilizes two brain states to produce smooth polar polynomial paths and velocity profiles satisfying the dynamic constraints of the wheelchair. Motion safety and collision avoidance using angle potential functions and visual SLAM are achieved by adjusting the robot's velocity in the presence of limited information, sensor uncertainties, and robot dynamics. The experiments have verified the effectiveness of the proposed shared control scheme. In future work, the performance can be further improved by i) developing more precise and faster SLAM techniques, which will improve the capability of environment sensing and reduce the human operator's burden; ii) developing global navigation functions, which eliminate local minima and can shorten the training procedure for each subject; and iii) designing different selection strategies.
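As a rough illustration of how an angle-based potential field can adjust velocity for collision avoidance, the toy sketch below combines an attractive term toward the goal bearing with repulsive terms derived from obstacle bearings, and slows the wheelchair as obstacles come closer. The gains, influence radius, and control law here are illustrative assumptions and not the paper's actual APF formulation.

```python
import numpy as np

def apf_velocity_command(goal_angle, obstacles, v_max=0.6, w_gain=1.5,
                         influence=1.5):
    """Toy angle-based potential-field step; obstacles is a list of
    (distance_m, bearing_rad) pairs in the wheelchair frame."""
    # Attractive term: steer toward the goal bearing
    steer = w_gain * goal_angle
    slow = 0.0
    for d, ang in obstacles:
        if d < influence:
            # Repulsive term: push the heading away from the obstacle bearing,
            # weighting nearby and frontal obstacles more heavily
            weight = 1.0 / d - 1.0 / influence
            steer -= w_gain * weight * np.sign(ang) * np.cos(ang)
            slow = max(slow, weight)
    v = v_max / (1.0 + slow)          # slow down as obstacles get close
    return v, steer                   # linear and angular velocity commands

# Hypothetical call: goal 20 deg to the left, one obstacle 0.8 m away at -10 deg
# v, w = apf_velocity_command(np.radians(20), [(0.8, np.radians(-10))])
```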

REFERENCES:

[1] C. Urdiales, J. M. Peula, M. Fdez-Carmona, C. Barrué, E. J. Pérez, I. Sánchez-Tato, J. C. Del Toro, F. Galluppi, “A new multi-criteria optimization strategy for shared control in wheelchair assisted navigation,” Autonomous Robots, vol. 30, no. 2, pp. 179–197, 2011.

[2] F. Leishman, O. Horn, G. Bourhis, “Smart wheelchair control through a deictic approach,” Robotics and Autonomous Systems, vol. 58, pp. 1149–1158, 2010.

[3] D. Vanhooydonck, E. Demeester, A. Hüntemann, J. Philips, G. Vanacker, H. Van Brussel, M. Nuttin, “Adaptable navigational assistance for intelligent wheelchairs by means of an implicit personalized user model,” Robotics and Autonomous Systems, vol. 58, pp. 963–977, 2010.

[4] A. Poncela, C. Urdiales, E. J. Pérez, F. Sandoval, “A new efficiency-weighted strategy for continuous human/robot cooperation in navigation,” IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 39, pp. 486–500, 2009.

[5] T. B. Sheridan, Telerobotics, Automation, and Human Supervisory Control, The MIT Press, Cambridge, MA, 1992.

[6] T. Carlson, Y. Demiris, “Collaborative control for a robotic wheelchair: evaluation of performance, attention, and workload,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, pp. 876–888, 2012.

[7] H. Wang, X. P. Liu, “Adaptive shared control for a novel mobile assistive robot,” IEEE/ASME Transactions on Mechatronics, vol. 19, no. 6, pp. 1725–1736, 2014.

[8] R. Scherer, J. Faller, D. Balderas, E. V. C. Friedrich, M. Pröll, B. Allison, “Brain-computer interfacing: more than the sum of its parts,” Soft Computing, vol. 17, no. 2, pp. 317–331, 2013.

[9] D. J. McFarland, W. A. Sarnacki, J. R. Wolpaw, “Electroencephalographic (EEG) control of three-dimensional movement,” Journal of Neural Engineering, vol. 7, no. 3, p. 036007, 2010.

[10] G. Schalk, K. J. Miller, N. R. Anderson, J. A. Wilson, M. D. Smyth, J. G. Ojemann, “Two-dimensional movement control using electrocorticographic signals in humans,” Journal of Neural Engineering, vol. 5, no. 1, pp. 75–84, 2008.