A study is presented on a brain-computer interface (BCI) that uses motor imagery (MI) and facial expressions to control a mobile robot. Traditionally, only MI signals are used in BCI applications; this paper proposes a hybrid approach that combines MI and facial-expression stimulation. Electroencephalography (EEG) signals were acquired with a sensor system and processed for several MI tasks and facial expressions to extract characteristic features. The features were used to train support vector machine (SVM) classifiers, and the trained classifiers were then applied to test signals to identify the MI task or facial expression performed. A system implementing this BCI was developed to control a mobile robot. Training results for MI and facial expressions, individually and combined, are presented for comparison. The combined MI and facial-expression features were found to perform similarly to facial-expression features alone, and better than MI features alone.
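The feature-then-SVM pipeline summarized above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature vectors here are synthetic stand-ins for features extracted from real EEG recordings, the two classes stand for an arbitrary MI task versus a facial expression, and the kernel and hyperparameters are assumed defaults rather than values taken from the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for EEG-derived feature vectors: two well-separated
# clusters representing, e.g., one MI task and one facial expression.
n_per_class, n_features = 100, 8
class0 = rng.normal(loc=0.0, scale=1.0, size=(n_per_class, n_features))
class1 = rng.normal(loc=2.0, scale=1.0, size=(n_per_class, n_features))
X = np.vstack([class0, class1])
y = np.array([0] * n_per_class + [1] * n_per_class)

# Hold out a test set, mirroring the train/test split described in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

# Train an SVM classifier on the training features (RBF kernel assumed).
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X_train, y_train)

# Evaluate recognition of the held-out test signals.
acc = accuracy_score(y_test, clf.predict(X_test))
print(f"test accuracy: {acc:.2f}")
```

In a real system, `X` would hold features computed per EEG trial (e.g., band-power or time-domain statistics per channel), with one trained classifier whose predicted label is mapped to a robot command.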