Automatic moving object extraction has been explored extensively in the image processing and computer vision communities. Moving object extraction schemes generally rely on either optical flow or frame differences. Optical flow methods can cope with moving cameras, but they are unreliable at object boundaries, so the resulting object segmentation tends to be inaccurate. Frame difference approaches, by contrast, can detect object boundaries, but they miss uniform-intensity interior regions and cannot handle moving cameras. We present a novel technique for automatically extracting a moving object captured by a moving camera by blending information from the optical flow, the frame differences, and the spatial segmentation. The optical flow is used to compensate for the camera motion and to generate a background model. Next, the motion-compensated frames are compared with the background model to detect changes in each frame. Finally, the detected changes are combined with the spatial segmentation to identify moving uniform-intensity regions. Experimental results of the proposed moving object extraction method on a variety of videos are presented.
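The pipeline described above (camera-motion compensation, background differencing, then region-level fusion with a spatial segmentation) can be illustrated with a simplified, self-contained sketch. All names below are hypothetical: phase correlation stands in for dense optical flow, a pure global translation stands in for the camera motion model, and a crude intensity quantization stands in for a real spatial segmenter; the actual method in the paper is more elaborate.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Estimate the global (camera) translation between two frames via
    phase correlation -- a stand-in for optical-flow-based motion estimation."""
    R = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    R /= np.abs(R) + 1e-12                      # normalized cross-power spectrum
    corr = np.abs(np.fft.ifft2(R))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    dy = -py if py <= h // 2 else h - py        # unwrap circular peak location
    dx = -px if px <= w // 2 else w - px
    return dy, dx

def extract_moving_regions(background, frame,
                           diff_thresh=30, seg_step=50, frac_thresh=0.5):
    """Motion-compensate `frame`, difference it against the background model,
    then flag every segmentation region with a high changed-pixel fraction,
    so uniform-intensity interiors are filled in, not just boundaries."""
    dy, dx = estimate_shift(background, frame)
    aligned = np.roll(frame, (-dy, -dx), axis=(0, 1))   # undo camera motion
    changed = np.abs(aligned - background) > diff_thresh
    labels = (aligned // seg_step).astype(int)  # toy spatial segmentation
    mask = np.zeros_like(changed)
    for lab in np.unique(labels):
        region = labels == lab
        if changed[region].mean() > frac_thresh:
            mask |= region                      # whole region marked as moving
    return mask

# Synthetic demo: textured background, a uniform-intensity object,
# and a simulated camera pan of (3, 5) pixels.
rng = np.random.default_rng(0)
bg = rng.integers(0, 100, (64, 64)).astype(float)   # background model
scene = bg.copy()
scene[10:20, 10:20] = 200.0                         # uniform moving object
frame = np.roll(scene, (3, 5), axis=(0, 1))         # camera pan
mask = extract_moving_regions(bg, frame)            # flags the object square
```

The region-level vote in the final loop is what fills in the uniform interior: even when the pixelwise difference were to fire only near the object's boundary, a sufficiently changed region is promoted wholesale to the moving-object mask.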