Dull Bit Grading Using Video Intelligence
Abstract Although visual data analytics using image processing is one of the fastest-growing research areas today and is widely applied in many fields, it is not yet fully utilized in the petroleum industry. This study is inspired by medical image segmentation for detecting tumor cells. This paper applies a supervised machine learning technique, through video analytics, to identify bit dullness, offering the drilling industry a replacement for the subjective screening approach. The evaluation of bit performance can be distorted by subjective assessment of the degree of dullness; the present video-analytics approach grades bit dullness while avoiding user subjectivity. The approach relies on datasets of sufficient quantity and quality, separated into training, testing, and validation sets. Because of the large datasets, Google Colaboratory was used, as it provides online access to a Graphics Processing Unit (GPU) for processing the bit datasets. Using the Google GPU minimizes processing time and resource consumption, and the procedure is automated without any local installation. After the bit is pulled out and cleaned, a 360° video is taken around the bit, covering it from top to bottom, and the footage is compared against a green bit. With this approach, multiple video datasets are not required. The algorithm was validated on new sets of bit videos, and the results were satisfactory. Each screened bit is identified as dull or otherwise by means of a bounding box stamped with the confidence level (range 0.5–1) that the algorithm assigns to its decision on the identified or screened object. The method can also screen multiple bits stored in a single place; in the event that several drill bits must be screened, manual grading would be a huge task requiring substantial resources.
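The bounding-box mechanism described above can be illustrated with a minimal sketch of filtering detector outputs by the stated confidence range (0.5–1). The detection format, labels, and helper function here are assumptions for illustration, not the paper's actual code.

```python
# Hypothetical sketch: keep only detections whose confidence falls in
# the range the abstract describes (0.5-1). The tuple layout
# (label, confidence, bounding box) is an assumption.

CONF_THRESHOLD = 0.5  # detections below this are discarded

def filter_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only bounding boxes whose confidence meets the threshold.

    Each detection is (label, confidence, (x, y, w, h)).
    """
    return [d for d in detections if d[1] >= threshold]

raw = [
    ("dull", 0.92, (10, 20, 120, 140)),    # confident dull-bit detection
    ("green", 0.31, (200, 40, 110, 130)),  # low confidence, dropped
    ("dull", 0.57, (350, 60, 100, 120)),   # just above threshold, kept
]

kept = filter_detections(raw)
for label, conf, box in kept:
    print(f"{label}: {conf:.2f} at {box}")
```

In a real detector, the boxes and confidences would come from the trained model's inference pass over each video frame; only those surviving the threshold would be stamped onto the output.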
This model and algorithm take only a few minutes to screen and grade several bits as their videos are passed through the algorithm. Grading from video was also found to be much better than from a single image, since far more contextual information is extracted at the level of the entire video, per segment, per shot, and per frame. The methodology was also made robust so that the video model test starts successfully without error. The processing overhead is small, and a single video screening takes little time. The work developed here is probably the first to handle dull bit grading using video analytics. As more of these datasets become available, IADC bit characterization will soon evolve into an automated process.
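The advantage of video-level over single-image grading can be sketched by aggregating per-frame predictions into one video-level decision. A simple majority vote is used here purely for illustration; the paper does not specify its actual aggregation scheme, and the frame labels below are simulated.

```python
from collections import Counter

def video_grade(frame_grades):
    """Aggregate per-frame dullness grades into a single video-level
    grade by majority vote. A lone misclassified frame cannot flip
    the result, which is one reason video context beats a single image.
    """
    counts = Counter(frame_grades)
    grade, _ = counts.most_common(1)[0]
    return grade

# Simulated per-frame predictions from one 360-degree bit video;
# one frame is misclassified, but the video-level grade is stable.
frames = ["dull", "dull", "green", "dull", "dull", "dull"]
print(video_grade(frames))  # prints "dull"
```

Had only the single misclassified frame been captured, an image-based grader would have returned the wrong label; pooling over the whole video suppresses that noise.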