Model File: Recently Published Documents

Total documents: 37 (five years: 8)
H-index: 5 (five years: 0)

Author(s):  
Shifa Kubra N

Abstract: Melanoma is one of the most prevalent and severe types of skin cancer, accounting for 75 percent of all skin cancer deaths. Early detection of melanoma can greatly improve the chances of survival, and melanoma segmentation is a critical and necessary stage in correct detection. Many previous works based on standard segmentation algorithms and deep learning methods have been proposed for high-resolution dermoscopy images, yet automatic melanoma segmentation remains difficult for present algorithms due to the inherent visual complexity and ambiguity among different skin states. Among these methods, deep learning approaches have attracted the most attention recently because of their high performance when trained as an end-to-end framework that needs no human interaction. U-net is a very popular deep learning model for medical image segmentation. In this research, we present an efficient skin lesion segmentation method based on an improved U-net model. Experiments using the 2017 ISIC Challenge melanoma dataset reveal that the suggested technique can achieve state-of-the-art performance on the skin lesion segmentation problem. Keywords: Melanoma, Convolutional Neural Network, U-net, ReLU, Binary Threshold, Model File
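The final step of a segmentation pipeline like this one is typically to binarize the network's probability map (the "Binary Threshold" keyword) and score it against the ground-truth mask. A minimal sketch of that post-processing, with illustrative function names and toy data not taken from the paper:

```python
def binarize(prob_map, threshold=0.5):
    """Turn a predicted probability map into a binary lesion mask."""
    return [[1 if p >= threshold else 0 for p in row] for row in prob_map]

def dice(mask_a, mask_b):
    """Dice coefficient, a standard overlap score for segmentation masks."""
    inter = sum(a & b for ra, rb in zip(mask_a, mask_b) for a, b in zip(ra, rb))
    total = sum(map(sum, mask_a)) + sum(map(sum, mask_b))
    return 2.0 * inter / total if total else 1.0

pred = [[0.9, 0.4], [0.7, 0.1]]   # toy network output
truth = [[1, 0], [1, 0]]          # toy ground-truth mask
mask = binarize(pred)
print(mask, dice(mask, truth))    # [[1, 0], [1, 0]] 1.0
```

In practice the same thresholding is applied to the U-net's sigmoid output per pixel; the Dice (or Jaccard) score is what challenge leaderboards such as ISIC report.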


2021 ◽  
Author(s):  
Lucas Pereira ◽  
Nuno Velosa ◽  
Manuel Pereira

There is a consensus in the Non-Intrusive Load Monitoring research community on the importance of public datasets in furthering load disaggregation research. Nevertheless, despite the considerable efforts to release public data, few steps have been taken towards homogenizing how these data are made available to the community and providing seamless access to existing and future datasets. In this paper, we present the Energy Monitoring and Disaggregation Data Format (EMD-DF64). EMD-DF64 is a data model, file format, and application programming interface developed to provide a single interface to create, manage, and access electric energy datasets. The proposed format is an extension of the well-known Sony Wave64 format, which supports the storage of audio data and metadata annotations in the same file, thus reducing the number of artifacts that have to be maintained by dataset users.
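The abstract does not show EMD-DF64's API, but the idea it builds on (RIFF/Wave-style files that carry both data and metadata as tagged chunks in a single file) can be illustrated in a few lines. The chunk ids and JSON metadata below are made up for illustration:

```python
import io
import struct

def write_chunk(stream, chunk_id, payload):
    """Write one RIFF-style chunk: 4-byte id, little-endian size, payload."""
    stream.write(chunk_id.ljust(4)[:4].encode("ascii"))
    stream.write(struct.pack("<I", len(payload)))
    stream.write(payload)

def read_chunks(stream):
    """Yield (id, payload) pairs until the stream is exhausted."""
    while True:
        header = stream.read(8)
        if len(header) < 8:
            return
        size = struct.unpack("<I", header[4:])[0]
        yield header[:4].decode("ascii").strip(), stream.read(size)

buf = io.BytesIO()
write_chunk(buf, "data", b"\x01\x02\x03\x04")          # sampled readings
write_chunk(buf, "meta", b'{"appliance": "fridge"}')   # annotation
buf.seek(0)
chunks = dict(read_chunks(buf))
print(sorted(chunks))  # ['data', 'meta']
```

Wave64 itself uses 128-bit GUID chunk ids and 64-bit sizes to escape RIFF's 4 GB limit, which is why it suits long, high-rate energy recordings; the 32-bit layout above is only the simplest variant of the pattern.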


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Longtao Mu ◽  
Yunfei Zhou ◽  
Tiebiao Zhao

Abstract This paper studies robot arm sorting position control based on the Robot Operating System (ROS), drawing on the characteristics of a top-grasp sorting operation, to automate the sorting task and improve the work efficiency of workpiece sorting. Through the ROS MoveIt! module, the sorting pose and movement path of the robotic arm are planned, the inverse kinematics of the sorting robotic arm is solved, and the movement pose characteristics of the sorting robotic arm are analysed. The robot arm model was created in SolidWorks, the URDF model file of the arm was exported through the sw2urdf plugin conversion tool, and the parameters were configured. For 6-degree-of-freedom (DOF) robot motion simulation in ROS, the rapidly-exploring random tree (RRT) algorithm from the Open Motion Planning Library (OMPL) is selected. Motion planning is analysed and the sorting routine drives a UR5 manipulator. The results show that the sorting pose and motion trajectory of the robot arm are determined by controlling the sorting pose of the sorting robot arm, and the maximum radius of the tool centre point (TCP) rotation and the position of the workpiece are obtained. This method can improve the success rate of industrial sorting robots in grabbing objects, and the analysis is of significance to research on robots' autonomous object grabbing.
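In the paper the RRT planner comes from OMPL via MoveIt!, but the core of the algorithm fits in a short standalone sketch. The 10x10 workspace, step size, and function names here are illustrative, not from the paper:

```python
import math
import random

def rrt(start, goal, step=0.5, iters=500, seed=0):
    """Minimal 2-D rapidly-exploring random tree: repeatedly sample a point,
    find the nearest tree node, and extend from it toward the sample."""
    rng = random.Random(seed)
    nodes = [start]
    for _ in range(iters):
        sample = (rng.uniform(0, 10), rng.uniform(0, 10))
        near = min(nodes, key=lambda n: math.dist(n, sample))
        d = math.dist(near, sample)
        if d == 0:
            continue
        # move one fixed step from the nearest node toward the sample
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        nodes.append(new)
        if math.dist(new, goal) <= step:
            break  # close enough to the goal
    return nodes

tree = rrt((0.0, 0.0), (9.0, 9.0))
print(len(tree))
```

A real planner additionally rejects extensions that collide with obstacles and, in OMPL's RRTConnect variant, grows a second tree from the goal; MoveIt! then time-parameterizes the resulting path for the UR5's joint limits.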


2020 ◽  
Author(s):  
Andrea Conley ◽  
Patrick Hammond ◽  
Sanford Ballard ◽  
Michael Begnaud

Deploying deep learning models requires extracting the model weights from the training environment and saving them to files that can be shipped to production. Complex models often have large model files that are difficult to transport; this paper aims to reduce the size of the model file while transferring the trained weights to the production environment. Weight rolls is an algorithm that rolls down (reduces) the trained model weights to a smaller size, in some cases by a factor of one thousand (1,000). In the production environment the weights are unrolled again to regain the original values learned by the neural network during training. Weight rolls uses a compressed pictorial representation of the weights array, along with a pix-to-weight neural network, to transport the learned weights; on the other end the network is used for the unrolling process. The pix-to-weight network maps the pixels of the compressed weight image to the original floating-point values, so that in the unrolling phase the pixels are transformed back into the corresponding floating-point values of the trained weights.
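The roll/unroll idea can be sketched with plain quantization: floats become byte-sized "pixels" and are mapped back afterwards. The paper learns the inverse mapping with a pix-to-weight network; the linear dequantization below is only a stand-in for it, and all names and values are illustrative:

```python
def roll(weights, levels=256):
    """'Roll' float weights into byte-sized pixel values (lossy quantization)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (levels - 1) if hi > lo else 1.0
    pixels = [round((w - lo) / scale) for w in weights]
    return pixels, lo, scale

def unroll(pixels, lo, scale):
    """Map pixels back to approximate floats (stand-in for the pix-to-weight net)."""
    return [lo + p * scale for p in pixels]

weights = [-0.31, 0.02, 0.47, -0.05, 0.11]
pixels, lo, scale = roll(weights)
restored = unroll(pixels, lo, scale)
print(max(abs(w - r) for w, r in zip(weights, restored)))  # small (< 0.01)
```

Each pixel costs one byte instead of four (float32), and arranging the pixels as an image lets standard image compression shrink the file further, which is where reduction factors well beyond 4x come from.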


These days most data gathered is unstructured. Obtaining labelled data is becoming very hard due to the volume of data generated every second, and it is almost impossible to train a model on unstructured, unlabelled data. In this approach, the unlabelled data will be divided into groups using machine learning techniques, and CNN/deep learning/machine learning models will be trained on the grouped data. The model will be enhanced over time through feedback given by users and through the addition of new data. Existing models can be trained on labelled data only; without labelled data they cannot be used for prediction or reinforcement learning. In this approach, even though the data is unlabelled, the model can be trained with the help of a subject matter expert (SME) once a feature column is specified. This will be helpful in many areas of classification and in the prediction of trends and patterns. Supervised machine learning and deep learning techniques will be used, with Python, PyTorch, and TensorFlow as tools. The input can be any data (audio, video, image, or text); the output is labelled data and a model file that can be used for further predictions and improved through feedback.
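The grouping step described above (dividing unlabelled data so that a supervised model can then be trained on the groups) is classically done with a clustering algorithm. A minimal one-dimensional k-means sketch, generic rather than the paper's actual pipeline:

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Group unlabelled scalars into k clusters; returns one label per value."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # assign each value to its nearest center, then recompute the centers
        labels = [min(range(k), key=lambda c: abs(v - centers[c])) for v in values]
        for c in range(k):
            members = [v for v, l in zip(values, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]  # two obvious groups
labels = kmeans_1d(data)
print(labels)
```

The resulting group ids can serve as provisional labels that an SME reviews and corrects, after which an ordinary supervised classifier is trained on the confirmed labels.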


The number of patents filed across the world is increasing day by day. As more patents are filed, segregating them by class becomes even more difficult; since little prior work has addressed the efficiency of this process, patent mining is performed. A set of features is extracted from the existing dataset; the extracted features vary for each document, and the subsequent steps are carried out based on them. After feature extraction, two steps follow: classification and prediction. For this purpose a decision tree algorithm is used, which classifies using the most prominent features. A hierarchical decision tree algorithm is therefore used for classification, along with the probability of patent conversion. Based on this classification a model is created, and whenever a new entity arrives it is compared with the model file built from the available datasets and predicted as a particular class. Thus both classification of the existing dataset and prediction for any new dataset based on previous inputs can be achieved, thereby facilitating the patent mining process.
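Splitting on the "most prominent feature" is the core move of a decision tree. A one-level sketch (a decision stump) with made-up patent features, not the paper's hierarchical algorithm:

```python
def best_split(rows, labels):
    """Pick the feature and threshold that best separate two classes,
    i.e. the single most prominent feature (a one-level decision tree)."""
    best = None
    for f in range(len(rows[0])):
        for row in rows:
            t = row[f]
            left = [l for r, l in zip(rows, labels) if r[f] <= t]
            right = [l for r, l in zip(rows, labels) if r[f] > t]
            # count misclassifications if each side predicts its majority label
            err = sum(min(side.count(0), side.count(1))
                      for side in (left, right) if side)
            if best is None or err < best[0]:
                best = (err, f, t)
    return best[1], best[2]

# toy patent features: [claim_count, citation_count]; label 1 = class A
rows = [[3, 10], [4, 12], [20, 2], [18, 3]]
labels = [1, 1, 0, 0]
feature, threshold = best_split(rows, labels)
print(feature, threshold)  # 0 4
```

A full tree repeats this split recursively on each side, and the paper's hierarchical variant additionally weighs the probability of patent conversion at each level.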


2019 ◽  
Vol 893 ◽  
pp. 116-120
Author(s):  
Hong Xin Wang ◽  
Peng Zhang

This thesis develops a chassis dynamic performance analysis platform for the passenger car module based on VC#.NET software. Its purpose is to significantly shorten the development cycle of other models on the same platform during modification. The platform is built mainly through interactive menus and dialogues. By internally calling the Adams/Car module to build a similar car model file and altering some design variables, the suspension and chassis models of the required vehicles can be built quickly. Using the pre-designed suspension and vehicle stability analysis projects as templates, the platform achieves rapid simulation analysis and timely access to the results. Its accuracy was proved by the K&C test of the developed model, and it provides a useful reference for later product development.


2018 ◽  
Vol 10 (7) ◽  
pp. 168781401878741
Author(s):  
Jingbin Hao ◽  
Hansong Ji ◽  
Hao Liu ◽  
Zhongkai Li ◽  
Haifeng Yang

Colorized physical terrain models are needed in many applications, such as intelligent navigation, military strategy planning, landscape architecture, and land-use planning. However, terrain elevation information is currently stored in the digital elevation model (DEM) file format, while terrain color information is generally stored in aerial images. A method is presented to directly convert the DEM file and aerial images of a given terrain into a colorized virtual three-dimensional terrain model that can be processed and fabricated by color three-dimensional printers. First, the elevation data and color data were registered and fused. Second, the colorized terrain surface model was created using the Virtual Reality Modeling Language (VRML) file format. Third, the colorized three-dimensional terrain model was built by adding a base and four walls. Finally, the colorized physical terrain model was fabricated using a color three-dimensional printer. A terrain sample with typical topographic features was selected for analysis, and the results demonstrated that the colorized virtual three-dimensional terrain model can be constructed efficiently and the colorized physical terrain model can be fabricated precisely, which makes it easier for users to understand and make full use of the given terrain.
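VRML97 represents a height field directly with an ElevationGrid node, which is one natural target for the DEM-to-surface step. A minimal geometry-only sketch (color and texture omitted; the helper name is illustrative, not from the paper):

```python
def dem_to_vrml(elevation, cell=1.0):
    """Emit a VRML97 ElevationGrid node from a 2-D elevation array."""
    rows, cols = len(elevation), len(elevation[0])
    heights = " ".join(str(h) for row in elevation for h in row)
    return (
        "#VRML V2.0 utf8\n"
        "Shape { geometry ElevationGrid {\n"
        f"  xDimension {cols} zDimension {rows}\n"
        f"  xSpacing {cell} zSpacing {cell}\n"
        f"  height [ {heights} ]\n"
        "} }\n"
    )

dem = [[0, 1], [2, 3]]  # toy 2x2 elevation grid
vrml = dem_to_vrml(dem)
print(vrml)
```

In the paper's pipeline, per-vertex colors fused from the aerial imagery would be added (e.g. via the node's `color` field), and the base and walls appended as further geometry before export to the printer.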


2017 ◽  
Vol 3 (3) ◽  
pp. 215
Author(s):  
Nur Hizbullah ◽  
Fazlur Rachman ◽  
Fuzi Fauziah

<p><em>Abstract – </em><strong>This research aims to develop a model of a digital Qur'an corpus file that can be used as primary data for linguistic research within the branch of corpus linguistics concerned with word lists and concordances in the Qur'an. The research uses a combined method of exploration and experimentation to survey a variety of corpus-processing applications and test them one by one on the Arabic text of the Qur'an with all its characteristics. After the right application was found, a descriptive method was used to factually describe the mechanism of processing the digital material into a Qur'an corpus format and of compiling its word list and concordance. The study shows that the WordSmith application is the most adequate for processing Arabic text within the framework of corpus linguistics. Following procedures and steps consistent with the application's workflow, a digital Qur'an file can be produced that is technically qualified to be processed into a word list and a concordance.</strong></p><p><strong><em>Keywords: </em></strong><em>corpus linguistics, Qur'an corpus, word list, concordance</em></p>

