COMPUTER AIDED KNOT THEORY USING MATHEMATICA AND MATHLINK

2002 ◽  
Vol 11 (06) ◽  
pp. 945-954 ◽  
Author(s):  
NORIKO IMAFUJI ◽  
MITSUYUKI OCHIAI

We introduce a computer tool called Knot2000 (K2K), developed to support research in knot theory. K2K is a Mathematica package consisting of 19 functions, and it has already been released to the public together with other external programs and data files. In this paper, we describe the usage of each function, give examples of effective ways to use K2K, and demonstrate its utility.

2019 ◽  
Vol 41 (1) ◽  
pp. 37-52
Author(s):  
Tongxin Sun ◽  
Bu Zhong

A computer-aided semantic analysis (using Linguistic Inquiry and Word Count [LIWC]) examined how newspaper coverage of air pollution from 2014 to 2017 may affect the public agenda in four cities—Hong Kong, London, Pittsburgh, and Tianjin. Results show that after controlling for real-time air quality, an agenda-setting effect was found in Hong Kong, London, and Pittsburgh, but not in Tianjin. Tianjin’s reports also contained more future-framed words but fewer present-framed words than those of the other cities.


Author(s):  
Abdulhameed Alkhateeb

Breast cancer is one of the most common cancers among women, and its gravity is evidenced by the fact that its mortality rate is the second highest after lung cancer. For the detection of breast cancer, mammography has emerged as the most effective modality, despite the challenges posed by dense breast parenchyma. In this regard, computer-aided detection (CADe) leverages the output of mammography systems to facilitate the radiologist’s decision. It can be defined as a system in which the radiologist, when making a diagnosis, draws on suggestions generated by a computer after it has analyzed a set of the patient’s radiological images. Against this backdrop, the current paper examines different ways of applying known image-processing and machine-learning techniques to the detection of breast cancer using CADe—more specifically, using mammogram images. This, in turn, supports pathologists in their decision-making process. For effective implementation of this methodology, a CADe system was developed and tested on the public, freely available MIAS mammographic database. The CADe system is designed to differentiate between normal and abnormal tissues, and it assists radiologists in avoiding missed breast abnormalities. All classifiers performed best when features were selected with the sequential forward selection (SFS) method. We also conclude that the gray-level quantization of the gray-level co-occurrence matrix (GLCM) is a significant factor in obtaining robust higher-order features, with the best results obtained when L equals the size of the region of interest (ROI). Using a large number of diverse features helps make the CADe system robust enough to distinguish between the different tissues.
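To illustrate the kind of texture features involved, the following Python sketch (a toy example under stated assumptions, not the paper's implementation) computes a normalized gray-level co-occurrence matrix for a small quantized ROI and derives a few classic second-order features; with the quantization level L equal to the ROI size, every gray level keeps its own bin.

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset.
    `image` is a 2-D array already quantized to integers in [0, levels)."""
    mat = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[image[y, x], image[y + dy, x + dx]] += 1
    total = mat.sum()
    return mat / total if total else mat

def texture_features(p):
    """A few classic second-order texture features from a normalized GLCM."""
    i, j = np.indices(p.shape)
    return {
        "contrast": np.sum((i - j) ** 2 * p),       # local intensity variation
        "energy": np.sum(p ** 2),                   # texture uniformity
        "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
    }

# Tiny quantized "ROI": with L equal to the ROI side (4), each gray
# level occupies its own co-occurrence bin.
roi = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
feats = texture_features(glcm(roi, levels=4))
```

In a full pipeline these features would be computed per ROI and fed to the classifiers after SFS feature selection.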


Author(s):  
R Vanitha ◽  
K Ramkumar ◽  
G Rajtilak ◽  
V Rajasekar

ABSTRACT A 37-year-old female patient reported to the hospital with a nasal defect due to carcinoma. She had previously been restored with nasal prostheses, but was not satisfied with their cosmetic appeal. A computerized tomographic (CT) scan of the defect area was made and converted into 3-dimensional (3D) digital data using dedicated medical imaging software. From the 3D image, measurements of the defect were calculated and compared with various nasal fossa measurements available in the digital database. A 3D nose model whose measurements closely matched the defect area was extracted, superimposed on the defect area, and its margins adjusted. The data files were then sent for rapid prototyping (RP). An RP model was fabricated, which was duplicated in wax and processed. The final result was a nasal prosthesis that conformed well to the patient’s face and was also esthetically acceptable. The main advantage of computer-aided design (CAD)-RP is that it allows trying various nasal forms on the patient’s face within a few hours. This saves chair time, eliminates the impression step, and gives the patient and dentist a variety of options. How to cite this article Vanitha R, Ramkumar K, Rajtilak G, Rajasekar V. Designing a Nasal Prosthesis using CAD-RP Technology. Int J Prosthodont Restor Dent 2012;2(3):108-112.


2021 ◽  
Author(s):  
Francis J J. Ambrosio ◽  
Jill V Hagey ◽  
Kevin Libuit ◽  
Technical Outreach and Assistance for States Team

The Titan_ONT workflow is part of the Public Health Viral Genomics Titan series for SARS-CoV-2 genomic characterization. Titan_ONT was written specifically to process basecalled and demultiplexed Oxford Nanopore Technology (ONT) read data. Input reads are assumed to be the product of sequencing ARTIC V3 tiled PCR amplicons designed for the SARS-CoV-2 genome. Upon initiating a Titan_ONT run, the input read data provided for each sample will be processed to perform consensus genome assembly, infer the quality of both the raw read data and the generated consensus genome, and assign lineage or clade designations as outlined in the Titan_ONT data workflow diagram below. Additional technical documentation for the Titan_ONT workflow is available at: https://public-health-viral-genomics-theiagen.readthedocs.io/en/latest/titan_workflows.html#titan-workflows-for-genomic-characterization

Required input data for Titan_ONT:
- Basecalled and demultiplexed ONT read data files (single FASTQ file per sample)
- Primer sequence coordinates of the PCR scheme utilized, in BED file format

Titan_ONT has not been written to process FAST5 files.

Video instruction:
- Theiagen Genomics: Titan Genomic Characterization https://www.youtube.com/watch?v=zP9I1r6TNrw
- Theiagen Genomics: Titan Outputs QC https://www.youtube.com/watch?v=Amb-8M71umw

For technical assistance please contact us at: [email protected]


2013 ◽  
Vol 30 (4) ◽  
pp. 759-788 ◽  
Author(s):  
Michel Ollitrault ◽  
Jean-Philippe Rannou

Abstract During the first decade of the twenty-first century, more than 6000 Argo floats have been launched over the World Ocean, gathering temperature and salinity data from the upper 2000 m, at a 10-day or so sampling period. Meanwhile their deep displacements can be used to map the ocean circulation at their drifting depth (mostly around 1000 m). A comprehensive processing of the whole Argo dataset collected prior to 1 January 2010 has been performed to produce a world-wide dataset of deep displacements. This numerical atlas, named ANDRO, after a traditional dance of Brittany meaning a swirl, comprises some 600 000 deep displacements. These displacements, based on Argo or GPS surface locations only, have been fully checked and corrected for possible errors found in the public Argo data files (due to incorrect decoding or instrumental failure). Park pressures measured by the floats while drifting at depth are preserved in ANDRO (less than 2% of the park pressures are unknown): 63% of the float displacements are in the layer (900, 1100) dbar with a good (more or less uniform) degree of coverage of all the oceans, except around Antarctica (south of 60°S). Two deeper layers—(1400, 1600) and (1900, 2100) dbar—are also sampled (11% and 8% of the float displacements, respectively) but with poorer geographical coverage. Grounded cycles (i.e., if the float hits the sea bottom) are excluded. ANDRO is available online as an ASCII file.
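As a rough illustration of how such displacement records translate into circulation estimates, the sketch below converts a deep displacement between two subsurface fixes into a mean drift speed. It is a hypothetical example: ANDRO's actual ASCII column layout differs, and the function name and sample coordinates are invented for illustration.

```python
import math

def drift_speed_cm_s(lat1, lon1, lat2, lon2, days):
    """Great-circle distance (haversine on a spherical Earth) between two
    float fixes, divided by the elapsed drift time, returned in cm/s."""
    r = 6371.0e3  # mean Earth radius in metres
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    dist = 2 * r * math.asin(math.sqrt(a))  # metres along the great circle
    return 100.0 * dist / (days * 86400.0)

# A float drifting for one ~10-day cycle at its park depth between two
# surface-extrapolated fixes (made-up positions):
speed = drift_speed_cm_s(35.00, -40.00, 35.05, -40.10, 10.0)
```

Mid-depth drift speeds of order 1 cm/s, as in this toy case, are typical of the ~1000 dbar layer the abstract describes.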


Author(s):  
Ameya Divekar ◽  
Joshua D. Summers

Design engineers create models of design artifacts with commercial Computer Aided Design (CAD) solid modeling systems. These systems stop short of providing support for querying and retrieving data from within the CAD data files. The design exemplar has been proposed as an approach to developing a CAD query language based upon an analysis of the design exemplar components, vocabulary, and extensions to support logical connectives. The implementation of the required extensions is offered in this paper. Algorithms are developed to implement the NOT and OR logical connectives. These algorithms are discussed as they relate to the generic exemplar algorithm. The verification of the algorithms is performed using test cases and comparing the expected results with those found using the software. The design exemplar, supported with the AND, NOT, and OR logical connectives, provides for complex and precise query expression and geometric information retrieval.
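The role of the logical connectives can be sketched with ordinary predicate combinators, as below. This is an illustrative Python analogy under invented names (`AND`, `OR`, `NOT`, the toy entity records), not the paper's actual exemplar algorithm or data model.

```python
# Predicate combinators mirroring the AND, NOT, and OR connectives of a
# query language: each takes predicates and returns a composed predicate.
def AND(*preds):
    return lambda e: all(p(e) for p in preds)

def OR(*preds):
    return lambda e: any(p(e) for p in preds)

def NOT(pred):
    return lambda e: not pred(e)

# Toy stand-ins for CAD entities: dicts with a feature type and a radius.
entities = [
    {"type": "hole", "radius": 2.0},
    {"type": "hole", "radius": 6.0},
    {"type": "boss", "radius": 2.0},
]

is_hole = lambda e: e["type"] == "hole"
is_small = lambda e: e["radius"] < 5.0

# Query: holes that are NOT small, OR any boss feature.
query = OR(AND(is_hole, NOT(is_small)), lambda e: e["type"] == "boss")
matches = [e for e in entities if query(e)]
```

Composing predicates this way is what lets a query language express complex, precise retrieval conditions from a small set of primitives.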


2015 ◽  
Vol 26 (7) ◽  
pp. 2139-2147 ◽  
Author(s):  
Colin Jacobs ◽  
Eva M. van Rikxoort ◽  
Keelin Murphy ◽  
Mathias Prokop ◽  
Cornelia M. Schaefer-Prokop ◽  
...  

2021 ◽  
Author(s):  
Zijian Zhang ◽  
Yan Jin

Abstract The goal of this research is to develop a computer-aided visual analogy support (CAVAS) framework that can augment designers’ visual analogical thinking by providing relevant visual cues or sketches from a variety of categories and stimulating the designer to make more and better visual analogies at the ideation stage of design. The challenges of this research include what roles a computer tool should play in facilitating visual analogy of designers, what the relevant and meaningful visual analogies are at the sketching stage of design, and how the computer can capture such meaningful visual knowledge from various categories through analyzing the sketches drawn by the designers. A visual analogy support framework and a deep clustering model, called Cavas-DL, are proposed to learn a latent space of sketches that can reveal the shape patterns for multiple categories of sketches and at the same time cluster the sketches to preserve and provide category information as part of visual cues. The learned latent space serves as a visual information representation that captures the shape features learned from multiple sketch categories. Distance- and overlap-based similarities are introduced and analyzed to identify long- and short-distance analogies. Extensive evaluations of the performance of our proposed methods are carried out with different configurations, and visual presentations of the potential analogical cues are explored. The evaluation results and the visual organizations of information demonstrate the potential usefulness of the Cavas-DL model.
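The distance-based notion of analogy can be sketched as follows: rank candidate sketch categories by the distance between their latent-space centroids and a query sketch's latent vector, so nearby categories yield short-distance analogies and remote ones long-distance analogies. This is an assumed simplification for illustration (invented 2-D centroids and function name), not the paper's exact Cavas-DL formulation.

```python
import numpy as np

# Hypothetical latent-space centroids for three sketch categories
# (real Cavas-DL latent vectors are higher-dimensional and learned).
latent_centroids = {
    "chair":    np.array([0.10, 0.20]),
    "bench":    np.array([0.25, 0.05]),
    "airplane": np.array([0.90, 0.80]),
}

def ranked_analogies(query, centroids):
    """Return category names sorted from nearest (short-distance analogy)
    to farthest (long-distance analogy) from the query latent vector."""
    return sorted(centroids, key=lambda c: np.linalg.norm(centroids[c] - query))

# Latent vector of a designer's current sketch (made-up value):
order = ranked_analogies(np.array([0.15, 0.15]), latent_centroids)
```

Serving cues from the far end of this ranking is one way a tool could nudge a designer toward less obvious, long-distance analogies.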


2018 ◽  
Vol 72 (1) ◽  
pp. 55-61 ◽  
Author(s):  
Antoine Daina ◽  
Marie-Claude Blatter ◽  
Vivienne Baillie Gerritsen ◽  
Vincent Zoete

Author(s):  
Christine M O’Keefe

ABSTRACT

Introduction
For several years, population data linkage initiatives around the world have been successfully linking population-based administrative and other datasets and making extracts available for research under strong confidentiality protections.1 This paper provides an overview of current approaches in a range of scenarios, then outlines current relevant trends and their potential implications for population data linkage initiatives.

Methods
Approaches to protecting the confidentiality of data in research can also reduce its statistical usefulness, and the trade-off between confidentiality protection and statistical usefulness is often represented as a Risk-Utility map [2, 3, 5, 7]. Positioning the range of current approaches on such a map can indicate the relative nature of the trade-off in each case. Such a Risk-Utility map is only part of the story, however. Each approach needs to be implemented with appropriate levels of governance, information technology security, and ethical oversight. In addition, there are several changes in the external environment that have potential implications for population data linkage initiatives.

Results and Discussion
Current approaches to protecting the confidentiality of data in research fall into one of two classes. The first class comprises approaches that anonymise the data before analysis, namely:
- removal of identifying information such as names and addresses
- secure data centres on-site at the custodian premises
- public use files made widely available
- synthetic data files made widely available
- open data files published on the internet
The second class comprises approaches that anonymise the analysis outputs, namely:
- virtual data centres that are online versions of secure data centres [8]
- remote analysis centres where users can request analyses but cannot see data
Many such initiatives implicitly or explicitly use criteria that have recently been captured in the Five Safes model [3].
However, changes in the external environment may add potential implications to address [6]. First, there is a rapid increase in scenarios for data use, many of which involve multiple datasets from multiple sources with multiple custodians. This raises the question of whether there should be centralised data integration or a proliferation of ad hoc, decentralised but inter-related initiatives. In any case, harmonised and shared governance will be essential. Next, the public are becoming increasingly informed and are increasingly exercising their privacy preferences in selecting between competing service providers. It is likely that the public will demand that initiatives move beyond education to gain acceptance, toward a model of full partnership.

Conclusions
While population data linkage initiatives have been successful to date, changes in the external environment have potential implications such as a need for harmonised and shared governance, as well as full partnership with the public. Meeting the future challenges will require sophistication in the selection, design and operation of approaches to protecting the confidentiality of data in research; useful frameworks in this context include [1, 4]. Importantly, a range of approaches is necessary to adequately meet the needs of a range of different scenarios.

Acknowledgements
This work was partially supported by a grant from the Simons Foundation. The author thanks the Isaac Newton Institute for Mathematical Sciences, University of Cambridge, for support and hospitality during the programme Data Linkage and Anonymisation, which was supported by EPSRC grant no. EP/K032208/1.
1 For a list of administrative data linkage centres around the world, see www.ipdln.org/data-linkage-centres

Key References
[1] Desai T, Ritchie F, Welpton R. Five Safes: designing data access for research. Preprint, 2016.
[2] Duncan G, Elliot M, Salazar-González JJ. Statistical Confidentiality. Springer: New York, 2011.
[3] El Emam K. A Guide to the De-identification of Health Information. CRC Press: New York, NY, 2013.
[4] Elliot M, Mackey E, O’Hara K, Tudor C. The Anonymisation Decision-Making Framework. http://ukanon.net/wp-content/uploads/2015/05/The-Anonymisation-Decision-making-Framework.pdf
[5] Hundepool A, Domingo-Ferrer J, Franconi L, Giessing S, Nordholt E, Spicer K, de Wolf PP. Statistical Disclosure Control. Wiley Series in Survey Methodology. John Wiley & Sons: United Kingdom, 2012.
[6] O’Keefe CM, Gould P, Chipperfield JO. A Five Safes perspective on administrative data integration initiatives, submitted.
[7] O’Keefe CM, Rubin DB. Individual privacy versus public good: protecting confidentiality in health research. Statistics in Medicine 34 (2015), 3081-3103. DOI: 10.1002/sim.6543
[8] O’Keefe CM, Westcott M, O’Sullivan M, Ickowicz A, Churches T. Anonymization for outputs of population health and health services research conducted via an online data centre. JAMIA, in press.

