Egyptian Shabtis Identification by Means of Deep Neural Networks and Semantic Integration with Europeana

2020 ◽  
Vol 10 (18) ◽  
pp. 6408
Author(s):  
Jaime Duque Domingo ◽  
Jaime Gómez-García-Bermejo ◽  
Eduardo Zalama

Ancient Egyptians had a complex religion, which remained active for longer than the time that has elapsed from Cleopatra to the present day. One striking belief was that of being buried with funerary statuettes to help the deceased carry out his or her tasks in the underworld. These funerary statuettes, commonly known as shabtis, were produced in different materials and were usually inscribed in hieroglyphs with formulas that include the name of the deceased. Shabtis are important archaeological objects that can help to identify their owners, their jobs and ranks, or their families. They are also used for tomb dating because, depending on elements such as color, formula, tools, wig, and hand position, they can be associated with a specific type or period. Shabtis are spread all over the world, in excavations, museums, and private collections, and many of them have never been studied and identified because this process requires in-depth study and reading of the hieroglyphs. Our system addresses this problem using two different YOLO v3 networks, one detecting the figure itself and the other the hieroglyphic names, which together provide identification and cataloguing. Until now, there has been no other work on the detection and identification of shabtis. In addition, a semantic approach has been followed: an ontology connects our system with the semantic metadata aggregator Europeana, linking our results with known shabtis in different museums. A complete dataset has been created, a comparison with previous techniques used for similar problems, such as SIFT for ancient coin classification, is provided, and the results of identification and cataloguing are shown. These results compare favourably with those reported for similar problems and have led us to create a web application that showcases our system and is available online.
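As a rough illustration of the two-network detection stage described in the abstract, the sketch below loads two separately trained YOLO v3 models with OpenCV's DNN module, one for the shabti figure and one for the hieroglyphic names, and runs the name detector inside the cropped figure region. The file names, class layout, and thresholds are hypothetical assumptions; the paper's actual models and code are not reproduced here.

```python
# Illustrative two-stage detection sketch (not the authors' code).
# Assumes two YOLOv3 models trained separately: one for the shabti figure,
# one for the hieroglyphic names. File names and thresholds are hypothetical.
import cv2
import numpy as np

def load_yolo(cfg_path, weights_path):
    """Load a Darknet YOLOv3 model with OpenCV's DNN module."""
    return cv2.dnn.readNetFromDarknet(cfg_path, weights_path)

def detect(net, image, conf_threshold=0.5, nms_threshold=0.4):
    """Run a forward pass and return bounding boxes above the confidence threshold."""
    h, w = image.shape[:2]
    blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences = [], []
    for output in outputs:
        for det in output:
            confidence = float(det[5:].max())
            if confidence >= conf_threshold:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confidences.append(confidence)
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_threshold, nms_threshold)
    return [boxes[i] for i in np.array(keep).flatten()]

# Stage 1: locate the shabti figure; Stage 2: detect hieroglyphic names
# inside the cropped figure region.
figure_net = load_yolo("shabti_figure.cfg", "shabti_figure.weights")  # hypothetical files
name_net = load_yolo("hiero_names.cfg", "hiero_names.weights")        # hypothetical files
image = cv2.imread("shabti_photo.jpg")
for (x, y, bw, bh) in detect(figure_net, image):
    crop = image[max(y, 0):y + bh, max(x, 0):x + bw]
    name_boxes = detect(name_net, crop)
```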

Author(s):  
Morgan Magnin ◽  
Guillaume Moreau ◽  
Nelle Varoquaux ◽  
Benjamin Vialle ◽  
Karen Reid ◽  
...  

A critical component of the learning process is the feedback that students receive on their work: it validates their progress, identifies flaws in their thinking, and points to skills that still need to be learned. Many higher-education institutions have developed an active pedagogy that gives students opportunities for different forms of assessment and feedback. This means that students have numerous lab exercises, assignments, and projects. Both instructors and students thus require effective tools to efficiently manage the submission, assessment, and individualized feedback of students’ work. The open-source web application MarkUs aims to meet these needs: it facilitates the submission and assessment of students’ work. Students submit their work directly through MarkUs rather than printing it or sending it by email. Instructors or teaching assistants use MarkUs’s interface to view the students’ work, annotate it, and fill in a marking rubric. Students use the same interface to read the annotations and learn from the assessment. Managing the students’ submissions and the instructors’ assessments within a single online system has led to several positive pedagogical outcomes: the number of late submissions has decreased, assessment time has been drastically reduced, and students can access their results and read the instructor’s feedback as soon as grading is completed. Using MarkUs has also significantly reduced the time that instructors spend collecting assignments, creating marking schemes, passing them on to graders, handling special cases, and returning work to the students. In this paper, we introduce MarkUs’s features and illustrate their benefits for higher education through our own teaching experience and that of our colleagues. We also describe an important benefit of the fact that the tool itself is open source: MarkUs has been developed entirely by students, giving them a valuable learning opportunity as they work on a large software system that real users depend on. Virtuous circles indeed arise, with former users of MarkUs becoming developers and then supervisors of further development. We conclude by drawing perspectives on forthcoming features and uses, both technical and pedagogical.
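MarkUs itself is a web application and the abstract does not describe its internals; purely to picture the submission–annotation–rubric flow discussed above, here is a minimal, hypothetical sketch. The class and field names are invented for illustration and do not correspond to MarkUs's actual data model or code.

```python
# Illustrative sketch of a rubric-based assessment record, in the spirit of the
# workflow described in the abstract (submission -> annotations -> rubric marks
# -> feedback). Not MarkUs's actual data model; all names are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Annotation:
    file_name: str
    line: int
    comment: str

@dataclass
class RubricCriterion:
    name: str
    max_mark: float
    mark: float = 0.0

@dataclass
class Assessment:
    student_id: str
    annotations: List[Annotation] = field(default_factory=list)
    rubric: List[RubricCriterion] = field(default_factory=list)

    def total(self) -> float:
        """Sum the marks awarded across all rubric criteria."""
        return sum(c.mark for c in self.rubric)

# A grader annotates a submission and fills in the rubric; the student later
# sees the same annotations and marks through the web interface.
assessment = Assessment(
    student_id="s1234",
    annotations=[Annotation("main.py", 17, "Off-by-one error in the loop bound.")],
    rubric=[RubricCriterion("Correctness", 10, 8), RubricCriterion("Style", 5, 4)],
)
print(assessment.total())  # 12.0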


GigaScience ◽  
2020 ◽  
Vol 9 (10) ◽  
Author(s):  
Katrina L Kalantar ◽  
Tiago Carvalho ◽  
Charles F A de Bourcy ◽  
Boris Dimitrov ◽  
Greg Dingle ◽  
...  

Abstract
Background: Metagenomic next-generation sequencing (mNGS) has enabled the rapid, unbiased detection and identification of microbes without pathogen-specific reagents, culturing, or a priori knowledge of the microbial landscape. mNGS data analysis requires a series of computationally intensive processing steps to accurately determine the microbial composition of a sample. Existing mNGS data analysis tools typically require bioinformatics expertise and access to local server-class hardware resources. For many research laboratories, this presents an obstacle, especially in resource-limited environments.
Findings: We present IDseq, an open source cloud-based metagenomics pipeline and service for global pathogen detection and monitoring (https://idseq.net). The IDseq Portal accepts raw mNGS data, performs host and quality filtration steps, then executes an assembly-based alignment pipeline, which results in the assignment of reads and contigs to taxonomic categories. The taxonomic relative abundances are reported and visualized in an easy-to-use web application to facilitate data interpretation and hypothesis generation. Furthermore, IDseq supports environmental background model generation and automatic internal spike-in control recognition, providing statistics that are critical for data interpretation. IDseq was designed with the specific intent of detecting novel pathogens. Here, we benchmark novel virus detection capability using both synthetically evolved viral sequences and real-world samples, including IDseq analysis of a nasopharyngeal swab sample acquired and processed locally in Cambodia from a tourist from Wuhan, China, infected with the recently emergent SARS-CoV-2.
Conclusion: The IDseq Portal reduces the barrier to entry for mNGS data analysis and enables bench scientists, clinicians, and bioinformaticians to gain insight from mNGS datasets for both known and novel pathogens.
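The pipeline stages named in the abstract (host and quality filtration, alignment-based taxon assignment, and relative-abundance reporting against an environmental background model) can be pictured with the toy sketch below. It is a conceptual illustration only, not IDseq's code; the data structures, thresholds, and k-mer "alignment" stand-in are assumptions.

```python
# Conceptual sketch of an mNGS analysis flow: quality/host filtration,
# taxon assignment, and relative abundance with a background model.
# NOT IDseq's pipeline; all names, thresholds, and structures are hypothetical.
from collections import Counter

def filter_reads(reads, min_quality=30, host_kmers=frozenset()):
    """Keep reads above a quality threshold that do not look host-derived."""
    return [r for r in reads
            if r["mean_q"] >= min_quality and not (set(r["kmers"]) & host_kmers)]

def assign_taxa(reads, reference):
    """Toy 'alignment': assign each read to the reference taxon sharing the
    most k-mers with it (a stand-in for the real assembly/alignment step)."""
    assignments = {}
    for r in reads:
        best = max(reference, key=lambda t: len(set(r["kmers"]) & reference[t]))
        assignments[r["id"]] = best
    return assignments

def abundance_with_background(assignments, background):
    """Relative abundance per taxon, plus a crude z-score against counts
    observed in negative-control (background) samples."""
    counts = Counter(assignments.values())
    total = sum(counts.values())
    report = {}
    for taxon, n in counts.items():
        bg = background.get(taxon, {"mean": 0.0, "sd": 1.0})
        report[taxon] = {
            "relative_abundance": n / total,
            "z_vs_background": (n - bg["mean"]) / max(bg["sd"], 1e-9),
        }
    return report
```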


2018 ◽  
Vol 45 (3) ◽  
pp. 364-386
Author(s):  
Ceri Binding ◽  
Douglas Tudhope ◽  
Andreas Vlachidis

This study investigates the semantic integration of data extracted from archaeological datasets with information extracted via natural language processing (NLP) across different languages. The investigation follows a broad theme relating to wooden objects and their dating via dendrochronological techniques, covering types of wooden material, samples taken, and wooden objects including shipwrecks. The outcomes are an integrated RDF dataset coupled with an associated interactive research demonstrator and query builder application. The semantic framework combines the CIDOC Conceptual Reference Model (CRM) with the Getty Art and Architecture Thesaurus (AAT). The NLP, data cleansing, and integration methods are described in detail, together with illustrative scenarios from the web application Demonstrator. Reflections and recommendations from the study are discussed. The Demonstrator is a novel SPARQL web application with CRM/AAT-based data integration. Functionality includes free-text and semantic search combined with browsing over semantic links, together with thesaurus query expansion across hierarchical and associative relationships. Queries concern wooden objects (e.g. samples of beech wood keels), optionally from a given date range, with automatic expansion over AAT hierarchies of wood types and specialised associative relationships. Following a ‘mapping pattern’ approach (via the STELETO tool) ensured the validity and consistency of all RDF output. The user is shielded from the complexity of the underlying semantic framework by a query builder user interface. The study demonstrates the feasibility of connecting information extracted from datasets and grey literature reports in different languages, and of semantically cross-searching the integrated information. The semantic linking of textual reports and datasets opens new possibilities for integrative research across diverse resources.
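The hierarchical query expansion described above (a search for a broad wood type also matching narrower types such as beech or oak) can be illustrated with a small in-memory SKOS graph. The sketch below uses rdflib with invented example URIs; it is not the project's CRM/AAT dataset, SPARQL endpoint, or query builder.

```python
# Minimal sketch of hierarchical thesaurus query expansion, of the kind the
# Demonstrator performs over AAT wood types. Uses a tiny in-memory SKOS graph;
# URIs and data are hypothetical, not the project's dataset.
from rdflib import Graph, Namespace
from rdflib.namespace import SKOS

EX = Namespace("http://example.org/aat/")  # stand-in for AAT concept URIs

g = Graph()
g.add((EX.beech, SKOS.broader, EX.wood))
g.add((EX.oak, SKOS.broader, EX.wood))
g.add((EX.keel_sample_1, EX.material, EX.beech))

def expand_concept(graph, concept):
    """Return the concept plus all transitively narrower concepts."""
    expanded = {concept}
    frontier = [concept]
    while frontier:
        current = frontier.pop()
        for narrower in graph.subjects(SKOS.broader, current):
            if narrower not in expanded:
                expanded.add(narrower)
                frontier.append(narrower)
    return expanded

# Find objects whose material falls anywhere under 'wood' in the hierarchy.
wood_types = expand_concept(g, EX.wood)
matches = [s for s, _, o in g.triples((None, EX.material, None)) if o in wood_types]
print(matches)  # the beech keel sample is matched by the broader 'wood' query
```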


2001 ◽  
Vol 4 (2) ◽  
pp. 155-168 ◽  
Author(s):  
Ellen R. A. de Bruijn ◽  
Ton Dijkstra ◽  
Dorothee J. Chwilla ◽  
Herbert J. Schriefers

Dutch–English bilinguals performed a generalized lexical decision task on triplets of items, responding with “yes” if all three items were correct Dutch and/or English words, and with “no” if one or more of the items was not a word in either language. Sometimes the second item in a triplet was an interlingual homograph whose English meaning was semantically related to the third item of the triplet (e.g., HOUSE – ANGEL – HEAVEN, where ANGEL means “sting” in Dutch). In such cases, the first item was either an exclusively English (HOUSE) or an exclusively Dutch (ZAAK) word. Semantic priming effects were found in on-line response times. Event-related potentials recorded simultaneously showed N400 priming effects, thought to reflect semantic integration processes. The response time and N400 priming effects were not affected by the language of the first item in the triplets, providing evidence for a strong bottom-up role in bilingual word recognition. The results are interpreted in terms of the Bilingual Interactive Activation model, a language-nonselective access model that assumes bottom-up priority.


2014 ◽  
Vol 687-691 ◽  
pp. 869-873
Author(s):  
Song Hai Fan ◽  
Shu Hong Yang

A systematic approach based on wavelet analysis is developed in this work to detect errors in transmission line positive-sequence parameters, temperature, and sag. Unbiased (random/Gaussian) errors, such as transient meter failures, transient meter malfunctions, and measurements captured during system transients, inherently take the form of large, abrupt changes of short duration in a measurement sequence. These should be detected before the data are used, because their presence will make the power grid insecure and unstable. Test results of the proposed method, based on data from the Sichuan power grid, are presented.
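As a rough illustration of how wavelet analysis can flag large, short-duration abrupt changes in a measurement sequence, the sketch below thresholds the level-1 detail coefficients of a discrete wavelet transform using the PyWavelets package. The wavelet choice, threshold, and test data are illustrative assumptions, not the paper's method or parameters.

```python
# Sketch: flag abrupt, short-duration changes in a measurement sequence by
# thresholding level-1 wavelet detail coefficients. Illustrative only;
# wavelet, threshold, and data are assumptions, not the paper's settings.
import numpy as np
import pywt

def detect_abrupt_changes(signal, wavelet="db4", threshold_sigmas=5.0):
    """Return approximate sample positions where detail coefficients are
    unusually large, which is characteristic of short-duration spikes."""
    # Single-level DWT: approximation (cA) and detail (cD) coefficients.
    cA, cD = pywt.dwt(signal, wavelet)
    # Robust noise estimate from the median absolute deviation of cD.
    sigma = np.median(np.abs(cD)) / 0.6745
    outlier_idx = np.where(np.abs(cD) > threshold_sigmas * sigma)[0]
    # Each level-1 detail coefficient covers roughly two original samples.
    return np.unique(np.clip(outlier_idx * 2, 0, len(signal) - 1))

# Example: a smooth line-parameter series with one injected transient glitch.
t = np.linspace(0, 1, 512)
series = 1.0 + 0.05 * np.sin(2 * np.pi * t)
series[200] += 0.8  # injected bad-data spike
print(detect_abrupt_changes(series))  # expected to include positions near sample 200
```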


2010 ◽  
Vol 1 ◽  
pp. 59-70
Author(s):  
Cindy Gunn ◽  
John Raven

Students in the Middle East have typically been taught English through traditional, rote-learning methods. There has been little time, or little room, within the set curriculum for teachers to enrich their students’ learning experience. However, especially in the UAE, reforms are being implemented to change the way English is taught. This paper illustrates how student autonomy can be fostered through the use of an online web application. The authors argue that weblogs (blogs), which allow students to publish their work online and allow others to comment on the published work, support a new approach to teaching writing. Feedback from a group of students involved in a small study at the Higher Colleges of Technology supports the authors’ claims.

