task group — Recently Published Documents

Total documents: 835 (five years: 81)
H-index: 59 (five years: 3)

2022
Author(s): Pei‐Jan Paul Lin, Allen R. Goode, Frank D. Corwin, Ryan F. Fisher, Stephen Balter, ...


Author(s): Scott Crowe, Trent Aland, Lotte Fog, Lynne Greig, Lynsey Hamlett, ...


Mathematics, 2021, Vol 9 (19), pp. 2522
Author(s): Harwant Singh Arri, Ramandeep Singh Khosa, Sudan Jha, Deepak Prashar, Gyanendra Prasad Joshi, ...

Scheduling resources and jobs on a fog computing network in a way that increases device efficiency and throughput, reduces response time, and keeps the system well balanced is a non-deterministic challenge. Using machine learning as a component of neural computing, we developed an improved Task Group Aggregation (TGA) overflow handling system for fog computing environments. Using TGA in conjunction with an Artificial Neural Network (ANN), we assess the model’s QoS characteristics to detect an overloaded server and then migrate its workload to other virtual machines (VMs). Overloaded and underloaded virtual machines are balanced according to parameters such as CPU, memory, and bandwidth to control fog computing overflow concerns with the help of the ANN and machine learning. Additionally, the Artificial Bee Colony (ABC) algorithm, a neural computing technique, is employed as an optimization method to separate services and users according to their individual qualities. The newly proposed optimized ANN-based TGA algorithm improves both response time and success rate. Compared to the minimal reaction time of existing work, the total improvement in average success rate is about 3.6189 percent, and resource scheduling efficiency has improved by 3.9832 percent. Virtual machine efficiency for resource scheduling improves in terms of average success rate, average task completion success rate, and virtual machine response time. The proposed TGA-based overflow handling on a fog computing domain achieves better response times than current approaches, and illustrates how artificial-intelligence-based systems can be made more efficient.
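The abstract does not reproduce the algorithm itself. As a minimal illustrative sketch only (a plain threshold heuristic, not the authors' ANN- and ABC-based method), the following classifies VMs as overloaded or underloaded by a weighted CPU/memory/bandwidth score and migrates task groups off overloaded hosts; all weights, thresholds, and field names are assumptions.

```python
# Illustrative sketch of overflow handling: classify VMs by a weighted
# utilization score and migrate task groups from overloaded to
# underloaded machines. Weights and thresholds are hypothetical; the
# paper instead uses an ANN plus the Artificial Bee Colony algorithm.

def load_score(vm, weights=(0.5, 0.3, 0.2)):
    """Weighted average of CPU, memory and bandwidth utilization (0..1)."""
    w_cpu, w_mem, w_bw = weights
    return w_cpu * vm["cpu"] + w_mem * vm["mem"] + w_bw * vm["bw"]

def rebalance(vms, high=0.8, low=0.4):
    """Move task groups off overloaded VMs onto the least-loaded VM."""
    moves = []
    for vm in vms:
        while vm["tasks"] and load_score(vm) > high:
            target = min(vms, key=load_score)
            if target is vm or load_score(target) > low:
                break  # no suitable underloaded target available
            task = vm["tasks"].pop()
            target["tasks"].append(task)
            # assume each task group contributes a known utilization share
            for key in ("cpu", "mem", "bw"):
                vm[key] -= task[key]
                target[key] += task[key]
            moves.append((task["id"], vm["id"], target["id"]))
    return moves
```

After a migration, both source and target loads move toward the middle of the band, which is the "well-adjusted" state the abstract describes.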



Axioms, 2021, Vol 10 (4), pp. 247
Author(s): Andries van Beek, Peter Borm, Marieke Quant

We define and axiomatically characterize a new proportional influence measure for sequential projects with imperfect reliability. We consider a model in which a finite set of players aims to complete a project consisting of a finite number of tasks, each of which can only be carried out by certain specific players. Moreover, we assume the players to be imperfectly reliable, i.e., players are not guaranteed to carry out a task successfully. To determine which players are most important for the completion of a project, we use a proportional influence measure. This paper provides two characterizations of this influence measure. The most prominent property in the first characterization is task decomposability, which describes the relationship between the influence measure of a project and the measures of influence one would obtain by dividing the tasks of the project over multiple independent smaller projects. Invariance under replacement is the most prominent property of the second characterization: if, in a certain task group, a specific player is replaced by a new player who was not in the original player set, the allocated measure of influence of every other original player is unaffected.
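The measure itself is defined formally in the paper; as a small sketch of the underlying model only (the task representation is an assumption), a sequential project with imperfectly reliable players completes only if every task succeeds, so its completion probability is the product of per-task success probabilities. The spirit of task decomposability can then be checked numerically: dividing the tasks over independent smaller projects leaves that product unchanged.

```python
from math import prod

# A project is modelled here as a list of (player, success probability)
# tasks. This is an illustrative toy, not the paper's formal model or
# its actual influence measure.

def completion_probability(project):
    """Probability that every task in the sequence succeeds."""
    return prod(p for _player, p in project)

project = [("Ann", 0.9), ("Bob", 0.8), ("Ann", 0.5)]
part1, part2 = project[:2], project[2:]

# Task decomposability, in spirit: splitting the tasks over independent
# smaller projects preserves the overall completion probability.
assert abs(completion_probability(project)
           - completion_probability(part1) * completion_probability(part2)) < 1e-12
```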



Author(s): Katelin Pearson, Libby Ellwood, Edward Gilbert, Rob Guralnick, James Macklin, ...

Phenological data (i.e., data on growth and reproductive events of organisms) are increasingly being used to study the effects of climate change, and biodiversity specimens have arisen as important sources of phenological data. However, phenological data are not expressly treated by the Darwin Core standard (Wieczorek et al. 2012), and specimen-based phenological data have been codified and stored in various Darwin Core fields using different vocabularies, making phenological data difficult to access, aggregate, and therefore analyze at scale across data sources. The California Phenology Network, an herbarium digitization collaboration launched in 2018, has harvested phenological data from over 1.4 million angiosperm specimens from California herbaria (Yost et al. 2020). We developed interim standards by which to score and store these data, but further development is needed for adoption of ideal phenological data standards into the Darwin Core. To this end, we are forming a Plant Specimen Phenology Task Group to develop a phenology extension for the Darwin Core standard. We will create fields into which phenological data can be entered and recommend a standardized vocabulary for use in these fields using the Plant Phenology Ontology (Stucky et al. 2018, Brenskelle et al. 2019). We invite all interested parties to become part of this Task Group and thereby contribute to the accessibility and use of these valuable data. In this talk, we will describe the need for plant phenological data standards and the current challenges to developing such standards, and outline the next steps of the Task Group toward providing this valuable resource to the data user community.
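The scoring step described above amounts to normalizing free-text label annotations to a controlled vocabulary. A minimal sketch follows; the synonym table and target terms are invented examples loosely inspired by Plant Phenology Ontology concepts, not the Task Group's actual recommended vocabulary or fields.

```python
# Illustrative normalization of transcribed phenological annotations to
# controlled terms. The mapping and term strings are hypothetical, not
# the Task Group's ratified vocabulary.

PHENOLOGY_SYNONYMS = {
    "fl.": "flowering",
    "in flower": "flowering",
    "flowers present": "flowering",
    "fr.": "fruiting",
    "in fruit": "fruiting",
    "sterile": "no reproductive structures",
}

def score_phenology(label_text):
    """Map a transcribed label annotation to a controlled term, or None."""
    return PHENOLOGY_SYNONYMS.get(label_text.strip().lower())
```

Storing the controlled term alongside the verbatim transcription is what makes the data aggregable across sources while preserving the original label text.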



Author(s): Yanina Sica, Paula Zermoglio

Biodiversity inventories, i.e., recording multiple species at a specific place and time, are routinely performed and offer high-quality data for characterizing biodiversity and its change. Digitization, sharing and reuse of incidental point records (i.e., records that are not readily associated with systematic sampling or monitoring, typically museum specimens and many observations from citizen science projects) has been the focus for many years in the biodiversity data community. Only more recently, attention has been directed towards mobilizing data from both new and longstanding inventories and monitoring efforts. These kinds of studies provide very rich data that can enable inferences about species absence, but their reliability depends on the methodology implemented, the survey effort and completeness. The information about these elements has often been regarded as metadata and captured in an unstructured manner, thus making their full use very challenging. Unlocking and integrating inventory data requires data standards that can facilitate capture and sharing of data with the appropriate depth. The Darwin Core standard (Wieczorek et al. 2012) currently enables reporting some of the information contained in inventories, particularly using Darwin Core Event terms such as samplingProtocol, sampleSizeValue, sampleSizeUnit, samplingEffort. However, it is limited in its ability to accommodate spatial, temporal, and taxonomic scopes, and other key aspects of the inventory sampling process, such as direct or inferred measures of sampling effort and completeness. The lack of a standardized way to share inventory data has hindered their mobilization, integration, and broad reuse. In an effort to overcome these limitations, a framework was developed to standardize inventory data reporting: Humboldt Core (Guralnick et al. 2018). 
Humboldt Core identified three types of inventories (single, elementary, and summary inventories) and proposed a series of terms to report their content, organized in six categories: dataset and identification; geospatial and habitat scope; temporal scope; taxonomic scope; methodology description; and completeness and effort. Although Humboldt Core was originally planned as a new TDWG standard and is currently implemented in Map of Life (https://mol.org/humboldtcore/), ratification was not pursued at the time, limiting broader community adoption. In 2021 the TDWG Humboldt Core Task Group was established to review how best to integrate the terms proposed in the original publication with existing standards and implementation schemas. The first goal of the task group was to determine whether a new, separate standard was needed or whether an extension to Darwin Core could accommodate the terms necessary to describe the relevant information elements. Since the different types of inventories can be thought of as Events with different nesting levels (events within events, e.g., plots within sites), and after an initial mapping to existing Darwin Core terms, it was deemed appropriate to start from a Darwin Core Event Core and build an extension to include Humboldt Core terms. The task group members are currently revising all original Humboldt Core terms, reformulating definitions, comments, and examples, and discarding or adding terms where needed. We are also gathering real datasets to test the use of the extension once an initial list of revised terms is ready, before undergoing a public review period as established by the TDWG process. Through the ratification of Humboldt Core as a TDWG extension, we expect to provide the community with a solution for sharing and using inventory data that improves biodiversity data discoverability, interoperability, and reuse while lowering the reporting burden at different levels (data collection, integration, and sharing).
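The "events within events" nesting can be sketched with existing Darwin Core Event terms (eventID, parentEventID, and the sampling terms named above); the record values below are invented examples, and no Humboldt Core extension terms are shown since those are still under revision by the task group.

```python
# Sketch of nested inventory events (plots within a site) expressed with
# existing Darwin Core Event terms. All field values are invented.

events = [
    {"eventID": "site-1", "parentEventID": None,
     "samplingProtocol": "point counts", "samplingEffort": "40 hours"},
    {"eventID": "plot-1a", "parentEventID": "site-1",
     "sampleSizeValue": 100, "sampleSizeUnit": "square metre"},
    {"eventID": "plot-1b", "parentEventID": "site-1",
     "sampleSizeValue": 100, "sampleSizeUnit": "square metre"},
]

def children(events, parent_id):
    """All events nested directly under the given parent event."""
    return [e["eventID"] for e in events if e["parentEventID"] == parent_id]
```

Scope and effort terms that apply to the whole inventory would attach to the parent event, while per-plot sample sizes sit on the child events.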



Author(s): Mathias Dillen, Elspeth Haston, Nicole Kearney, Deborah L Paul, Joaquim Santos, ...

The natural history specimens of the world have been documented on paper labels, often physically attached to the specimen itself. As we transcribe these data to make them digital and more useful for analysis, we make interpretations. Sometimes these interpretations are trivial, because the label is unambiguous, but often the meaning is not so clear, even if it is easily read. One key element that suffers from considerable ambiguity is people’s names. Though a person is indivisible, their name can change, is rarely unique and can be written in many ways. Yet knowing the people associated with data is incredibly useful. Data on people can be used to validate other data, simplify data capture, link together data across domains, reduce duplication-of-effort and facilitate data-gap-analysis. In addition, people data enable the discovery of individuals unique to our collections, the collective charting of the history of scientific researchers and the provision of credit to the people who deserve it (Groom et al. 2020). We foresee a future where the people associated with collections are not ambiguous, are shared globally, and data of all kinds are linked through the people who generate them. The TDWG People in Biodiversity Data Task Group is therefore working on a guide to the disambiguation of people in natural history collections. The ultimate goal is to connect the various strings of characters on specimen labels and other documentation to persistent identifiers (PIDs) that unambiguously link a name “string” to the identity of a person. In working towards this goal, 150 volunteers in the Bionomia project have linked 21 million specimens to persistent identifiers for their collectors and determiners. An additional 2 million specimens with links to identifiers for people have already emerged directly from collections that make use of the recently ratified Darwin Core terms recordedByID and identifiedByID. 
Furthermore, the CETAF Botany Pilot conducted among a group of European herbaria and museums has connected over 1.4 million specimens to disambiguated collectors (Güntsch et al. 2021). Still, given the estimated 2 billion natural history specimens globally (Ariño 2010), there is much more disambiguation to be done.

The process of disambiguation starts with a trigger, which is often the transcription of a specimen’s label data. Unambiguous identification of the collector may facilitate this transcription, as it offers knowledge of their biographical details and collecting habits, allowing us to infer missing information such as collecting date or locality. Another trigger might be the flagging of inconsistent data during data entry or by data quality processes, revealing for instance that multiple collectors have been conflated. A disambiguation trigger is followed by the gathering of data, then the evaluation of the results, and finally by the documentation of the new information. Disambiguation is not always straightforward and there are many pitfalls: it requires access to biographical data and the minting of identifiers. Living people have to cooperate with being disambiguated, and we have to follow legal and ethical guidelines. For dead people, particularly those long dead, disambiguation may require considerable research. We will present the progress made by the People in Biodiversity Data Task Group and their recommendations for disambiguation in collections. We want to encourage other institutions to join a global effort of linking people to persistent identifiers to collaboratively improve all collection data.
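The gathering-and-evaluation step can be sketched as a toy matcher that normalizes a transcribed name string and looks it up in a registry of persistent identifiers. The registry, its keys, and the placeholder PIDs below are all fabricated for illustration; real pipelines such as Bionomia weigh much richer biographical evidence (dates, localities, collecting habits) before asserting a match.

```python
import re

# Toy name-string matcher for the "gather data" step of disambiguation.
# The registry and PID values are placeholders, not real identifiers.

REGISTRY = {
    ("linden", "j"): "example:pid-001",
    ("smith", "a"): "example:pid-002",
}

def normalize(name):
    """Reduce a name string to a (surname, first-initial) key."""
    name = re.sub(r"[^\w\s,.-]", "", name).lower()
    if "," in name:
        surname, rest = [part.strip() for part in name.split(",", 1)]
    else:
        parts = name.split()
        surname, rest = parts[-1], " ".join(parts[:-1])
    initial = rest.strip()[:1]
    # keep only the final surname token, dropping particles like "van der"
    surname = surname.split()[-1]
    return (surname, initial)

def resolve(name):
    """Return a persistent identifier for the name string, if known."""
    return REGISTRY.get(normalize(name))
```

Different renderings of the same person ("Linden, J." versus "J. van der Linden") collapse to the same key and hence the same identifier, which is the core of linking label strings to PIDs.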



2021
Author(s): Wenzheng Feng, Mark J. Rivard, Elizabeth M. Carey, Robert A. Hearn, Sujatha Pai, ...

