Spherical encapsulation for bigger semiconductor sample testing

2019 ◽  
Vol 28 ◽  
pp. 7
Author(s):  
David Bradley
2020 ◽  
Vol 5 (1) ◽  
pp. 61
Author(s):  
Vadlan Febrian ◽  
Muhamad Rizki Ramadhan ◽  
Muhammad Faisal ◽  
Aries Saifudin

In this employee payroll application, a program error causes losses for both employees and the company. For employees, an error makes the salary calculation process difficult to complete, so salary payments are delayed and employees receive their salaries late. For the company, an error means it cannot calculate salaries quickly and accurately when employees expect prompt payment. To solve this problem, the authors use the black box testing method, a form of testing that examines execution results against test data to verify the software's functions. Black box testing comprises several techniques, including Sample Testing, Boundary Value Analysis, and Equivalence Partitions. Of these, we use the Equivalence Partitions technique, which tests the data entered into the employee payroll application form: inputs are tested and then grouped by test function into valid and invalid values. The expected result of this testing is a computerized employee payroll system that follows standard rules in the development process, so that it is easy to develop and maintain, and that minimizes errors in salary calculations.
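The Equivalence Partitions technique described above can be sketched in a few lines of Python. The payroll function and its valid input range here are hypothetical, chosen only to illustrate how one representative value per partition exercises a whole class of inputs:

```python
# A minimal sketch of Equivalence Partitions testing.
# `calculate_overtime_pay` is a hypothetical payroll function: a valid
# input is a numeric hours value in the range 0-100; anything else is rejected.

def calculate_overtime_pay(hours, rate=10):
    """Return overtime pay, or raise ValueError for invalid input."""
    if not isinstance(hours, (int, float)) or not (0 <= hours <= 100):
        raise ValueError("hours must be a number between 0 and 100")
    return hours * rate

# One representative value per equivalence class covers the whole class.
valid_partition = [0, 40, 100]          # inside the valid range
invalid_partition = [-1, 101, "forty"]  # below range, above range, wrong type

for hours in valid_partition:
    assert calculate_overtime_pay(hours) == hours * 10

for hours in invalid_partition:
    try:
        calculate_overtime_pay(hours)
        raise AssertionError(f"{hours!r} should have been rejected")
    except ValueError:
        pass  # invalid class correctly rejected
```

Because every value in a partition is expected to behave the same way, testing one representative per class keeps the test suite small without losing coverage; adding Boundary Value Analysis would additionally probe the edges of each partition.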


2021 ◽  
Vol 14 (3) ◽  
pp. 119
Author(s):  
Fabian Waldow ◽  
Matthias Schnaubelt ◽  
Christopher Krauss ◽  
Thomas Günter Fischer

In this paper, we demonstrate how a well-established machine learning-based statistical arbitrage strategy can be successfully transferred from equity to futures markets. First, we preprocess futures time series composed of front-month contracts to render them suitable for our returns-based trading framework and compile a data set of 60 futures covering nearly 10 trading years. Next, we train several machine learning models to predict whether the h-day-ahead return of each future out- or underperforms the corresponding cross-sectional median return. Finally, we enter long/short positions for the top/flop-k futures for a duration of h days and assess the financial performance of the resulting portfolio in an out-of-sample testing period. We find the machine learning models to yield statistically significant out-of-sample break-even transaction costs of 6.3 bp—a clear challenge to the semi-strong form of market efficiency. We conclude by discussing sources of profitability and the robustness of our findings.
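The labeling and portfolio-formation steps described above can be sketched as follows. The return data are random stand-ins, and the ranking uses uniform noise in place of a trained model's predicted outperformance probabilities:

```python
# Sketch of the labeling and top/flop-k selection steps, on hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
# (days, futures) matrix of hypothetical h-day-ahead returns
h_day_returns = rng.normal(size=(250, 60))

# Label 1 if a future's return beats that day's cross-sectional median.
median_per_day = np.median(h_day_returns, axis=1, keepdims=True)
labels = (h_day_returns > median_per_day).astype(int)

# Trading step: go long the top-k and short the flop-k futures, ranked by
# a model's predicted outperformance probability (random stand-in here).
k = 5
predicted_probs = rng.uniform(size=60)
ranking = np.argsort(predicted_probs)
long_positions, short_positions = ranking[-k:], ranking[:k]
```

Labeling against the cross-sectional median rather than an absolute threshold makes the classification problem balanced by construction on each trading day.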


2018 ◽  
Author(s):  
Παντελής Σταυρούλιας

Reliable forecasts of financial crises have always safeguarded the stability of the financial system as a whole and of the banking sector in particular. This thesis achieves the prediction of systemic banking crises for the EU-14 countries several quarters before they become apparent, using the most widely used variables (macroeconomic, banking, and market) through two approaches: binary and multi-level. Under the binary approach, classification models are derived by applying Discriminant Analysis, Linear Regression, Logistic Regression, and Probit Regression for the early prediction of crises 12 to 7 quarters before their onset. In addition, the performance of the above analysis is compared against the newer and most promising methods of the Classification Tree, Random Forest, and C5.0. At the same time, a new measure for threshold selection and goodness of fit (GoF) of the prediction models is proposed, along with a new combined classification method. To examine the performance of the above analysis, out-of-sample testing is used via country-blocked cross validation. Under this method, the analysis is performed and prediction models are derived using thirteen of the fourteen countries in the sample (in-sample); the resulting models are then applied to the fourteenth country, which was excluded from the original sample (out-of-sample), and the predictions are checked against that country's actual data. This procedure is repeated fourteen times, leaving each country out of the sample once, and the results are finally averaged across repetitions.
In this thesis, using out-of-sample testing, 82.4% correct classification (Accuracy), a 78.4% True Positive Rate (TPR), and an 80.6% Positive Predictive Value (PPV) are achieved. Under the multi-level approach, two prediction levels (periods) for systemic banking crises are distinguished. The first level, called early warning, covers the period 12 to 7 quarters before the onset of the crisis; the second level, called late warning, covers the period 6 to 1 quarters before the onset. For this multi-level classification, Neural Networks, Multinomial Logistic Regression, and Multinomial Discriminant Analysis are used. Applying the same out-of-sample testing as in the first approach, 85.7% correct classification is achieved with the best-performing method, which proves to be Multinomial Discriminant Analysis. By applying the above analysis, policy makers can detect an impending crisis up to three years ahead with the proposed models, using only freely and publicly available data, and thereby exercise the appropriate macroprudential policy in each case.
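The country-blocked cross validation described above can be sketched in Python. The data, the crisis indicator, and the stand-in classifier below are all hypothetical; only the leave-one-country-out structure mirrors the thesis:

```python
# Leave-one-country-out (country-blocked) cross validation on hypothetical data:
# fit on 13 countries, test on the held-out 14th, repeat 14 times, average.
import numpy as np

rng = np.random.default_rng(1)
countries = np.repeat(np.arange(14), 40)   # 14 countries x 40 quarters each
X = rng.normal(size=(len(countries), 3))   # macro, banking, market features
y = (X[:, 0] + 0.5 * rng.normal(size=len(countries)) > 0).astype(int)  # crisis flag

def fit_threshold_classifier(X_train, y_train):
    # Stand-in for the thesis's discriminant/logit models: the midpoint
    # between the class means of the first feature.
    return (X_train[y_train == 1, 0].mean() + X_train[y_train == 0, 0].mean()) / 2

accuracies = []
for held_out in np.unique(countries):
    train = countries != held_out
    test = countries == held_out
    threshold = fit_threshold_classifier(X[train], y[train])
    pred = (X[test, 0] > threshold).astype(int)
    accuracies.append((pred == y[test]).mean())

print(f"mean out-of-sample accuracy over 14 folds: {np.mean(accuracies):.3f}")
```

Blocking by country ensures that no observation from the held-out country leaks into training, which is the point of out-of-sample testing across heterogeneous economies.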


2021 ◽  
Vol 118 (31) ◽  
pp. e2103272118
Author(s):  
Nicholas J. Irons ◽  
Adrian E. Raftery

There are multiple sources of data giving information about the number of SARS-CoV-2 infections in the population, but all have major drawbacks, including biases and delayed reporting. For example, the number of confirmed cases largely underestimates the number of infections, and deaths lag infections substantially, while test positivity rates tend to greatly overestimate prevalence. Representative random prevalence surveys, the only putatively unbiased source, are sparse in time and space, and their results often arrive with long delays. Reliable estimates of population prevalence are necessary for understanding the spread of the virus and the effectiveness of mitigation strategies. We develop a simple Bayesian framework to estimate viral prevalence by combining several of the main available data sources. It is based on a discrete-time Susceptible–Infected–Removed (SIR) model with a time-varying reproductive parameter. Our model includes likelihood components that incorporate data on deaths due to the virus, confirmed cases, and the number of tests administered on each day. We anchor our inference with data from random-sample testing surveys in Indiana and Ohio. We use the results from these two states to calibrate the model on positive test counts and proceed to estimate the infection fatality rate and the number of new infections on each day in each state in the United States. We estimate the extent to which reported COVID cases have underestimated true infection counts and find the undercount to be large, especially in the first months of the pandemic. We explore the implications of our results for progress toward herd immunity.
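The discrete-time SIR dynamics with a time-varying reproductive parameter that underlie the model can be sketched deterministically. The paper's framework is Bayesian and fits these quantities to data; the population, rates, and R_t trajectory below are illustrative assumptions only:

```python
# Deterministic sketch of discrete-time SIR dynamics with time-varying R_t.
# All parameter values are illustrative, not fitted estimates.
import numpy as np

N = 1_000_000                 # population size (assumed)
gamma = 1 / 10                # removal rate: mean infectious period of 10 days
R_t = np.full(120, 1.3)       # time-varying reproductive parameter
R_t[60:] = 0.8                # e.g. mitigation takes effect at day 60

S, I, R = N - 100.0, 100.0, 0.0
new_infections = []
for t in range(120):
    beta_t = R_t[t] * gamma            # transmission rate implied by R_t
    inf = beta_t * S * I / N           # new infections on day t
    rem = gamma * I                    # removals (recoveries/deaths) on day t
    S, I, R = S - inf, I + inf - rem, R + rem
    new_infections.append(inf)

print(f"peak daily infections: {max(new_infections):.0f}")
```

The epidemic grows while R_t > 1 and decays once R_t drops below 1; in the paper, the daily new-infection series plays the role of the latent quantity linked to deaths, confirmed cases, and tests through the likelihood.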


Author(s):  
Linden Parkes ◽  
Tyler M. Moore ◽  
Monica E. Calkins ◽  
Matthew Cieslak ◽  
David R. Roalf ◽  
...  

ABSTRACT
Background: The psychosis spectrum is associated with structural dysconnectivity concentrated in transmodal association cortex. However, understanding of this pathophysiology has been limited by an exclusive focus on the direct connections to a region. Using Network Control Theory, we measured variation in both direct and indirect structural connections to a region to gain new insights into the pathophysiology of the psychosis spectrum.
Methods: We used psychosis symptom data and structural connectivity in 1,068 youths aged 8 to 22 years from the Philadelphia Neurodevelopmental Cohort. Applying a Network Control Theory metric called average controllability, we estimated each brain region’s capacity to leverage its direct and indirect structural connections to control linear brain dynamics. Next, using non-linear regression, we determined the accuracy with which average controllability could predict negative and positive psychosis spectrum symptoms in out-of-sample testing. We also compared prediction performance for average controllability versus strength, which indexes only direct connections to a region. Finally, we assessed how the prediction performance for psychosis spectrum symptoms varied over the functional hierarchy spanning unimodal to transmodal cortex.
Results: Average controllability outperformed strength at predicting positive psychosis spectrum symptoms, demonstrating that indexing indirect structural connections to a region improved prediction performance. Critically, improved prediction was concentrated in association cortex for average controllability, whereas prediction performance for strength was uniform across the cortex, suggesting that indexing indirect connections is crucial in association cortex.
Conclusions: Examining inter-individual variation in direct and indirect structural connections to association cortex is crucial for accurate prediction of positive psychosis spectrum symptoms.
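For readers unfamiliar with the metric, average controllability of a region can be sketched as the trace of the controllability Gramian of a discrete linear system whose state matrix is a stabilized version of the structural connectivity matrix, with the control input placed at that region. The random connectivity matrix, the scaling, and the finite horizon below are illustrative assumptions, not the study's exact pipeline:

```python
# Sketch of average controllability for linear dynamics x_{t+1} = A x_t + B u_t:
# the trace of the controllability Gramian when the input enters at one region.
# The connectivity matrix is random and the scaling/horizon are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 20
W = rng.uniform(size=(n, n))
W = (W + W.T) / 2                                     # symmetric structural connectivity
A = W / (1 + np.linalg.svd(W, compute_uv=False)[0])   # scale so the system is stable

def average_controllability(A, region, horizon=100):
    B = np.zeros((A.shape[0], 1))
    B[region] = 1.0                                   # control input at one region
    gramian = np.zeros_like(A)
    Ak = np.eye(A.shape[0])
    for _ in range(horizon):                          # finite-horizon Gramian sum
        gramian += Ak @ B @ B.T @ Ak.T
        Ak = Ak @ A
    return np.trace(gramian)

ac = [average_controllability(A, i) for i in range(n)]
```

Because the Gramian sums contributions of the input propagated through powers of A, the metric captures indirect as well as direct connections, which is the contrast with strength drawn in the abstract.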


2021 ◽  
Author(s):  
Annet M Nankya ◽  
Luke Nyakarahuka ◽  
Stephen Balinandi ◽  
John Kayiwa ◽  
Julius Lutwama ◽  
...  

Abstract
Background: Coronavirus Disease 2019 (COVID-19) in Uganda was first reported in a male traveler from Dubai on 21st March 2020, shortly after the WHO had declared the condition a global pandemic. Timely laboratory diagnosis of COVID-19 for all samples, from both symptomatic and asymptomatic patients, was seen as key to containing the pandemic and breaking the chain of transmission. However, the resources required for SARS-CoV-2 testing are limited in low- and middle-income countries. To mitigate this, a study was conducted to evaluate a sample pooling strategy for COVID-19 using real-time PCR. The cost implications and turnaround time of pooled versus individual sample testing were also compared.
Methods: In this study, 1260 randomly selected samples submitted to the Uganda Virus Research Institute for analysis were batched in pools of 5, 10, and 15. The pools were then extracted using a Qiagen kit, and both individual and pooled RNA were screened for the SARS-CoV-2 E gene using a Berlin kit.
Results: Out of the 1260 samples tested, 21 pools were positive among the pools of 5 samples, 16 among the pools of 10, and 14 among the pools of 15. The study also revealed that, in areas with low SARS-CoV-2 prevalence, the pooling strategy saves substantial resources and time and expands diagnostic capacity without affecting the sensitivity of the test.
Conclusion: This study demonstrated that the pooling strategy for COVID-19 reduced the turnaround time and substantially increased overall testing capacity with limited resources compared to individual testing.
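The resource savings reported above follow from the arithmetic of two-stage (Dorfman) pooling: each pool costs one test, and only positive pools are retested sample by sample. A back-of-the-envelope sketch, with a hypothetical 2% prevalence:

```python
# Expected tests per sample under two-stage (Dorfman) pooling.
# The 2% prevalence is a hypothetical low-prevalence setting.

def expected_tests_per_sample(prevalence, pool_size):
    """One pooled test per pool, plus individual retests when the pool is positive."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    return 1 / pool_size + p_pool_positive

for k in (5, 10, 15):
    e = expected_tests_per_sample(0.02, k)
    print(f"pool size {k:2d}: {e:.2f} tests per sample vs 1.00 individually")
```

At low prevalence the expected cost per sample stays well below one test, which is why pooling expands capacity; as prevalence rises, more pools test positive and the advantage shrinks.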

