simple type
Recently Published Documents

TOTAL DOCUMENTS: 393 (FIVE YEARS: 60)
H-INDEX: 28 (FIVE YEARS: 3)
2022 · Vol 6 (POPL) · pp. 1-28
Author(s): Matthias Eichholz, Eric Hayden Campbell, Matthias Krebs, Nate Foster, Mira Mezini

Programming languages like P4 enable specifying the behavior of network data planes in software. However, with increasingly powerful and complex applications running in the network, the risk of faults also increases. Hence, there is growing recognition of the need for methods and tools to statically verify the correctness of P4 code, especially as the language lacks basic safety guarantees. Type systems are a lightweight and compositional way to establish program properties, but there is a significant gap between the kinds of properties that can be proved using simple type systems (e.g., SafeP4) and those that can be obtained using full-blown verification tools (e.g., p4v). In this paper, we close this gap by developing Π4, a dependently-typed version of P4 based on decidable refinements. We motivate the design of Π4, prove the soundness of its type system, develop an SMT-based implementation, and present case studies that illustrate its applicability to a variety of data plane programs.
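
To make the header-validity property concrete, here is a minimal Python sketch of the kind of invariant a refinement type system such as Π4 can enforce statically. The class, field names, and run-time check below are purely illustrative assumptions for exposition; they are not P4 or Π4 syntax, and Π4 would discharge the check at compile time rather than at run time.

```python
# Illustrative sketch only: Pi4 rules out reads from invalid headers statically,
# via refinement types checked with an SMT solver; this mock-up merely shows the
# property being enforced. All names below are hypothetical.

class Header:
    def __init__(self, fields):
        self.valid = False          # headers start out invalid, as in P4
        self.fields = dict(fields)

    def set_valid(self):
        self.valid = True

    def read(self, name):
        # In Pi4 this precondition would be proved by the type checker.
        if not self.valid:
            raise ValueError("read from invalid header: a fault a dependent type system can rule out")
        return self.fields[name]

ipv4 = Header({"ttl": 64})
ipv4.set_valid()                    # without this call, the read below would fail
print(ipv4.read("ttl"))
```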


2021 · Vol 2021 (12) · pp. 124005
Author(s): Franco Pellegrini, Giulio Biroli

Neural networks have been shown to perform incredibly well in classification tasks over structured high-dimensional datasets. However, the learning dynamics of such networks are still poorly understood. In this paper we study in detail the training dynamics of a simple type of neural network: a single hidden layer trained to perform a classification task. We show that in a suitable mean-field limit this case maps to a single-node learning problem with a time-dependent dataset determined self-consistently from the average node population. We specialize our theory to the prototypical case of linearly separable data and a linear hinge loss, for which the dynamics can be solved explicitly in the infinite-dataset limit. This allows us to address in a simple setting several phenomena appearing in modern networks, such as the slowing down of training dynamics, the crossover between rich and lazy learning, and overfitting. Finally, we assess the limitations of mean-field theory by studying the case of a large but finite number of nodes and of training samples.
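
For readers who want a concrete picture of the setup being analyzed, here is a minimal sketch of the same ingredients: a single-hidden-layer network with a linear hinge loss trained on linearly separable data. The data generation, the fixed second-layer weights, the learning rate, and the plain SGD loop are illustrative assumptions, not the authors' exact protocol or mean-field scaling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linearly separable toy data: the label is the sign of the first coordinate.
n, d, m = 1000, 20, 100          # samples, input dim, hidden nodes (illustrative sizes)
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])

# Single hidden layer of ReLU units; second-layer weights fixed to +-1/m,
# a common convention in mean-field analyses (an assumption, not the paper's setup).
W = rng.standard_normal((m, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=m) / m

def forward(X):
    return np.maximum(X @ W.T, 0.0) @ a

lr = 0.5
for step in range(200):
    f = forward(X)
    margin = y * f
    active = margin < 1.0                      # linear hinge loss: max(0, 1 - y f)
    relu_mask = (X @ W.T > 0).astype(float)
    # Gradient of the mean hinge loss w.r.t. first-layer weights (second layer kept fixed).
    grad_W = -(relu_mask[active] * (y[active, None] * a)).T @ X[active] / n
    W -= lr * grad_W
    if step % 50 == 0:
        print(step, np.mean(np.maximum(0.0, 1.0 - margin)))
```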


2021 · Vol 2021 · pp. 1-12
Author(s): I. Elbatal, Naif Alotaibi

In this paper, a new flexible generator of continuous lifespan models, referred to as the Topp-Leone Weibull G (TLWG) family, is developed and studied. Several mathematical characteristics have been investigated. The hazard rate of the new model can be monotonically increasing, monotonically decreasing, bathtub-shaped, or J-shaped. The Farlie-Gumbel-Morgenstern (FGM) and modified FGM (MFGM) families and the Clayton copula (CCO) are used to describe and display simple-type copulas. We discuss the estimation of the model parameters by maximum likelihood (ML) estimation. Simulations are carried out to show the consistency and efficiency of the parameter estimates, and finally, real data sets are used to demonstrate the flexibility and potential usefulness of the proposed family of distributions, using the TLW exponential model as an example of the new suggested family.
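
The abstract does not reproduce the TLWG distribution function, so no attempt is made to code it here. As a stand-in for the maximum likelihood step the paper performs, the following sketch fits an ordinary two-parameter Weibull to synthetic lifetimes with scipy; the data, starting values, and optimizer choice are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

# Synthetic lifetime data standing in for the paper's real data sets.
data = weibull_min.rvs(c=1.7, scale=2.0, size=500, random_state=1)

def neg_log_lik(params):
    shape, scale = params
    if shape <= 0 or scale <= 0:
        return np.inf
    return -np.sum(weibull_min.logpdf(data, c=shape, scale=scale))

# Maximum likelihood estimation by direct minimization of the negative log-likelihood.
res = minimize(neg_log_lik, x0=[1.0, 1.0], method="Nelder-Mead")
print("ML estimates (shape, scale):", res.x)
```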


2021
Author(s): David St-Amand, Curtis L Baker

Neurons in the primary visual cortex (V1) receive excitation and inhibition from two different pathways processing lightness (ON) and darkness (OFF). V1 neurons overall respond more strongly to dark than to light stimuli (Yeh, Xing and Shapley, 2010; Kremkow et al., 2014), consistent with a preponderance of darker regions in natural images (Ratliff et al., 2010), as well as with human psychophysics (Buchner & Baumgartner, 2007). However, it has been unclear whether this "dark-dominance" is due to more excitation from the OFF pathway (Jin et al., 2008) or more inhibition from the ON pathway (Taylor et al., 2018). To understand the mechanisms behind dark-dominance, we record electrophysiological responses of individual simple-type V1 neurons to natural image stimuli and then train biologically inspired convolutional neural networks to predict the neuronal responses. Analyzing a sample of 74 neurons (in anesthetized, paralyzed cats) reveals their responses to be more driven by dark than by light stimuli, consistent with previous investigations (Yeh et al., 2010; Kremkow et al., 2013). We show this asymmetry to be predominantly due to slower inhibition to dark stimuli rather than to stronger excitation from the thalamocortical OFF pathway. Consistent with dark-dominant neurons having faster responses than light-dominant neurons (Komban et al., 2014), we find dark-dominance to occur only at the early latencies of neuronal responses. Neurons that are strongly dark-dominated also tend to be less orientation selective. This novel approach gives us new insight into the dark-dominance phenomenon and provides an avenue to address new questions about excitatory and inhibitory integration in cortical neurons.
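
As a rough illustration of the modeling step, here is a minimal PyTorch sketch of a small convolutional network that maps an image patch to a predicted firing rate, trained against spike counts with a Poisson loss. The architecture, patch size, output nonlinearity, and loss are generic assumptions for exposition, not the authors' actual model.

```python
import torch
import torch.nn as nn

class SimpleCellModel(nn.Module):
    """Toy CNN mapping an image patch to a predicted firing rate.

    Illustrative only: layer sizes and the softplus output are assumptions,
    not the architecture used in the study.
    """
    def __init__(self, patch_size=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=9),   # oriented-filter-like first stage
            nn.ReLU(),
            nn.AvgPool2d(2),
        )
        feat_dim = 8 * ((patch_size - 8) // 2) ** 2
        self.readout = nn.Linear(feat_dim, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return nn.functional.softplus(self.readout(z)).squeeze(-1)  # non-negative rate

model = SimpleCellModel()
images = torch.randn(16, 1, 32, 32)                 # stand-in for natural image patches
spike_counts = torch.poisson(torch.full((16,), 3.0))  # stand-in for recorded responses
loss = nn.PoissonNLLLoss(log_input=False)(model(images), spike_counts)
loss.backward()
print(float(loss))
```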


2021 · Vol 20 (1)
Author(s): Dan-meng Zheng, Zhen-ni An, Ming-hao Ge, Dong-zhuo Wei, Ding-wen Jiang, ...

Background: Acylcarnitine is an intermediate product of fatty acid oxidation. It is reported to be closely associated with the occurrence of diabetic cardiomyopathy (DCM). However, the mechanism by which acylcarnitine affects myocardial disorders is yet to be explored. This research explores acylcarnitines of different chain lengths as biomarkers for the early diagnosis of DCM and the mechanism by which acylcarnitines contribute to the development of DCM in vitro.

Methods: In a retrospective non-interventional study, 50 patients with simple type 2 diabetes mellitus and 50 DCM patients were recruited. Plasma samples from both groups were analyzed by high-throughput metabolomics and a cluster heat map using mass spectrometry. Principal component analysis was used to compare the changes occurring in the 25 studied acylcarnitines. Multivariable binary logistic regression was used to estimate the odds ratio and 95% confidence interval of each factor for DCM. Myristoylcarnitine (C14) was given exogenously to H9c2 cells to verify the expression of proteins related to lipid metabolism, inflammation, apoptosis, and cardiomyocyte hypertrophy and fibrosis.

Results: Factor 1 (C14, lauroylcarnitine, tetradecanoyldiacylcarnitine, 3-hydroxyl-tetradecanoylcarnitine, arachidic carnitine, octadecanoylcarnitine, 3-hydroxypalmitoleylcarnitine) and factor 4 (octanoylcarnitine, hexanoylcarnitine, decanoylcarnitine) were positively correlated with the risk of DCM. Exogenous C14 supplementation to cardiomyocytes led to increased lipid deposition and impaired adenosine 5′-monophosphate (AMP)-activated protein kinase (AMPK) signaling and fatty acid oxidation. This caused myocardial lipotoxicity, ultimately leading to cardiomyocyte hypertrophy, fibrotic remodeling, and increased apoptosis. This effect was mitigated by the AMPK agonist acadesine.

Conclusions: The increased plasma levels of medium- and long-chain acylcarnitines extracted from factors 1 and 4 are closely related to the risk of DCM, indicating that these factors can be an important tool for DCM risk assessment. C14 supplementation promoted lipid accumulation by inhibiting the AMPK/ACC/CPT1 signaling pathway and aggravated myocardial lipotoxicity, apoptosis, cardiomyocyte hypertrophy, and fibrosis, all of which were alleviated by acadesine.
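
To make the statistical pipeline in the Methods concrete, the following sketch runs a principal component "factor" extraction over acylcarnitine measurements and then a logistic regression whose exponentiated coefficients give per-factor odds ratios. The data are random placeholders, the component count is arbitrary, and no confidence intervals are computed; this is an assumption-laden illustration, not the authors' analysis code.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Placeholder data: 100 patients x 25 acylcarnitine species, with a binary DCM label.
X = rng.standard_normal((100, 25))
y = rng.integers(0, 2, size=100)

# Factor extraction loosely analogous to the paper's principal component analysis.
factors = PCA(n_components=4).fit_transform(X)

# Logistic regression of DCM status on the factor scores; exponentiated
# coefficients are the odds ratios per factor.
clf = LogisticRegression().fit(factors, y)
print("odds ratios:", np.exp(clf.coef_).ravel())
```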


Health of Man · 2021 · pp. 74-84
Author(s): Garnik Kocharyan

The article deals with hypersexuality, which can be a manifestation of a wide variety of disorders. These include: consequences of psychotrauma; premature psychosexual development; borderline personality disorder; hypomanic and manic states in bipolar affective disorder and the schizoaffective course of schizophrenia; disinhibition of libido with crude and overt eroticism in the initial stage of schizophrenia; advanced emotional-volitional disorders in simple-type schizophrenia; temporal lobe epilepsy; dementia; mental retardation (oligophrenia); various organic brain lesions (resulting from neuroinfections, injuries, vascular damage, neoplasms) [the temporal and frontal lobes of the brain are indicated as regulators of libido]; and consequences of disorders in the hypothalamic area and limbic system. It is reported that a correlation exists between the development of hypersexuality and damage to certain cerebral areas in patients with non-traumatic brain damage. It is pointed out that hypersexuality may result from maldevelopment of the right hippocampus, anti-NMDA receptor encephalitis, disseminated (multiple) sclerosis, Huntington's disease, and Klüver-Bucy syndrome, as well as be caused by various hormonal disorders, premenstrual changes, and the effect of virilizing hormones during childhood or the intrauterine period. Numerous clinical illustrations of cases of hypersexuality caused by various disorders are given. The author holds the opinion that it is necessary to divide hypersexuality into at least two categories: true hypersexuality and sexual disinhibition, in which, with a normal or even reduced level of libido, hypersexual behaviour results from the absence or insufficiency of inhibitory mechanisms (for example, in dementia).


2021 · Vol 73 (09) · pp. 8-10
Author(s): Justin Hayes

If you talk to a typical subsurface professional working on unconventionals today (e.g., a reservoir engineer, completion engineer, geologist, petrophysicist, etc.) as I have in person and through media such as LinkedIn, you will find that many lament one key thing: Our sophisticated models have been reduced too much. Of course, I am generalizing and those are not the words they use; the lamentations come in many forms.

The dissatisfaction with oversimplification is most easily observed as distaste for the type curve, the simplified model we use to predict upcoming new drills. (Yes, I know many of you will want to refer to them by their "proper" name: type well curve; I will be sticking with the colloquial version.) A simple meme posted on LinkedIn about type curves garnered one of the most engaged conversations I have seen amongst technical staff. The responses varied from something like "Thank God someone finally said this out loud" to comments such as "I don't know anything better than type curves." Most comments were closer to the former than the latter.

What is even more remarkable is that our investors feel the same. In personal conversations, many of them refer to our type curves simply as "lies." This perception, coupled with the historical lack of corporate returns, led investors away from our industry in droves. Many within the industry see it differently and want to blame the exodus on other factors such as oil and gas prices, climate change, competition from renewables, other environmental, social, and governance (ESG) issues, the pandemic, or OPEC's unwillingness to "hold the bag" any longer. If you ask them, though, investors will tell you a simple answer: The unconventional business destroyed way too much capital and lied too much through the type curves.

Why is it that both investors and technical staff are unhappy with our ability to accurately model future performance? Why can't we deliver returns? The typical unconventional-focused oil and gas company has two models that are critical to the business. First is the subsurface model, with which we are all intimately familiar in its various forms, and the second is the corporate financial model, which is focused on cash flows, income, and assets/liabilities. It is unfortunate that the two models are separate. It means we must simplify one or both so they can communicate with each other.

How can you observe this oversimplification while it is happening? It is happening when the finance staff say, "Please just give me a simple type curve and well count; I need to model, optimize, and account for debt/leverage, equity, and cash flows." Meanwhile, the technical staff say, "Please just give me a CAPEX budget or a well count; I need to model, optimize, and account for well spacing, completion design, land constraints, and operational constraints." Looking back, we know that the winner in this tug-of-war of competing needs was the type curve.
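
The column never defines a type curve mathematically, so for readers outside the industry here is a minimal sketch of one common simple form, an Arps hyperbolic decline curve used to forecast a new well's production. All parameter values are illustrative assumptions, not figures from the article.

```python
import numpy as np

def arps_hyperbolic(t_months, qi=500.0, di=0.15, b=1.2):
    """One common 'simple type curve': Arps hyperbolic decline.

    qi = initial rate (bbl/d), di = initial monthly decline fraction,
    b = hyperbolic exponent. Values are illustrative only.
    """
    return qi / (1.0 + b * di * t_months) ** (1.0 / b)

t = np.arange(0, 60)                       # five years of months
rates = arps_hyperbolic(t)
cum_bbl = np.trapz(rates, t) * 30.4        # rough cumulative volume (rate x days per month)
print(f"month-12 rate: {rates[12]:.0f} bbl/d, 5-yr cumulative: {cum_bbl:,.0f} bbl")
```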


2021 · Vol 18 (182) · pp. 20210388
Author(s): Alice Günther, Manfred Drack, Lionel Monod, Christian S. Wirkner

Although scorpions are one of the most well-known animal groups, functional and constructional aspects of these animals, and especially of their tail (metasoma), have so far been overlooked. This tail represents a special construction, as it consists of five tube-shaped segments made up of strong cuticle, which are movable against each other and thus manoeuvre the notorious stinger both quickly and very precisely in space. This high mobility of an exoskeletal structure can be attributed to the connection between the segments, described here for the first time. This joint allows for twisting and bending at the same time in a single, simple construction: adjoining metasomal segments each possess an almost circular opening posteriorly, where the next segment is lodged. Anteriorly, these segments possess two saddle-like protrusions laterally, which are able to rotate in two directions on the rim of the posterior circular opening of the previous segment, allowing for twisting and bending. The metasomal joint is particularly noteworthy since its mechanism can be compared to that of arthropod appendages. The scorpion metasoma is actually the only known case in Chelicerata in which an entire body section has been modified to perform tasks similar to those of an appendage while containing digestive organs. The joint mechanism can also inspire technical applications, for instance in robotics.
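
The two degrees of freedom described (twist plus bend at each of five joints) can be pictured as a chain of composed rotations. The sketch below does exactly that for an arbitrary choice of axes and joint angles; the axis assignment and angle values are assumptions for illustration, not measurements from the study.

```python
import numpy as np

def rot_x(a):   # twist about the segment's long axis (assumed here to be x)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(b):   # bending about a transverse axis (assumed here to be z)
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Five segments, each joint contributing a small twist and bend (angles illustrative).
tip = np.array([1.0, 0.0, 0.0])            # unit vector along the first segment
R = np.eye(3)
for twist, bend in [(0.1, 0.3)] * 5:
    R = R @ rot_x(twist) @ rot_z(bend)
print("stinger direction after five joints:", R @ tip)
```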


2021 · pp. 2771-2783
Author(s): Mohammed Sabeeh, Farah Khaled

Plagiarism detection systems play an important role in revealing instances of plagiarism, especially in the educational sector with scientific documents and papers. Plagiarism occurs when content is copied without permission or citation of the author. To detect such activities, it is necessary to have extensive information about the forms and classes of plagiarism. Thanks to developed tools and methods, it is possible to reveal many types of plagiarism. The development of Information and Communication Technologies (ICT) and the availability of online scientific documents have made these documents easy to access. With the availability of many software text editors, plagiarism detection becomes a critical issue. A large number of scientific papers have already investigated plagiarism detection, and common plagiarism detection datasets, such as WordNet and the PAN datasets, have been used for recognition systems since 2009. Researchers have defined verbatim plagiarism detection as targeting a simple type of copy and paste. They have then shed light on intelligent plagiarism, which is more difficult to reveal because it may include manipulation of the original text, adoption of other researchers' ideas, and translation to other languages, all of which are more challenging to handle. Other researchers have noted that plagiarism may obscure the original text by replacing, removing, or inserting words, along with shuffling or modifying the original papers. This paper gives an overall definition of plagiarism and works through different papers covering the most widely known plagiarism detection methods and tools.
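
As an illustration of the simplest class the survey mentions, verbatim (copy-and-paste) plagiarism, the sketch below flags near-identical passages by TF-IDF cosine similarity. The texts and the threshold are placeholders, and this approach would not catch the paraphrased, idea-level, or translated plagiarism the survey calls "intelligent".

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source = "Plagiarism detection systems reveal instances of copied scientific text."
suspect = "Plagiarism detection systems reveal instances of copied scientific text, unchanged."
unrelated = "Scorpion tails consist of five tube-shaped cuticle segments."

# Verbatim plagiarism leaves near-identical word statistics, so a simple
# TF-IDF cosine similarity is enough to flag it.
vec = TfidfVectorizer().fit([source, suspect, unrelated])
sims = cosine_similarity(vec.transform([source]), vec.transform([suspect, unrelated]))[0]
threshold = 0.8                                   # illustrative cutoff
for name, s in zip(["suspect", "unrelated"], sims):
    print(f"{name}: similarity={s:.2f} -> {'flag' if s > threshold else 'ok'}")
```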

