Computer Languages
Recently Published Documents

TOTAL DOCUMENTS: 172 (five years: 17)
H-INDEX: 9 (five years: 1)

Author(s): Dr. Rudra Prasad Mishra

Abstract: A language becomes useful on a computer when two requirements are met. The first is that the computer serve as a modern writing system for it: in the past we wrote on palm leaves and with pencils, then on paper, and later with typewriters; now we can write with a computer, which is the best means of writing and printing. Almost all languages now meet this requirement, and Odia has met it since the 1980s, when it first became possible to type Odia letters on a computer and print them on a printer. This is the first step in making a language a computer language. The second step is that the computer must understand the language. Computers understand a language through a variety of programs, which requires connecting the operating system to the new language, and for this it is imperative that the language be integrated into the Unicode system. Unicode provides a code for the scripts of the world's major computer-friendly languages, and this code can be understood by the operating system. As a result, the language can be used throughout the computer: naming files and folders in it, sorting text, searching for a file or folder named in that language, deleting an incorrectly typed word, and so on. Fortunately, the Odia script was included in Unicode in 2006; Odia is the third Indian language in this regard. Keywords: typewriters, computer, languages, Unicode, folder, deleting, scripts, Odia, Indian language
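The operating-system support the abstract describes rests on Unicode code points. As a small illustration (a generic Python snippet, not tied to any system mentioned in the abstract): the Odia (Oriya) script occupies the Unicode block U+0B00–U+0B7F, which is what lets a Unicode-aware operating system name, sort, and search files written in the script.

```python
# The Odia (Oriya) script occupies the Unicode block U+0B00-U+0B7F.
import unicodedata

ch = "\u0B05"                                  # the script's first independent vowel
print(ch, hex(ord(ch)), unicodedata.name(ch))  # -> ଅ 0xb05 ORIYA LETTER A
```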


2021, pp. 613-638
Author(s): Daniel Zwillinger, Vladimir Dobrushkin

Author(s): Archit Gupta

Abstract: Software engineering has grown considerably since the 1960s; as our knowledge and understanding of software deepens, software is becoming increasingly reliable and cost-effective. Previous research has not clearly traced how software engineering transitioned, or how new technologies and services entered use in the software engineering world, decade by decade and year by year. I use data from various websites and research papers to show how software engineering has evolved over the years, with details of what happened in particular years within each decade, along with accounts of influential manifestos and of the developers of computer languages. The findings indicate that the software engineering field is vast and still far from fully developed; in a world with hands-on access to every possible technology, new software and services now appear on a regular basis.


Author(s): Robert Kowalski, Akber Datoo

Abstract: In this paper, we present an informal introduction to Logical English (LE) and illustrate its use to standardise the legal wording of the Automatic Early Termination (AET) clauses of International Swaps and Derivatives Association (ISDA) Agreements. LE can be viewed both as an alternative to conventional legal English for expressing legal documents, and as an alternative to conventional computer languages for automating legal documents. LE is a controlled natural language (CNL), which is designed both to be computer-executable and to be readable by English speakers without special training. The basic form of LE is syntactic sugar for logic programs, in which all sentences have the same standard form, either as rules of the form conclusion if conditions or as unconditional sentences of the form conclusion. However, LE extends normal logic programming by introducing features that are present in other computer languages and other logics. These features include typed variables signalled by common nouns, and existentially quantified variables in the conclusions of sentences signalled by indefinite articles. Although LE translates naturally into a logic programming language such as Prolog or ASP, it can also serve as a neutral standard, which can be compiled into other lower-level computer languages.
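To make the "conclusion if conditions" rule form concrete, here is a minimal sketch — a toy forward-chaining evaluator in Python over rules written as (conclusion, conditions) pairs, with an invented AET-flavoured example. It assumes nothing about the actual LE toolchain, which translates LE into Prolog or ASP rather than anything like this.

```python
# Toy forward chaining over rules of the form "conclusion if conditions".
# Facts and conditions are plain strings; the rule below is hypothetical.

facts = {"party A has defaulted", "automatic early termination applies"}

rules = [
    ("the agreement terminates",
     ["party A has defaulted", "automatic early termination applies"]),
]

def forward_chain(facts, rules):
    """Repeatedly add conclusions whose conditions all hold, until fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conclusion, conditions in rules:
            if conclusion not in derived and all(c in derived for c in conditions):
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))  # includes "the agreement terminates"
```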


2021, Vol 11 (13), pp. 6109
Author(s): Fabrizio Banfi, Mattia Previtali

In recent years, the advent of latest-generation technologies and methods has made it possible to survey, digitise and represent complex scenarios such as archaeological sites and historic buildings. Thanks to computer languages based on Visual Programming Language (VPL) and advanced real-time 3D creation platforms, this study shows the results obtained with eXtended Reality (XR) oriented to archaeological sites and heritage buildings. In particular, the scan-to-BIM process and digital photogrammetry (terrestrial and aerial) were oriented towards a digitisation process able to convey and share tangible and intangible values through the latest-generation techniques, methods and devices. The paradigm of the geometric complexity of the built heritage and new levels of interactivity between users and digital worlds were investigated and developed to favour the transmission of information at different levels of virtual experience and digital sharing, with the aim of archiving, recounting and passing on historical and cultural heritage that otherwise risks being lost and left untold to future generations.


2021, Vol 22 (1)
Author(s): Sarah S. Ji, Christopher A. German, Kenneth Lange, Janet S. Sinsheimer, Hua Zhou, et al.

Abstract: Background: Statistical geneticists employ simulation to estimate the power of proposed studies, test new analysis tools, and evaluate properties of causal models. Although there are existing trait simulators, there is ample room for modernization. For example, most phenotype simulators are limited to Gaussian traits or traits transformable to normality, while ignoring qualitative traits and realistic, non-normal trait distributions. Also, modern computer languages, such as Julia, that accommodate parallelization and cloud-based computing are now mainstream but rarely used in older applications. To meet the challenges of contemporary big studies, it is important for geneticists to adopt new computational tools. Results: We present TraitSimulation.jl, an open-source Julia package that makes it trivial to quickly simulate phenotypes under a variety of genetic architectures. This package is integrated into our OpenMendel suite for easy downstream analyses. Julia was purpose-built for scientific programming and provides tremendous speed and memory efficiency, easy access to multi-CPU and GPU hardware, and to distributed and cloud-based parallelization. TraitSimulation.jl is designed to encourage flexible trait simulation, including via the standard devices of applied statistics, generalized linear models (GLMs) and generalized linear mixed models (GLMMs). It also accommodates many study designs: unrelateds, sibships, pedigrees, or a mixture of all three. (Of course, for data with pedigrees or cryptic relationships, the simulation process must include the genetic dependencies among the individuals.) We consider an assortment of trait models and study designs to illustrate integrated simulation and analysis pipelines. Step-by-step instructions for these analyses are available in our electronic Jupyter notebooks on GitHub. These interactive notebooks are ideal for reproducible research. Conclusion: The package has three main advantages. (1) It leverages the computational efficiency and ease of use of Julia to provide extremely fast, straightforward simulation of even the most complex genetic models, including GLMs and GLMMs. (2) It can be operated entirely within, but is not limited to, the integrated analysis pipeline of OpenMendel. And finally, (3) by allowing a wider range of more realistic phenotype models, it brings power calculations and diagnostic tools closer to what investigators might see in real-world analyses.
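As an illustration of the GLM-based simulation idea, here is a minimal sketch in plain Python/NumPy (the package itself is in Julia, and none of the names below are its API): genotypes are drawn for unrelated individuals, combined through a linear predictor, and pushed through a logit link to yield a qualitative (binary) trait.

```python
# Minimal GLM-style phenotype simulation sketch; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(2021)

n, p = 1000, 5                            # individuals, causal variants
maf = rng.uniform(0.05, 0.5, p)           # minor-allele frequencies
G = rng.binomial(2, maf, size=(n, p))     # additive genotype codes 0/1/2
beta = rng.normal(0, 0.2, p)              # per-variant effect sizes

eta = G @ beta                            # linear predictor
mu = 1 / (1 + np.exp(-eta))               # logit link -> case probability
y = rng.binomial(1, mu)                   # binary (qualitative) trait

print(f"simulated prevalence: {y.mean():.3f}")
```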


Many practitioners shy away from implementing GAs, and as a result many researchers avoid using GAs as problem-solving techniques. An implementer of a GA must be familiar with a high-level computer language, and implementation involves complex coding and intricate computations of a repetitive nature; a GA implemented without caution will produce vague or poor solutions. This chapter overcomes these obstacles by defining and implementing the various data structures required for a simple GA. Readers write the various functions of the GA code in the C++ programming language. In this chapter, initial string-population generation and the selection, crossover, and mutation operators used to optimize a simple one-variable function, coded as an unsigned binary integer, are implemented in C++. The issue of fitness mapping in GA applications is also discussed.
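By way of illustration, here is a minimal sketch of the kind of simple GA the chapter describes — roulette-wheel selection, one-point crossover, and bit-flip mutation maximizing a one-variable function over unsigned binary-coded integers. It is written in Python for brevity (the chapter itself works in C++), and f(x) = x² is an assumed stand-in for the chapter's function.

```python
import random

BITS, POP, GENS = 10, 20, 50      # chromosome length, population size, generations
P_CROSS, P_MUT = 0.8, 0.01        # crossover and per-bit mutation probabilities

def fitness(chrom):
    """Decode the unsigned binary string and apply f(x) = x^2."""
    x = int("".join(map(str, chrom)), 2)
    return x * x

def select(pop, fits):
    """Roulette-wheel (fitness-proportionate) selection."""
    return random.choices(pop, weights=fits, k=1)[0]

def crossover(a, b):
    """One-point crossover with probability P_CROSS."""
    if random.random() < P_CROSS:
        cut = random.randint(1, BITS - 1)
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    return a[:], b[:]

def mutate(chrom):
    """Independent bit flips with probability P_MUT per bit."""
    return [bit ^ 1 if random.random() < P_MUT else bit for bit in chrom]

pop = [[random.randint(0, 1) for _ in range(BITS)] for _ in range(POP)]
for _ in range(GENS):
    fits = [fitness(c) for c in pop]
    nxt = []
    while len(nxt) < POP:
        c1, c2 = crossover(select(pop, fits), select(pop, fits))
        nxt += [mutate(c1), mutate(c2)]
    pop = nxt[:POP]

best = max(pop, key=fitness)
print(best, fitness(best))        # best chromosome tends toward all ones
```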


Author(s): Philip C. Doesschate

Adjusting to change can be difficult for anyone. A commitment to continuous learning can help in coping with change. This chapter presents a picture of real lifelong learning in a field that has undergone dramatic changes. The author, Philip Doesschate, has had wide-ranging experience in the information technology field over almost five decades. As he recalls his career accomplishments and challenges, he identifies a set of personal life lessons from his work in this rapidly changing field. During his career, Doesschate has worked in numerous roles, industries, and specialties, on projects of small to significant size, using many different computer languages, operating systems, application frameworks, architectures, application packages, and analytical tools. The lessons illustrate issues not only of staying current but also of mastering new approaches as they evolve to meet customer needs and expectations.


2020, Vol 635, pp. A20
Author(s): Eduardo Vitral, Gary A. Mamon

The Sérsic model shows a close fit to the surface brightness (or surface density) profiles of elliptical galaxies and galaxy bulges, and possibly also those of dwarf spheroidal galaxies and globular clusters. The deprojected density and mass profiles are important for many astrophysical applications, in particular for mass-orbit modeling of these systems. However, the exact deprojection formula for the Sérsic model employs special functions that are not available in most computer languages. We show that all previous analytical approximations to the 3D density profile are imprecise at low Sérsic index (n ≲ 1.5). We derived a more precise analytical approximation to the deprojected Sérsic density and mass profiles by fitting two-dimensional tenth-order polynomials to the residuals of the analytical approximations by Lima Neto et al. (1999, MNRAS, 309, 481; LGM) for these profiles, relative to the numerical estimates. Our LGM-based polynomial fits have typical relative precision better than 0.2% for both density and mass profiles, for Sérsic indices 0.5 ≤ n ≤ 10 and radii 0.001 < r/Re < 1000. Our approximation is much more precise than previously published approximations (except, in some models, for a few discrete values of the index). An appendix compares the deprojected Sérsic profiles with those of other popular simple models.
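For reference, the quantities involved can be written out. The following is the standard textbook statement (not taken from the paper) of the Sérsic surface-density profile and the Abel inversion defining its spherical deprojection, with b_n the usual normalization constant tied to the effective radius R_e:

```latex
% Sérsic surface-density profile and its spherical (Abel) deprojection
\Sigma(R) = \Sigma_e \,
  \exp\!\left\{ -\,b_n \left[ \left( R / R_e \right)^{1/n} - 1 \right] \right\},
\qquad
\rho(r) = -\frac{1}{\pi} \int_r^{\infty}
  \frac{\mathrm{d}\Sigma}{\mathrm{d}R}\,
  \frac{\mathrm{d}R}{\sqrt{R^2 - r^2}} .
```

The integrand involves the incomplete gamma and related special functions once Σ(R) is substituted, which is the unavailability in most computer languages that motivates the paper's polynomial approximation.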


Animals, 2020, Vol 10 (2), pp. 190
Author(s): Yangyang Guo, Dongjian He, Lilong Chai

Demand for animal and dairy products is increasing gradually in emerging economies. However, it is critical and challenging to maintain the health and welfare of the growing population of dairy cattle, especially dairy calves (up to 20% mortality in China). Animal behaviors convey considerable information and are used to estimate animal health and welfare. In recent years, machine vision-based methods have been applied to monitor animal behaviors worldwide. Collected image or video information containing animal behaviors can be analyzed with computer languages to estimate animal welfare or health indicators. In this study, a new deep learning method (i.e., an integration of background subtraction and inter-frame differencing) was developed for automatically recognizing dairy calf scene-interactive behaviors (e.g., entering or leaving the resting area, and stationary and turning behaviors in the inlet and outlet areas of the resting area) based on computer vision technology. Results show that the recognition success rates for the calf's scene-interactive behaviors of pen entering, pen leaving, staying (static standing or lying behavior), and turning were 94.38%, 92.86%, 96.85%, and 93.51%, respectively. The recognition success rates for feeding and drinking were 79.69% and 81.73%, respectively. This newly developed method provides a basis for building evaluation tools to monitor calves' health and welfare on dairy farms.
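To illustrate the general technique of combining background subtraction with inter-frame differencing (not the authors' exact pipeline), here is a minimal Python/OpenCV sketch; the video filename and the thresholds are hypothetical placeholders.

```python
# Flag motion by intersecting a background-subtraction mask with a
# frame-difference mask; file name and thresholds are illustrative only.
import cv2

cap = cv2.VideoCapture("calf_pen.mp4")   # hypothetical input video
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    fg = backsub.apply(frame)                       # background-subtraction mask
    diff = cv2.absdiff(gray, prev_gray)             # inter-frame difference
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    motion = cv2.bitwise_and(fg, moving)            # foreground AND moving pixels
    if cv2.countNonZero(motion) > 500:              # arbitrary activity threshold
        print("activity at frame", int(cap.get(cv2.CAP_PROP_POS_FRAMES)))

    prev_gray = gray

cap.release()
```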

