Digital Bookstore

Author(s):  
Amey Thakur

The project's main goal is to build an online book store where users can search for and buy books by title, author, and subject. The selected books are displayed in a tabular format and the customer may buy them online using a credit card. Using this website, the user may buy a book online rather than visiting a bookshop, saving time. Many online bookstores, such as Powell's and Amazon, were created using HTML. We suggest creating a comparable website with .NET and SQL Server. An online book store is a web application that allows customers to purchase e-books. Through a web browser, customers can search for a book by its title or author, add it to the shopping cart, and finally purchase it via a credit card transaction. A returning client may sign in with his login credentials, while new clients can simply open an account. Customers must submit their full name, contact details, and shipping address. The user may also review a book by rating it on a scale of one to five. The books are classified into different categories depending on their subject matter, such as software, databases, English, and architecture. Customers shop on the Online Book Store website using a web browser: a client may create an account, sign in, add items to his shopping basket, and buy them using his credit card details. The administrator has more privileges than a regular user: he can add, delete, and edit book details, book categories, and member information, as well as confirm placed orders. The application was created with PHP and other web programming languages, and the Online Book Store is built using master pages, data sets, data grids, and user controls.
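The catalogue search described above reduces to a filtered query over a books table. The following is a minimal sketch in Python with SQLite; the table layout and column names are hypothetical stand-ins (the abstract's own implementation uses .NET/PHP with SQL Server), shown only to illustrate searching by any combination of title, author, and subject.

```python
import sqlite3

# Hypothetical `books` table for illustration; not the paper's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE books (
    title TEXT, author TEXT, subject TEXT, price REAL)""")
conn.execute("INSERT INTO books VALUES "
             "('Clean Code', 'Robert C. Martin', 'Software', 30.0)")

def search_books(conn, title=None, author=None, subject=None):
    """Search by any combination of title, author and subject."""
    clauses, params = [], []
    for column, value in (("title", title), ("author", author),
                          ("subject", subject)):
        if value is not None:
            clauses.append(f"{column} LIKE ?")  # parameterised: no SQL injection
            params.append(f"%{value}%")
    where = " AND ".join(clauses) or "1=1"      # no filters -> full catalogue
    return conn.execute(
        f"SELECT title, author, price FROM books WHERE {where}",
        params).fetchall()

print(search_books(conn, subject="Software"))
```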

From the physical book store to the online bookstore, business owners have found ways to meet the demands of their prospective customers, and daily advances in technology have brought about huge changes in the operation of e-commerce. The development of Progressive Web Applications (PWA) by Google has caused a revolution in mobile development. Using an online bookstore as a case study, this research work presents a PWA architectural framework that can be adopted by any e-commerce application. This was achieved through a systematic review of existing online bookstore models, identifying the gaps that serve as strengths for the proposed model. The emerging technology of PWA was also critically reviewed to solidify the proposed model. Adoption of the model avoids current issues faced in the world of mobile development, especially code fragmentation. Further exploration of payment gateways and modules will help solidify the model.


Author(s):  
Hadeel Ibrahim Alzahrani ◽  
Zahraa Al Thnayyan ◽  
Sahar Al-Qalaleef ◽  
Fatimah Al Talaq ◽  
Muneerah Alshabanah ◽  
...  

Nowadays many people survive on only one meal per day; especially in developing countries, this is one of the major problems. At the same time, large amounts of food are wasted every day. Some poor people need clothes and household items, and children need books and study kits. A solution is to donate leftover food and old belongings to needy people and charities. For that to happen, we need some sort of platform, such as an online website or web application. In Saudi Arabia there are many people who are capable of making donations, and there are also many Non-Governmental Organizations (NGOs) helping the poor and needy of Saudi Arabia, but a connection gap remains between the two. There has to be a simple, fast, intuitive and secure way of making such online donations so that users can donate easily with just a click. The aim of this work is to design and develop a web-based online charitable donation system, in which the charitable website collects donations (such as clothes, toys, and school tools) and delivers them to the children who need them. The proposed system also provides volunteering opportunities for those wishing to deliver the donations to the homes of the poor for free. The proposed system was designed with the Unified Modeling Language (UML) and implemented using SQL Server for the database and the ASP.NET and Visual Basic programming languages.
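The donor-to-volunteer workflow above maps naturally onto two relational tables. The sketch below is a hypothetical Python/SQLite illustration of that flow (the actual system uses SQL Server and ASP.NET, and all names here are invented): a donor registers an item, and a volunteer claims it for free home delivery.

```python
import sqlite3

# Hypothetical schema for illustration only; not the authors' actual design.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE donations (
    id INTEGER PRIMARY KEY,
    donor_name TEXT, item TEXT, category TEXT,  -- e.g. clothes, toys, school tools
    status TEXT DEFAULT 'pending'               -- pending -> assigned -> delivered
);
CREATE TABLE deliveries (
    donation_id INTEGER REFERENCES donations(id),
    volunteer_name TEXT
);
""")
db.execute("INSERT INTO donations (donor_name, item, category) VALUES (?,?,?)",
           ("Sara", "winter jackets", "clothes"))

def assign_volunteer(db, donation_id, volunteer):
    # A volunteer claims a pending donation for free home delivery.
    db.execute("INSERT INTO deliveries VALUES (?, ?)", (donation_id, volunteer))
    db.execute("UPDATE donations SET status='assigned' WHERE id=?", (donation_id,))

assign_volunteer(db, 1, "Fatimah")
print(db.execute("SELECT item, status FROM donations").fetchall())
```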


2021 ◽  
Vol 4 ◽  
Author(s):  
Jakub Fusiak ◽  
Annemarie Käsbohrer

The lack of a harmonized model exchange format among modelling tools impedes communication between researchers, since exchanging and using existing models across software environments can be very difficult. The RaDAR model inventory aims to provide a platform for exchanging models among professionals using the Food Safety Knowledge Exchange (FSKX) Format (de Alba Aparicio et al. 2018) as a harmonized model exchange format. FSKX defines a framework that encodes all relevant data, metadata, and model scripts in an exchangeable file format. However, creating such a file can be a time-consuming and difficult process. To increase adoption of the FSK standard, we developed the RaDAR model inventory, a web application that streamlines the creation of FSKX files for the end user. Our inventory aims to be a user-friendly tool that allows users to create, read, edit, write, execute and compile FSKX files within the web browser. The ability to share models with the public or with a specific group of people facilitates collaboration and the exchange of information. Since the RaDAR model inventory is based on the open-source technology of Project Jupyter (Granger and Perez 2021), it can support nearly all relevant programming languages, executed within a reproducible cloud-computing environment. The intuitive nature of the RaDAR model inventory, along with its wide range of features, lowers the barrier to contributing to a harmonized model exchange format and eases collaboration. The RaDAR model inventory can be accessed at http://ejp-radar.eu.
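To make the "encodes all relevant data, metadata, and model scripts in an exchangeable file format" idea concrete, here is a heavily simplified Python sketch of bundling a model script and its metadata into a single archive. The real FSKX format prescribes specific manifest and metadata schemas that this toy example does not follow; the file names and metadata keys below are hypothetical.

```python
import json
import zipfile

# Toy metadata; the actual FSKX metadata schema is far richer.
metadata = {
    "generalInformation": {"name": "Growth model", "language": "R"},
    "modelMath": {"parameter": [{"id": "temp", "value": 20}]},
}

# Pack script + metadata into one exchangeable archive (illustrative layout).
with zipfile.ZipFile("model.fskx", "w") as archive:
    archive.writestr("model.r", "y <- 0.1 * temp  # toy model script")
    archive.writestr("metaData.json", json.dumps(metadata, indent=2))

with zipfile.ZipFile("model.fskx") as archive:
    print(archive.namelist())  # ['model.r', 'metaData.json']
```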


2021 ◽  
pp. 016555152199863
Author(s):  
Ismael Vázquez ◽  
María Novo-Lourés ◽  
Reyes Pavón ◽  
Rosalía Laza ◽  
José Ramón Méndez ◽  
...  

Current research has evolved in such a way that scientists must not only adequately describe the algorithms they introduce and the results of their application, but also ensure that the results can be reproduced and compared with those obtained through other approaches. In this context, public data sets (sometimes shared through repositories) are among the most important elements for the development of experimental protocols and test benches. This study analysed a significant number of CS/ML (Computer Science/Machine Learning) research data repositories and data sets and detected some limitations that hamper their utility. In particular, we identify and discuss the following in-demand functionalities for repositories: (1) building customised data sets for specific research tasks, (2) facilitating the comparison of different techniques using dissimilar pre-processing methods, (3) ensuring the availability of software applications to reproduce the pre-processing steps without using the repository functionalities and (4) providing protection mechanisms for licencing issues and user rights. To demonstrate the introduced functionality, we created the STRep (Spam Text Repository) web application, which implements our recommendations adapted to the field of spam text repositories. In addition, we launched an instance of STRep at https://rdata.4spam.group to facilitate understanding of this study.
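Functionality (2), comparing a technique under dissimilar pre-processing methods, can be illustrated with a short scikit-learn sketch. The corpus, pipelines, and classifier below are invented stand-ins, not STRep's implementation; the point is only that the pre-processing choice is an explicit, swappable component whose effect can be measured.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy spam/ham corpus (1 = spam, 0 = ham), for illustration only.
texts = ["win a FREE prize now", "meeting at 10am", "FREE credit offer",
         "lunch tomorrow?", "claim your prize", "project status update"]
labels = [1, 0, 1, 0, 1, 0]

# Two dissimilar pre-processing methods applied to the same technique.
preprocessings = {
    "raw tokens": CountVectorizer(lowercase=False),
    "lowercased, stopwords removed": CountVectorizer(lowercase=True,
                                                     stop_words="english"),
}
for name, vectorizer in preprocessings.items():
    pipeline = make_pipeline(vectorizer, MultinomialNB())
    scores = cross_val_score(pipeline, texts, labels, cv=3)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```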


Author(s):  
Thanh-Nhan Luong ◽  
Hanh-Phuc Nguyen ◽  
Ninh-Thuan Truong

The software security issue is receiving great attention from the software development community, as security violations of many kinds have emerged. Developers often use access control techniques to restrict security breaches of software systems' resources. Adding authorization constraints to the role-based access control model increases its ability to express access rules for real-world problems. However, the complexity of combining components, libraries and programming languages during the implementation of web systems' access control policies may introduce flaws that make applications' access control policies inconsistent with their specifications. In this paper, we introduce an approach to reviewing the implementation of these models in web applications written in Java EE following the MVC architecture with the support of the Spring Security framework. The approach can help developers detect flaws in the implementation of user-role and role-permission assignments. First, the approach extracts information about users and roles from the web application's database. We then analyze policy configuration files to establish the access analysis tree of the application. Next, algorithms are introduced to validate the correctness of the implemented user-role and role-permission assignments in the application system. Lastly, we developed a tool called VeRA to automatically support the verification process. The tool is also evaluated against a number of access violation scenarios in a medical record management system.
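The core validation idea, comparing assignments recovered from a running application against the specification and reporting inconsistencies, can be sketched as a simple set comparison. VeRA itself analyses Java EE / Spring Security artefacts; the Python below uses hypothetical data structures purely to show the shape of such an algorithm.

```python
# Specification: which roles each user should have, and which permissions
# each role should grant (hypothetical medical-records example).
spec_user_roles = {"alice": {"doctor"}, "bob": {"nurse"}}
spec_role_perms = {"doctor": {"read_record", "write_record"},
                   "nurse": {"read_record"}}

# Assignments as recovered from the implementation (database + config).
impl_user_roles = {"alice": {"doctor"}, "bob": {"nurse", "doctor"}}  # flaw here
impl_role_perms = {"doctor": {"read_record", "write_record"},
                   "nurse": {"read_record"}}

def validate(spec, impl, kind):
    """Report assignments present in the implementation but not the
    specification (unauthorised) and vice versa (missing)."""
    flaws = []
    for subject in set(spec) | set(impl):
        extra = impl.get(subject, set()) - spec.get(subject, set())
        missing = spec.get(subject, set()) - impl.get(subject, set())
        if extra:
            flaws.append(f"{kind} '{subject}': unauthorised {sorted(extra)}")
        if missing:
            flaws.append(f"{kind} '{subject}': missing {sorted(missing)}")
    return flaws

for flaw in (validate(spec_user_roles, impl_user_roles, "user")
             + validate(spec_role_perms, impl_role_perms, "role")):
    print(flaw)  # -> user 'bob': unauthorised ['doctor']
```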


2018 ◽  
Vol 11 (11) ◽  
pp. 6203-6230 ◽  
Author(s):  
Simon Ruske ◽  
David O. Topping ◽  
Virginia E. Foot ◽  
Andrew P. Morse ◽  
Martin W. Gallagher

Abstract. Primary biological aerosols, including bacteria, fungal spores and pollen, have important implications for public health and the environment. Such particles may have different concentrations of chemical fluorophores and will respond differently in the presence of ultraviolet light, potentially allowing different types of biological aerosol to be discriminated. The development of ultraviolet light-induced fluorescence (UV-LIF) instruments such as the Wideband Integrated Bioaerosol Sensor (WIBS) has allowed size, morphology and fluorescence measurements to be collected in real time. However, without studying instrument responses in the laboratory, it is unclear to what extent different types of particles can be discriminated. Collection of laboratory data is vital to validate any approach used to analyse the data and to ensure that the available data are utilized as effectively as possible. In this paper a variety of methodologies are tested on a range of particles collected in the laboratory. Hierarchical agglomerative clustering (HAC), which has previously been applied to UV-LIF data in a number of studies, is tested alongside other algorithms that could be used to solve the classification problem: Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means and gradient boosting. Whilst HAC was able to effectively discriminate between reference narrow-size-distribution PSL particles, yielding a classification error of only 1.8 %, similar results were not obtained when testing on laboratory-generated aerosol, where the classification error was found to be between 11.5 % and 24.2 %. Furthermore, there is large uncertainty in this approach in terms of the data preparation and the cluster index used, and we were unable to attain consistent results across the different sets of laboratory-generated aerosol tested. The lowest classification errors were obtained using gradient boosting, where the misclassification rate was between 4.38 % and 5.42 %. The largest contribution to the error, in the case of the higher misclassification rate, came from the pollen samples, where 28.5 % of the samples were incorrectly classified as fungal spores. The technique was robust to changes in data preparation provided a fluorescence threshold was applied to the data. In the event that laboratory training data are unavailable, DBSCAN was found to be a potential alternative to HAC. In the case of one of the data sets, where 22.9 % of the data were left unclassified, we were able to produce three distinct clusters, obtaining a classification error of only 1.42 % on the classified data. These results could not be replicated for the other data set, where 26.8 % of the data were not classified and a classification error of 13.8 % was obtained. This method, like HAC, also appeared to be heavily dependent on data preparation, requiring a different selection of parameters depending on the preparation used. Further analysis will also be required to confirm our selection of the parameters when using this method on ambient data. There is a clear need for the collection of additional laboratory-generated aerosol to improve interpretation of current databases and to aid in the analysis of data collected from an ambient environment. New instruments with greater resolution are likely to improve current discrimination between pollen, bacteria and fungal spores, and even between different species; however, the need for extensive laboratory data sets will grow as a result.
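For orientation, the four algorithms compared above are all available in scikit-learn, and their supervised/unsupervised split drives the paper's conclusions (gradient boosting needs labelled laboratory data; HAC, DBSCAN and k-means do not). The sketch below runs each on synthetic blob data; it illustrates the methodology only, and the features, parameters and thresholding differ from the paper's actual pipeline.

```python
from sklearn.cluster import DBSCAN, AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for UV-LIF size/fluorescence features, 3 particle types.
X, y = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=0)

# Unsupervised methods: no laboratory training labels required.
for name, algo in [("HAC", AgglomerativeClustering(n_clusters=3)),
                   ("DBSCAN", DBSCAN(eps=0.8, min_samples=5)),  # eps needs tuning
                   ("k-means", KMeans(n_clusters=3, n_init=10, random_state=0))]:
    labels = algo.fit_predict(X)
    print(name, "found", len(set(labels) - {-1}), "clusters")  # -1 = DBSCAN noise

# Supervised method: requires labelled laboratory data for training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print("gradient boosting misclassification rate:",
      round(1 - clf.score(X_test, y_test), 3))
```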


2021 ◽  
Author(s):  
Shichen Qiao ◽  
Chen Shen

In this study, a web database application built with the Flask framework was developed to implement three types of queries and visualize the results over a bioinformatics dataset from alfalfa (Medicago sativa). A backend SQLite database was constructed from genome FASTA, population variation, transcriptome, and annotation files with extensions ".fasta", ".gff", ".vcf", ".annotate", etc. Furthermore, a supplementary command-line-based Java application was developed for faster access to the database without direct SQL programming. Overall, Python, Java, and HTML were the main programming languages used in this application. The scripts and development procedures are valuable for bioinformaticians building online databases from similar raw datasets of other species.
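The Flask-over-SQLite pattern described here is compact enough to sketch. The route, table, and column names below are hypothetical (not the authors' schema); the sketch assumes an existing alfalfa.db file and shows one query type: fetching annotated genes on a requested chromosome.

```python
import sqlite3

from flask import Flask, g, jsonify, request

app = Flask(__name__)
DATABASE = "alfalfa.db"  # assumed to be pre-built from FASTA/GFF/VCF files

def get_db():
    # One SQLite connection per request context.
    if "db" not in g:
        g.db = sqlite3.connect(DATABASE)
    return g.db

@app.route("/genes")
def genes():
    # e.g. /genes?chrom=chr1 returns annotated genes on that chromosome.
    chrom = request.args.get("chrom", "chr1")
    rows = get_db().execute(
        "SELECT gene_id, start, end FROM annotations WHERE chrom = ?",
        (chrom,)).fetchall()
    return jsonify(rows)

if __name__ == "__main__":
    app.run(debug=True)
```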


Author(s):  
Firmansyah Adiputra ◽  
Khabib Mustofa

Abstract. A desktop application runs locally in a desktop environment and can be accessed only by desktop users, whereas a web application can be accessed from anywhere through a network. Unlike desktop applications, however, a web application running in a web browser cannot integrate with the desktop applications on the client side from which it is accessed. This research develops a prototype framework named HAF (Hybrid Application Framework). HAF is used for developing and executing a new type of desktop application, named HyApp (Hybrid Application). Through HAF, a HyApp is built using web technologies and can be accessed either locally or over a network. When accessed locally, even though it is built with web technologies, a HyApp can still communicate with other desktop applications. Moreover, by using the APIs provided by HAF, a HyApp can behave differently depending on whether it is accessed locally or remotely. Keywords—framework, desktop applications, web applications
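The access-mode switching that HyApp performs can be approximated in a few lines. HAF's real API is not shown in the abstract, so the Flask stand-in below is purely illustrative: the same web-technology application inspects where the request came from and enables desktop integration only in local mode.

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/")
def home():
    # Crude local-vs-remote detection; HAF's actual mechanism differs.
    local = request.remote_addr in ("127.0.0.1", "::1")
    if local:
        # Locally, a HyApp may additionally talk to desktop applications.
        return "local mode: desktop integration enabled"
    return "remote mode: browser-only features"

if __name__ == "__main__":
    app.run(host="0.0.0.0")  # reachable both locally and over the network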


2020 ◽  
Author(s):  
Annika Tjuka ◽  
Robert Forkel ◽  
Johann-Mattis List

Psychologists and linguists have collected a great diversity of data on word and concept properties. In psychology, many studies accumulate norms and ratings, such as word frequencies or age-of-acquisition ratings, often for large numbers of words. Linguistics, on the other hand, provides valuable insights into the relations between word meanings. We present 'NoRaRe,' a collection of such data sets of norms, ratings, and relations covering different languages. To enable comparison between the diverse data types, we established workflows that facilitate the expansion of the database. A web application allows convenient access to the data (https://digling.org/norare/). Furthermore, a software API ensures consistent data curation by providing tests to validate the data sets. The NoRaRe collection is linked to the database curated by the Concepticon project (https://concepticon.clld.org), which offers a reference catalog of unified concept sets. The link between words in the data sets and the Concepticon concept sets makes cross-linguistic comparison possible. In three case studies, we test the validity of our approach, the accuracy of our workflow, and the applicability of our database. The results indicate that the NoRaRe database can be applied to the study of word properties across multiple languages. The data can be used by psychologists and linguists to benefit from the knowledge rooted in both research disciplines.
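The validation tests mentioned above amount to consistency checks over tabular data. The sketch below is a generic illustration of that idea, not NoRaRe's actual API or file format; the column names, ID values, and rating range are all hypothetical. It checks that each word row maps to a known Concepticon concept set and that ratings fall within a documented range.

```python
# Stand-in set of valid Concepticon concept-set IDs (hypothetical values).
known_concepts = {"1247", "1248", "2261"}

def validate_rows(rows, rating_range=(1.0, 7.0)):
    """Return a list of error messages for rows violating the checks."""
    errors = []
    for i, row in enumerate(rows):
        if row["CONCEPTICON_ID"] not in known_concepts:
            errors.append(f"row {i}: unknown concept {row['CONCEPTICON_ID']}")
        if not rating_range[0] <= float(row["RATING"]) <= rating_range[1]:
            errors.append(f"row {i}: rating out of range")
    return errors

rows = [{"CONCEPTICON_ID": "1247", "RATING": "3.5"},
        {"CONCEPTICON_ID": "9999", "RATING": "8.2"}]
print(validate_rows(rows))  # reports both problems in the second row
```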


2019 ◽  
Author(s):  
Randy Heiland ◽  
Daniel Mishler ◽  
Tyler Zhang ◽  
Eric Bower ◽  
Paul Macklin

Abstract. Jupyter Notebooks [4, 6] provide executable documents (in a variety of programming languages) that can be run in a web browser. When a notebook contains graphical widgets, it becomes an easy-to-use graphical user interface (GUI). Many scientific simulation packages use text-based configuration files to provide parameter values and run at the command line without a graphical interface. Manually editing these files to explore how different values affect a simulation is burdensome for technical users, and impossible for those from other scientific backgrounds. xml2jupyter is a Python package that addresses these scientific bottlenecks. It provides a mapping between configuration files, formatted in the Extensible Markup Language (XML), and Jupyter widgets. Widgets are automatically generated from the XML file and can optionally be incorporated into a larger GUI for a simulation package and hosted on cloud resources. Users modify parameter values via the widgets, and the values are written to the XML configuration file, which is input to the simulation's command-line interface. xml2jupyter has been tested using PhysiCell [1], an open-source, agent-based simulator for biology, and it is being used by students for classroom and research projects. In addition, we use xml2jupyter to help create Jupyter GUIs for PhysiCell-related applications running on nanoHUB [5].
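The XML-to-widget mapping at the heart of this approach can be sketched in a few lines. This is a minimal illustration of the general idea, not xml2jupyter's actual API: each scalar parameter element in a toy configuration becomes an ipywidgets control, and edited values are written back into the XML tree.

```python
import xml.etree.ElementTree as ET

import ipywidgets as widgets

# Toy configuration; element names are hypothetical, not PhysiCell's.
xml_config = ("<config><diffusion_coefficient>0.5</diffusion_coefficient>"
              "<max_time>120</max_time></config>")
root = ET.fromstring(xml_config)

# Generate one numeric widget per parameter element.
controls = {elem.tag: widgets.FloatText(value=float(elem.text),
                                        description=elem.tag)
            for elem in root}

def write_back(root, controls):
    """Copy current widget values back into the XML tree."""
    for elem in root:
        elem.text = str(controls[elem.tag].value)
    return ET.tostring(root, encoding="unicode")

controls["max_time"].value = 240.0   # simulate a user edit in the notebook
print(write_back(root, controls))    # XML now carries the edited value
```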

