FVS-TECHNOLOGY: INTELLECTUAL SEARCH TOOLS

2021 ◽  
Vol 22 (1) ◽  
pp. 118-128
Author(s):  
BAHODIR MUMINOV ◽  
Ulugbek Bekmurodov

It is enough to have three basic stages of modules in the SPD of a diversified corporate network: (F) the method of submitting a request, i.e., of forming an expression of the system user's information needs; (S) the correspondence function between an electronic resource and the request, i.e., the degree of compliance between the request and a found electronic resource; (V) the method of presenting electronic resources. The combination of these three stages into models, methods, and software modules of the AML is referred to as FSV technology (the FSV platform, FSV Framework). FSV technology is an instrumental software platform based on a client-server architecture and on the integration and modification of AML models, methods, and algorithms in the information environment of corporate networks. An architecture has been developed for the FSV technology proposed for the search index in data retrieval systems.
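To make the three stages concrete, here is a minimal Python sketch under stated assumptions: the F, S, and V stages become a query-formulation function, a correspondence (relevance) function, and a presentation step. All names are hypothetical; the paper does not publish an API, and the term-overlap scoring here only stands in for the correspondence function it describes.

```python
# Minimal sketch of the three FSV stages as a pipeline (hypothetical names).
from dataclasses import dataclass

@dataclass
class Resource:
    uri: str
    text: str

def formulate_query(user_input: str) -> set[str]:
    """(F) Turn the user's expressed information need into query terms."""
    return {t.lower() for t in user_input.split()}

def score(query: set[str], resource: Resource) -> float:
    """(S) Correspondence function: degree of compliance between the
    request and a found electronic resource (term overlap here)."""
    terms = {t.lower() for t in resource.text.split()}
    return len(query & terms) / len(query) if query else 0.0

def present(query: set[str], resources: list[Resource], k: int = 5) -> list[Resource]:
    """(V) Rank and present the best-matching electronic resources."""
    return sorted(resources, key=lambda r: score(query, r), reverse=True)[:k]

corpus = [Resource("res://1", "corporate network search index"),
          Resource("res://2", "unrelated document")]
print([r.uri for r in present(formulate_query("corporate search"), corpus)])
```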

1997 ◽  
pp. 13-26 ◽  
Author(s):  
David Johnson ◽  
Myke Gluck

This article looks at access to geographic information through a review of information science theory and its application to the WWW. The two most common types of retrieval system are information retrieval and data retrieval. A retrieval system has seven elements: retrieval models, indexing, match and retrieval, relevance, order, query languages, and query specification. The goal of information retrieval is to match the user's needs to the information that is in the system. Retrieval of geographic information combines both information and data retrieval. Aids to effective retrieval of geographic information are: query languages that employ icons and natural language, automatic indexing of geographic information, and standardization of geographic information. One area that has seen an explosion of geographic information retrieval systems (GIRs) is the World Wide Web (WWW). The final section of this article discusses how seven WWW GIRs solve the problem of matching the user's information needs to the information in the system.
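As an illustration of two of the seven elements (indexing, and match and retrieval), here is a toy inverted-index sketch in Python; it is not taken from the article and the documents and query are invented.

```python
# Toy inverted index: automatic indexing plus conjunctive match-and-retrieve.
from collections import defaultdict

def build_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Indexing: map each term to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def retrieve(index: dict[str, set[str]], query: str) -> set[str]:
    """Match and retrieval: documents containing every query term."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

docs = {"d1": "map of city streets", "d2": "city population data"}
index = build_index(docs)
print(retrieve(index, "city map"))  # -> {'d1'}
```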


Author(s):  
Yanji Chen ◽  
Mieczyslaw M. Kokar ◽  
Jakub J. Moskal

Abstract: This paper describes a program, the SPARQL Query Generator (SQG), which takes as input an OWL ontology, a set of object descriptions in terms of this ontology, and an OWL class as the context, and generates relatively large numbers of queries about various types of descriptions of objects expressed in RDF/OWL. The intent is to use SQG in evaluating data representation and retrieval systems from the perspective of OWL semantics coverage. While there are many benchmarks for assessing the efficiency of data retrieval systems, none of the existing solutions for SPARQL query generation focuses on coverage of the OWL semantics. Some are not scalable, since manual work is needed for the generation process; some consider only partially (or ignore entirely) the OWL semantics in the ontology/instance data, or rely on large numbers of real queries/datasets that are not readily available in our domain of interest. Our experimental results show that SQG performs reasonably well in generating large numbers of queries and guarantees good coverage of the OWL axioms included in the generated queries.
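For flavor, a hedged sketch of the kind of query such a generator might emit: given a class IRI and a property IRI drawn from an ontology, build a SPARQL query retrieving instances and their property values. The IRIs are illustrative, and SQG's real generation logic covers far more of the OWL semantics than this template does.

```python
# Sketch: template-based SPARQL query generation for a class/property pair.
def make_query(cls_iri: str, prop_iri: str) -> str:
    """Return a SELECT query over instances of cls_iri and their prop_iri values."""
    return (
        "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n"
        "SELECT ?obj ?val WHERE {\n"
        f"  ?obj rdf:type <{cls_iri}> .\n"
        f"  ?obj <{prop_iri}> ?val .\n"
        "}"
    )

print(make_query("http://example.org/onto#Sensor",
                 "http://example.org/onto#hasReading"))
```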


Author(s):  
Zahid Ashraf Wani ◽  
Huma Shafiq

Nowadays, we all rely on cyberspace for our information needs, making use of different types of search tools. Some of them specialize in a specific format or two, while a few can crawl a good portion of the web irrespective of format. It is therefore imperative for information professionals to have a thorough understanding of these tools. As such, the chapter is an endeavor to delve deep into and highlight various trends in online information retrieval, from primitive tools to modern ones. The chapter also endeavors to envisage future requirements and expectations, keeping in view the ever-increasing dependence on diverse types of information retrieval tools.


2015 ◽  
Vol 21 (4) ◽  
pp. 648-651
Author(s):  
Lukas Tanutama ◽  
Gerrard Polla ◽  
Raymond Kosala ◽  
Richard Kumaradjaja

The competitive nature of the Internet access service business drives Service Providers to find innovative revenue generators within their core competencies. An Internet connection is essential infrastructure in the current business environment. Service Providers supply Internet connections to corporate networks and process network data to enable Internet business communications and transactions. Mining the network data of a particular corporate network yields its business traffic profile, or characteristics. Based on the discovered characteristics, this research proposes novel generic Value Added Services (VAS). The VAS become innovative and competitive revenue generators. The VAS are competitive because only the Service Provider and its customer know the traffic profile; this knowledge becomes a barrier to entry for competitors. To offer the VAS, a Service Provider must build a close relationship with its customer to gain acceptance.
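As a minimal sketch of the traffic-profiling step (not the authors' system; the flow records and field layout are assumptions), one could aggregate flow volume per destination port to approximate a business traffic profile:

```python
# Illustrative sketch: derive a simple traffic profile from flow records.
from collections import Counter

flows = [  # (dst_port, bytes) records, e.g. exported from a flow collector
    (443, 120_000), (443, 80_000), (25, 10_000), (3306, 50_000),
]

def traffic_profile(flows: list[tuple[int, int]]) -> dict[int, float]:
    """Share of total traffic volume carried by each destination port."""
    volume = Counter()
    for port, nbytes in flows:
        volume[port] += nbytes
    total = sum(volume.values())
    return {port: nbytes / total for port, nbytes in volume.items()}

print(traffic_profile(flows))  # e.g. {443: ~0.77, 25: ~0.04, 3306: ~0.19}
```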


Author(s):  
R. V. Kyrychok ◽  
G. V. Shuklin

The article considers the problem of determining and assessing the quality of the vulnerability validation mechanism of information systems and networks. Based on a practical analysis of the vulnerability validation process and on the analytical dependencies of the basic characteristics of validation quality obtained using Bernstein polynomials, additional key indicators were identified and characterised that make it possible to assert with high reliability whether the vulnerability validation of the target corporate network is progressing positively. The intervals of these indicators within which the vulnerability validation mechanism is of high quality were determined experimentally. In addition, a single integral indicator was derived to quantitatively assess the quality of the vulnerability validation mechanism of corporate networks, and an experimental study was carried out to assess the quality of the automatic vulnerability validation mechanism of the db_autopwn plugin, designed to automate the Metasploit framework vulnerability exploitation tool. As a result, a methodology was proposed for analysing the quality of the vulnerability validation mechanism in corporate networks. It quantifies the quality of the validation mechanism under study, which in turn allows real-time monitoring and control of the validation progress for the identified vulnerabilities. The study also obtained the dependences of the previously determined key performance indicators of the vulnerability validation mechanism on the rational cycle time, which makes it possible to build membership functions for fuzzy sets. Constructing these sets, in particular, allows decisions to be made with minimal risk during an active analysis of corporate network security.
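A minimal sketch of the Bernstein-polynomial step, assuming only what the abstract states: a characteristic of validation quality, sampled at equally spaced points of a cycle, is smoothed by its Bernstein polynomial. The sample values and their interpretation are invented; the paper's actual indicators and thresholds are not reproduced here.

```python
# Smooth an empirical validation-success curve f on [0, 1] with the
# Bernstein polynomial B_n built from samples f(k/n), k = 0..n.
from math import comb

def bernstein(f_vals: list[float], x: float) -> float:
    """Evaluate B_n(x) = sum_k f(k/n) * C(n,k) * x^k * (1-x)^(n-k)."""
    n = len(f_vals) - 1
    return sum(f_vals[k] * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# Hypothetical fraction of successfully validated vulnerabilities at k/n
# of the validation cycle.
success_rate = [0.0, 0.2, 0.5, 0.8, 0.9]
print(round(bernstein(success_rate, 0.5), 3))  # smoothed mid-cycle estimate
```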


Author(s):  
Roberto J.G. Unger ◽  
Isa Maria Freire

The article presents the concept of the regime of information to information managers as a contribution to the processes of adapting and adjusting information systems and documentary languages to meet users' information needs. Regimes of information are the dominant modes of informational production in an economic-social formation; they necessarily presuppose, in their context, information sources that are disseminated and exert influence on the social context in which they are established. In this respect, societies have regimes of information through which they organize material and symbolic production and represent the dynamics of social relations. Among the various current forms of institutional manifestation, information retrieval systems stand out as the manifestation per se of the phenomenon that moves the regime. Information retrieval systems, in turn, use documentary languages to organize and communicate the information held in the innumerable "aggregates of information" that Barreto (1996) defines as "structures" that store "stocks of information" and can act as "agents", or "mediators", between a source of information and its users.


2020 ◽  
Vol 10 (21) ◽  
pp. 7926
Author(s):  
Michał Walkowski ◽  
Maciej Krakowiak ◽  
Jacek Oko ◽  
Sławomir Sujecki

The time gap between the public announcement of a vulnerability and its detection and reporting to stakeholders is an important factor for the cybersecurity of corporate networks. A large delay preceding the elimination of a critical vulnerability presents a significant risk to network security and increases the probability of sustained damage. Thus, accelerating the process of vulnerability identification and prioritization helps to reduce the probability of a successful cyberattack. This work introduces a flexible system that collects information about all known vulnerabilities present in the system, gathers data from an organizational inventory database, and finally integrates and processes all the collected information. Thanks to the application of parallel processing and non-relational databases, the results of this process are available with negligible delay. The subsequent vulnerability prioritization is performed automatically on the basis of CVSS 2.0 and 3.1 scores calculated for all scanned assets. The environmental CVSS vector component is evaluated accurately because the environmental data is imported directly from the organizational inventory database.
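A sketch of the prioritization idea under stated assumptions: each finding carries a CVSS base score, and an asset-criticality weight imported from the inventory database stands in for the environmental component. The field names and weighting rule are hypothetical, not the authors' implementation.

```python
# Sketch: order vulnerability findings by score x asset criticality.
from dataclasses import dataclass

@dataclass
class Finding:
    asset: str
    cve: str
    cvss_base: float      # CVSS 2.0 or 3.1 base score, 0.0-10.0
    asset_weight: float   # environmental criticality from the inventory DB

def prioritize(findings: list[Finding]) -> list[Finding]:
    """Order findings so the riskiest combinations come first."""
    return sorted(findings, key=lambda f: f.cvss_base * f.asset_weight,
                  reverse=True)

findings = [Finding("db01", "CVE-2021-0001", 9.8, 1.0),
            Finding("kiosk7", "CVE-2021-0002", 9.9, 0.3)]
for f in prioritize(findings):
    print(f.asset, f.cve, round(f.cvss_base * f.asset_weight, 2))
```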


2019 ◽  
pp. 497-513
Author(s):  
Ivan D. Burke ◽  
Renier P. van Heerden

Data breaches are becoming more common and numerous every day, with huge amounts of data (corporate and personal) leaked more frequently than ever. Corporate responses to data breaches are insufficient, and remediation is commonly minimal. This research proposes that an approach similar to that used for physical (environmental) pollution can be used to map and identify data leaks as cyber pollution, so that IT institutions become aware of their contribution to cyber pollution in a more measurable way. This article defines cyber pollution as: security-vulnerable (such as unmaintained or obsolete) devices that are visible through the Internet and corporate networks. The paper analyses the recent state of data breach disclosures worldwide by providing statistics on significant data breach disclosures from 2014/01 to 2016/12. Ivan Burke and Renier van Heerden model security threat levels in a manner similar to pollution within the physical environment: insignificant security openings or vulnerabilities can lead to massive exploitation of entire systems. By modelling these breaches as pollution, the aim is to make cyber pollution a more tangible concept for IT managers to relay to staff and senior management. The model is validated using anonymised corporate network traffic and open-source penetration testing software.
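Purely as an illustration of the definition (not the authors' model), a naive "cyber pollution" indicator could count security-vulnerable devices visible on a network; the device records, staleness threshold, and reference date below are all assumptions.

```python
# Illustrative only: fraction of visible devices with stale patch levels.
from datetime import date

devices = [  # (hostname, last_patch_date, internet_visible)
    ("printer-3", date(2013, 5, 1), True),
    ("web01", date(2016, 11, 2), True),
    ("dev-vm", date(2014, 1, 9), False),
]

def pollution_index(devices, stale_after_days=365, today=date(2016, 12, 31)):
    """Share of visible devices whose patch level is over a year old."""
    visible = [d for d in devices if d[2]]
    stale = [d for d in visible if (today - d[1]).days > stale_after_days]
    return len(stale) / len(visible) if visible else 0.0

print(pollution_index(devices))  # 0.5: one of two visible devices is stale
```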


Author(s):  
Ying Sun

Collaborative search generally uses previously collected search sessions as a resource to help future users improve their searching through query modification. The recommendation or automatic extension of a query is generally based on the content of the old sessions, on the sequence/order of queries/texts in a session, or on a combination of the two. However, users issuing the same expressed query may need different information, and the difference may not be topic-related. This chapter proposes to enrich the context of query representation to incorporate non-topical properties of user information needs, which the authors believe will improve the results of collaborative search.
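A minimal sketch of the idea, with a representation assumed rather than taken from the chapter: a query context that joins topical terms with non-topical properties of the need, so identical query strings can still be told apart. The property names and blend weights are invented for illustration.

```python
# Sketch: query representation enriched with non-topical context.
from dataclasses import dataclass, field

@dataclass
class QueryContext:
    terms: set[str]                      # topical content of the query
    task: str = "unknown"                # e.g. "fact-finding" vs "exploratory"
    expertise: str = "novice"            # user's domain expertise
    history: list[str] = field(default_factory=list)  # prior session queries

def similarity(a: QueryContext, b: QueryContext) -> float:
    """Blend topical overlap with agreement on non-topical properties."""
    union = a.terms | b.terms
    topical = len(a.terms & b.terms) / len(union) if union else 0.0
    non_topical = (a.task == b.task) + (a.expertise == b.expertise)
    return 0.7 * topical + 0.15 * non_topical

q1 = QueryContext({"python", "install"}, task="fact-finding", expertise="novice")
q2 = QueryContext({"python", "install"}, task="exploratory", expertise="expert")
print(similarity(q1, q2))  # same terms, different needs -> below 1.0
```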

