The Intellectual Dimension of IT-Business Alignment Problem: Alloy Application

Author(s):  
Marina Ivanova ◽  
Pavel Malyzhenkov

Micromachines ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 673
Author(s):  
Wei Yuan ◽  
Cheng Xu ◽  
Li Xue ◽  
Hui Pang ◽  
Axiu Cao ◽  
...  

Double microlens arrays (MLAs) in series can be used to divide and superpose a laser beam so as to produce a homogenized spot. However, when homogenizing a laser beam with high coherence, the periodicity of a traditional MLA generates a periodic lattice distribution in the homogenized spot, which greatly reduces its uniformity. To solve this problem, a monolithic, highly integrated double-sided random microlens array (D-rMLA) is proposed for laser beam homogenization. The periodicity of the MLA is disturbed by closely arranged microlens structures with random apertures, and the superposition of the divided sub-beams produces a random speckle field that improves the uniformity of the homogenized spot. In addition, a double-sided exposure technique is proposed to fabricate the rMLA on both sides of the same substrate with high-precision alignment, forming an integrated D-rMLA structure and avoiding the strict alignment required when installing traditional discrete MLAs. Laser beam homogenization experiments were then carried out with the fabricated D-rMLA. Homogenized spots were tested at wavelengths of 650 nm (R), 532 nm (G), and 405 nm (B). The experimental results show that the uniformity of the RGB homogenized spots is about 91%, 89%, and 90%, and the energy utilization rate is about 89%, 87%, and 86%, respectively. Hence, the fabricated structure offers high homogenization ability and energy utilization, making it suitable for a wide wavelength regime.
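
The abstract reports spot uniformity and energy utilization figures but does not state the formulas used. The sketch below is a minimal illustration, assuming a common contrast-based uniformity definition and a threshold-defined spot region; the function names, the 0.5 threshold, and the synthetic flat-top test pattern are assumptions, not the authors' method.

```python
# Hypothetical sketch: estimating uniformity and energy utilization of a
# homogenized spot from a sampled 2D intensity map. The contrast-based
# uniformity formula and the 0.5*peak spot threshold are assumptions.
import numpy as np

def spot_uniformity(intensity: np.ndarray, threshold: float = 0.5) -> float:
    """U = 1 - (I_max - I_min) / (I_max + I_min), evaluated inside the spot."""
    spot = intensity[intensity > threshold * intensity.max()]
    i_max, i_min = spot.max(), spot.min()
    return 1.0 - (i_max - i_min) / (i_max + i_min)

def energy_utilization(intensity: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of total energy falling inside the thresholded spot region."""
    mask = intensity > threshold * intensity.max()
    return intensity[mask].sum() / intensity.sum()

# Synthetic test: a super-Gaussian flat-top spot with speckle-like noise.
rng = np.random.default_rng(0)
y, x = np.mgrid[-1:1:256j, -1:1:256j]
flat_top = np.exp(-((x**2 + y**2) / 0.5) ** 4)
intensity = np.clip(flat_top + 0.05 * rng.standard_normal(flat_top.shape), 0, None)

print(f"uniformity ≈ {spot_uniformity(intensity):.2f}")
print(f"energy utilization ≈ {energy_utilization(intensity):.2f}")
```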


2021 ◽  
Vol 181 ◽  
pp. 333-340
Author(s):  
Samgwa Quintine Njanka ◽  
Godavari Sandula ◽  
Ricardo Colomo-Palacios

2009 ◽  
Vol 395 (1) ◽  
pp. 213-223 ◽  
Author(s):  
Erik Alm ◽  
Ralf J. O. Torgrip ◽  
K. Magnus Åberg ◽  
Ina Schuppe-Koistinen ◽  
Johan Lindberg

2018 ◽  
Vol 21 (1) ◽  
pp. 19-28
Author(s):  
Martin Peterson

2012 ◽  
Vol 2012 ◽  
pp. 1-6 ◽  
Author(s):  
Ernesto Liñán-García ◽  
Lorena Marcela Gallegos-Araiza

A new algorithm for solving the sequence alignment problem is proposed, named SAPS (Simulated Annealing with Previous Solutions). The algorithm is based on classical Simulated Annealing (SA), which solves an optimization problem by simulating the heating and cooling of a metal. SAPS is implemented to obtain results for both pairwise and multiple sequence alignment. To select a current solution at random, SAPS chooses among the solutions previously generated within the Metropolis cycle. This simple change improves the quality of solutions to the genomic sequence alignment problem with respect to the classical Simulated Annealing algorithm. For certain instances, some SAPS parameters are tuned by an analytical method and others experimentally. SAPS has produced high-quality results in comparison with classical SA. The instances used are specific genes of the AIDS virus.
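
The abstract describes the SAPS idea, reusing solutions generated within the Metropolis cycle when choosing the current solution, but gives no implementation details. The sketch below is a simplified, hedged illustration for pairwise alignment only: the gap-position encoding, toy scoring scheme, move operator, cooling schedule, and all parameter values are assumptions rather than the authors' actual design.

```python
# Hypothetical SAPS-style sketch: simulated annealing for pairwise sequence
# alignment that restarts each Metropolis cycle from a randomly chosen
# solution generated in the previous cycle. All details are illustrative.
import math
import random

MATCH, MISMATCH, GAP = 2, -1, -2   # assumed toy scoring scheme

def build(seq, gaps, length):
    """Render a gapped string of `length` from `seq` and its gap positions."""
    it = iter(seq)
    return "".join("-" if i in gaps else next(it) for i in range(length))

def score(a, b, gaps_a, gaps_b, length):
    """Column-wise alignment score (higher is better); gap-gap columns score 0."""
    total = 0
    for x, y in zip(build(a, gaps_a, length), build(b, gaps_b, length)):
        if x == "-" and y == "-":
            continue
        total += GAP if "-" in (x, y) else (MATCH if x == y else MISMATCH)
    return total

def random_gaps(seq, length):
    return set(random.sample(range(length), length - len(seq)))

def neighbor(gaps, length):
    """Move one gap to a currently un-gapped column."""
    gaps = set(gaps)
    gaps.remove(random.choice(sorted(gaps)))
    gaps.add(random.choice([i for i in range(length) if i not in gaps]))
    return gaps

def saps_align(a, b, t0=10.0, t_min=0.01, alpha=0.95, cycle_len=200):
    length = len(a) + len(b)                       # generous alignment length
    current = (random_gaps(a, length), random_gaps(b, length))
    best, best_s = current, score(a, b, *current, length)
    pool = [current]                               # solutions from the last cycle
    t = t0
    while t > t_min:
        # SAPS twist: start the Metropolis cycle from a randomly chosen
        # previously generated solution rather than only the last accepted one.
        current = random.choice(pool)
        cur_s = score(a, b, *current, length)
        pool = [current]
        for _ in range(cycle_len):
            cand = (neighbor(current[0], length), neighbor(current[1], length))
            cand_s = score(a, b, *cand, length)
            # Metropolis acceptance criterion (maximizing the score).
            if cand_s >= cur_s or random.random() < math.exp((cand_s - cur_s) / t):
                current, cur_s = cand, cand_s
                pool.append(current)
                if cur_s > best_s:
                    best, best_s = current, cur_s
        t *= alpha                                 # geometric cooling
    return build(a, best[0], length), build(b, best[1], length), best_s

if __name__ == "__main__":
    ga, gb, s = saps_align("ACACACTA", "AGCACACA")
    print(ga, gb, f"score: {s}", sep="\n")
```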


2020 ◽  
Vol 8 (2) ◽  
pp. 54-72
Author(s):  
Margit Sutrop

As artificial intelligence (AI) systems are becoming increasingly autonomous and will soon be able to make decisions on their own about what to do, AI researchers have started to talk about the need to align AI with human values. The AI ‘value alignment problem’ faces two kinds of challenges, a technical and a normative one, which are interrelated. The technical challenge deals with the question of how to encode human values in artificial intelligence. The normative challenge is associated with two questions: which values, and whose values, should artificial intelligence align with? My concern is that AI developers underestimate the difficulty of answering the normative question. They hope that we can easily identify the purposes we really desire and then focus on designing for those objectives. But how are we to decide which objectives or values to induce in AI, given that there is a plurality of values and moral principles and that our everyday life is full of moral disagreements? In my paper I will show that although it is not realistic to reach agreement on what we humans really want, since people value different things and seek different ends, it may be possible to agree on what we do not want to happen, considering the possibility that intelligence equal to or even exceeding our own can be created. I will argue for pluralism (and not for relativism!), which is compatible with objectivism. Even though there is no uniquely best solution to every moral problem, it is still possible to identify which answers are wrong. And this is where we should begin the value alignment of AI.

