TGHop: an explainable, efficient, and lightweight method for texture generation

Author(s):  
Xuejing Lei ◽  
Ganning Zhao ◽  
Kaitai Zhang ◽  
C.-C. Jay Kuo

An explainable, efficient, and lightweight method for texture generation, called TGHop (an acronym of Texture Generation PixelHop), is proposed in this work. Although visually pleasing texture can be synthesized by deep neural networks, the associated models are large in size, difficult to explain in theory, and computationally expensive to train. In contrast, TGHop is small in model size, mathematically transparent, efficient in training and inference, and able to generate high-quality texture. Given an exemplary texture, TGHop first crops many sample patches out of it to form a collection called the source. Then, it analyzes the pixel statistics of samples from the source and obtains a sequence of fine-to-coarse subspaces for these patches using the PixelHop++ framework. To generate texture patches, TGHop begins with the coarsest subspace, called the core, and generates samples in each subspace by following the distribution of real samples. Finally, texture patches are stitched together to form texture images of a large size. Experimental results demonstrate that TGHop can generate texture images of superior quality with a small model size and at a fast speed.
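
As a loose illustration of the patch-based workflow described in this abstract, the minimal Python sketch below shows only the bookkeeping ends of the pipeline: cropping sample patches from an exemplar to form the source, and stitching generated patches into a larger image. The PixelHop++ subspace analysis and core-to-fine generation are replaced by a placeholder that merely resamples source patches, so this is not the authors' method; all function names and sizes are illustrative assumptions.

```python
# Sketch of the TGHop-style patch bookkeeping described above (NOT the authors'
# implementation): crop a source of patches from an exemplar, "generate" patches
# (placeholder: resample the source instead of PixelHop++ subspace generation),
# and stitch them into a larger texture image.
import numpy as np

def crop_source_patches(exemplar, patch_size=32, n_patches=500, rng=None):
    """Crop random patches from an exemplar texture to form the 'source'."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = exemplar.shape[:2]
    ys = rng.integers(0, h - patch_size, n_patches)
    xs = rng.integers(0, w - patch_size, n_patches)
    return np.stack([exemplar[y:y + patch_size, x:x + patch_size]
                     for y, x in zip(ys, xs)])

def generate_patch(source, rng=None):
    """Placeholder for TGHop's core-to-fine generation: draw a source patch."""
    rng = np.random.default_rng() if rng is None else rng
    return source[rng.integers(0, len(source))]

def stitch(patch_grid):
    """Tile a 2D grid of patches into one large texture image (no blending)."""
    rows = [np.concatenate(row, axis=1) for row in patch_grid]
    return np.concatenate(rows, axis=0)

if __name__ == "__main__":
    exemplar = np.random.rand(128, 128, 3)             # stand-in for a real texture
    source = crop_source_patches(exemplar)
    grid = [[generate_patch(source) for _ in range(4)] for _ in range(4)]
    print(stitch(grid).shape)                          # (128, 128, 3)
```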

1996 ◽  
Vol 175 ◽  
pp. 71-72
Author(s):  
F. Mantovani ◽  
W. Junor ◽  
M. Bondi ◽  
L. Padrielli ◽  
W. Cotton ◽  
...  

Recently we focussed our attention on a sample of Compact Steep-spectrum Sources (CSSs) selected because of the large bent radio jets seen in the inner region of emission. The largest distortions are often seen in sources dominated by jets, and there are suggestions that this might to some extent be due to projection effects. However, superluminal motion is rare in CSSs. The only case we know of so far is 3C147 (Alef et al. 1990), with a mildly superluminal speed of v/c ≃ 1.3. Moreover, the core fractional luminosity in CSSs is ≃ 3% and ≤ 0.4% for quasars and radio galaxies, respectively. Similar values are found for large-size radio sources, i.e., both boosting and orientation in the sky are similar for the two classes of objects. An alternative possibility is that these bent-jet sources might also be brightened by interactions with the ambient media. There are clear indications that intrinsic distortions due to interactions with a dense, inhomogeneous gaseous environment play an important role. Observational support comes from the large RMs found in CSSs (Taylor et al. 1992; Mantovani et al. 1994; Junor et al., these proc.), often associated with strong depolarization (Garrington & Akujor, t.p.). CSSs also have very luminous Narrow Line Region emission, with exceptional velocity structure (Gelderman, t.p.).


Crystals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 235
Author(s):  
Shuqi Zhao ◽  
Tongtong Yu ◽  
Ziming Wang ◽  
Shilei Wang ◽  
Limei Wei ◽  
...  

Two-dimensional (2D) materials, driven by their unique electronic and optoelectronic properties, have opened up possibilities for various applications. Large, high-quality single crystals are essential to fabricate high-performance 2D devices for practical applications. Herein, IV-V 2D GeP single crystals of high quality and large size (20 × 15 × 5 mm³) were successfully grown by the Bi flux growth method. The crystalline quality of GeP was confirmed by high-resolution X-ray diffraction (HRXRD), Laue diffraction, electron probe microanalysis (EPMA), and Raman spectroscopy. Additionally, the intrinsic anisotropic optical properties were investigated in detail by angle-resolved polarized Raman spectroscopy (ARPRS) and transmission spectra. Furthermore, we fabricated high-performance photodetectors based on GeP, exhibiting a relatively large photocurrent of over 3 mA. More generally, our results should significantly advance the use of GeP crystals in a wide range of optoelectronic applications.


BMC Genomics ◽  
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Seth Commichaux ◽  
Kiran Javkar ◽  
Padmini Ramachandran ◽  
Niranjan Nagarajan ◽  
Denis Bertrand ◽  
...  

Abstract
Background
Whole genome sequencing of cultured pathogens is the state-of-the-art public health response for the bioinformatic source tracking of illness outbreaks. Quasimetagenomics can substantially reduce the amount of culturing needed before a high-quality genome can be recovered. Highly accurate short-read data are analyzed for single nucleotide polymorphisms and multi-locus sequence types to differentiate strains, but short reads cannot span many genomic repeats, resulting in highly fragmented assemblies. Long reads can span repeats, resulting in much more contiguous assemblies, but have lower accuracy than short reads.
Results
We evaluated the accuracy of Listeria monocytogenes assemblies from enrichments (quasimetagenomes) of naturally contaminated ice cream using long-read (Oxford Nanopore) and short-read (Illumina) sequencing data. The accuracy of ten assembly approaches, over a range of sequencing depths, was evaluated by comparing the sequence similarity of genes in the assemblies to a complete reference genome. Long-read assemblies reconstructed a circularized genome as well as a 71 kbp plasmid after 24 h of enrichment; however, high error rates prevented high-fidelity gene assembly, even at 150× depth of coverage. Short-read assemblies accurately reconstructed the core genes after 28 h of enrichment but produced highly fragmented genomes. Hybrid approaches demonstrated promising results but had biases based upon the initial assembly strategy. Short-read assemblies scaffolded with long reads accurately assembled the core genes after just 24 h of enrichment, but were highly fragmented. Long-read assemblies polished with short reads reconstructed a circularized genome and plasmid and assembled all the genes after 24 h of enrichment, but with less fidelity for the core genes than the short-read assemblies.
Conclusion
The integration of long- and short-read sequencing of quasimetagenomes expedited the reconstruction of a high-quality pathogen genome compared to either platform alone. A new and more complete level of information about genome structure, gene order, and mobile elements can be added to the public health response by incorporating long-read analyses with the standard short-read WGS outbreak response.
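
The evaluation described here scores genes from each assembly against a complete reference genome. The minimal sketch below uses Python's difflib ratio as a crude stand-in for alignment-based sequence identity (which a real evaluation such as this one would require); the gene names and sequences are hypothetical, and this is not the study's pipeline.

```python
# Crude per-gene similarity between assembled and reference gene sequences.
# difflib's ratio is only a rough proxy for alignment-based identity; it is
# used here to keep the sketch self-contained. All data below are hypothetical.
from difflib import SequenceMatcher

def crude_identity(assembled, reference):
    """Approximate similarity (0..1) between an assembled gene and its reference."""
    return SequenceMatcher(None, assembled, reference).ratio()

if __name__ == "__main__":
    reference_genes = {"geneA": "ATGACCGTTAAGGCTTGA", "geneB": "ATGCCCGGGTTTAAATAG"}
    assembled_genes = {"geneA": "ATGACCGTTAAAGCTTGA", "geneB": "ATGCCCGGGTTTAAATAG"}
    for name, ref_seq in reference_genes.items():
        asm_seq = assembled_genes.get(name, "")
        print(name, round(crude_identity(asm_seq, ref_seq), 3))
```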


Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibits relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved by enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods to solve the considered problems.
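
To make the notion of a latent geometry graph concrete, the sketch below (an assumed reading of the abstract, not the authors' code) builds a cosine-similarity graph over a batch of intermediate representations and measures how much the geometry changes between two consecutive latent spaces, which is the quantity one would penalize to enforce smooth variation.

```python
# Build cosine-similarity "latent geometry graphs" for a batch of intermediate
# representations and compare the graphs of two consecutive latent spaces.
# Illustrative sketch; array shapes and the mismatch measure are assumptions.
import numpy as np

def similarity_graph(feats):
    """Cosine-similarity adjacency matrix for a (batch, dim) array of representations."""
    normed = feats / (np.linalg.norm(feats, axis=1, keepdims=True) + 1e-12)
    return normed @ normed.T                     # (batch, batch) graph

def geometry_mismatch(feats_a, feats_b):
    """Frobenius distance between the graphs of two consecutive latent spaces."""
    return np.linalg.norm(similarity_graph(feats_a) - similarity_graph(feats_b))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    layer_k = rng.normal(size=(8, 64))                    # batch of 8 at layer k
    layer_k1 = layer_k + 0.1 * rng.normal(size=(8, 64))   # slightly perturbed layer k+1
    print(geometry_mismatch(layer_k, layer_k1))           # small value: smooth geometry
```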


Scanning ◽  
2017 ◽  
Vol 2017 ◽  
pp. 1-7
Author(s):  
Xu Chen ◽  
Tengfei Guo ◽  
Yubin Hou ◽  
Jing Zhang ◽  
Wenjie Meng ◽  
...  

A new scan-head structure for the scanning tunneling microscope (STM) is proposed, featuring high scan precision and rigidity. The core structure consists of a quadrant-type piezoelectric tube scanner (for XY scans) coaxially housed in a piezoelectric tube with single inner and outer electrodes (for the Z scan). They are fixed at one end (called the common end). A hollow tantalum shaft is coaxially housed in the XY-scan tube, and the two are mutually fixed at both ends. When the XY scanner scans, its free end drives the shaft, and the tip, which is coaxially inserted in the shaft at the common end, scans a smaller area provided the tip protrudes only a short distance from the common end. The decoupled XY and Z scans are desirable for reduced image distortion, and the mechanically reduced scan range has the advantage of lessening the impact of background electronic noise on the scanner and enhancing tip positioning precision. High-quality atomic-resolution images are also shown.


Author(s):  
Yang Wang ◽  
Weihua Wang ◽  
Shilin Yang ◽  
Jiaqi Zhu

Diamond is a material with excellent properties that has attracted the attention of researchers for decades. Pt (111), owing to its catalytic activity in diamond synthesis, is regarded as a candidate substrate for diamond heteroepitaxy, which can enhance nucleation density. Its molten surface at diamond growth temperature can also improve the mobility and aggregation capability of primitive nuclei. Generally, (100)-oriented growth is preferred for achieving high-quality, large-size diamond, since the formation of defects and twins is suppressed. First-principles calculations and experiments were carried out to study the transformation of orientation. The transformation from {111}- to {100}-oriented diamond was observed on the Pt (111) substrate, and it can be promoted by increasing the carbon source concentration and substrate temperature. The process is energetically favorable, which may provide a route towards large-scale (100) diamond films.


2020 ◽  
Author(s):  
Wade R. Roberts ◽  
Kala M. Downey ◽  
Elizabeth C. Ruck ◽  
Jesse C. Traller ◽  
Andrew J. Alverson

Abstract
The diatom Cyclotella cryptica is a well-established experimental model for physiological studies and, more recently, biotechnology applications of diatoms. To further facilitate its use as a model diatom species, we report an improved reference genome assembly and annotation for C. cryptica strain CCMP332. We used a combination of long- and short-read sequencing to assemble a high-quality and contaminant-free genome. The genome is 171 Mb in size and consists of 662 scaffolds with a scaffold N50 of 494 kb. This represents a 176-fold decrease in scaffold number and a 41-fold increase in scaffold N50 compared to the previous assembly. The genome contains 21,250 predicted genes, 75% of which were assigned putative functions. Repetitive DNA comprises 59% of the genome, and an improved classification of repetitive elements indicated that a historically steady accumulation of transposable elements has contributed to the relatively large size of the C. cryptica genome. The high-quality C. cryptica genome will serve as a valuable reference for ecological, genetic, and biotechnology studies of diatoms.
Data available from NCBI BioProjects PRJNA628076 and PRJNA589195.
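
The contiguity statistics reported above (scaffold count, total size, scaffold N50) can be recomputed from any assembly FASTA. Below is a minimal sketch, assuming a hypothetical file name and standard FASTA formatting; it is not the assembly or annotation pipeline used in the study.

```python
# Compute scaffold count, total assembly length, and N50 from a FASTA file.
# "assembly.fasta" is a hypothetical path used only for illustration.
def fasta_lengths(path):
    """Yield sequence lengths from a (possibly multi-record) FASTA file."""
    length = 0
    with open(path) as fh:
        for line in fh:
            if line.startswith(">"):
                if length:
                    yield length
                length = 0
            else:
                length += len(line.strip())
    if length:
        yield length

def n50(lengths):
    """Length L such that scaffolds of length >= L cover at least half the assembly."""
    lengths = sorted(lengths, reverse=True)
    half, running = sum(lengths) / 2, 0
    for L in lengths:
        running += L
        if running >= half:
            return L
    return 0

if __name__ == "__main__":
    lengths = list(fasta_lengths("assembly.fasta"))
    print("scaffolds:", len(lengths), "total bp:", sum(lengths), "N50:", n50(lengths))
```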


2021 ◽  
Vol 7 ◽  
pp. 1
Author(s):  
Bertrand Mercier ◽  
Di Yang ◽  
Ziyue Zhuang ◽  
Jiajie Liang

We show, with simplified numerical models, that for the kind of RBMK reactor operated at Chernobyl: the core was unstable due to its large size and its weak power counter-reaction coefficient, so that the reactor power was not easy to control even with an automatic system, and xenon oscillations could easily be excited. When there was xenon poisoning in the upper half of the core, the safety rods were designed in such a way that, at least initially, they increased (rather than decreased) the core reactivity. This reactivity increase was sufficient to produce a very high pressure rise in a significant amount of liquid water in the fuel channels, inducing a strong propagating shock wave that caused half the pressure tubes to fail at their junction with the drum separators. The depressurization phase (flash evaporation) following this failure produced, after one second, a significant decrease of the water density in half the pressure tubes and then a strong reactivity accident due to the positive void reactivity coefficient. We evaluate the fission energy released by the accident.
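
For readers unfamiliar with xenon poisoning, the standard one-group iodine-xenon balance equations below give the mechanism the abstract refers to; this is the textbook model, not the authors' simplified numerical model of the RBMK core.

```latex
% Standard iodine-135 / xenon-135 balance equations (textbook model):
\frac{dI}{dt} = \gamma_I \Sigma_f \phi - \lambda_I I
\qquad
\frac{dX}{dt} = \gamma_X \Sigma_f \phi + \lambda_I I - \lambda_X X - \sigma_X \phi X
% I, X: iodine-135 and xenon-135 concentrations; gamma: fission yields;
% lambda: decay constants; Sigma_f: macroscopic fission cross section;
% phi: neutron flux; sigma_X: xenon-135 microscopic absorption cross section.
% Xenon builds up where the flux phi drops (less burn-off), which drives the
% spatial xenon oscillations and upper-core poisoning discussed above.
```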


1999 ◽  
Vol 121 (10) ◽  
pp. 70-72
Author(s):  
Hutchinson Harry

This article presents a study showing that beta testing shapes software to the users' hands so the product will fit the marketplace. MoldWizard is intended to reduce the time necessary to design complex mold tooling, such as the mold used to manufacture the plastic housings for high-quality nail guns. Depending on the complexity of a mold and its eventual use, the design process can require as many as 50 different steps, including tasks such as importing and cleaning up the CAD model of the part, adjusting its size for shrinkage, separating the core and cavity, generating mold bases, and adding sliders, inserts, and other standard components. Minco Tool & Mold uses Unigraphics to design molds like the one shown in the article for an automobile hubcap. Minco participated in the MoldWizard beta test program. A news group at the website let test users communicate directly with each other. When beta testers had questions about how to use the program, they posted them in the news group and other testers would respond.


2016 ◽  
Vol 37 (8) ◽  
pp. 984-989
Author(s):  
巩 哲 GONG Zhe ◽  
何大伟 HE Da-wei ◽  
王永生 WANG Yong-sheng ◽  
许海腾 XU Hai-teng ◽  
董艳芳 DONG Yan-fang
