Exploration on Machine Learning Layout Generation of Chinese Private Garden in Southern Yangtze

2021
pp. 35-44
Author(s):
Yubo Liu
Chenrong Fang
Zhe Yang
Xuexin Wang
Zhuohong Zhou
...  

Abstract: Extensive recent research has shown machine learning to be feasible and useful in the architectural field, yet its potential is far from fully tapped. Previous studies show that training a GAN on labelled plans can enable a computer to grasp the interrelationships among spatial elements and the logical relationship between spatial elements and the site boundary. This study takes as its learning object the layout of private gardens in the southern Yangtze region, which is considerably more complex. Chinese scholars usually analyse private garden layouts on the basis of observation and experience. In this paper, building on the Pix2Pix model, we enable a computer to generate a private garden layout plan for given site conditions by learning from classic examples of traditional Chinese private gardens. In the experiments, taking the Lingering Garden as an example, we iteratively adjust the labelling method to improve the learning effect. The final trained model can quickly generate private garden layouts and, together with a corpus of private garden elements, assist designers in completing scheme design. In addition, the process of training the GAN enables us to discover and verify several private garden layout rules that have previously received little attention.
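As a rough illustration of the approach described above, the sketch below shows a single Pix2Pix-style training step in PyTorch: a conditional generator maps a labelled site-condition image to a layout plan, and a patch-wise discriminator judges (site, layout) pairs, with an L1 term keeping the generated plan close to the reference case. The tiny networks, tensor sizes, and loss weight are illustrative placeholders, not the authors' actual model.

```python
# Minimal Pix2Pix-style training step (sketch, not the authors' model).
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Maps a labelled site-condition image to a layout image (stand-in for a U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class TinyPatchDiscriminator(nn.Module):
    """Scores (site, layout) pairs patch-wise, as in PatchGAN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=1, padding=1),
        )
    def forward(self, site, layout):
        return self.net(torch.cat([site, layout], dim=1))

G, D = TinyGenerator(), TinyPatchDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

# Stand-in batch: labelled site conditions and the corresponding layout plans.
site = torch.rand(4, 3, 64, 64)
layout = torch.rand(4, 3, 64, 64)

# Discriminator step: real pairs vs. generated pairs.
fake = G(site).detach()
pred_real, pred_fake = D(site, layout), D(site, fake)
d_loss = bce(pred_real, torch.ones_like(pred_real)) + bce(pred_fake, torch.zeros_like(pred_fake))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the labelled plan (L1).
fake = G(site)
pred_fake = D(site, fake)
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(fake, layout)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The L1 term is what ties the generated layout to the labelled training case; the adversarial term mainly sharpens the output.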

Author(s):  
Toivo Ylinampa
Hannu Saarenmaa

New innovations are needed to speed up the digitisation of insect collections. More than half of all specimens in scientific collections are pinned insects; in Europe this means 500-1,000 million specimens. Today's fastest mass-digitisation (i.e., imaging) systems for pinned insects achieve roughly 70 specimens per hour and 500 per day with one operator (Tegelberg et al. 2014, Tegelberg et al. 2017). This is in contrast to the 5,000 per day achieved by state-of-the-art mass-digitisation systems for herbarium sheets (Oever and Gofferje 2012). Imaging pinned insects is slow because they are essentially 3D objects. Although butterflies, moths, dragonflies and similar large-winged insects can be prepared (spread) as 2D objects, the labels pinned under the specimen make even these samples 3D. In imaging, the labels are often removed manually, which slows down the process. If manual handling of the labels can be skipped, the imaging speed can easily be multiplied.

ENTODIG-3D (Fig. 1) is an automated camera system that photographs insect collection boxes (units and drawers) and digitises them, minimising time-consuming manual handling of specimens. "Units" are small boxes or trays contained in the drawers of collection cabinets and are used in most major insect collections. A camera is mounted on motorised rails and moves in two dimensions over a unit or drawer; its movement is guided by a machine-learning object-detection program. QR codes are printed and placed underneath the unit or drawer and may contain additional information about each specimen, for example where it originated in the collection. The object-detection program also detects each specimen and stores its coordinates. The camera mount rotates and tilts, so photographs can be taken from all angles and positions.

The pictures are transferred to a computer, which builds a 3D model by photogrammetry, from which the label text beneath the specimen can be read. This approach requires heavy computation for segmenting the top images, creating a 3D model of the unit, and extracting label images for many specimens. First, a sparse point cloud is calculated; second, a dense point cloud; finally, a textured mesh. With machine-learning object detection, the top layer, which consists of the insects, can be removed, leaving the bottom layer with the labels visible for later processing by OCR (optical character recognition).

This is a new approach to digitising pinned insects in collections. The physical setup is inexpensive, so many systems could be installed in parallel to work overnight and produce images of tens of drawers. The setup is not physically demanding for the specimens, which can be left untouched in the unit or drawer. A digital object is created consisting of the label text, the unit or drawer QR code, the specimen coordinates in a uniquely identified drawer, and a top-view photo of the specimen. The drawback of this approach is the heavy computation needed to create the 3D models. ENTODIG-3D can currently digitise one sample in five minutes, almost without manual work; a potentially sustainable rate is therefore approximately one hundred thousand samples per year. This rate is similar to that of the current insect digitisation system in Helsinki (Tegelberg et al. 2017), but without the need for manual handling of individual specimens. By adding more computing power, the rate can be increased in a linear fashion.
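A quick back-of-the-envelope check of the rates quoted above, as a minimal Python sketch. The per-sample time and the manual baseline come from the text; the assumption that a system runs unattended 24 hours a day, every day, is mine.

```python
# Sanity check of the digitisation throughput figures quoted in the abstract.
minutes_per_sample = 5              # ENTODIG-3D, current performance (from the text)
hours_per_day = 24                  # assumption: systems run unattended around the clock

samples_per_day = hours_per_day * 60 / minutes_per_sample
samples_per_year = samples_per_day * 365
print(f"one ENTODIG-3D unit: {samples_per_day:.0f}/day, ~{samples_per_year:,.0f}/year")

# Manual imaging baseline for pinned insects (Tegelberg et al. 2017): 70/hour, 500/day per operator.
print("manual baseline: 70/hour, 500/day per operator")

# Because the bottleneck is computing rather than specimen handling, throughput
# scales roughly linearly with the number of parallel camera systems / compute nodes.
for n_systems in (1, 2, 4):
    print(f"{n_systems} system(s): ~{n_systems * samples_per_year:,.0f} samples/year")
```

One sample every five minutes works out to about 288 samples per day and roughly 105,000 per year per system, consistent with the "approximately one hundred thousand samples per year" figure above.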


Author(s):  
Yuzhe Pan
Jin Qian
Yingdong Hu

Abstract: In recent years the mainstream has gradually shifted toward replacing neighborhood-style communities with high-density residences. The original pleasant scale and enclosed residential spaces have been broken up, and traditional neighborhood relations are disappearing. This research uses machine learning to train a model that generates new general layout plans for use in today's residential design. First, to obtain a better generation effect, the study extracts prior information about neighborhood communities in northern China, using roads, buildings and other features as morphological representations. Compared with the pix2pix and pix2pixHD models used in earlier work, GauGAN achieves clearer and more diverse output and fits irregular contours more realistically. A model trained on 167 general layout samples of neighborhood communities in northern China from the 1950s to the 1970s can generate varied general layouts of different shapes and scales. The experiments show that GauGAN is better suited to general layout generation than pix2pix (pix2pixHD), and that distributed training improves the clarity of the generated output and makes later vectorization more convenient.
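For readers unfamiliar with why GauGAN handles irregular site contours better than pix2pix, the sketch below shows the SPADE (spatially-adaptive normalization) block that GauGAN is built around: the semantic layout map (e.g. roads, buildings, site boundary) modulates the generator's features at every resolution instead of being consumed only at the input. The channel counts, label classes, and layer sizes here are illustrative assumptions, not the configuration used in this study.

```python
# Minimal SPADE block sketch (PyTorch); GauGAN stacks blocks like this in its generator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Normalizes features, then modulates them with gamma/beta predicted
    pixel-wise from the semantic layout map, so the site contour and road
    pattern keep steering every layer of the generator."""
    def __init__(self, feat_channels, label_channels, hidden=64):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, 3, padding=1), nn.ReLU())
        self.gamma = nn.Conv2d(hidden, feat_channels, 3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, 3, padding=1)

    def forward(self, feat, segmap):
        # Resize the layout map to the current feature resolution.
        segmap = F.interpolate(segmap, size=feat.shape[2:], mode="nearest")
        h = self.shared(segmap)
        return self.norm(feat) * (1 + self.gamma(h)) + self.beta(h)

# Stand-in tensors: 3 label channels (e.g. road / building / site boundary).
feats = torch.rand(2, 128, 16, 16)
segmap = torch.rand(2, 3, 64, 64)
out = SPADE(128, 3)(feats, segmap)
print(out.shape)  # torch.Size([2, 128, 16, 16])
```

Because the layout map re-enters at every normalization layer, information about the site boundary is not washed out in deep layers, which is one intuition for why GauGAN follows irregular contours more faithfully than pix2pix-style models.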


2020
Vol 156
pp. 113445
Author(s):
Danyu Bai
Hanyu Xue
Ling Wang
Chin-Chia Wu
Win-Chin Lin
...  

Author(s):  
Xinhui Wang
Xinchun Li
Houjin Chen
Yahui Peng
Yanfeng Li

Omega
2010
Vol 38 (1-2)
pp. 3-11
Author(s):
Wen-Chiung Lee
Chin-Chia Wu
Peng-Hsiang Hsu
