Relationship Between Multi-Factors and Short-Term Changes in Fishery Resources

2021 ◽  
Vol 8 ◽  
Author(s):  
Mingshuai Sun ◽  
Xianyong Zhao ◽  
Yancong Cai ◽  
Kui Zhang ◽  
Zuozhi Chen

The objective of this research is to explore the relationships between various multidimensional factor groups and the density of fishery resources in offshore-water ecosystems, and to expand the application of deep machine learning algorithms in this field. Using the XGBoost and random forest algorithms, we first ranked the importance of the time, space, acoustic-technology, and abiotic factors with respect to the acoustic density of offshore fishery resources in the South China Sea. Based on these rankings, the multiple factors and the acoustic density were sliced into data subsets, and the relationship between the multidimensional factor group and the density of marine living resources in the offshore-water ecosystem was compared and analyzed in detail. The importance ranking shows that the concentration of active silicate at 20 m depth (Si20), water depth, moon-phase perfection, and the number of pulses per unit distance (ping) are the first-order factors, with a cumulative contribution rate of 50%. The comparative analysis shows that complex relationships exist between the multidimensional factor group and the density of marine biological resources: within a certain range, one factor strengthens the influence of another. When Si20 is in the range 0–0.1 and moon-phase perfection is in the range 0.3–1, both Si20 and moon-phase perfection strengthen the positive influence of water depth on the density of fishery biological resources.
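As a rough illustration of the importance-ranking step described above, the sketch below fits both a random forest and an XGBoost regressor and prints their feature importances. The factor names, synthetic data and hyperparameters are placeholder assumptions, not the authors' dataset or settings.

```python
# Hedged sketch: ranking factor importance for acoustic fishery density with
# random forest and XGBoost. Columns and data are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
factors = ["si20", "water_depth", "moon_phase_perfection", "pings_per_unit_distance"]
X = pd.DataFrame(rng.random((500, len(factors))), columns=factors)
# Placeholder response: acoustic density of fishery resources.
y = 2.0 * X["water_depth"] + X["si20"] * X["moon_phase_perfection"] + rng.normal(0, 0.1, 500)

for name, model in [("random_forest", RandomForestRegressor(n_estimators=300, random_state=0)),
                    ("xgboost", XGBRegressor(n_estimators=300, max_depth=4, random_state=0))]:
    model.fit(X, y)
    ranking = sorted(zip(factors, model.feature_importances_), key=lambda t: -t[1])
    print(name, ranking)
```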

Energies ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 2486
Author(s):  
Vanesa Mateo-Pérez ◽  
Marina Corral-Bobadilla ◽  
Francisco Ortega-Fernández ◽  
Vicente Rodríguez-Montequín

Periodic dredging is one of the fundamental maintenance tasks in ports. It is necessary to guarantee a minimum draft so that ships can access ports safely. Bathymetric surveying is the instrument that determines the need for dredging and permits an analysis of the behavior of the port bottom over time, in order to maintain adequate water depth. Satellite data are used increasingly to predict environmental parameters. Based on satellite data and different machine learning techniques, this study estimates the seabed depth in ports, taking into account that port areas are strongly anthropized. The algorithms used were Support Vector Machine (SVM), Random Forest (RF) and Multivariate Adaptive Regression Splines (MARS). The study was carried out in the ports of Candás and Luarca in the Principality of Asturias. To validate the results, data were acquired in situ with a single-beam sonar. The results show that this type of methodology can be used to estimate coastal bathymetry. However, when deciding which system is best, priority was given to simplicity and robustness. The SVM and RF algorithms outperform MARS: RF performs better in Candás, with a mean absolute error (MAE) of 0.27 cm, whereas SVM performs better in Luarca, with a mean absolute error of 0.37 cm. This approach is suggested as a simpler and more cost-effective coarse-resolution alternative to labor-intensive and polluting single-beam sonar for estimating the depth of turbid water in ports.
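A minimal sketch of the comparison pipeline described in this abstract, assuming a generic supervised-regression setup: in-situ depths are regressed on satellite band values with SVM and random forest and compared by mean absolute error. The band values and depths are synthetic placeholders, and MARS is omitted because it requires a separate package (e.g. py-earth).

```python
# Hedged sketch: compare SVM and random forest for satellite-derived bathymetry.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.random((400, 4))                                           # placeholder: 4 satellite band reflectances
depth = 5 + 10 * X[:, 0] - 3 * X[:, 1] + rng.normal(0, 0.3, 400)  # placeholder in-situ depths [m]

X_tr, X_te, y_tr, y_te = train_test_split(X, depth, random_state=0)
for name, model in [("SVM", SVR(C=10.0, epsilon=0.05)),
                    ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "MAE:", mean_absolute_error(y_te, model.predict(X_te)))
```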


2020 ◽  
Vol 61 (5) ◽  
pp. 77-87
Author(s):  
Thang Trong Dam ◽  
Lam Tung Vu ◽  

In blasting in soil in general, and in a clay medium under water in particular, the typical measure of the destructive effect of the explosion is the splashed funnel, which depends on the water depth, the depth at which the explosives are buried in the clay medium, and the explosive mass. This relation is multidimensional and multivariable, and traditional integration of the empirical data is limited in its ability to express a general law over the entire domain of the relation. Therefore, based on the empirical results collected in a previous study, this paper focuses on using a machine learning algorithm to build a regression model that captures the general empirical law for the radius of the splashed funnel in a clay medium under water as a function of the water depth, the depth of the buried explosives in the clay medium, and the radius of the explosive charges. The efficiency of the model is evaluated with the correlation coefficient R² between the calculated funnel values and the real values measured in experiments. The resulting model reaches high accuracy and can be applied in practice.
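A minimal sketch of the regression step, under the assumption of a gradient-boosting regressor (the abstract does not name the specific learner): the splashed-funnel radius is predicted from water depth, burial depth and charge radius and scored with R². All data below are synthetic placeholders for the empirical results.

```python
# Hedged sketch: regression model for splashed-funnel radius, scored with R^2.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 300
water_depth = rng.uniform(0.5, 5.0, n)     # [m]
burial_depth = rng.uniform(0.1, 2.0, n)    # [m]
charge_radius = rng.uniform(0.02, 0.2, n)  # [m]
X = np.column_stack([water_depth, burial_depth, charge_radius])
# Placeholder "empirical" funnel radius.
radius = 3 * charge_radius + 0.4 * burial_depth - 0.1 * water_depth + rng.normal(0, 0.02, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, radius, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("R2:", r2_score(y_te, model.predict(X_te)))
```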


2018 ◽  
Vol 71 ◽  
pp. 00009 ◽  
Author(s):  
Maciej Bodlak ◽  
Jan Kudełko ◽  
Andrzej Zibrow

In order to develop a method for forecasting the costs generated by rock and gas outbursts in the hard coal deposit "Nowa Ruda Pole Piast Wacław-Lech", the analyses presented in this paper focused on the key factors influencing the discussed phenomenon. Part of this research consisted in developing a model for predicting the extent of rock and gas outbursts with regard to the most probable mass of rock [Mg] and volume of gas [m³] released in an outburst, and to the length of collapsed and/or damaged workings [running meters, rm]. For this purpose, a machine learning method was used, namely the random forest method together with the XGBoost machine learning algorithm. After performing the machine learning process with the cross-validation technique, using five iterations, the lowest possible values of the root-mean-square prediction error (RMSE) were achieved. The obtained model and the program, written in the programming language R, were verified on the basis of the RMSE values, prediction matching graphs, out-of-sample analysis, the importance ranking of input parameters, and the sensitivity of the model in forecasts for hypothetical conditions.
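A rough Python transcription of the training step (the authors' program was written in R): an XGBoost regressor evaluated with cross-validation on RMSE. The features, target and five-fold setup are placeholder assumptions standing in for the outburst records.

```python
# Hedged sketch: XGBoost regressor with 5-fold cross-validation, scored by RMSE.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score
from xgboost import XGBRegressor

rng = np.random.default_rng(3)
X = rng.random((200, 6))                                          # placeholder geological/mining factors
y = 50 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(0, 2, 200)     # e.g. most probable rock mass [Mg]

cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(XGBRegressor(n_estimators=400, max_depth=4, random_state=0),
                         X, y, cv=cv, scoring="neg_root_mean_squared_error")
print("RMSE per fold:", -scores)
```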


Author(s):  
Minakhi Pujari ◽  
Joachim Frank

In single-particle analysis of macromolecule images with the electron microscope, variations of projections are often observed that can be attributed to the changes of the particle’s orientation on the specimen grid (“rocking”). In the multivariate statistical analysis (MSA) of such projections, a single factor is often found that expresses a large portion of these variations. Successful angle calibration of this “rocking factor” would mean that correct angles can be assigned to a large number of particles, thus facilitating three-dimensional reconstruction. In a study to explore angle calibration in factor space, we used 40S ribosomal subunits, which are known to rock around an axis approximately coincident with their long axis. We analyzed micrographs of a field of these particles, taken with 20° tilt and without tilt, using the standard methods of alignment and MSA. The specimen was prepared with the double carbon-layer method, using uranyl acetate for negative staining. In the MSA analysis, the untilted-particle projections were used as active, the tilted-particle projections as inactive objects. Upon tilting, those particles whose rocking axes are parallel to the tilt axis will change their appearance in the same way as under the influence of rocking. Therefore, each vector, in factor space, joining a tilted and untilted projection of the same particle can be regarded as a local 20-degree calibration bar.
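A loose sketch of the factor-space bookkeeping described above, with PCA as a stand-in for MSA: the factors are computed from the untilted ("active") projections only, the tilted ("inactive") projections are projected into the same space afterwards, and the vector between each pair serves as a local 20-degree calibration bar. The image data are synthetic placeholders.

```python
# Hedged sketch: PCA as a stand-in for MSA; active vs. inactive projections.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
n_particles, n_pixels = 100, 64 * 64
untilted = rng.random((n_particles, n_pixels))                  # "active" objects
tilted = untilted + 0.05 * rng.random((n_particles, n_pixels))  # same particles after 20° tilt

pca = PCA(n_components=5).fit(untilted)   # factors defined by the untilted projections only
u_coords = pca.transform(untilted)
t_coords = pca.transform(tilted)          # inactive objects projected into the same space

calibration_vectors = t_coords - u_coords  # one 20-degree calibration bar per particle
print(calibration_vectors.shape)
```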


2018 ◽  
Author(s):  
C.H.B. van Niftrik ◽  
F. van der Wouden ◽  
V. Staartjes ◽  
J. Fierstra ◽  
M. Stienen ◽  
...  

2020 ◽  
pp. 1-12
Author(s):  
Li Dongmei

English text-to-speech conversion is a key topic in modern computer technology research. Its difficulty lies in the large errors that arise during text-to-speech feature recognition, which make it hard to apply an English text-to-speech conversion algorithm in a working system. In order to improve the efficiency of English text-to-speech conversion, this article builds on a machine learning approach: after the original voice waveform is labeled with pitch marks, the rhythm is modified through PSOLA, and the C4.5 algorithm is used to train a decision tree for judging the pronunciation of polyphones. In order to evaluate the performance of the pronunciation discrimination method based on part-of-speech rules and of HMM-based prosody hierarchy prediction in speech synthesis systems, this study constructed a system model. In addition, waveform concatenation and PSOLA are used to synthesize the sound. For words whose main stress cannot be discriminated from morphological structure, labels can be learned with machine learning methods. Finally, this study evaluates and analyzes the performance of the algorithm through controlled experiments. The results show that the algorithm proposed in this paper performs well and has practical value.
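A minimal sketch of the polyphone-disambiguation step, with scikit-learn's entropy-criterion decision tree standing in for C4.5 (scikit-learn implements CART, not C4.5 proper). The context features and labels below are invented for illustration.

```python
# Hedged sketch: decision tree for choosing the pronunciation of a polyphone.
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Toy context features for the polyphone "read": previous POS tag and tense marker.
contexts = [{"prev_pos": "PRP", "tense": "past"},
            {"prev_pos": "PRP", "tense": "present"},
            {"prev_pos": "MD", "tense": "present"},
            {"prev_pos": "VBD", "tense": "past"}]
pronunciations = ["red", "reed", "reed", "red"]

model = make_pipeline(DictVectorizer(),
                      DecisionTreeClassifier(criterion="entropy", random_state=0))
model.fit(contexts, pronunciations)
print(model.predict([{"prev_pos": "PRP", "tense": "past"}]))
```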


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Lin Lin ◽  
Xiufang Liang

An online English teaching system places certain requirements on its intelligent scoring component, and the most difficult stage of intelligent scoring in English tests is scoring the English composition with an intelligent model. In order to improve the intelligence of English composition scoring, this study builds on machine learning algorithms and combines them with intelligent image recognition technology, proposing an improved MSER-based character candidate region extraction algorithm and a convolutional neural network-based pseudo-character region filtering algorithm. In addition, in order to verify whether the proposed algorithm model meets the requirements of the task, that is, to verify its feasibility, the performance of the model is analyzed through designed experiments. Moreover, the basic conditions for composition scoring are fed into the model as constraints. The results show that the proposed algorithm has practical value and can be applied to English assessment systems and online homework evaluation systems.
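A rough sketch of the candidate-extraction stage only, using OpenCV's stock MSER detector rather than the improved variant proposed in the paper; the CNN-based pseudo-character filtering is not reproduced. The input image is generated synthetically as a placeholder for a scanned composition page.

```python
# Hedged sketch: MSER-based character candidate region extraction with OpenCV.
import cv2
import numpy as np

# Synthetic stand-in for a scanned composition page (placeholder for real input).
image = np.full((200, 600, 3), 255, dtype=np.uint8)
cv2.putText(image, "machine learning", (20, 100), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 0, 0), 2)
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()                  # stock MSER, not the paper's improved variant
regions, bboxes = mser.detectRegions(gray)

for x, y, w, h in bboxes:                 # draw candidate character boxes
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("candidates.png", image)
```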


Author(s):  
Kunal Parikh ◽  
Tanvi Makadia ◽  
Harshil Patel

Dengue is unquestionably one of the biggest health concerns in India and many other developing countries, and unfortunately many people have lost their lives to it. Every year, approximately 390 million dengue infections occur around the world, among which about 500,000 people become seriously ill and roughly 25,000 die. Many factors influence dengue, such as temperature, humidity, precipitation, inadequate public health infrastructure, and many others. In this paper, we propose a method to perform predictive analytics on a dengue dataset using the k-nearest neighbors (KNN) machine-learning algorithm. This analysis would help predict future cases and could save many lives.
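A minimal sketch of the proposed approach, assuming a standard k-nearest-neighbors classifier over weather-style features; the feature names, labels and data are illustrative placeholders, not the paper's dataset.

```python
# Hedged sketch: KNN classifier for dengue-outbreak prediction on toy weather data.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
X = np.column_stack([rng.uniform(20, 38, 600),    # temperature [°C]
                     rng.uniform(40, 100, 600),   # relative humidity [%]
                     rng.uniform(0, 300, 600)])   # precipitation [mm]
y = (X[:, 1] > 75) & (X[:, 2] > 100)              # placeholder "outbreak" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```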

