2-level hierarchical depression recognition method based on task-stimulated and integrated speech features

2022 ◽  
Vol 72 ◽  
pp. 103287
Author(s):  
Yujuan Xing ◽  
Zhenyu Liu ◽  
Gang Li ◽  
ZhiJie Ding ◽  
Bin Hu
2019 ◽  
Vol 6 (1) ◽  
pp. 1
Author(s):  
Yuli Anwar

Revenue and cost recognition is one of the most important tasks for an entity; both the timing and the recognition method must follow the rules of the Financial Accounting Standards. PT. EMKL Jelutung Subur, located in Pangkalpinang, Bangka Belitung province, recognizes revenue and cost on an accrual basis, and the effect of this choice on company profit can be observed every year. This research gathers data and information for the thesis, broadens the author's knowledge, and compares accepted theory against practice in the field. The results show that PT. EMKL Jelutung Subur has applied the accrual-basis method of revenue and cost recognition consistently, so the reported profit is accurate and reliable enough to support the development of this expedition business into a better company. The accuracy holds because every revenue received and every cost incurred is supported by clear evidence and recorded in the proper period. The evaluation also reveals one gap in the company's revenue and cost recognition: customers who use temporary storage are not charged for it, even though some keep their goods in the warehouse for a long time, which increases the costs of loading, warehouse maintenance, damaged goods, and shrinkage. If the storage service were charged to customers, PT. EMKL Jelutung Subur would earn additional revenue to cover those expenses.


2020 ◽  
Vol 64 (4) ◽  
pp. 40404-1-40404-16
Author(s):  
I.-J. Ding ◽  
C.-M. Ruan

Abstract With rapid developments in internet-of-things techniques, smart service applications such as voice-command-based speech recognition and smart care applications such as context-aware emotion recognition are gaining attention and may become a requirement in smart home or office environments. In such intelligent applications, recognizing the identity of a specific member in an indoor space is a crucial issue. In this study, a combined audio-visual identity recognition approach was developed. Visual information obtained from face detection was incorporated into acoustic Gaussian likelihood calculations for constructing speaker classification trees, significantly enhancing the Gaussian mixture model (GMM)-based speaker recognition method. The study considered the privacy of the monitored person and reduced the degree of surveillance. Moreover, the popular Kinect sensor device, which contains a microphone array, was adopted to acquire acoustic voice data from the person. The proposed audio-visual identity recognition approach deploys only two cameras in a specific indoor space to conveniently perform face detection and quickly determine the total number of people in that space. This head-count information obtained from face detection was used to regulate the design of an accurate GMM speaker classification tree. Two face-detection-regulated speaker classification tree schemes are presented for the GMM speaker recognition method in this study: the binary speaker classification tree (GMM-BT) and the non-binary speaker classification tree (GMM-NBT). The proposed GMM-BT and GMM-NBT methods achieve identity recognition rates of 84.28% and 83%, respectively; both values are higher than the rate of the conventional GMM approach (80.5%). Moreover, because the extremely complex face recognition calculations required in general audio-visual speaker recognition tasks are not needed, the proposed approach is rapid and efficient, adding only 0.051 s to the average recognition time.
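To make the GMM side of this idea concrete, the following Python sketch shows conventional GMM speaker identification with an optional candidate-pruning step that stands in for the face-detection regulation described above. It is a minimal illustration, not the authors' GMM-BT/GMM-NBT implementation; feature extraction (e.g. MFCCs) and the face detector are assumed to exist elsewhere, and names such as enrolled_features and candidate_speakers are hypothetical.

# Minimal GMM speaker-identification sketch (illustrative only).
# Assumes per-speaker MFCC feature matrices of shape (n_frames, n_mfcc).
from sklearn.mixture import GaussianMixture

def train_speaker_models(enrolled_features, n_components=16):
    # Fit one diagonal-covariance GMM per enrolled speaker.
    models = {}
    for speaker, feats in enrolled_features.items():
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="diag",
                              max_iter=200, random_state=0)
        gmm.fit(feats)
        models[speaker] = gmm
    return models

def identify(models, test_feats, candidate_speakers=None):
    # Score the test utterance against each candidate speaker's GMM and
    # return the speaker with the highest average log-likelihood.
    # candidate_speakers can be a subset chosen from visual cues (e.g. the
    # head count obtained by face detection); None scores all speakers,
    # which corresponds to the conventional GMM baseline.
    candidates = candidate_speakers or list(models)
    scores = {spk: models[spk].score(test_feats) for spk in candidates}
    return max(scores, key=scores.get)

In this sketch the visual information only narrows the set of speakers scored, whereas the paper uses it to shape binary or non-binary classification trees over the Gaussian likelihoods; the pruning step is simply the most compact way to show where the face-detection head count enters the acoustic decision.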


Author(s):  
Zixuan Liu ◽  
Dan Niu ◽  
Qi Li ◽  
Xisong Chen ◽  
Li Ding ◽  
...  
