Layoffs, inequity and COVID-19: A longitudinal study of the journalism jobs crisis in Australia from 2012 to 2020

Journalism, 2021, pp. 146488492199628
Author(s): Nikolas Dawson, Sacha Molitorisz, Marian-Andrei Rizoiu, Peter Fray

In Australia and beyond, journalism is reportedly an industry in crisis, a crisis exacerbated by COVID-19. However, the evidence revealing the crisis is often anecdotal or limited in scope. In this unprecedented longitudinal research, we draw on data from the Australian journalism jobs market from January 2012 until March 2020. Using Data Science and Machine Learning techniques, we analyse two distinct data sets: job advertisements (ads) data comprising 3698 journalist job ads from a corpus of over 8 million Australian job ads; and official employment data from the Australian Bureau of Statistics. Having matched and analysed both sources, we address both the demand for and supply of journalists in Australia over this critical period. The data show that the crisis is real, but there are also surprises. Counter-intuitively, the number of journalism job ads in Australia rose from 2012 until 2016, before falling into decline. Less surprisingly, for the entire period studied the figures reveal extreme volatility, characterised by large and erratic fluctuations. The data also clearly show that COVID-19 has significantly worsened the crisis. We then tease out more granular findings, including: that there are now more women than men journalists in Australia, but that gender inequity is worsening, with women journalists getting younger and worse-paid just as men journalists are, on average, getting older and better-paid; that, despite the crisis besetting the industry, the demand for journalism skills has increased; and that, perhaps concerningly, the skills sought by journalism job ads increasingly include ‘social media’ and ‘generalist communications’ skills.
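As a rough illustration of how demand for journalists might be traced from a job-ads corpus (not the authors' pipeline), the sketch below aggregates ads into monthly counts and computes a simple volatility indicator; the file and column names are assumptions.

```python
# Minimal sketch, assuming a CSV of journalist job ads with a posting date.
import pandas as pd

ads = pd.read_csv("journalist_job_ads.csv", parse_dates=["posted_date"])

# Count journalist job ads per month to trace demand over time.
monthly = ads.set_index("posted_date").resample("M").size()

# Simple volatility indicator: rolling standard deviation of
# month-on-month percentage changes in ad counts.
volatility = monthly.pct_change().rolling(window=12).std()

print(monthly.tail())
print(volatility.tail())
```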

2018, Vol 7 (4), pp. 2738
Author(s): P. Srinivas Rao, Jayadev Gyani, G. Narsimha

In online social networks, phony-account detection, i.e. distinguishing genuine users from forged user accounts, is a major task. The fundamental objective of a phony-account detection framework is to identify fake accounts and remove them from social networking sites. This work concentrates on phony-account detection based on a rule-based framework, evolutionary algorithms and fuzzy techniques. Initially, the most essential attributes are extracted, including personal attributes, similarity measures, and real user reviews, tweets or comments. A linear combination of these attributes indicates the significance of each review, tweet or comment. To compute the closeness measure, a combined strategy based on the artificial bee colony algorithm and fuzzy techniques is used. A second approach is proposed to tune the weights of the normal user attributes using social network activities/transactions and a genetic algorithm. Finally, a rank-logic framework is used to compute the final score of normal user activities. The decision making of the proposed approach for finding phony accounts is compared with existing user-behaviour-analysis techniques that use machine learning, on the crowdflower_sample and genuine_accounts_sample datasets from Facebook and Twitter. The results demonstrate that the proposed strategy outperforms the aforementioned techniques.
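As a loose illustration of the weighted-attribute scoring idea described above (not the authors' bee-colony/genetic-algorithm pipeline), the sketch below combines a few hypothetical suspiciousness indicators into a score and maps it to a fuzzy membership value. All feature names, weights and thresholds are assumptions.

```python
# Minimal sketch: weighted attribute score plus a fuzzy-style membership.
import numpy as np

def account_score(features, weights):
    # Weighted average of normalised suspiciousness indicators in [0, 1].
    features = np.asarray(features, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(features @ weights / weights.sum())

def fuzzy_fake_membership(score, low=0.3, high=0.7):
    # Linear ramp: 0 below `low` (likely genuine), 1 above `high` (likely fake).
    if score <= low:
        return 0.0
    if score >= high:
        return 1.0
    return (score - low) / (high - low)

# Hypothetical indicators: [duplicate-post fraction, follower/following
# imbalance, burstiness of activity, profile incompleteness]
weights = [0.3, 0.2, 0.2, 0.3]      # in practice tuned by an optimiser
suspect = [0.8, 0.9, 0.7, 0.6]
score = account_score(suspect, weights)
print(score, fuzzy_fake_membership(score))
```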


2021, Vol 35 (1), pp. 99-104
Author(s): Revathi Vankayalapati, Kalyani Balaso Ghutugade, Rekha Vannapuram, Bejjanki Pooja Sree Prasanna

Data clustering is the process of grouping objects so that objects in the same group are more similar to one another than to those in other groups. In this paper, K-means clustering is used to assess the performance of students. Machine learning is applied in many areas, including education, pattern recognition, sports and industrial applications, and its significance in education grows with the importance of students' futures in the educational system. Data collection in education is very useful, as data volumes in the education system grow every day; data mining in higher education is relatively new, but its significance grows with the expanding databases. There are several ways to assess the success of students, and K-means is one of the most effective. Hidden information in the database is extracted using data mining to improve student outcomes, and decision trees are another way to predict student success. In recent years, educational institutions have faced the challenge of ever-growing data and of using it to improve decision-making. Clustering is one of the most important methods for analysing such data sets. This study applies the unsupervised K-means algorithm to section students into different classes according to their features. Educational data mining is used to study the information available in the field of education and to reveal hidden, significant and useful knowledge. The proposed model uses K-means clustering to analyse learner performance, so that student outcomes and prospects can be strengthened. The results show that the K-means clustering algorithm is useful for grouping students with similar performance features.
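A minimal sketch of the approach described above, grouping students by performance with unsupervised K-means; the scores below are synthetic stand-ins for institutional records.

```python
# Minimal sketch: K-means on synthetic student performance data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row: [internal assessment, exam score, attendance %]
scores = np.array([
    [78, 82, 95], [45, 40, 60], [88, 91, 98],
    [52, 49, 70], [67, 71, 85], [30, 35, 50],
])

X = StandardScaler().fit_transform(scores)          # put features on a common scale
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

for label, row in zip(kmeans.labels_, scores):
    print(label, row)                               # cluster id per student
```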


Author(s): Ritu Khandelwal, Hemlata Goyal, Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that works as a bridge between business and data science. With the involvement of data science, the business goal is to obtain valuable insights from the available data. A large part of Indian cinema is Bollywood, a multi-million-dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a Blockbuster, Superhit, Hit, Average or Flop, applying machine learning techniques for classification and prediction. Building a classifier or prediction model begins with a learning stage, in which a training data set is used to train the model with a chosen technique or algorithm; the rules generated in this stage form a model that can predict future trends in different types of organizations. Methods: Classification and prediction techniques such as Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost and KNN are applied in search of efficient and effective results. All these functionalities are available through GUI-based workflows organised into categories such as Data, Visualize, Model and Evaluate. Result: The rules generated from the training data are used to predict the success of upcoming movies, and the outputs of the different classifiers are compared. Conclusion: A comparative analysis is performed on parameters such as accuracy and the confusion matrix to identify the best model for predicting movie success. Using advertisement propaganda, production houses can plan the best time to release a movie according to its predicted success rate and so gain higher benefits. Discussion: Data mining is the process of discovering patterns in large data sets, and the relationships discovered help to solve business problems and to predict forthcoming trends. This prediction can help production houses with advertisement propaganda and cost planning, making the movie more profitable.
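A sketch of the comparative set-up described above: the listed classifiers evaluated on accuracy and confusion matrices. Synthetic data stand in for real movie features (budget, cast, genre, etc.) and the five success labels.

```python
# Minimal sketch: compare several classifiers on accuracy and confusion matrix.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for movie features and five success classes.
X, y = make_classification(n_samples=500, n_features=10, n_classes=5,
                           n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(), "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(), "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(), "KNN": KNeighborsClassifier(),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))
```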


2021, Vol 11 (7), pp. 317
Author(s): Ismael Cabero, Irene Epifanio

This paper presents a snapshot of the distribution of time that Spanish academic staff spend on different tasks. We carry out a statistical exploratory study by analyzing the responses provided in a survey of 703 Spanish academic staff in order to draw a clear picture of the current situation. This analysis considers many factors, primarily gender, academic rank, age, and academic discipline. The tasks considered are divided into smaller activities, which allows us to discover hidden patterns. Tasks are not restricted to the academic world, but also relate to domestic chores. We address this problem from a totally new perspective by using machine learning techniques, such as cluster analysis. In order to make important decisions, policymakers must know how academic staff spend their time, especially now that legal modifications are planned for the Spanish university environment. We expose large gender gaps in the time spent on teaching quality and on caring tasks, and find that non-recognized overtime is very frequent.
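As an illustration of the cluster-analysis step (not the paper's exact procedure), the sketch below groups respondents by how their weekly hours are split across broad task categories; the rows and categories are invented.

```python
# Minimal sketch: hierarchical clustering of time-use profiles.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

# Columns: [teaching, research, admin, domestic chores] hours per week
hours = np.array([
    [12, 20, 6, 10], [18, 8, 12, 25], [10, 25, 5, 8],
    [20, 5, 15, 30], [14, 15, 8, 12], [9, 28, 4, 6],
])

X = StandardScaler().fit_transform(hours)
labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(labels)   # groups of staff with similar time-use profiles
```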


Author(s): Gediminas Adomavicius, Yaqiong Wang

Numerical predictive modeling is widely used in different application domains. Although many modeling techniques have been proposed, and a number of different aggregate accuracy metrics exist for evaluating the overall performance of predictive models, other important aspects, such as the reliability (or confidence and uncertainty) of individual predictions, have been underexplored. We propose to use estimated absolute prediction error as the indicator of individual prediction reliability, which has the benefits of being intuitive and providing highly interpretable information to decision makers, as well as allowing for more precise evaluation of reliability estimation quality. As importantly, the proposed reliability indicator allows the reframing of reliability estimation itself as a canonical numeric prediction problem, which makes the proposed approach general-purpose (i.e., it can work in conjunction with any outcome prediction model), alleviates the need for distributional assumptions, and enables the use of advanced, state-of-the-art machine learning techniques to learn individual prediction reliability patterns directly from data. Extensive experimental results on multiple real-world data sets show that the proposed machine learning-based approach can significantly improve individual prediction reliability estimation as compared with a number of baselines from prior work, especially in more complex predictive scenarios.
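A minimal sketch of the core idea: the absolute prediction error of an outcome model becomes the target of a second, reliability model. The model choices and synthetic data are illustrative assumptions, not the paper's exact set-up.

```python
# Minimal sketch: learn individual prediction reliability from data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingRegressor

X, y = make_regression(n_samples=1000, n_features=10, noise=10.0, random_state=0)
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=0.5, random_state=0)

# 1) Fit the outcome model on one half of the data.
outcome_model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

# 2) Compute absolute errors on the held-out half; these become training
#    targets for the reliability model.
abs_err = np.abs(y_hold - outcome_model.predict(X_hold))
reliability_model = GradientBoostingRegressor(random_state=0).fit(X_hold, abs_err)

# 3) For a new instance, report both the prediction and its estimated
#    absolute error (lower = more reliable).
x_new = X_hold[:1]
print(outcome_model.predict(x_new), reliability_model.predict(x_new))
```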


Intrusion is a major threat in which unauthorized access to data or a legitimate network is gained using a legitimate user's identity or through back doors and vulnerabilities in the network. IDS mechanisms are developed to detect intrusions at various levels. The objective of this research work is to improve Intrusion Detection System performance by applying machine learning techniques based on decision trees for the detection and classification of attacks. The adopted methodology processes the data sets in three stages. The experimentation is conducted on the KDDCUP99 data sets based on the number of features. Three Bayesian modes are analysed for data sets of different sizes based on the total number of attacks. The time consumed by the classifier to build the model is analysed, and the accuracy is evaluated.
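A rough sketch of the kind of experiment described above: a decision-tree classifier on the KDD Cup 99 (10%) data, timing model building and reporting test accuracy. The preprocessing choices here are assumptions, not the paper's methodology.

```python
# Minimal sketch: decision tree on KDD Cup 99 with build-time measurement.
import time
from sklearn.datasets import fetch_kddcup99
from sklearn.model_selection import train_test_split
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OrdinalEncoder
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# 10% subset of KDD Cup 99; labels distinguish "normal." from attack types.
data = fetch_kddcup99(percent10=True, as_frame=True)
X, y = data.data, data.target

cat_cols = ["protocol_type", "service", "flag"]      # symbolic features
pre = ColumnTransformer(
    [("cat", OrdinalEncoder(), cat_cols)], remainder="passthrough"
)
model = make_pipeline(pre, DecisionTreeClassifier(random_state=0))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
start = time.time()
model.fit(X_tr, y_tr)                                # time to build the model
print("build time (s):", round(time.time() - start, 2))
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```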


Author(s): P. Priakanth, S. Gopikrishnan

The idea of an intelligent, independent learning machine has fascinated humans for decades. The philosophy behind machine learning is to automate the creation of analytical models so that algorithms can learn continuously from the available data. Since IoT will be among the major sources of new data, data science will contribute greatly to making IoT applications more intelligent. Machine learning can be applied in cases where the desired outcome is known (guided learning), where the desired outcome is not known beforehand (unguided learning), or where learning results from the interaction between a model and its environment (reinforcement learning). This chapter answers the following questions: How can machine learning algorithms be applied to IoT smart data? What is the taxonomy of machine learning algorithms that can be adopted in IoT? And what are the characteristics of real-world IoT data that require data analytics?
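A small sketch contrasting the first two paradigms on toy IoT sensor readings: guided (supervised) learning with labelled outcomes versus unguided (unsupervised) learning without them. The readings and labels are invented.

```python
# Minimal sketch: supervised vs unsupervised learning on toy sensor data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Rows: [temperature, humidity] readings from hypothetical sensors.
readings = np.array([[21, 40], [22, 42], [35, 20], [36, 18], [23, 41], [34, 22]])
labels = np.array([0, 0, 1, 1, 0, 1])   # 1 = overheating, known in guided learning

supervised = LogisticRegression().fit(readings, labels)        # guided
unsupervised = KMeans(n_clusters=2, n_init=10).fit(readings)   # unguided

print(supervised.predict([[33, 25]]))   # predicted outcome for a new reading
print(unsupervised.labels_)             # clusters found without labels
```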


