Collusive Algorithms as Mere Tools, Super-tools or Legal Persons

Author(s):  
Guan Zheng ◽  
Hong Wu

Abstract The widespread use of algorithmic technologies complicates the rules on tacit collusion, which are already controversial in antitrust law. These rules have obvious limitations in effectively regulating algorithmic collusion. Although some scholars and practitioners within antitrust circles in the United States, Europe and beyond have taken notice of this problem, they have largely failed to make clear its specific manifestations, root causes, and effective legal solutions. In this article, the authors make a strong argument that it is no longer appropriate to regard algorithms as mere tools of firms, and that the distinct features of machine learning algorithms as super-tools and as legal persons may bring about two new cracks in antitrust law. This article clarifies why these rules are largely inapplicable to algorithmic collusion, particularly in the case of machine learning algorithms, classifies the new legal cracks, and provides sound legal criteria for courts and competition authorities to assess the legality of algorithmic collusion much more accurately. More importantly, this article proposes an efficacious solution to revive the market pricing mechanism and thereby resolve the two new cracks identified in antitrust law.

2021 ◽  
Author(s):  
Jason Williams ◽  
Sally Potter-McIntyre ◽  
Justin Filiberto ◽  
Shaunna Morrison ◽  
Daniel Hummer

Indicator minerals have special physical and chemical properties that can be analyzed to glean information concerning the composition of host rocks and formational (or altering) fluids. The clay, zeolite, and tourmaline mineral groups are ubiquitous at the Earth's surface and in the shallow crust, and are distributed through a wide variety of sedimentary, igneous, metamorphic, and hydrothermal systems. Traditional studies of indicator mineral-bearing deposits have provided a wealth of data that could be integral to discovering new insights into the formation and evolution of naturally occurring systems. This study evaluates the relationships that exist between different environmental indicator mineral groups through the implementation of machine learning algorithms and network diagrams. Mineral occurrence data for thousands of localities hosting clay, zeolite, and tourmaline minerals were retrieved from mineral databases. Clustering techniques (e.g., agglomerative hierarchical clustering and density-based spatial clustering of applications with noise) combined with network analyses were used to analyze the compiled dataset in an effort to characterize and identify geological processes operating at different localities across the United States. Ultimately, this study evaluates the ability of machine learning algorithms to act as supplementary diagnostic and interpretive tools in geoscientific studies.
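The clustering step described above can be illustrated with a minimal sketch, assuming scikit-learn and a hypothetical presence/absence matrix in place of the actual mineral occurrence data; the parameter values are placeholders, not the study's settings.

```python
# A minimal sketch (not the authors' code): clustering co-occurrence features
# for mineral localities with the two algorithms named in the abstract.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering, DBSCAN

# Hypothetical feature matrix: rows are localities, columns are binary
# presence/absence flags for clay, zeolite, and tourmaline species.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 30)).astype(float)
X = StandardScaler().fit_transform(X)

# Agglomerative hierarchical clustering into a chosen number of groups.
hier_labels = AgglomerativeClustering(n_clusters=5, linkage="ward").fit_predict(X)

# DBSCAN: density-based clustering; eps and min_samples are illustrative only.
db_labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(X)

print("hierarchical cluster sizes:", np.bincount(hier_labels))
print("DBSCAN clusters (label -1 = noise):", set(db_labels))
```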


10.2196/18401 ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. e18401
Author(s):  
Jane M Zhu ◽  
Abeed Sarker ◽  
Sarah Gollust ◽  
Raina Merchant ◽  
David Grande

Background Twitter is a potentially valuable tool for public health officials and state Medicaid programs in the United States, which provide public health insurance to 72 million Americans. Objective We aim to characterize how Medicaid agencies and managed care organization (MCO) health plans are using Twitter to communicate with the public. Methods Using Twitter’s public application programming interface, we collected 158,714 public posts (“tweets”) from active Twitter profiles of state Medicaid agencies and MCOs, spanning March 2014 through June 2019. Manual content analyses identified 5 broad categories of content, and these coded tweets were used to train supervised machine learning algorithms to classify all collected posts. Results We identified 15 state Medicaid agencies and 81 Medicaid MCOs on Twitter. The mean number of followers was 1784, the mean number of those followed was 542, and the mean number of posts was 2476. Approximately 39% of tweets came from just 10 accounts. Of all posts, 39.8% (63,168/158,714) were classified as general public health education and outreach; 23.5% (n=37,298) were about specific Medicaid policies, programs, services, or events; 18.4% (n=29,203) were organizational promotion of staff and activities; and 11.6% (n=18,411) contained general news and news links. Only 4.5% (n=7142) of posts were responses to specific questions, concerns, or complaints from the public. Conclusions Twitter has the potential to enhance community building, beneficiary engagement, and public health outreach, but appears to be underutilized by the Medicaid program.
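A minimal sketch of the classification step, assuming scikit-learn; the example tweets, category labels, and pipeline choices are illustrative placeholders rather than the study's actual coded data or models.

```python
# A minimal sketch: train a supervised classifier on manually coded tweets,
# then classify the remaining posts. Categories shown are placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

coded_tweets = ["Flu shots are free at county clinics", "Open enrollment ends Friday"]
labels = ["health_education", "policy_program"]  # two of five illustrative categories

# TF-IDF features + a linear classifier, trained on the manually coded subset.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                    LogisticRegression(max_iter=1000))
clf.fit(coded_tweets, labels)

# Classify the remaining, uncoded tweets.
uncoded = ["Join our team! We're hiring outreach coordinators."]
print(clf.predict(uncoded))
```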


Author(s):  
K. Kuwata ◽  
R. Shibasaki

Satellite remote sensing is commonly used to monitor crop yield over wide areas. Because many parameters are necessary for crop yield estimation, modelling the relationships between parameters and crop yield is generally complicated. Several methodologies using machine learning have been proposed to solve this issue, but the accuracy of county-level estimation remains to be improved. In addition, estimating county-level crop yield across an entire country has not yet been achieved. In this study, we applied a deep neural network (DNN) to estimate corn yield. We evaluated the estimation accuracy of the DNN model by comparing it with other models trained by different machine learning algorithms. We also prepared two time-series datasets differing in duration and confirmed the feature extraction performance of the models by inputting each dataset. As a result, the DNN estimated county-level corn yield for the entire area of the United States with a coefficient of determination (R2) of 0.780 and a root mean square error (RMSE) of 18.2 bushels/acre. In addition, our results showed that estimation models trained by a neural network extracted features from the input data better than an existing machine learning algorithm.
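A minimal sketch of a DNN-style regression on time-series predictors, assuming scikit-learn; the synthetic features stand in for the satellite-derived inputs, and the network architecture is illustrative, not the paper's.

```python
# A minimal sketch: a small feed-forward network regressing county-level
# yield on time-series remote-sensing features (synthetic data below).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 24))              # e.g. 24 time steps of a vegetation index
y = X[:, :12].sum(axis=1) + rng.normal(scale=0.5, size=1000)  # synthetic yield

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
dnn = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
dnn.fit(X_tr, y_tr)

pred = dnn.predict(X_te)
print("R2:", r2_score(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
```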


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. e13555-e13555
Author(s):  
Chris Sidey-Gibbons ◽  
Charlotte C. Sun ◽  
Cai Xu ◽  
Amy Schneider ◽  
Sheng-Chieh Lu ◽  
...  

e13555 Background: Contrary to national guidelines, women with ovarian cancer often receive aggressive treatment until the end of life. We trained machine learning algorithms to predict mortality within 180 days for women with ovarian cancer. Methods: Data were collected from a single academic cancer institution in the United States. Women with recurrent ovarian cancer completed biopsychosocial patient-reported outcome measures (PROMs) every 90 days. We randomly partitioned our dataset into training and testing samples with a 2:1 ratio. We used synthetic minority oversampling to reduce class imbalance in the training dataset. We fitted training data to six machine learning algorithms and combined their classifications on the testing dataset into a voting ensemble. We assessed the accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) for each algorithm. Results: We recruited 245 patients who completed 1319 assessments. The final voting ensemble performed well across all performance metrics (Accuracy = .79, Sensitivity = .71, Specificity = .80, AUROC = .76). The algorithm correctly identified 25 of the 35 women in the testing dataset who died within 180 days of assessment. Conclusions: Machine learning algorithms trained using PROM data offer state-of-the-art performance in predicting whether a woman with ovarian cancer will reach the end of life within 180 days. We highlight the importance of PROM data in ML models of mortality. Our model exhibits substantial improvements in prediction sensitivity compared to other similar models trained using electronic health record data alone. This model could inform clinical decision making and improve the uptake of appropriate end-of-life care. Further research is warranted to expand on these findings in a larger, more diverse sample.
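A minimal sketch of the oversampling-plus-voting-ensemble workflow, assuming scikit-learn and imbalanced-learn; the synthetic data and the three base learners shown (the study used six) are placeholders.

```python
# A minimal sketch: SMOTE on the training split, then a voting ensemble,
# evaluated with AUROC and sensitivity. Synthetic data stand in for PROMs.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, recall_score
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1300, weights=[0.85], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

# Oversample the minority class (death within 180 days) in the training set only.
X_tr, y_tr = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

# Soft-voting ensemble over several base learners (three shown here, not six).
ensemble = VotingClassifier([
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
], voting="soft")
ensemble.fit(X_tr, y_tr)

prob = ensemble.predict_proba(X_te)[:, 1]
print("AUROC:", roc_auc_score(y_te, prob))
print("Sensitivity:", recall_score(y_te, ensemble.predict(X_te)))
```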


2019 ◽  
Author(s):  
Sing-Chun Wang ◽  
Yuxuan Wang

Abstract. Occurrences of devastating wildfires have been on the rise in the United States for the past decades. While the environmental controls, including weather, climate, and fuels, are known to play important roles in controlling wildfires, the interrelationships between fires and the environmental controls are highly complex and may not be well represented by traditional parametric regressions. Here we develop a model integrating multiple machine learning algorithms to predict gridded monthly wildfire burned area during 2002–2015 over the South Central United States and identify the relative importance of the environmental drivers on the burned area for both the winter-spring and summer fire seasons of that region. The developed model is able to alleviate the issue of unevenly distributed burned area data and achieves cross-validation (CV) R2 values of 0.42 and 0.40 for the two fire seasons. For the total burned area over the study domain, the model can explain 50 % and 79 % of interannual total burned area for the winter-spring and summer fire seasons, respectively. The prediction model ranks relative humidity (RH) anomalies and preceding months' drought severity as the top two most important predictors of the gridded burned area for both fire seasons. Sensitivity experiments with the model show that the effect of climate change, represented by a group of climate-anomaly variables, contributes the most to the burned area for both fire seasons. Antecedent fuel amount and conditions are found to outweigh weather effects for the burned area in the winter-spring fire season, while current-month fire weather is more important for the summer fire season, likely due to the controlling effect of weather on fuel moisture in this season. This developed model allows us to predict gridded burned area and to assess specific fire management strategies for different fire mechanisms in the two seasons.
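A minimal sketch of one way to obtain cross-validated skill and predictor rankings with a tree-based learner, assuming scikit-learn; the driver names and synthetic data are illustrative, not the paper's model or inputs.

```python
# A minimal sketch: cross-validated regression of burned area on environmental
# drivers, plus a ranking of predictor importance. Data are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
drivers = pd.DataFrame({
    "rh_anomaly": rng.normal(size=2000),
    "drought_prev_months": rng.normal(size=2000),
    "temperature": rng.normal(size=2000),
    "fuel_amount": rng.normal(size=2000),
})
burned_area = (-1.5 * drivers["rh_anomaly"] + drivers["drought_prev_months"]
               + rng.normal(scale=0.5, size=2000))

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV R2:", cross_val_score(model, drivers, burned_area, cv=5, scoring="r2").mean())

# Rank drivers by impurity-based feature importance.
model.fit(drivers, burned_area)
for name, imp in sorted(zip(drivers.columns, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")
```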


2020 ◽  
Author(s):  
Matthew G. Crowson ◽  
Amr Hamour ◽  
Vincent Lin ◽  
Joseph M. Chen ◽  
Timothy C. Y. Chan

ABSTRACT
Importance: The United States Food & Drug Administration (FDA) passively monitors medical device performance and safety through submitted medical device reports (MDRs) in the Manufacturer and User Facility Device Experience (MAUDE) database. These databases can be analyzed for patterns and novel opportunities for improving patient safety and/or device design.
Objectives: The objective of this analysis was to use supervised machine learning to explore patterns in reported adverse events involving cochlear implants.
Design: The MDRs for the top three cochlear implant (CI) manufacturers by volume from January 1, 2009 to August 30, 2019 were retained for the analysis. Natural language processing was used to measure the importance of specific words. Four supervised machine learning algorithms were used to predict which adverse event narrative description pattern corresponded with a specific cochlear implant manufacturer and adverse event type: injury, malfunction, or death.
Setting: U.S. government public database.
Participants: Adult and pediatric cochlear implant patients.
Exposure: Surgical placement of a cochlear implant.
Main Outcome Measure: Machine learning model classification prediction accuracy (% correct predictions).
Results: 27,511 adverse events related to cochlear implant devices were submitted to the MAUDE database during the study period. Most adverse events involved patient injury (n = 16,736), followed by device malfunction (n = 10,760), and death (n = 16). Submissions to the database were dominated by Cochlear Corporation (n = 13,897), followed by MED-EL (n = 7,125), and Advanced Bionics (n = 6,489). The random forest, linear SVC, naïve Bayes, and logistic regression algorithms were able to predict the specific CI manufacturer based on the adverse event narrative with an average accuracy of 74.8%, 86.0%, 88.5%, and 88.6%, respectively.
Conclusions & Relevance: Using supervised machine learning algorithms, our classification models were able to predict the CI manufacturer and event type with high accuracy based on patterns in adverse event text descriptions.
Level of Evidence: 3
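A minimal sketch of TF-IDF text features feeding the four classifier families named above, assuming scikit-learn; the toy narratives and manufacturer labels are placeholders, not MAUDE records.

```python
# A minimal sketch: TF-IDF features from adverse event narratives, fed to the
# four classifier families named in the abstract. Data shown are placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression

narratives = [
    "device stopped producing sound after impact",
    "patient reported dizziness following activation",
    "internal magnet displaced requiring revision surgery",
    "processor intermittently lost connection to implant",
]
manufacturer = ["A", "B", "A", "B"]  # illustrative class labels

models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "linear_svc": LinearSVC(),
    "naive_bayes": MultinomialNB(),
    "logistic": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    clf = make_pipeline(TfidfVectorizer(), model)
    clf.fit(narratives, manufacturer)
    print(name, clf.predict(["sudden loss of sound from the processor"]))
```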

