Science and Technology Law Review
Latest Publications


TOTAL DOCUMENTS: 9 (five years: 9)
H-INDEX: 0 (five years: 0)
Published by: Columbia University Libraries
ISSN: 1938-0976

2021 · Vol 22 (2) · pp. 263-283
Author(s): David Kappos, Asa Kling

Humankind has always sought to solve problems. This impetus has transformed hunters and gatherers into a society beginning to enjoy the fruits of the fourth industrial revolution. As part of the fourth industrial revolution, and the increased computing power accompanying it, the long-theorized concept of artificial intelligence (“AI”) is finally becoming a reality. This raises new issues in myriad fields—from the moral and ethical implications of replacing human activity with machines to who will own inventions created by AI. While these questions are worth exploring, they have already received a fair amount of coverage in popular and theoretical writing. This paper will take a different direction, focusing on the current and near-future issues arising on the ground at the intersection of AI and intellectual property (“IP”). After providing a brief overview of AI, we will analyze legal issues unique to AI, including access to data, patent requirements, open source licenses and trade secrecy. We will then suggest best practices for obtaining and preserving IP protection for AI-related innovations through the United States and European Union IP systems. By addressing these issues, the intellectual property system will be better positioned to do its part in unlocking AI’s immense potential.


2021 · Vol 22 (2) · pp. 231-262
Author(s): Shin-Ru Cheng

Facebook, the world’s largest online networking platform, is the subject of multiple antitrust investigations by various state and federal regulators. Yet scholars and practitioners remain divided on how to measure Facebook’s market power. Some argue that conventional approaches for identifying market power are suitable for the online networking market. This Article argues that such conventional approaches are inadequate for assessing market power in online networking markets.

This Article begins by introducing the traditional approaches that courts have employed to assess market power: the direct effects approach, the Lerner Index approach, and the market share approach. It next describes Facebook’s business model and shows that, because Facebook is a two-sided market, these traditional approaches should not be applied to Facebook.

Instead, the Article proposes that the information gaps, switching costs, and entry barriers approaches are better suited for assessing the market power of online networking platforms. The Article thus concludes by proposing a legal framework for assessing market power in online networking platforms which employs such non-traditional approaches. While this Article uses Facebook as the main case study, its findings are equally applicable to similar online networking platforms.
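For concreteness, the Lerner Index the Article sets aside is conventionally defined as L = (P − MC)/P: the markup of price over marginal cost, as a share of price. A minimal sketch (with hypothetical numbers, not data from the Article) shows both the computation and why it degenerates for a platform whose consumer-facing price is effectively zero:

```python
def lerner_index(price: float, marginal_cost: float) -> float:
    """Lerner Index L = (P - MC) / P: 0 under perfect competition
    (P = MC), approaching 1 as price far exceeds marginal cost."""
    if price <= 0:
        # A zero (or free) price leaves the index undefined -- one reason
        # the measure fits poorly with zero-price, two-sided platforms.
        raise ValueError("price must be positive")
    return (price - marginal_cost) / price

# Hypothetical firm: price 10, marginal cost 6 -> 40% markup.
print(lerner_index(10.0, 6.0))  # → 0.4
```

For a service offered to consumers at a zero price the formula has no meaningful value, which is consistent with the Article’s claim that the approach should not be applied to Facebook.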


2021 · Vol 22 (2) · pp. 284-307
Author(s): Monika Zalnieriute

Live automated facial recognition technology, rolled out in public spaces and cities across the world, is transforming the nature of modern policing. R (on the application of Bridges) v Chief Constable of South Wales Police, decided in August 2020, is the first successful legal challenge to automated facial recognition technology in the world. In Bridges, the United Kingdom’s Court of Appeal held that the South Wales Police force’s use of automated facial recognition technology was unlawful. This landmark ruling could influence future policy on facial recognition in many countries. The Bridges decision imposes some limits on the police’s previously unconstrained discretion to decide whom to target and where to deploy the technology. Yet, while the decision requires that the police adopt a clearer legal framework to limit this discretion, it does not, in principle, prevent the use of facial recognition technology for mass surveillance in public places, nor for monitoring political protests. On the contrary, the Court held that the use of automated facial recognition in public spaces – even to identify and track the movement of very large numbers of people – was an acceptable means for achieving law enforcement goals. Thus, the Court dismissed the wider impact and significant risks posed by using facial recognition technology in public spaces. It underplayed the heavy burden this technology can place on democratic participation and the freedoms of expression and association, which require collective action in public spaces. The Court neither demanded transparency about the technologies used by the police force, which are often shielded behind the “trade secrets” of the corporations that produce them, nor did it act to prevent inconsistency between local police forces’ rules and regulations on automated facial recognition technology. Thus, while the Bridges decision is reassuring and demands change in the discretionary approaches of U.K. police in the short term, it is unlikely, in the long run, to burn the “bridges” between the expanding public-space surveillance infrastructure and the modern state. In fact, the decision legitimizes such an expansion.


2021 · Vol 22 (2) · pp. 308-345
Author(s): Wayne Unger

Disinformation campaigns reduce trust in democracy, harm democratic institutions, and endanger public health and safety. While disinformation and misinformation are not new, their rapid and widespread dissemination has only recently been made possible by technological developments that enable never-before-seen levels of mass communication and persuasion.

Today, a mix of social media, algorithms, personal profiling, and psychology enables a new dimension of political messaging—a dimension that disinformers exploit for their political gain. These enablers share a root cause—the poor data privacy and security regime in the U.S.

At its core, democracy requires independent thought, personal autonomy, and trust in democratic institutions. A public that thinks critically and acts independently can check the government’s power and authority. However, when the public is misinformed, it lacks the autonomy to freely elect and check its representatives, and the fundamental basis for democracy erodes. This Article addresses a root cause of misinformation dissemination—the absence of strong data privacy protections in the U.S.—and its effects on democracy. This Article explains, from a technological perspective, how personal information is used for personal profiling, and how personal profiling contributes to the mass interpersonal persuasion that disinformation campaigns exploit to advance their political goals.


2021 · Vol 22 (2) · pp. 346-382
Author(s): Aviel Menter

In Rucho v. Common Cause, the Supreme Court held that challenges to partisan gerrymanders presented a nonjusticiable political question. This decision threatened to discard decades of work by political scientists and other experts, who had developed a myriad of techniques designed to help the courts objectively and unambiguously identify excessively partisan district maps. Simulated redistricting promised to be one of the most effective of these techniques. Simulated redistricting algorithms are computer programs capable of generating thousands of election-district maps, each of which conforms to a set of permissible criteria determined by the relevant state legislature. By measuring the partisan lean of both the automatically generated maps and the map put forth by the state legislature, a court could determine how much of this partisan bias was attributable to the deliberate actions of the legislature, rather than the natural distribution of the state’s population.

Rucho ended partisan gerrymandering challenges brought under the U.S. Constitution—but it need not close the book on simulated redistricting. Although originally developed to combat partisan gerrymanders, simulated redistricting algorithms can be repurposed to help courts identify intentional racial gerrymanders. Instead of measuring the partisan bias of automatically generated maps, these programs can gauge improper racial considerations evident in the legislature’s plan and demonstrate the discriminatory intent that produced such an outcome. As long as the redistricting process remains in the hands of state legislatures, there is a threat that constitutionally impermissible considerations will be employed when drawing district plans. Simulated redistricting provides a powerful tool with which courts can detect a hidden unconstitutional motive in the redistricting process.
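The ensemble comparison the abstract describes can be illustrated with a deliberately simplified sketch. Real simulated-redistricting tools sample maps with methods such as Markov chain Monte Carlo over precinct graphs and enforce contiguity and other legal criteria; the random partition and vote shares below are hypothetical stand-ins, kept only to show the core logic of placing the enacted plan within a distribution of neutral alternatives:

```python
import random

def simulate_plan_metric(precinct_shares, n_districts, rng):
    """Randomly partition precincts into equal-size districts and count
    how many districts party A wins (a crude stand-in for a real
    map-sampling algorithm)."""
    precincts = precinct_shares[:]
    rng.shuffle(precincts)
    size = len(precincts) // n_districts
    wins = 0
    for d in range(n_districts):
        district = precincts[d * size:(d + 1) * size]
        if sum(district) / len(district) > 0.5:
            wins += 1
    return wins

def outlier_rank(enacted_wins, ensemble):
    """Fraction of simulated plans with at least as many party-A wins.
    A very small fraction marks the enacted plan as a statistical
    outlier relative to neutral map-drawing."""
    return sum(1 for w in ensemble if w >= enacted_wins) / len(ensemble)

rng = random.Random(0)
# Hypothetical precinct-level party-A vote shares, statewide mean ~0.5.
shares = [rng.uniform(0.3, 0.7) for _ in range(120)]
ensemble = [simulate_plan_metric(shares, 6, rng) for _ in range(2000)]
print(outlier_rank(6, ensemble))  # near 0 would mark 6-of-6 as extreme
```

The same comparison logic carries over to the racial-gerrymandering use the Article proposes: only the metric computed per map changes, not the outlier test.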


2021 · Vol 22 (1) · pp. 127-180
Author(s): Todd Emerson Hutchins

A recent spate of governmental shutdowns of the civilian internet in a broad range of violent contexts, from uprisings in Hong Kong and Iraq to armed conflicts in Ethiopia, Kashmir, Myanmar, and Yemen, suggests that civilian internet blackouts are the ‘new normal.’ Given the vital and expanding role of internet connectivity in modern society, and the emergence of artificial intelligence, internet shutdowns raise important questions regarding their legality under international law. This article considers whether existing international humanitarian law provides adequate protection for civilian internet connectivity and infrastructure during armed conflicts. Concluding that current safeguards are insufficient, this article proposes a new legal paradigm with special protections for physical internet infrastructure and the right of civilian access, while advocating the adoption of emblems (such as the Red Cross or Blue Shield) in the digital world to protect vital humanitarian communications.


2021 · Vol 22 (1) · pp. 63-89
Author(s): Sarah Wood

This Article acknowledges the necessity of including social determinants of health (SDH) data in healthcare planning and treatment but highlights the lack of regulation around the collection of SDH data and the potential for violating consumers’ basic rights to be treated equally, to be protected from discrimination, and to have their privacy respected. The Article analyzes the differing approaches of the U.S. and the EU and proffers global application of the GDPR, supplemented by data human rights provisions, as the most sustainable option in a world where technology is ever-changing.


2021 · Vol 22 (1) · pp. 181-230
Author(s): Karen Kim

Many states’ sales and use tax provisions, updated in response to the Supreme Court’s decision in South Dakota v. Wayfair, Inc., will likely impose a disproportionate tax compliance burden on small- and medium-sized businesses (SMBs) that engage in e-commerce. Relative to large companies like Amazon and eBay, SMBs cannot absorb the high compliance costs associated with tracking, collecting, and remitting taxes. Wayfair expanded states’ authority to collect sales taxes on companies without a physical presence in the state. But states should wield this power judiciously. While mimicking South Dakota’s statute (upheld as constitutional in Wayfair) may help states avoid litigation, they would better promote the goals of fairness and efficiency by exempting a larger category of small vendors from sales tax obligations. In light of the COVID‑19 pandemic, which has acutely hurt SMBs, reducing sales tax-related compliance burden would also help states provide relief to struggling SMBs. States should (1) clarify which entities are subject to the remote seller and marketplace facilitator statutes and (2) raise the de minimis safe harbor thresholds that shield smaller businesses from having to remit taxes.
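South Dakota’s statute, upheld in Wayfair, attaches collection obligations to remote sellers exceeding $100,000 in annual in-state gross revenue or reaching 200 or more separate transactions. A simplified sketch of that either-or test follows (thresholds are parameterized, since the Article’s proposal is precisely to widen such safe harbors; statutory details like measurement periods are glossed over):

```python
def has_economic_nexus(annual_sales: float, annual_transactions: int,
                       sales_threshold: float = 100_000.0,
                       transaction_threshold: int = 200) -> bool:
    """Economic-nexus test modeled on South Dakota's post-Wayfair
    statute: obligations attach if EITHER prong is met."""
    return (annual_sales > sales_threshold
            or annual_transactions >= transaction_threshold)

# A small seller with $40k in sales but 250 small orders is swept in
# under the transaction-count prong -- the SMB burden at issue.
print(has_economic_nexus(40_000, 250))   # → True
print(has_economic_nexus(40_000, 150))   # → False
```

Raising `sales_threshold`, or dropping the transaction-count prong entirely, is the kind of de minimis expansion the Article recommends to shield small vendors.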


2021 · Vol 22 (1) · pp. 90-126
Author(s): Anat Lior

Some argue that applying a strict liability regime to AI-inflicted damages may allow well-financed big AI companies to monopolize the industry. They hypothesize that a strict liability regime would expose AI companies to significant legal liability. Since small AI companies lack the necessary resources to pay for damages inflicted by their AI technology, a strict liability regime could erect barriers to entry for these small companies. Ultimately, the argument continues, such a regime would give a small group of companies a virtual monopoly on the AI industry. Thus, some conclude that strict liability inherently stifles innovation and should not be applied to emerging technologies such as AI. This Article maintains that legislators should adopt a strict liability regime, and it rejects the above argument for two reasons. First, there is no substantial connection between a strict liability regime and the AI monopolization that is already underway. Second, insurance policies could mitigate the effects a strict liability regime may have on the capabilities of small AI companies to enter and compete in this important market. Therefore, the ongoing process of monopolization of the AI market should not by itself render strict liability a non-viable regime for AI-inflicted damages.

