Michigan Technology Law Review
Latest Publications


Total documents: 19 (five years: 19)
H-index: 1 (five years: 1)
Published by the University of Michigan Law Library
ISSN: 2688-5484, 2688-4941

Author(s): Camilla Hrdy, Daniel Brean

Patent law promotes innovation by giving inventors 20-year-long exclusive rights to their inventions. To be patented, however, an invention must be “enabled,” meaning the inventor must describe it in enough detail to teach others how to make and use the invention at the time the patent is filed. When inventions are not enabled, like a perpetual motion machine or a time travel device, they are derided as “mere science fiction”—products of the human mind, or the daydreams of armchair scientists, that are not suitable for the patent system. This Article argues that, in fact, the literary genre of science fiction has its own unique—albeit far laxer—enablement requirement. Since the genre’s origins, fans have demanded that the inventions depicted in science fiction meet a minimum standard of scientific plausibility. Otherwise, the material is denigrated as lazy hand-waving or, worse, “mere fantasy.” Taking this insight further, the Article argues that, just as patents positively affect the progress of science and technology by teaching others how to make and use real inventions, so too can science fiction, by stimulating scientists’ imagination about what sorts of technologies might one day be possible. Thus, like patents, science fiction can have real world impacts for the development of science and technology. Indeed, the Article reveals that this trajectory—from science fiction to science reality—can be seen in the patent record itself, with several famous patents tracing their origins to works of science fiction.


Author(s): Pamela Samuelson

For more than two decades, internet service providers (ISPs) in the United States, the European Union (EU), and many other countries have been shielded from copyright liability under “safe harbor” rules. These rules apply to ISPs who did not know about or participate in user-uploaded infringements and who take infringing content down after receiving notice from rights holders. Major copyright industry groups were never satisfied with these safe harbors, and their dissatisfaction has become more strident over time as online infringements have grown to scale. Responding to copyright industry complaints, the EU in 2019 adopted its Directive on Copyright and Related Rights in the Digital Single Market. In particular, the Directive’s Article 17 places much stricter obligations on for-profit ISPs that host large amounts of user content. Article 17 is internally contradictory, deeply ambiguous, and harmful to small and medium-sized companies as well as to user freedoms of expression. Moreover, Article 17 may well violate the European Charter of Fundamental Rights. In the United States, Congress commenced a series of hearings in 2020 on the safe harbor rules now codified as 17 U.S.C. § 512 of the Digital Millennium Copyright Act (DMCA). In May 2020, the U.S. Copyright Office issued its long-awaited study on Section 512, which recommended several significant changes to existing safe harbor rules. The Study’s almost exclusively pro–copyright industry stances on reform of virtually every aspect of the rules notably shortchange other stakeholder interests. Congress should take a balanced approach in considering any changes to the DMCA safe harbor rules. Any meaningful reform of ISP liability rules should consider the interests of a wide range of stakeholders. This includes U.S.-based Internet platforms, smaller and medium-sized ISPs, startups, and the hundreds of millions of Internet users who create and enjoy user-generated content (UGC) uploaded to these platforms, as well as the interests of major copyright industries and individual creators who have been dissatisfied with the DMCA safe harbor rules.


Author(s): Jorge Contreras

The Supreme Court’s 2013 decision in Association for Molecular Pathology v. Myriad Genetics is an essential piece of the Court’s recent quartet of patent eligibility decisions, which also includes Bilski v. Kappos, Mayo v. Prometheus, and Alice v. CLS Bank. Each of these decisions has significantly shaped the contours of patent eligibility under Section 101 of the Patent Act in ways that have been both applauded and criticized. The Myriad case, however, was significant beyond its impact on Section 101 jurisprudence. It was seen, and litigated, as a case impacting patient rights, access to healthcare, scientific freedom, and human dignity. In this article, I offer a close textual analysis of the Myriad decision and respond to both its critics and supporters. I then situate Myriad within the larger context of biotechnology patenting, the commercialization of academic research, and the U.S. healthcare system. In this regard, the failure of public institutions and governmental agencies to constrain the private exploitation of publicly-funded innovations contributed as much to the healthcare access disparities highlighted by the case as the overly broad protection afforded by the Patent and Trademark Office to genetic inventions. I conclude with observations about the ways that cases like Myriad exemplify the manner in which the common law evolves, particularly in areas of rapid technological change.


Author(s): Sabrina Glavota

Mitochondrial replacement therapy (MRT) is an in vitro fertilization technique designed to prevent women who are carriers of mitochondrial diseases from passing on these heritable genetic diseases to their children. It is an innovative assisted reproductive technology that is only legal in a small number of countries. The United States has essentially stalled all opportunities for research and clinical trials on MRT through a rider in H.R.2029 – Consolidated Appropriations Act, 2016. The rider bans clinical trials on all therapies in which a human embryo is intentionally altered to include a heritable genetic modification. This note argues that the rider should be amended to permit therapies such as MRT, which do not create artificial DNA sequences, while continuing to prohibit clinical trials on germline therapies that modify the sequence of a gene. MRT is distinct from the types of therapies that Congress intended to ban through the rider. Amending the rider would not automatically approve MRT trials, but rather allow the FDA to evaluate investigational new drug applications and determine whether individual trials may proceed. Without proper FDA oversight, carriers of mitochondrial diseases are denied access to a therapy that provides them with benefits they cannot enjoy by any other means, and researchers may look abroad to conduct the therapy illegally or dangerously. Further, the United States can look to other countries such as the United Kingdom as a model for how to proceed with research and trials on MRT in an ethical manner.


Author(s): Luc von Danwitz

Internet regulation in the European Union (EU) is receiving significant attention and criticism in the United States. The European Court of Justice’s (ECJ) judgment in the case Glawischnig-Piesczek v. Facebook Ireland, in which the ECJ found a take-down order against Facebook for defamatory content with global effect permissible under EU law, was closely scrutinized in the United States. These transsystemic debates are valuable but need to be conducted with a thorough understanding of the relevant legal framework and its internal logic. This note aims to provide the context to properly assess the role the ECJ and EU law play in the regulation of online speech. The note argues that the alleged shortcomings of the Glawischnig-Piesczek case are actually the result of a convincing interpretation of the applicable EU law while respecting the prerogatives of the member states in the areas of speech regulation, jurisdiction, and comity. Most of the issues that commentators wanted the ECJ to decide were beyond its reach in this case. The note argues that EU law’s contribution in the field of online speech regulation should be regarded as a realization of the dangers of illegal online content, resulting in an effective protection of the interests harmed. This implies the rejection of a “whack-a-mole” approach towards illegal online content in favor of more effective ways to protect against the harm caused by illegal online speech. At the same time, the case highlights the necessity to establish a workable theory of jurisdiction and comity in the digital age.


2021, pp. 97
Author(s): Robert Weber

This Article undertakes a critical examination of the unintended consequences for the legal system if we arrive at the futurist dream of a legal singularity—the moment when predictive, mass-data technologies evolve to create a perfectly predictable, algorithmically-expressed legal system bereft of all legal uncertainty. It argues that although the singularity would surely enhance the efficiency of the legal system in a narrow sense, it would also undermine the rule of law, a bedrock institution of any liberal legal order and a key source of the legal system’s legitimacy. It would do so by dissolving the normative content of the two core pillars of the rule of law: the predictability principle and the universality principle, each of which has traditionally been conceived as a bulwark against arbitrary government power. The futurists heralding the legal singularity privilege a weak-form predictability principle that emphasizes providing notice to legal subjects about the content of laws over a strong-form variant that also emphasizes the prevention of arbitrary governmental action. Hence, an inattentive and hurried embrace of predictive technologies in service of the (only weak-form) predictability principle will likely attenuate the rule of law’s connection to the deeper (strong-form) predictability principle. The legal singularity will also destabilize law’s universality principle, by reconceiving of legal subjects as aggregations of data points rather than as individual members of a polity. In so doing, it will undermine the universality principle’s premise that the differences among legal subjects are outweighed by what we—or, better still, “We the People” who are, as Blackstone put it, the “community in general”—have in common. A cautionary directive emerges from this analysis: that lawyers should avoid an uncritical embrace of predictive technologies in pursuit of a shrunken ideal of predictability that might ultimately require them to throw aside much of the normative ballast that has kept the liberal legal order stable and afloat.


2021, pp. 55
Author(s): David Nersessian, Ruben Mancha

The increasing prominence of artificial intelligence (AI) systems in daily life and the evolving capacity of these systems to process data and act without human input raise important legal and ethical concerns. This article identifies three primary AI actors in the value chain (innovators, providers, and users) and three primary types of AI (automation, augmentation, and autonomy). It then considers responsibility in AI innovation from two perspectives: (i) strict liability claims arising out of the development, commercialization, and use of products with built-in AI capabilities (designated herein as “AI artifacts”); and (ii) an original research study on the ethical practices of developers and managers creating AI systems and AI artifacts. The ethical perspective is important because, at the moment, the law is poised to fall behind technological reality—if it hasn’t already. Consideration of the liability issues in tandem with ethical perspectives yields a more nuanced assessment of the likely consequences and adverse impacts of AI innovation. Companies thinking about their own liability and ways to limit it, as well as policymakers considering AI regulation ex ante, should consider both legal and ethical strategies.


2021, pp. 377
Author(s): Marc Blitz

In Anarchy, State, and Utopia, the philosopher Robert Nozick describes what he calls an “Experience Machine.” In essence, it produces a form of virtual reality (VR). People can use it to immerse themselves in a custom-designed dream: They have the experience of climbing a mountain, reading a book, or conversing with a friend when they are actually lying isolated in a tank with electrodes feeding perceptions into their brain. Nozick describes the Experience Machine as part of a philosophical thought experiment—one designed to show that a valuable life consists of more than mental states, like those we receive in this machine. As Nozick says, “we want to do certain things, and not just have the experience of doing them.” An 80-year sequence of experiences generated by the machine would not be of equivalent value to the lifetime of the identical set of experiences we derive from interactions with real people (who are not illusions, but have minds of their own), and with a physical universe that lies outside of us. On the contrary, says Nozick, a solipsistic life in the Experience Machine is a deeply impoverished one.


2021, pp. 213
Author(s): Karni Chagal-Feferkorn

Self-learning algorithms are gradually dominating more and more aspects of our lives. They do so by performing tasks and reaching decisions that were once reserved exclusively for human beings. And not only that—in certain contexts, their decision-making performance is shown to be superior to that of humans. However, as superior as they may be, self-learning algorithms (also referred to as artificial intelligence (AI) systems, “smart robots,” or “autonomous machines”) can still cause damage. When determining the liability of a human tortfeasor causing damage, the applicable legal framework is generally that of negligence. To be found negligent, the tortfeasor must have acted in a manner not compliant with the standard of “the reasonable person.” Given the growing similarity of self-learning algorithms to humans in the nature of decisions they make and the type of damages they may cause (for example, a human driver and a driverless vehicle causing similar car accidents), several scholars have proposed the development of a “reasonable algorithm” standard, to be applied to self-learning systems. To date, however, academia has not attempted to address the practical question of how such a standard might be applied to algorithms, and what the content of analysis ought to be in order to achieve the goals behind tort law of promoting safety and victims’ compensation on the one hand, and achieving the right balance between these goals and encouraging the development of beneficial technologies on the other. This Article analyzes the “reasonableness” standard used in tort law in the context of the unique qualities, weaknesses, and strengths that algorithms possess relative to human actors and also examines whether the reasonableness standard is at all compatible with self-learning algorithms. Concluding that it generally is, the Article’s main contribution is its proposal of a concrete “reasonable algorithm” standard that could be practically applied by decisionmakers. This standard accounts for the differences between human and algorithmic decision-making. The “reasonable algorithm” standard also allows the application of the reasonableness standard to algorithms in a manner that promotes the aims of tort law while avoiding a dampening effect on the development and usage of new, beneficial technologies.


2021, pp. 263
Author(s): Gabriel Nicholas

Policymakers are faced with a vexing problem: how to increase competition in a tech sector dominated by a few giants. One answer proposed and adopted by regulators in the United States and abroad is to require large platforms to allow consumers to move their data from one platform to another, an approach known as data portability. Facebook, Google, Apple, and other major tech companies have enthusiastically supported data portability through their own technical and political initiatives. Today, data portability has taken hold as one of the go-to solutions to address the tech industry’s competition concerns. This Article argues that despite the regulatory and industry alliance around data portability, today’s public and private data portability efforts are unlikely to meaningfully improve competition. This is because current portability efforts focus solely on mitigating switching costs, ignoring other barriers to entry that may preclude new platforms from entering the market. The technical implementations of data portability encouraged by existing regulation—namely one-off exports and API interoperability—address switching costs but not the barriers of network effects, unique data access, and economies of scale. This Article proposes a new approach to better alleviate these other barriers, called collective portability, which would allow groups of users to coordinate to transfer data they share to a new platform, all at once. Although not a panacea, collective portability would provide a meaningful alternative to existing approaches while avoiding both the privacy/competitive utility trade-off of one-off exports and the hard-to-regulate power dynamics of APIs.

