1NC Round 4
A. The courts are managing increased caseloads now—slower filings and technology
are decreasing overload
Reuters 24 (Thomson Reuters Institute, engages professionals from the legal, corporate, tax & accounting, and government communities to host
conversation and debate, analyze trends, and provide the insights and guidance needed to help shape the way forward in an evolving and increasingly complex
environment. “State of the Courts Report 2024” Accessed 7/11/2024. https://www.thomsonreuters.com/en-us/posts/wp-content/uploads/sites/20/2024/02/2024-
State-of-the-Courts-Report.pdf) wtk
Beyond generative AI, our survey showed that judges and court professionals at all levels said they are still managing many of
the challenges that they had cited in our previous survey, conducted in 2022. These concerns were strongly reflective of how courts have had to navigate
through the post-pandemic environment, and included burdensome hearing delays, growing caseloads, and the glacial pace of modernization within the courts. Yet,
while these concerns were still reflected in our latest survey, many of these challenges have receded, making room in many
respondents’ minds for a clearer path toward more positive outcomes. For example, while increasing
caseloads continue to be the biggest change that respondents said they had experienced in the past two years, the portion of
our survey saying that has decreased. The portions of our survey citing increases in case delays and court backlogs have greatly diminished as well. And even though more than half of respondents (56%) said they expect to experience staffing
shortages in the coming 12 months, that was down from the past 12 months when almost two-thirds (64%) reported
staffing shortages. Overall, it seems that courts and their workers are enjoying broader engagement with technology
solutions, especially around such critical areas as evidence collection and storage, as digital storage and certain case-material sharing and management tools
are seeing more acceptance across the board.
B. The plan clogs the courts by encouraging applications that lack true inventive
concepts
Alex Moss, 24 - the executive director of the Public Interest Patent Law Institute. She was previously at the Electronic Frontier Foundation and a patent
litigator at Sullivan & Cromwell. After graduating from Stanford Law School, she served as a judicial clerk to the Honorable Timothy B. Dyk of the U.S. Court of
Appeals for the Federal Circuit. Letter to Senators Durbin, Coons, Tillis, and Representatives Issa and Lofgren on the Patent Eligibility Restoration Act of 2023, signed
by American Civil Liberties Union, Electronic Frontier Foundation, Generation Patient, Public Citizen, Public Interest Patent Law Institute, and the R Street Institute.
1/30, https://static1.squarespace.com/static/60e5457fb89be21d705fa914/t/65b9687cbb120871dd820938/1706649724600/Public+Interest+Letter+re+PERA+-
+01.30.24.pdf //DH
• Patent eligibility law is clear and works well. An ACLU study found that the Federal Circuit affirmed 89% of district court
decisions finding patents ineligible in the five years following the 2014 Alice decision.20 From 2013 through 2020,
decisions applying § 101 had an affirmance rate of 65% when appealed to the Federal Circuit and decided in
precedential opinions, higher than the circuit’s overall affirmance rate of 56%. 21 From 2014 through 2020, district
court and agency decisions addressing § 101 were affirmed at a higher rate than decisions under other
sections of the Patent Act (§§ 102, 103 or 112).22 Moreover, the Federal Circuit was asked to review nearly three times as many decisions relating to
§ 103 as decisions relating to § 101.23 Section 101 cases constituted only 4% of all appeals.24 Federal Circuit judges have explicitly
acknowledged that the test set forth by the Supreme Court for assessing subject matter eligibility is clear
and the Federal Circuit’s application of the test to individual cases is consistent. Indeed, Federal Circuit
Judge Alan D. Lourie emphasized in a concurring opinion accompanying the court’s per curiam Order in Athena Diagnostics, Inc. v. Mayo Collaborative
Services, LLC, “our cases are consistent.”25 He further recognized that in the context of diagnostic methods in particular, the distinction between
eligible and ineligible subject matter is “a clear line.”26 Despite dissenting in that case, Chief Judge Kimberly A. Moore acknowledged the clarity and certainty of
current patent eligibility jurisprudence.27 Some
Federal Circuit judges have expressed distaste for the Supreme Court’s
patent eligibility jurisprudence, often appearing to be motivated by sympathy for concerns expressed by patent owners.28 But these
statements generally reflect the judges’ policy views rather than any confusion regarding the proper
application of existing law. Such policy preferences have no bearing on the clarity of the current law or its
ability to produce predictable outcomes. Moreover, some judges have explicitly acknowledged the value of the Supreme Court’s clarification
of patent eligibility law. For example, Judge Haldane Robert Mayer, whose tenure on the Federal Circuit spans 34 years, asserted that “[b]efore the
Supreme Court stepped in to resuscitate section 101, a scourge of meritless infringement suits clogged
the courtrooms and exacted a heavy tax on scientific innovation and technological change.”29 Judge Mayer
called the current § 101 framework “an expeditious tool for weeding out patents clearly lacking any
‘inventive concept.’”30
C. Court clog causes hurdles to the success of litigation that is necessary to solve
climate change
Banda and Fulton 17 (Dr. Maria L. Banda is an international lawyer. She is currently the Graham Fellow at the University of Toronto Faculty of
Law, a member of the World Commission on Environmental Law, and a Visiting Attorney at the Environmental Law Institute. Scott Fulton is President of the
Environmental Law Institute and a member of the United Nations Environment Programme's International Advisory Council on Environmental Justice. He is a former
U.S. Environmental Protection Agency General Counsel, Environmental Appeals Judge, environmental prosecutor, and environmental diplomat. “Litigating Climate
Change in National Courts: Recent Trends and Developments in Global Climate Law” Environmental Law Reporter News & Analysis, Vol. 47, Issue 2 (February 2017),
pp. 10121-10134. Accessed 7/15/2024 via University of Michigan Online Library) wtk
First, national judges are by and large successfully adapting their traditional role of administration of justice to
the new challenges posed by climate change litigation. Presented with a number of novel and urgent legal questions, they have
increasingly held their own governments accountable under national constitutional principles, implementing legislation, or
common-law doctrines. Since 2013, as one court noted, courts have increasingly "recognized the role of the third
branch of government in protecting the earth's resources that it holds in trust."161 Thus, though climate
litigation will continue facing different substantive and procedural hurdles in different jurisdictions—from standing to the political question doctrine—courts have demonstrated that, generally, they are rising to the task and using the tools at
their disposal for administration of justice in this emerging area. This trend can be greatly enabled by making the emerging
cross-jurisdictional jurisprudence in this area more readily accessible to judges around the world. Likewise, climate literacy training aimed at building judicial
awareness of climate science, climate impacts, and climate resilience imperatives can help equip judges to perform their traditional role of administering justice in
the context of this global phenomenon. Thus far, constitutionally protected environmental rights have represented the most straightforward vehicle for climate
litigants in some jurisdictions (e.g., Norway), although litigation based on broader rights (e.g., to enforce fundamental constitutional rights to life and property) has
also provided a viable strategy in other jurisdictions (e.g., Pakistan). In still other jurisdictions, courts have relied on the language of environmental statutes and
common-law doctrines to mandate agency rulemaking or more effective implementation of climate policies (e.g., Foster). A number of lawsuits may not even present climate change as the core issue, but may have an indirect impact on climate mitigation or adaptation efforts by regulating conventional pollutants in a manner that may have climate co-benefits (e.g., Gbemre). [Footnote 161: Foster et al. v. Washington Dep't of Ecology, No. 14-2-25295-1 SEA, Order Denying Motion for Order of Contempt and Granting Sua Sponte Leave to File Amended Pleading (Wash. Super. Ct. Dec. 19, 2016) (citing Juliana and Urgenda).] The experience to date suggests
that courts are more willing to exercise an active role in guiding regulatory development where the statutory
framework has proven ineffective (e.g., Leghari), as well as where States' actions to avert climate harm are seen as out of step with national policy or international
commitments (e.g., Urgenda). Resort to international legal principles as a means of defining the scope of the State's legal obligation appears to occur most
frequently when the content of national law is ambiguous (e.g., Urgenda; Leghari), and in legal systems that permit a direct uptake of international law. Second,
while courts to date have been unwilling to impose civil liability on private entities, especially in the United States where the majority of these suits have been brought, emerging
science appears likely to feed additional litigation insofar as it helps to address some of
the causation and apportionment hurdles that have made these cases challenging. Also, additional and collateral avenues
for private-sector accountability may emerge. For example, following the announcement that several state attorneys general and the Securities and Exchange
Commission in the United States are investigating energy companies for allegedly misleading investors and the public about climate change, a securities fraud class
action was filed against an energy company relating to climate change and non-disclosure of climate-related risks. 162 Finally, while few national judges to date
have been called upon to adjudicate transnational climate claims, several recent cases, supported by emerging climate research, appear likely to lead to an increase
in cases of this kind. Overall, the
number of climate lawsuits is unquestionably on the rise, positioning the courts
for an increasingly vital role in ensuring climate-related accountability, enabling resiliency, and
contributing to a sustainable future.
D. Warming causes extinction.
Ng ’19 [Yew-Kwang; May 2019; Professor of Economics at Nanyang Technology University, Fellow of the Academy of Social Sciences in Australia and Member
of the Advisory Board at the Global Priorities Institute at Oxford University, Ph.D. in Economics from Sydney University; Global Policy, “Keynote: Global Extinction
and Animal Welfare: Two Priorities for Effective Altruism,” vol. 10, no. 2, p. 258-266] [edited for gendered language]
Catastrophic climate change Though by no means certain, CCC causing global extinction is possible due to interrelated
factors of non‐linearity, cascading effects, positive feedbacks, multiplicative factors, critical thresholds
and tipping points (e.g. Barnosky and Hadly, 2016; Belaia et al., 2017; Buldyrev et al., 2010; Grainger, 2017; Hansen and Sato, 2012; IPCC 2014; Kareiva
and Carranza, 2018; Osmond and Klausmeier, 2017; Rothman, 2017; Schuur et al., 2015; Sims and Finnoff, 2016; Van Aalst, 2006).7 A possibly imminent
tipping point could be in the form of ‘an abrupt ice sheet collapse [that] could cause a rapid sea level rise’ (Baum
et al., 2011, p. 399). There are many avenues for positive feedback in global warming, including: the replacement of an ice
sea by a liquid ocean surface from melting reduces the reflection and increases the absorption of sunlight, leading to
faster warming; the drying of forests from warming increases forest fires and the release of more carbon; and higher
ocean temperatures may lead to the release of methane trapped under the ocean floor, producing runaway global
warming. Though there are also avenues for negative feedback, the scientific consensus is for an overall net positive feedback (Roe and Baker, 2007). Thus, the
Global Challenges Foundation (2017, p. 25) concludes, ‘The world is currently completely unprepared to envisage, and even less
deal with, the consequences of CCC’. The threat of sea‐level rising from global warming is well known, but there are
also other likely and more imminent threats to the survivability of [hu]mankind and other living things. For example, Sherwood
and Huber (2010) emphasize the adaptability limit to climate change due to heat stress from high environmental wet‐bulb
temperature. They show that ‘even modest global warming could … expose large fractions of the [world] population
to unprecedented heat stress’ (p. 9552) and that with substantial global warming, ‘the area of land rendered uninhabitable by heat stress would dwarf that affected by rising sea level’ (p. 9555), making extinction much more likely and
the relatively moderate damages estimated by most integrated assessment models unreliably low.
1NC AI Industry DA
The next off-case is the AI Industry DA.
A. AI innovation is accelerating now—patents, adoption, and market growth are all rising
Artificial intelligence has become big business — and the pace of innovation is only picking up. According to
Deutsche Bank, 175,072 AI patents were filed between 2012 and 2022, with more than half of them coming in those final three years. The bank
anticipates a dramatic spike this year and next in companies adopting AI applications, especially in such fields as product
development, sales, marketing and human resources. Legal firms now use AI to generate contracts; travel companies rely on chatbots to provide help during the
booking process. Already,
the global AI market is worth roughly U.S.$136.6 billion, and it’s on track to reach U.S.$1.3 trillion by 2032. Patents for AI innovations, as seen in the figure below, are being filed in many different
sectors. From 2022 to 2030, AI use by organizations around the world is expected to expand at a compound annual growth rate of more than 38 percent. It’s
clear that AI adoption is climbing at a breakneck rate. Experts predict that as computational power
grows exponentially, the capabilities of these AI applications — in reasoning, in accuracy, in specialization and in personalization
— will skyrocket. At the same time, regulations and policy can take much longer to develop. The European Union spent three years drafting its 125-page law
to regulate artificial intelligence, introduced in April 2021. But none of those 125 pages mentioned generative AI, the breakthrough that powers applications like
ChatGPT and that blindsided lawmakers. While regulators work to catch up, business leaders need to take their own steps to ensure that the technology being
developed and used today doesn’t have harmful consequences. Policy-makers are having to play catch up. For instance, a bipartisan group of U.S. House
representatives proposed new legislation in January to regulate the use of AI to create clones or likenesses of artists. As the technology develops, it’s important for
business leaders and policy-makers to ensure AI is used in the service of society.
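The card's market figures imply a concrete growth rate that can be checked with the standard compound-annual-growth-rate formula. A minimal sketch, assuming a roughly nine-year horizon to 2032 (the card does not state its baseline year, so that horizon is an assumption):

```python
# Implied CAGR for the AI market figures cited above.
# Assumption (not stated in the card): ~9 years from baseline to 2032.
start_value = 136.6e9  # U.S.$136.6 billion today
end_value = 1.3e12     # U.S.$1.3 trillion projected by 2032
years = 9              # assumed horizon

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied market CAGR: {cagr:.1%}")  # ~28.4% per year
```

The separately cited 38 percent figure describes growth in organizational AI use, a different metric, so the two numbers need not match.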
B. The plan decks AI innovation—it increases costs, complexity, and barriers to entry
Brough and Nazeri 23 (Wayne T. Brough, Policy Director of Technology and Innovation at the The R Street Institute. Ahmad Nazeri, the R Street
Institute, “Artificial Intelligence and Copyright: Notice and Request for Public Comment” public comments before the U.S. Copyright Office. 10/30/2023. Accessed
5/25/2024 from https://www.regulations.gov/comment/COLC-2023-0006-8302) wtk
Introducing a licensing requirement for the development and adoption of generative AI systems would
have profound economic implications. •Barrier to Entry: Given the vast number of works an AI training
dataset might need to use—and the fact that thousands or millions of individuals might own those works—
obtaining licenses for all underlying content becomes a significant challenge. This could act as a barrier
to entry for smaller companies or startups that lack the resources to negotiate and secure such licenses. •Increased Costs: The
process of identifying, negotiating and securing licenses for every individual piece of content in a dataset
would be resource-intensive. These increased costs could be passed on to consumers or could deter
companies from pursuing certain AI-driven projects altogether. •Stifling Innovation: The sheer
complexity and cost associated with obtaining licenses might discourage innovation. Companies might
opt for safer, less ambitious projects to avoid potential copyright pitfalls, thereby limiting the advancement
of AI technologies. •Monopoly Concerns: Only large entities, like tech giants, that have the resources to
navigate the licensing landscape or have already amassed vast amounts of data might be able to compete effectively in
the AI space. This could lead to a monopolistic environment where only a few players dominate, thereby
reducing competition and potentially stifling innovation. •Economic Incentives for Litigation: Given the structure of copyright
remedies, even small-value infringements can lead to lawsuits due to the potential for statutory damages. This could encourage opportunistic lawsuits, further
increasing costs for AI developers. •Potential Negative Outcomes: While broader access to data can help mitigate some of the negative outcomes associated with AI
(e.g., biases), restricting access through licensing could exacerbate these issues. For instance, limited data access might hinder the ability of AI systems to be trained
on diverse datasets and lead to biased outcomes. •Impact on Broader Economy: The ripple effects of these challenges could extend
beyond the AI industry. Reduced innovation in AI could slow advancements in sectors across the
economy that rely on AI, such as health care, finance and transportation.
A threat even more dire than misinformation is the “risk of extinction from AI” that the Center for AI Safety highlights in its open statement. Yet, in
terms of
whether machines or humans are more likely to initiate extinction-level events such as nuclear war,
humans still seem to have the upper hand. In recent empirical work that analyzes the decision processes employed by senior
leaders in war-game scenarios involving weapons of mass destruction, humans showed an alarming tendency to err on
the side of initiating catastrophic attacks.Footnote5 These simulations, if implemented in reality, would
pose much graver risks to humanity than machine-driven ones. Our exploration of the use of AI in critical
decision-making has shown AI’s superiority to human decisions in nearly all scenarios. In most cases, the AI
makes the choice that humans do not make at first—but then, upon more careful consideration and deliberation, humans change their minds and make the same choice, realizing it was the correct decision all along. Other, more quotidian concerns raised about AI apply far more to human beings than to
machines. Consider algorithmic bias, the phenomenon whereby algorithms involved in hiring decisions, medical diagnoses, or image detection produce
outcomes that unfairly disadvantage a particular social group. For example, when Amazon implemented an algorithmic recruiting tool to score new applicants’
resumes, the algorithm systematically rated female applicants worse than men, in large part because the algorithm was trained on resumes submitted over the
previous 10 years that were disproportionately male.Footnote6 In other words, an
algorithm trained on human bias will reproduce
this bias. Unlike humans, however, algorithmic bias can be readily deprogrammed, or as economist Sendhil Mullainathan puts
it, “Biased algorithms are easier to fix than biased people.”Footnote7 Mullainathan and colleagues’ research showed that an algorithm used by UnitedHealth to score patients’ health risks systematically under-scored black patients relative to white patients because it measured illness in
terms of health-care costs (which are systematically lower for black versus white individuals, given that society spends less on black patients) (Obermeyer et al.
Citation2019). However, once identified, the researchers could easily modify this feature of the algorithm to produce risk scores that were relatively unbiased. Other
work has shown that algorithms can produce less racially biased outcomes (and more effective public safety outcomes) than human judges in terms of decisions
about whether or not to grant bail to defendants awaiting trial (Kleinberg et al. Citation2018). As biased as algorithms can be, their biases appear less ingrained and
more pliable than those of humans. Compounded by recent work showing that, in hiring and lending contexts, managers reject biased algorithms in favor of more
biased humans, the suggestion that humans should remain at the helm of those functions is, at best, questionable (Cowgill, Dell’acqua, and Matz Citation2020).
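The Obermeyer finding described above turns on label choice, and the fix the researchers made was mechanical. A deliberately simplified sketch of that mechanism, with invented numbers (nothing below is taken from the study itself): a system that ranks patients by historical cost will rank an equally sick patient lower whenever spending on that patient's group has been systematically lower, while ranking by measured illness removes the distortion.

```python
# Hypothetical illustration of label-choice bias; all values are invented.
# Two equally sick patients; historical spending on B's group is lower.
patients = [
    # (id, measured_illness, historical_cost_usd)
    ("A", 8.0, 12000),  # group with historically higher spending
    ("B", 8.0, 7000),   # equally sick, historically lower spending
]

# Proxy label (cost): B appears lower-risk than A, despite equal illness.
by_cost = sorted(patients, key=lambda p: p[2], reverse=True)
print("Ranked by cost:   ", [p[0] for p in by_cost])  # A ranked above B

# Corrected label (illness): A and B tie, as they should.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)
print("Ranked by illness:", [p[0] for p in by_need])  # A and B tie
```

Swapping the training label, rather than retraining people, is what makes algorithmic bias "easier to fix than biased people."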
Finally, consider the threat to cybersecurity. Although commentators have warnedFootnote8,Footnote9,Footnote10 that large language models add tools to the
arsenals of hackers by democratizing cybercrime, most high-profile information leaks and hacks to date are ushered in by
human beings with no reliance on AI (e.g., a disgruntled employee who knows the system's flaws and perpetrates an attack by remembering key
passwords, or bad programmers who effectively enable future attacks by making wrong assumptions on their software use-cases—such as “no one would create a
password that is 1,000,000 characters long” leading to a classical buffer overflow hack). In fact, AI
is often the last bastion of defense
against those hacks, identifying complex human coding mistakes early-on and correcting them. Recently,
national guardsman Jack Teixeira, who exposed highly classified material in an online chat group, did not require sophisticated technology to access sensitive
documents—he was granted top secret clearance from the Pentagon. Further, a recent study conducted by IBM indicates that 95 percent of security breaches were
caused by human errors such as biting on phishing scams or downloading malware.Footnote11 If anything, the most concerning cybersecurity risk currently posed
by AI results from its increased reliance on human trained code, which is flawed. AI takes hackable human codes and uses them to generate new codes, spreading
these human-generated errors further. The only concerning current cybersecurity attacks by AI involve AI that simulates human communication to dupe humans
into revealing key information. Cybersecurity may represent a case in which technology is more likely to be the solution rather than the problem, with research
indicating, for example, that humans working with AI outperform humans alone in detecting machine-manipulated media such as deepfakes (Groh et al.
Citation2021). Even
when technology contributes to unwanted outcomes, humans are often the ones
pressing the buttons. Consider the effect of AI on unemployment. The Future of Life Institute letter raises concerns that AI will eliminate jobs, yet
whether or not to eliminate jobs is a choice that humans ultimately make. Just because AI can perform the jobs of, say, customer service representatives does not
mean that companies should outsource these jobs to bots. In fact, research indicates that many customers would prefer to talk to a human than to a bot, even if it
means waiting in a queue.Footnote12 Along similar lines, increasingly common statements that AI-based systems—like “the Internet,” “social media,” or the set of
interconnected online functions referred to as “The Algorithm”—are destroying mental health,Footnote13 causing political polarization,Footnote14 or threatening
democracyFootnote15 neglect an obvious fact: These systems are populated and run by human beings. Blaming technology lets people off the hook. Although
expressions of concern toward AI are invaluable in matching the excitement around new technology with caution, outsized news cycles around the threats of
technology can distract from the threats of human beings. Recent research indicates that humans have a “finite pool of attention” such that “when we pay more
attention to one threat, our attention to other threats decreases” (Sisco et al. Citation2023). So, as we contend with the rise of AI and its concomitant harms to
privacy, human survival, and our relationship with truth itself, we must equally pay attention to the humans who are already well equipped to perpetrate these
harms without the assistance of machines. Specifically, it
has not escaped our notice that when engaging in a conversation
about the risks of AI, the benchmark is often “is AI perfect in handling this task” (making critical decisions or guiding a
self-driving car), rather than “is it better than humans.” The answer to the latter question, in many cases, is yes.
A better question might be: who cares? The number of shoes is not a good indicator of national power. In fact, no
single technology is a
good indicator of national power. The U.S. economy is vast, decentralized, continental in size, and is guided by
actively competitive markets. It has been exceptionally innovative for decades. Leading in a single technology (like
railroads in the 19th century or semiconductors today) reflects a common analytical error that misjudges how economies and technology
actually create national power. The concept of a “race” itself is a questionable legacy of Cold War thinking – the Cold War had a finish line
(identified by Eisenhower and Dulles at the onset), while the current situation does not. Stories
about the United States falling
behind are so predictable that they form a literary genre. In 1957, the President’s Science Advisor predicted that Soviet
performance in math and science education would give it global leadership in a decade. In 1969, the Departments of Treasury, Commerce and
Agriculture warned President Nixon that a powerful new economic entity, the European Union, would displace the United States. Starting the
1980s, assorted pundits announced that Japan would dominate the global economy. And until recently, there were routine
predictions that China would displace the United States, predictions that still make regular appearances. These
predictions have two things in common. First, they were wrong. Second, they were wrong because they
counted the wrong things. They did not place their analyses in the context of larger national economies. Instead, they relied on
picking illustrative metrics, usually proxy indicators that provide an indirect measurement of technological
success. One recurring problem is the tendency to measure inputs rather than outcomes. A politician may have great staff and spend more
on an election, but the ultimate metric is how many votes are received. Claims that the United States is falling behind China in 5G because China has deployed more base stations or has more 5G-enabled phones reflect a similar
confusion over metrics. It is not the number of base stations that is important, it is the ability to use 5G to create new goods and
services (or be more efficient in the use of existing goods and services) that is important, and this is best measured by the monetization of new
5G enabled services and products and their revenue. A report that announces that China leads in 37 technologies out
of 44 technologies does not explain why the United States is the center for development of artificial
intelligence, quantum technology, and biotechnology. China did not develop successful COVID vaccines,
lags in quantum, and there are anecdotes that China’s leaders asked the author of a best-selling 2018 book on China’s coming
dominance of artificial intelligence why, if that was the case, GPT technologies were developed first in the United States. Of the
37 technologies listed, China’s alleged lead is open to question in 23. Does China really lead in cybersecurity, as the Report
asserts? The digital economy depends on cloud computing, next generation networks (like 5G and 6G), and
software (like AI products) and these are technologies where the United States has a strong if not dominant position.
Nor does technological “leadership” guarantee military advantage. Other factors determine military
success, the most important being political will, leadership, and strategy. An opponent that has an
advantage in these areas will be able to resist and thwart a more technologically advanced opponent (the
Taliban, for example). Advanced technology in the service of flawed strategy will not change outcomes. There is an
assumption that technology provides an advantage and, in a contest, where other factors are equal, technological leadership can be critical, but in most
situations this technological advantage is only one factor among many in determining effectiveness. Specific quantitative
measures may not actually measure what we want to assess. It may be better to ask what nations want (wealth, international influence, military power) to
determine the contribution of a basket of technologies.
3. No uncertainty - The Alice framework is consistently interpreted by the courts and
USPTO
Nikola L. Datzov, 23 – Assistant Professor of Law, University of North Dakota School of Law. “THE ROLE OF PATENT (IN)ELIGIBILITY IN PROMOTING
ARTIFICIAL INTELLIGENCE INNOVATION” 92 UMKC L. Rev. 1 *, Nexis Uni, accessed via University of Michigan //DH
As the Supreme Court is the only court that can overturn its precedent, absent intervention from Congress, the Alice framework will continue
to govern the analysis of whether an invention is patent eligible. However, lower courts led by the Federal Circuit have and will
continue to refine the boundaries and application of the Alice framework. The USPTO also has taken steps to provide clarity in this
area of law by promulgating iterative guidance to those seeking a patent, including example AI claims
that are patent eligible.84 Importantly, such guidance, while helpful to practitioners and examiners, comes with an important disclaimer, as it is not "the
law of patent eligibility, does not carry the force of law, and is not binding on" the courts.85 Despite the criticisms, and the need for further
clarity regarding the application of the Alice framework, some evidence suggests that the law is much more
predictably applied now86 (and perhaps always has been),87 including with regard to AI patents. Indeed, an empirical examination that a co-author and I conducted of all post-Mayo Federal Circuit decisions on § 101 found that the lower tribunal
correctly decided the eligibility issue nearly 90% of the time.88 Additionally, perhaps somewhat surprisingly, the affirmance
rate on § 101 decisions has remained incredibly [*14] consistent since 2014 never falling below 81%.89
Moreover, looking more closely to determine whether lower tribunals were reaching the correct outcome but still misapplying the law, the Federal Circuit
noted an error in the lower court's analysis in a very small percentage of decisions in which the overall result was
correctly decided.90 Thus, it appears that the district courts and USPTO are not only reaching the correct
outcomes in nearly all cases, they are also applying the law correctly.
Italics in original.
6. How did Alice/Mayo impact patent litigation and how would PERA impact patent litigation? The U.S. Supreme Court’s decision in
Alice Corp. v. CLS Bank Int’l, 573 U.S. 208 (2014), has had a tremendously beneficial effect on patent litigation in the
United States. During the decade and a half before Alice, virtually any type of subject matter was patentable under the Federal Circuit’s misguided decision
in State Street Bank & Trust Co. v. Signature Financial Grp., Inc., 149 F.3d 1368 (Fed. Cir. 1998). Alice overruled State Street and returned the patent system to its
roots. Under Alice, patents once again can only be obtained for improvements to technology. Thus under Alice,
patents can no longer be obtained or enforced for business methods and other human activities that
have traditionally been outside the patent system. Alice has been robustly applied over the last decade
to block patents for ideas such as running a type of business; for advertising and sales strategies; for
investment schemes, ways of structuring a transaction, or financial instruments; and for copyrighted
media content, games, and other forms of entertainment. Alice has also restored the long-standing
requirement, first articulated by the U.S. Supreme Court in Corning v. Burden, 56 U.S. 252 (1853), and O’Reilly v. Morse, 56 U.S. 62 (1853), that a
patent must claim an actual means or method for achieving a result. Under this well-settled rule, it is improper for
a patent to simply claim the result or objective itself, and thereby claim all possible means of achieving
that objective (including those invented by others in the future). This rule prohibits, for example, software patents that end with a claim to “a module that
solves for X”—and that can then be asserted against anyone who does the real work of figuring out the solution to problem X. PERA would destroy all
the progress made since the Alice decision. Under PERA, any type of subject matter—business methods, advertising
techniques, legal contracts, games and entertainment, or claims to mere objectives or results— could be claimed in a patent so long as the
invention cannot be “practically performed” without the use of a “machine or manufacture.” As witnesses
testifying in support of PERA made clear at the January 23 Senate IP Subcommittee hearing, the “practically performed” test would mean that any process that
requires communicating across long distances in “real time,” or that requires storage and retrieval of large amounts of data—and thus practically requires the use of
computers and electronic communications equipment—would become patent eligible. Under PERA, a patent would no longer need to describe and claim an
improvement to technology. Rather, to be eligible, the patent need only use technology—including off-the-shelf technology that was developed by others. PERA
would also effectively overrule hundreds of decisions of the U.S. Court of Appeals for the Federal Circuit
that have issued over the last ten years. It is difficult to fathom just how destabilizing PERA would be for
the U.S. patent system and for the American manufacturing and technology sectors. The result would be
enormous uncertainty for a broad spectrum of U.S. industries and billions of dollars in additional (and
unnecessary) litigation costs, most of which would be passed along to American consumers in the form of higher prices.
5. Plan can’t solve - Chinese IP theft prevents winning the tech race
Derek Scissors, 23 - Senior fellow, American Enterprise Institute “Oversight of US Investment Into China” Written statement for The House Select
Committee on the Chinese Communist Party On Ensuring U.S. Leadership in the Critical and Emerging Technologies of the 21st Century, 7/26,
https://selectcommitteeontheccp.house.gov/sites/evo-subsites/selectcommitteeontheccp.house.gov/files/evo-media-document/scc-expert-written-
testimony_20230726_derek-scissors.pdf //DH
While we claim to be competing, there are two giant holes through which America helps China develop technologies
that are potentially vital to winning the competition. The first is our failure to respond to theft of intellectual
property (IP). Even if claims of trillions in IP losses are exaggerated, the amount is certainly in the hundreds of billions. American
retaliation against PRC beneficiaries of coerced or stolen IP is nearly non-existent, guaranteeing more theft.13
Outbound investment is the second. The US attempts to protect technology embedded in goods sales through the export control regime,
though the regime has its own problems. For inbound investment, the Committee on Foreign Investment in the United States (CFIUS) has a solid track record in
preventing loss of sensitive technology through acquisition here. Yet the
same technologies that are controlled in exports or
shielded off from acquisition can be developed in China, with American funding. Consider a Chinese entity bidding for
an American company developing a useful technology. It is likely to be rebuffed by CFIUS. Then it tries to buy the company’s products to reverse-engineer them. There’s at
least a chance this is prevented by export controls. Yet the
Chinese entity, often backed by Chinese government guarantees which make investment much
more appealing, can freely solicit American money to develop the technology in the PRC. There’s no barrier to
this at all – CFIUS has no jurisdiction and export controls (contrary to some claims) do not apply. And it’s actually the worst outcome, since we are
helping China become more self-sufficient.
Innovation Answers
1. Innovation is high - Investment increased since Alice
David Jones, 2024 – executive director of the High Tech Inventors Alliance. Statement Before the Subcommittee on Intellectual Property U.S. Senate
Committee on the Judiciary, Hearing on “The Patent Eligibility Restoration Act – Restoring Clarity, Certainty, and Predictability to the U.S. Patent System” 1/24,
https://www.judiciary.senate.gov/imo/media/doc/2024-01-23_-_testimony_-_jones.pdf //DH
Moreover, the benefits of the Alice decision are not limited to the relative increase in predictability. Other
empirical studies have, for example, separately concluded that the Alice decision directly resulted in increased
R&D investment,15 was correlated with increased sales by software firms,16 that “Alice was associated with a significantly
higher likelihood of receiving a new round of VC funding” for tech startups,17 and there was “a positive
association between Alice and both R&D spending by software firms and patenting by firms that held
relatively more software patents prior to the Court’s opinion.”18 In sum, the current patent eligibility
jurisprudence is neither unpredictable in its outcomes nor harmful to venture capital or R&D investments.
Rather, it is Alice and its progeny that encourages and protects technological innovation.
2. Alice turn - Alice blocked 'do it on a computer' patents and put a dent in patent trolling
A recently introduced patent bill would authorize patents on abstract ideas just for including computer jargon, and would even legalize the patenting of human
genes. The “Patent Eligibility Restoration Act,” sponsored by Sen. Thom Tillis (R-NC), explicitly overrides some of the most important Supreme
Court decisions of the past 15 years, and would tear down some of the public’s only protections from the worst patent abuses.
Pro-patent maximalists are trying to label the Tillis bill as a “consensus,” but it’s nothing of the sort. We need EFF supporters to send a message to Congress that it
isn’t acceptable to allow patent trolls, or large patent-holders, to hold our technology hostage. We Don’t Need 'Do it on a Computer' Patents Starting in
the
late 1990s, the U.S. Court of Appeals for the Federal Circuit essentially did away with any serious limits on what could
be patented. This court, the top patent appeals court in the U.S., allowed patents on anything that produced a “useful result,” even when that result was just
a number. This allowed for a period of more than a decade during which the U.S. Patent Office issued, and the
courts enforced, all kinds of ridiculous patents. Several Supreme Court decisions eventually limited the power of
bad patents. Most importantly, the Supreme Court’s 2014 Alice Corp. v. CLS Bank decision made a clear rule—just adding “on a
computer” to an abstract idea isn’t enough to make it patentable. The Alice Corp. decision was not a panacea. It did not eliminate the serious
problem of patent trolls—that is, companies that have no products or services, but simply sue and threaten others over patents. But it did put
a big dent in the patent trolling business. Vaguely worded software patents can still be used to extort
money from software developers and small businesses. But when those patents can be challenged in
court, they now rarely survive.
3. Patent thickets turn - Expanding eligibility increases patent thickets that decrease
innovation
Wayne Brough, 24 - policy director for R Street’s Technology and Innovation team. Prior to R Street, Wayne was the president of the Innovation Defense
Foundation. PhD in economics from George Mason University “Congress Wants to Revive Patents but May Strangle Innovation and Damage Health Care Access
Instead” R Street Institute (a public policy think tank), 4/3, https://www.rstreet.org/commentary/congress-wants-to-revive-patents-but-may-strangle-innovation-
and-damage-health-care-access-instead/ //DH
Expanding Patent Eligibility Limits Innovation and Competition Supporters claim that patent eligibility practices in the wake of the aforementioned Supreme Court
decisions hinder innovation and investment in critical technologies like AI, personalized medicine, and diagnostic testing. On the other hand, critics assert that the
PERA would restore an earlier patent regime that granted patent protections too broadly, making it
difficult for true innovators to navigate complex and far-reaching patents that limit entry into the market. “Patent
thickets” became prevalent in the late 1990s and early 2000s, particularly among pharmaceutical companies seeking to extend
their exclusivity over blockbuster drugs in order to keep generic competitors at bay. This term refers to the practice of creating
impenetrable walls of patents on different aspects of the same drug—from the primary patent covering the compound itself
to secondary patents on characteristics like method of use, formulation, and other aspects of drug delivery—thereby creating a dense web, or
“thicket,” of patents surrounding a single product. For example, I-MAK found that America’s top 10 best-selling drugs possess an
average of 74 granted patents. These patent thickets extend the monopoly protection enjoyed by drug manufacturers well beyond the original patent while keeping
low-cost generics out of the market. Another
tactic for extending monopoly protection of brand-name drugs is
“evergreening,” or adding new patents to a drug as existing ones expire. By pursuing patents on different aspects of a drug,
such as packaging, dosage, or other properties, it is possible to extend its dominance in the marketplace. While some changes may be warranted, others may simply
protect exclusivity. This makes it important to evaluate a patent’s validity—something proposed changes within the PERA would make more difficult. “Product
hopping” is yet another way to game the patent system. Here, a company produces a reformulated
version of a drug—by changing the dosage or introducing an extended-release version, for instance—and
markets the new product heavily while urging prescribers to switch. They may even take the old product off the market. In
one example, as the patent on its popular anti-ulcer drug Prilosec expired, AstraZeneca switched to Nexium, a drug with minimal modifications and an additional 13-
year patent life. This product hop was estimated to cost the U.S. health care system over $2 billion annually. Expanding
patent eligibility to
allow companies to obtain patents on abstract ideas, natural phenomena, or even basic computing
functions would significantly extend the monopolies of patent owners, stifling innovation and
competition by giving patent holders greater authority to challenge new entrants for infringement as
they try to access the market. It would abuse the tools of government to slow down competition and
stifle innovation.
4. No chemical pollution impact – it’s regional not global so it won’t cause extinction
Dr Jennifer Bernstein, 2024 - is Editor-in-Chief of the academic journal, Case Studies in the Environment, Senior Fellow Liaison at the Breakthrough
Institute, and adjunct faculty at Tarleton State University. “Climate in crisis: The deceptive call of the apocalypse” The Academic, 5/2,
https://theacademic.com/climate-crisis-deceptive-call-of-apocalypse/ //DH
“Planetary boundaries” are problematic Much of the apocalyptic rhetoric is based on the idea that quantifiable limits, once
breached, signal the end of life on this planet. Johan Rockström and colleagues coined the term in 2009. It defines nine
biophysical factors—land-use change, biodiversity loss, nitrogen and phosphorus levels, freshwater use, ocean acidification,
climate change, ozone depletion, aerosol loading, and chemical pollution—which are “hard boundaries”. The appeal of the Planetary
Boundaries hypothesis lies in its clarity and clear directives. But like so many scientific theories, the concept of planetary boundaries does
not emerge objectively from science; rather, it is the product of a number of methodological—and sometimes
prescriptive—decisions made by very human scientists at a particular place and time. Rockström clearly stated that the Planetary
Boundary hypothesis was a normative project. But science is infused with problematic assumptions. For one, planetary
boundaries are determined using the global scale. Assuredly, one of the nine factors—climate—is global in scope.
However, the other eight factors vary at the local and regional scale. Some, like biodiversity, have no global
tipping point at all. Despite appearing as hard science, the Planetary Boundaries hypothesis is part of science and
advocacy.
5. Synthetic biology innovation fails – innovation is incremental and won’t be scaled
up for remediation
Andrew D. Hanson and Víctor de Lorenzo, 2023 - *Horticultural Sciences Department, University of Florida and **Systems Biology
Department, Centro Nacional de Biotecnología-CSIC, Campus de Cantoblanco, Madrid “Synthetic Biology—High Time to Deliver?” ACS Synth Biol. 2023 Jun 16; 12(6):
1579–1582. //DH
Like any advance, SynBio evokes sincere evangelical zeal in its practitioners, which is good in itself and helps spread SynBio ideas
and technology throughout the public and private sectors.16 Informed, data-driven forecasting of what SynBio advances could enable is likewise good and helpful.
Diffusing advances and faithfully predicting their impacts fuel hope, a virtue we cannot afford to lose in facing what has been called a global polycrisis.17 Like any
crisis, though, this one creates opportunities for merchants of false hope, or hype. Hype flourishes when
disoriented, poorly informed populations want to believe in easy solutions to difficult problems; SynBio
can be–and sometimes is–pushed in this way as a techno-fix in areas such as atmospheric CO2 drawdown,18
nitrogen pollution in agriculture,19 and green jet fuel production.20 It is not that SynBio cannot contribute to
these areas; it can. The problem is one of claims for the scale and timeline of the contribution that do
not follow the engineering practice of running the numbers to get rough but robust estimates of how
much a SynBio-based project could possibly do and how fast,21 and then sticking to these estimates when advocating and
publicizing the project. A habitual disregard of what is involved in scale-up is an Achilles’ heel of the whole field. It is
one thing to have a smart genetic construct showing a spectacular phenotype in a Petri dish or small bioreactor
for a short period of time but a very different thing to have the same at an industrial or even global scale10 and
following rigorous implementation standards. Going from one dimension to the other makes the whole difference
between a merely intellectual exercise and a truly transformative development.22 Alas, the conceptual excitement occasioned by SynBio has thus far received more support than scale-up technologies, which are unfairly considered a lower-rank endeavor.23 The solution is simply to learn from and follow sound engineering practices and traditions. Not doing so, in the long term, only
weakens public trust in bioscience and biotechnology.
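For readers who want to see what the authors' "running the numbers" discipline looks like in practice, here is a hedged, order-of-magnitude sketch for a CO2-drawdown claim. Every figure is an invented round number for illustration (not an estimate from the article), except the rough ~37 Gt/yr global CO2 emissions figure, which is widely reported:

```python
# Hypothetical order-of-magnitude check for a SynBio CO2-drawdown claim.
# All facility figures are assumed round numbers for illustration only.

reactor_volume_liters = 1e6        # one very large 1,000 m^3 bioreactor
co2_fixed_g_per_l_per_day = 1.0    # assumed volumetric fixation rate
days_per_year = 330                # assumed uptime

tons_per_year = (reactor_volume_liters * co2_fixed_g_per_l_per_day
                 * days_per_year) / 1e6  # grams -> metric tons

global_emissions_tons = 37e9       # ~37 Gt CO2/yr, rough public figure

fraction = tons_per_year / global_emissions_tons
print(f"One reactor: {tons_per_year:,.0f} t CO2/yr "
      f"({fraction:.2e} of global emissions)")
# ~330 t/yr, roughly 1e-8 of annual emissions: millions of such reactors
# would be needed, which is the scale-up gap the authors warn about.
```

Even generous assumptions leave a single large reactor around eight orders of magnitude short of global emissions, which is exactly the Petri-dish-to-planet gap the card describes.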