
AI Policy Already Exists, We Just Don’t Call It That: Generally Applicable Law and New Technology

by Jennifer Huddleston and Christopher Gardner

October 13, 2025


Recent debates around a potential moratorium on state-level artificial intelligence (AI) laws have raised questions about what might happen without federal action on AI. Opponents of the moratorium often worry about a gap between such a moratorium and the establishment of a federal AI framework. However, many concerns about discrimination, fraud, or other abuses are already addressed by existing legal frameworks, which means the legal and regulatory environment in which a new technology like AI operates is far from the Wild West, particularly in industries that are already regulated. How might generally applicable law play out when it comes to AI, and what does this mean both for laws that might be preempted and for concerns that might already be resolved?

Generally Applicable Law in the AI Context

The US has been a culture of fast-paced innovation and technological advancement since its founding, so it is only natural that our existing common law-focused legal system has, in many ways, been able to adapt to quickly changing technologies without new regulatory bodies or laws. This inherent adaptability stems primarily from generally applicable laws that focus on particular harms rather than regulating a specific technology.

In the context of new and developing general-purpose technologies, such as AI, a generally applicable law should not unfairly favor one form of technology over another. Such an approach focuses on the bad actor or the harm rather than on the technology used. This is typically achieved through legislation that reflects the functions and values citizens are expected to embody in their interactions with one another. This approach is also advantageous from a legislative or regulatory perspective: because law is static while technology and innovation are dynamic, rules aimed at harms remain relevant even as the underlying technology changes.

Trust in these existing, generally applicable guardrails stems from a recognition we all need to make: AI is neither the first nor the last new technology to which society and our existing governance will adapt.

Current Examples of Generally Applicable Law in a US Federal Context

The role of the US as a global leader in technological innovation has been made possible by the light-touch regulation associated with the reliance on generally applicable laws. This historical precedent is particularly important to keep in mind given that the full range of potential use cases for generative AI is not yet understood. Generative AI has been around for only a few years, but it has already affected nearly every aspect of our lives. Its role as an evolutionary technology that augments the productivity of our work means that even industry-specific laws can function as generally applicable laws when it comes to AI.

A prime example of this is FINRA's (Financial Industry Regulatory Authority) Regulatory Notice 24-09. This notice did not introduce any new regulatory obligations; it simply referred member firms back to the tech-neutral, industry-specific regulations that already govern their behavior in a highly competitive environment.

Other agencies have also stated their intention either to reconsider problematic, generally applicable laws that can deter innovation or to use their existing standards and regulations to resolve these concerns. For example, in 2023, Rohit Chopra, then director of the Consumer Financial Protection Bureau, said, "There is no AI exemption to the laws on the books." In this regard, existing laws likely already address many high-risk concerns, such as those around the use of AI in the financial services sector, because using AI does not absolve the deployer of wrongdoing.

This principle is not limited to precise regulations and requirements, such as those enforced by the CFPB. Existing standards around professional conduct and common law may also already cover many of the underlying concerns about AI. For example, courts have responded with sanctions or other professional misconduct measures when attorneys have filed briefs or other documents containing AI "hallucinations," citations that turn out to be erroneous or nonexistent. Such actions allow professional norms to continue to be enforced without specific changes prompted by the emergence of a new technology.

Generally Applicable Law in the Face of a Potential AI Moratorium

In 2025, Congress considered, but initially rejected, a potential moratorium on state-level AI regulation as part of the “One Big Beautiful Bill.” However, in September, Senator Ted Cruz (R‑TX) introduced a new state AI policy moratorium as separate legislation.

One grievance expressed by critics of a moratorium is that it would prevent states from trying to protect their citizens from the potential harms of AI, such as discriminatory applications or risks to child safety. However, under the moratorium proposals, a law is preempted only if its "primary purpose" is to regulate AI, which would allow states to continue to pass generally applicable laws that focus on the potential harm.

Generally applicable laws would allow states to respond to concerns such as data privacy, fraud, or discrimination, provided that such laws are applied in a technologically neutral manner rather than only to AI or its applications. One potential improvement to the current moratorium debates would be to clarify the process for updating existing generally applicable laws: merely applying or amending an existing law to cover violations carried out via AI should not render that law's "primary purpose" the regulation of AI in violation of the moratorium. This would let states clarify how existing, technologically neutral, harm-based laws apply, providing certainty for developers, deployers, and consumers, while still limiting the potential to engage in model-level or other AI-specific regulation.

The Risk of Potential Over-Application of Generally Applicable Law to AI and How to Minimize It

The impact of generally applicable law on AI products is not without its own risks of overregulation. As with the internet before it, there may be cases where AI illustrates that existing regulations are suboptimal or ill-fitted.

For example, many states have enacted or are considering general-purpose technology laws around issues like data privacy or youth online safety. These laws have their own consequences and would significantly impact AI, its applications, and its development, yet they would likely be considered general-purpose laws that do not directly target AI. Another risk is that agencies could engage in overzealous interpretations of their own regulations in ways that hinder AI deployment. While in some cases such deterrence may prevent potential harm, in others it could block significantly beneficial applications. This is why, for example, the Department of Transportation has considered how its currently generally applicable regulations may need to be amended or reconsidered to enable autonomous vehicles and other AI-driven transportation innovations.

One potential way to overcome some of the possible risks of poorly applying generally applicable law to AI is to consider the use of regulatory sandboxes. Sandboxes allow state or federal regulators to remove certain regulations or requirements for participants, usually for a limited period of time and often with alternative requirements in place. This approach can be an intermediary step to more significant deregulation and allow testing of whether current regulations still reflect an appropriate risk profile. Ideally, sandboxes are not an end step, but rather a part of a broader analysis of whether regulations are still necessary and can allow new technologies, such as AI, to impact and improve previously regulated industries.

Conclusion

Fears about an AI moratorium or the slow pace of federal AI regulation often rest on the idea that no legal tools exist to handle the problems that arise. However, many of the risks and concerns about AI are not unique to the technology itself and may be addressed by existing laws. In fact, there may be times when we need to consider not only whether existing laws sufficiently cover AI, but also whether they no longer serve their intended purpose.


