
Hawley and Blumenthal’s AI Bill Is a Brazen Executive Power Grab 

by Juan Londoño

October 24, 2025


On September 29, Senators Josh Hawley (R‑MO) and Richard Blumenthal (D‑CT) introduced a bill to create a risk evaluation program within the Department of Energy (DOE). The bill would charge the DOE with conducting various assessments and tests on “advanced” artificial intelligence (AI) systems, with a special emphasis on the potential regulation of hypothetical artificial superintelligence systems. In their press release, the senators repeated their belief that AI products are already “rushed to market with products that are unsafe for the public and often lack basic due diligence and testing.” According to the lawmakers, the bill would establish safety stopgaps and testing to ameliorate that issue. The reality is quite different.

The senators’ claim that current models are “untested” is demonstrably false: most major developers already publish public testing data regularly. For example, OpenAI has a “safety evaluations hub,” which shows the results of safety tests on issues such as models producing disallowed content, hallucinations, or prompts that circumvent their content policy. More importantly, the scope of the legislation extends far beyond testing. Hidden in the bill is a provision that tasks the DOE with developing “proposed options for regulatory or governmental oversight, including potential nationalization or other strategic measures, for preventing or managing the development of artificial superintelligence if artificial superintelligence seems likely to arise.”

In other words, this bill would grant the government broad power to seize assets whenever a company crosses the technological frontier of what the bill defines as superintelligence. Granting such powers would hamper the development of frontier models, which, as their name indicates, are the models trying to push the “frontier” of what commercially available AI models can do. Thwarting the development of these models would put the US AI industry at a significant disadvantage in the global race for AI dominance and push American consumers toward riskier, less secure foreign-made frontier models.

Under this bill, a model has to fulfill three conditions to be considered an “artificial superintelligence.” It must be able to operate autonomously for long stretches of time, it must enable a device or software to match or exceed human performance across most tasks, and it must have the capacity to enhance the capabilities of a device or software independently, with little or no human oversight. These types of advanced models are being developed in hopes of enabling AI’s most impactful uses, such as automated, hyper-precise, and hyper-personalized health diagnostics.


However, once their models are deemed an artificial superintelligence, the bill would place developers under a regulatory Sword of Damocles: the government would have the power to take over a company’s assets whenever it sees fit. As a result, US-based AI companies will underinvest in and underdevelop their models to avoid crossing the superintelligence threshold.

This approach would recklessly enable other nations’ AI development efforts, as the market for frontier models would be filled by foreign—and potentially adversarial—developers. The US would effectively cede leadership to countries like China. Yet as a study by the National Institute of Standards and Technology’s Center for AI Standards and Innovation has already shown, Chinese models are substantially more susceptible to agent hijacking and jailbreaking attacks than US models. In other words, the models that would fill the vacuum left by American frontier models are more likely to go rogue or to have weaker, easily circumvented safety guardrails, making them more vulnerable to exploitation by bad actors. The bill is thus ultimately self-defeating, as it would concede the development of the most advanced and riskiest AI models to nations with consistently subpar safety standards.

In other industries, such as hazardous chemicals or automobiles, government agencies are asked to validate or evaluate existing testing, but they operate under stronger limits on government power. Even in the worst cases, the executive can go only as far as suspending or prohibiting the sale of non-compliant products, a far cry from nationalization. Equipping the DOE with regulatory powers broad enough to seize a company’s assets would be an unprecedented and dangerous concession of power to the executive.

While the lawmakers focus mostly on the ways these frontier models can go wrong, they forget the significant upside these technologies could have in cybersecurity, scientific research, and productivity. Under this proposal, the US would have to rely on foreign-made, riskier models or miss out on these AI-powered technological advancements altogether.

The Hawley-Blumenthal AI bill disguises a brazen executive power grab as an otherwise harmless third-party safety testing regime. If a model becomes advanced enough to cross the “superintelligence” threshold, the government would have the authority to wipe out all of its investments through nationalization on a whim. That creates a clear disincentive for frontier AI companies to innovate and improve their models, depriving Americans and the world of models capable of producing valuable scientific breakthroughs and forcing reliance on foreign alternatives. Ironically, a bill premised on curtailing AI-related risks would push the global population toward lower-quality, riskier, foreign-made frontier models.

Juan Londoño is the Chief Regulatory Analyst at the Taxpayers Protection Alliance.
