Artificial Intelligence and the Future of the United States Economy: A 2030 Policy Assessment

Federal policy framework for managing AI's impact on workforce, competitiveness, and economic growth through 2030

1. Economic Exposure Assessment

Current National AI Position

The United States maintains the world's largest economy at approximately $28 trillion in annual GDP, with artificial intelligence emerging as a critical driver of future competitiveness and growth. The federal government has recognized AI's strategic importance through unprecedented investments, policy frameworks, and regulatory approaches that balance innovation with risk mitigation.

Small businesses comprise the backbone of the American economy. As of 2025, there are 36.2 million small businesses operating in the United States, representing 99.9% of all U.S. businesses. These enterprises employ 46% of the private sector workforce—approximately 61.6 million workers—and contribute 43.5% of U.S. GDP and 39% of private sector payroll.[1] The impact of AI on this small business ecosystem carries profound implications for national economic resilience.

Key Finding: Small businesses have dramatically accelerated AI adoption, rising from 39% in 2024 to 55% in 2025—a 41% year-over-year increase. According to the U.S. Chamber of Commerce, 58% of small businesses adopted generative AI in 2025, with 96% planning to adopt emerging technologies.
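The stated year-over-year increase follows from the two survey shares; a minimal arithmetic check using the figures quoted above:

```python
# Year-over-year growth in small-business AI adoption,
# using the survey shares cited in the text.
adoption_2024 = 0.39  # 39% of small businesses using AI in 2024
adoption_2025 = 0.55  # 55% in 2025

yoy_growth = (adoption_2025 - adoption_2024) / adoption_2024
print(f"Year-over-year increase: {yoy_growth:.0%}")  # prints "Year-over-year increase: 41%"
```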

Federal AI Investment Architecture

The federal government has allocated substantial resources to maintain U.S. AI leadership across civilian and defense sectors:

Federal AI R&D spending has grown at 6% annually from FY2021-2025, adding $2.8 billion to baseline investments. This trajectory reflects both bipartisan recognition of AI's strategic importance and acknowledgment of intensifying global competition from China and the European Union.

Economic Growth Projections

McKinsey Global Institute research projects that AI will add $13 trillion to the global economy by 2030, with the United States positioned to capture significant value through early-stage innovation and deployment.[8] Existing technology can already automate a substantial share of the tasks that make up U.S. work hours, and under wide adoption scenarios up to 30% of all hours worked could be automated by 2030.

2. Workforce Impact by Sector

Job Displacement Projections

Goldman Sachs workforce analysis projects that 6-7% of the U.S. workforce faces exposure to AI-driven displacement under wide adoption scenarios, with a plausible range of 3-14% depending on implementation speed and geography.[9] This translates to direct impacts on millions of American workers, though the analysis emphasizes that displacement is likely transitory as new opportunities emerge.

Net Job Impact Estimate: By 2030, global AI deployment may create 170 million jobs while displacing 92 million positions, yielding a net gain of 78 million positions worldwide. However, this aggregate figure masks significant sectoral and geographic variation.
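The net figure is the difference between the creation and displacement projections; checking the arithmetic from the estimate above:

```python
# Net global job impact implied by the 2030 projections in the text.
jobs_created = 170_000_000   # projected jobs created worldwide by 2030
jobs_displaced = 92_000_000  # projected positions displaced

net_gain = jobs_created - jobs_displaced
print(f"Net gain: {net_gain // 1_000_000} million jobs")  # prints "Net gain: 78 million jobs"
```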

High-Risk Occupations

Goldman Sachs identifies four occupational categories with elevated displacement risk: marketing, graphic design, office administration, and call center operations. The U.S. Bureau of Labor Statistics reported in 2025 that AI-related efficiency gains are already affecting employment trends in these same fields, though adoption speed remains gradual as organizations integrate new technologies.

Sectoral Workforce Adjustments

Unlike previous technological disruptions, AI automation is spreading across white-collar and professional services at the same time as blue-collar manufacturing. This discontinuity presents policy challenges: traditional retraining programs designed for manufacturing-sector displacement may not address the skill requirements of AI-adjacent roles, which typically demand more advanced education.

The unemployment rate during transition periods could increase by approximately 0.5 percentage points—modest in absolute terms but affecting hundreds of thousands of workers in vulnerable communities and demographic groups lacking AI-adjacent skills.[10]
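The "hundreds of thousands of workers" figure can be sanity-checked against the size of the civilian labor force. The ~168 million labor force figure below is an illustrative assumption, not a number from this report:

```python
# Sizing a 0.5-percentage-point rise in the unemployment rate.
# The civilian labor force figure (~168 million) is an illustrative
# assumption, not a number taken from the report.
labor_force = 168_000_000
rate_increase_pp = 0.5  # percentage points

additional_unemployed = labor_force * rate_increase_pp / 100
print(f"Additional unemployed: ~{additional_unemployed:,.0f} workers")
```

With that assumption the increase corresponds to roughly 840,000 workers, consistent with the "hundreds of thousands" characterization.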

3. Policy Options: Lessons from Peer Countries

The EU AI Act: Prescriptive Regulation

The European Union adopted a comprehensive, extraterritorial regulatory framework under the AI Act, effective in stages through 2026-2027. The EU approach establishes binding regulations with enforceable penalties without requiring additional legislation.[11] The framework classifies AI systems into four risk tiers: unacceptable risk (prohibited outright), high risk (subject to conformity assessments and ongoing obligations), limited risk (transparency requirements), and minimal risk (largely unregulated).

The EU Act applies to any AI system marketed or used within EU borders, regardless of developer location. Penalties for non-compliance can reach 7% of annual global revenue for the most serious violations. This prescriptive approach prioritizes public safety and rights protection but imposes substantial compliance costs on developers.

China's State-Directed Innovation Model

China has adopted a layered regulatory approach emphasizing content control, national security, and technological sovereignty.[12]

Critically, China's approach combines mandatory security standards with aggressive government investment in domestic AI champions. State ownership and control mechanisms enable faster policy implementation but raise concerns about innovation stagnation and human rights implications.

United States: Decentralized Framework with Executive Dominance

The U.S. has rejected both the EU's prescriptive regulation and China's state-directed model, instead adopting a hybrid approach that combines executive-branch direction, voluntary technical frameworks such as the NIST AI RMF, and divergent state-level legislation.

Executive Order Evolution: 2023-2025

President Biden's Executive Order 14110, signed October 30, 2023, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," established the initial federal framework.[13]

President Trump's Executive Order 14179, signed January 23, 2025, titled "Removing Barriers to American Leadership in Artificial Intelligence," replaced Biden's order with a deregulation-focused approach emphasizing innovation promotion over risk mitigation.[14] Most significantly, Trump's December 11, 2025 executive order, "Ensuring a National Policy Framework for Artificial Intelligence," directed federal agencies to challenge state-level AI regulations inconsistent with national innovation policy, potentially leveraging federal broadband funding as an enforcement mechanism.

Federal-State Tension: The Trump administration's December 2025 order threatens to strip remaining Broadband Equity, Access, and Deployment Program funds from states with "onerous" AI laws, creating pressure for regulatory uniformity around innovation-friendly standards rather than safety-first approaches.

State-Level Policy Divergence

State legislatures have adopted varying regulatory approaches despite federal signals favoring deregulation:

California: Governor Newsom vetoed SB 1047 (Safe and Secure Innovation for Frontier AI Models Act) in 2024, which would have required risk assessments for models costing $100M+ to train. However, SB 53 (2025) requires frontier AI developers to publish safety testing transparency reports, representing a compromise approach. California has enacted 38 AI-related laws overall.[15]

Texas: HB 149 (Texas Responsible AI Governance Act), signed June 22, 2025, prohibits AI use for restricted purposes (self-harm, discrimination, child exploitation) with penalties of $10,000-$12,000 for curable violations and $80,000-$200,000 for uncurable violations.[16]

New York: Governor Hochul signed the RAISE Act on December 19, 2025, requiring large AI developers to publish safety protocols and report incidents within 72 hours, establishing the nation's most prescriptive incident reporting framework.[17]

4. Budget Implications and Federal Spending Trajectories

Five-Year Federal AI Investment Outlook

Federal AI spending through 2030 will likely reach $15-20 billion annually across civilian agencies and defense.

CHIPS and Science Act Economic Multiplier

The $280 billion CHIPS Act authorization represents the largest industrial policy investment since the Interstate Highway System. The $200 billion AI and advanced computing allocation assumes federal matching investment of 50-75%, requiring private sector co-investment of $100-150 billion. This leverage ratio generates substantial economic activity in semiconductor manufacturing regions, primarily in Arizona, Texas, New York, and regions anchored by legacy Intel facilities.

Estimated fiscal impact through 2030: $50-80 billion in federal outlays generating $300-500 billion in private investment and $1-2 trillion in downstream economic activity through innovation, manufacturing efficiency, and AI-enabled productivity gains.
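The leverage in this estimate can be expressed as implied multipliers on federal outlays. A sketch using the midpoints of the ranges quoted above (the midpoint choice is an assumption for illustration):

```python
# Implied multipliers from the CHIPS Act fiscal estimate, using the
# midpoints of the ranges quoted in the text (all figures in $B).
federal_outlays = (50 + 80) / 2            # $50-80B federal outlays
private_investment = (300 + 500) / 2       # $300-500B private investment
downstream_activity = (1_000 + 2_000) / 2  # $1-2T downstream activity

print(f"Private co-investment: {private_investment / federal_outlays:.1f}x federal outlays")
print(f"Downstream activity:   {downstream_activity / federal_outlays:.1f}x federal outlays")
```

At the midpoints, each federal dollar is associated with roughly $6 of private co-investment and over $20 of downstream activity.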

NIST Risk Management Framework Implementation Costs

NIST released AI Risk Management Framework 1.0 in January 2023 and issued a companion Generative AI Profile (NIST AI 600-1) on July 26, 2024.[18] The voluntary framework organizes governance and risk management around four core functions: GOVERN (cross-cutting, applies at all stages), MAP (establishing system-specific contexts), MEASURE (performance evaluation), and MANAGE (mitigation strategies).
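The framework's four-function structure can be sketched as a simple mapping. The descriptions below are paraphrased from the text; this is an illustrative structure, not an official NIST schema:

```python
# The four core functions of the NIST AI RMF, paraphrased from the
# text. Illustrative structure only, not an official NIST schema.
AI_RMF_FUNCTIONS = {
    "GOVERN":  "cross-cutting; applies to all lifecycle stages",
    "MAP":     "establish system-specific contexts and identify risks",
    "MEASURE": "evaluate performance and assess identified risks",
    "MANAGE":  "prioritize risks and apply mitigation strategies",
}

for name, scope in AI_RMF_FUNCTIONS.items():
    print(f"{name:8s} {scope}")
```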

Implementation costs for federal agencies are estimated at $500M-1B through 2027.

Workforce Safety Net Expansion

Current U.S. workforce retraining programs inadequately address AI-driven displacement. The Workforce Innovation and Opportunity Act (WIOA) Title I program, administered by the Department of Labor, provided $3.3 billion in FY2023 to serve 700,000 participants with job search assistance and training.[19] These programs are not designed for the extensive retraining that would be required if AI displacement affected millions of workers simultaneously.

Proposed solutions include wage insurance programs, extended unemployment benefits for AI-displaced workers, and accelerated certification programs, with estimated costs of $700M annually under the proposed AAA (Adjustment and Advancement Act) model, scaling to $2-3 billion by 2030 if displacement accelerates.

Federal workforce training initiatives launched in 2024-2025 include a planned AI training center for federal employees, organized by the Partnership for Public Service.[20]

5. Six Policy Recommendations with Implementation Phases

Recommendation 1: Establish a Federal AI Safety Board with Enforcement Authority

Objective: Create a statutory body within NIST with authority to mandate safety testing for foundation models, establish binding technical standards, and levy penalties for non-compliance.

Rationale: Current reliance on voluntary NIST frameworks has been superseded by state-level mandates (New York's RAISE Act, California's SB 53) and international pressure. A federal board would establish uniform standards, reduce compliance costs for multistate companies, and align U.S. approaches with international practice while remaining less prescriptive than the EU's framework.

Implementation Phases:

Budget Estimate: $50M one-time startup, $100M annually for operations, testing infrastructure, and technical staff (50-100 FTE professionals).

Key Metrics: 90% of U.S.-developed foundation models reporting safety test results; zero critical safety incidents escaping federal awareness within 72 hours; $200M+ in compliance spending by AI companies (spillover investment in safety infrastructure).

Recommendation 2: Scale NAIRR Pilot to Full National Implementation with $2B Investment

Objective: Expand the National AI Research Resource pilot (currently connecting 400+ research teams through 14 federal agencies and 28 private/nonprofit partners) into a permanent, fully-funded national infrastructure serving 5,000+ research teams by 2030.

Rationale: The NAIRR pilot, authorized under the CHIPS Act, demonstrates an effective model for democratizing access to AI compute resources and datasets. Current funding (estimated at $69M annually for NSF National AI Institutes) is insufficient for national scale. China's government is investing $10B+ in equivalent infrastructure. A full-scale NAIRR would accelerate American AI competitiveness while supporting smaller institutions, underrepresented researchers, and rural universities currently excluded from advanced AI research.

Implementation Phases:

Budget Estimate: $2B over 5 years ($400M annually by Year 5). Allocation: 50% compute infrastructure, 30% personnel and operations, 20% data and tool development.

Key Metrics: 5,000+ research teams with resource access by 2030; 50% growth in U.S. AI publications from researchers at non-R1 institutions; $500M in follow-on commercial funding from NAIRR-supported research; 25% increase in women and underrepresented minorities in AI research.

Recommendation 3: Establish Sectoral AI Readiness Assessments and Industry-Specific Compliance Pathways

Objective: Develop sector-specific AI governance frameworks for healthcare, finance, critical infrastructure, and defense, recognizing that uniform approaches are ineffective across industries with vastly different risk profiles and existing regulatory regimes.

Rationale: Healthcare AI applications present different risks (patient safety, liability) than financial AI (market manipulation, discrimination) or defense AI (operational security, civilian casualty prevention). Current federal approach lacks sector specificity. International models (EU sectoral risk classifications, China's targeted requirements) demonstrate that industry-tailored frameworks drive both innovation and safety.

Implementation Phases:

Budget Estimate: $50M one-time for sector workgroups and standards development. $200M annually for compliance assessment infrastructure and auditing.

Key Metrics: Sector-specific standards published for all priority industries by Year 2; 80% of critical infrastructure operators achieving compliance by Year 4; zero high-harm AI incidents in regulated sectors.

Recommendation 4: Implement Wage Insurance and Comprehensive Retraining for AI-Displaced Workers

Objective: Establish a federally-funded wage insurance program and retraining system specifically addressing AI-driven workforce displacement, with particular focus on mid-career workers (ages 35-55) with obsolete skills and limited retraining capacity.

Rationale: Current WIOA programs ($3.3B annually serving 700,000 participants) are inadequate for potential AI displacement of millions. Wage insurance—providing partial income replacement when displaced workers transition to lower-wage roles—is more politically sustainable than direct income support and encourages labor force participation. Retraining investments require substantial co-funding from employers and workers (skin in the game), increasing program effectiveness.

Implementation Phases:

Budget Estimate: $700M annually for pilot (50,000 workers × $14,000 average cost), scaling to $3-5B annually by Year 5 as displacement accelerates.
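The pilot figure is the product of the assumed caseload and average per-worker cost; checking the arithmetic from the budget estimate:

```python
# Wage-insurance pilot cost check: caseload times average per-worker
# cost, using the figures from the budget estimate above.
workers_served = 50_000
cost_per_worker = 14_000  # dollars, average across wage insurance and retraining

annual_cost = workers_served * cost_per_worker
print(f"Annual pilot cost: ${annual_cost // 1_000_000}M")  # prints "Annual pilot cost: $700M"
```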

Key Metrics: Re-employment rate within 18 months; average wage replacement ratio at 12 and 24 months; participant satisfaction scores; regional economic resilience (avoiding lasting poverty in AI-disrupted communities).

Recommendation 5: Accelerate AI Talent Immigration and Domestic Workforce Development

Objective: Streamline visa pathways for AI researchers, engineers, and specialists while simultaneously launching federal AI talent pipeline initiatives targeting K-12 education, community colleges, and underrepresented populations.

Rationale: The U.S. faces a critical shortage of AI expertise—particularly in AI safety, interpretability, and alignment research where few formal training programs exist. International migration of AI talent is both expensive (visa processing) and uncertain (visa rejection rates). Domestically, AI education remains concentrated in elite universities. Reducing visa barriers while investing in accessible training creates redundancy, increases innovation velocity, and reduces dependency on foreign talent.

Implementation Phases:

Budget Estimate: $200M for immigration streamlining and visa processing. $1.5B over 5 years for educational programs ($300M annually). Total: $1.7B over 5 years ($340M annually).

Key Metrics: 10,000 AI-visa holders per year by Year 3; 50% of high schools with AI curriculum by 2030; 50% increase in U.S. AI/ML graduate degrees from underrepresented populations; reduction in H-1B dependency for AI roles from 60% to 40% by 2030.

Recommendation 6: Create Regulatory Sandbox for AI Innovation with Pre-Market Safety Certification

Objective: Establish controlled environments where startups and researchers can develop, test, and deploy AI applications with reduced regulatory burden if they demonstrate compliance with federal safety standards, reducing time-to-market from 3-5 years to 12-18 months for safety-certified systems.

Rationale: Regulatory uncertainty discourages U.S. innovation relative to more permissive jurisdictions (Dubai, Singapore, Estonia). However, fully deregulated innovation poses safety risks demonstrated in fintech failures and algorithmic discrimination cases. A regulatory sandbox provides controlled experimentation path while maintaining safety oversight. International models (UK Financial Conduct Authority, Abu Dhabi Global Market) demonstrate effectiveness.

Implementation Phases:

Budget Estimate: $100M startup, $50M annually. Total 5-year cost: $350M.
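The five-year total follows from the one-time startup cost plus five years of operations:

```python
# Five-year sandbox cost: one-time startup plus annual operations,
# using the figures from the budget estimate above (all in $M).
startup = 100
annual_operating = 50
years = 5

total = startup + annual_operating * years
print(f"Total 5-year cost: ${total}M")  # prints "Total 5-year cost: $350M"
```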

Key Metrics: Time-to-market reduction from 3-5 years to 12-18 months; 200+ sandbox projects by Year 5; 50 successfully commercialized applications; zero critical safety incidents within sandbox populations; 30%+ of U.S. AI startups participating in sandbox at some stage.

6. Comparative Scorecard: United States vs. Peer Nations

| Policy Dimension | United States (Current) | United States (Recommended) | European Union | China |
|---|---|---|---|---|
| Regulatory Approach | Voluntary guidance + state fragmentation | Federal standards + sectoral specificity | Comprehensive prescriptive regulation | State-directed with mandatory standards |
| Federal AI Budget | $2.8B annual (2025) | $5-8B by 2030 | €2B annually (est.) | $10B+ annually |
| Safety Testing | Voluntary NIST RMF | Mandatory for models >$100M | Mandatory (high-risk tiers) | Mandatory (all systems) |
| Workforce Retraining | $3.3B WIOA (inadequate) | $3-5B annually by 2030 | €5B+ (varied by member state) | State-directed job reallocation |
| Research Infrastructure | NAIRR pilot (400 teams) | Full NAIRR (5,000 teams, $2B) | European AI Initiative (~€1B) | State-owned supercomputing centers |
| Enforcement Mechanism | FTC authority (limited) | Federal AI Safety Board ($1M-$50M penalties) | EU regulators ($43M+ fines) | Government agencies + penalties |
| Competitive Advantage | Innovation speed, low compliance cost | Balance of safety + speed | Consumer/rights protection, market size | Speed to implementation, coordination |
| Competitive Risk | Safety incidents, state fragmentation, talent shortage | Improved (mitigated) | Slower innovation, startup exodus | International sanctions, limited allies |

Conclusion: Strategic Imperatives for 2026-2030

Artificial intelligence will shape American economic competitiveness, workforce stability, and geopolitical positioning through 2030. The current U.S. policy approach—characterized by voluntary standards, executive order governance, and regulatory fragmentation—is increasingly inadequate relative to international precedents and emerging domestic demands for safety assurance.

The six recommendations above balance competing imperatives:

Safety vs. Speed: Federal AI Safety Board establishes binding standards (safety) while regulatory sandbox accelerates commercialization (speed). Sectoral frameworks permit risk-appropriate innovation in high-stakes domains without imposing startup-level compliance burdens on research institutions.
Domestic Investment vs. International Talent: Accelerated visa pathways and $1.5B domestic AI education initiatives reduce long-term dependency on foreign talent while expanding opportunity for American workers and underrepresented populations.
Innovation Incentives vs. Worker Protection: Wage insurance and comprehensive retraining for AI-displaced workers create social license for innovation while maintaining labor force participation and economic participation of disrupted communities.

Implementation of these recommendations requires sustained congressional attention, bipartisan coalition-building, and executive agency coordination. The fiscal commitment—approximately $8-10B annually by 2030 across safety, research, workforce, and immigration initiatives—is modest relative to the $28 trillion U.S. economy and justified by returns on innovation investment, competitive positioning relative to China and EU, and prevention of larger social costs from unmanaged workforce displacement.
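The "modest relative to the economy" claim is easy to quantify using the $28 trillion GDP and the $8-10B annual range from the text:

```python
# Fiscal commitment as a share of GDP, using the $28T GDP figure and
# the $8-10B annual range from the text (figures in $B).
gdp = 28_000
low, high = 8, 10

print(f"Share of GDP: {low / gdp:.3%} to {high / gdp:.3%}")
```

The commitment works out to roughly three hundredths of one percent of annual GDP.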

The window for proactive policy design closes rapidly. Delaying action to 2027-2028 will force reactive approaches addressing crisis (mass displacement, safety incidents, talent brain drain) rather than opportunistic positioning. Congress should act in 2026 to authorize the Federal AI Safety Board, expand NAIRR funding, and establish sectoral frameworks, enabling implementation through 2027-2030.

References

  1. U.S. Small Business Administration. (2025). "United States Small Business Profile 2025." Office of Advocacy. Available at: https://advocacy.sba.gov/2025/06/30/
  2. Stanford University Human-Centered Artificial Intelligence (HAI). "What the CHIPS and Science Act Means for Artificial Intelligence." Available at: https://hai.stanford.edu/policy/what-the-chips-and-science-act-means-for-artificial-intelligence
  3. National Science Foundation. (2024). "NSF Announces Funding to Establish National AI Research Resource Pilot." AI Institutes Program. Available at: https://www.nsf.gov/focus-areas/ai/institutes
  4. National Science Foundation. (2024). "NSF Announces Funding to Establish National AI Research Resource Pilot." Available at: https://www.nsf.gov/news/nsf-announces-funding-establish-national-ai-research
  5. Defense Scoop. (2023). "Pentagon Requesting More Than $3B for AI, JADC2." Available at: https://defensescoop.com/2023/03/13/pentagon-requesting-more-than-3b-for-ai-jadc2/
  6. Defense Scoop. (2025). "DoD-Palantir Maven 'Smart System' Contract Increase." May 2025. Available at: https://defensescoop.com/2025/05/23/dod-palantir-maven-smart-system-contract-increase/
  7. U.S. Congress. (2024). "Congressional Research Service Report on Federal AI Spending." Congressional Budget Office tracking. Available at: https://www.congress.gov/crs-product/R47843
  8. McKinsey Global Institute. (2024). "Agents, Robots, and Us: Skill Partnerships in the Age of AI." Available at: https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai
  9. Goldman Sachs Economic Research. (2023). "How Will AI Affect the Global Workforce?" Available at: https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce
  10. The Interview Guys. (2025). "The State of AI in the Workplace in 2025." Available at: https://blog.theinterviewguys.com/the-state-of-ai-in-the-workplace-in-2025/
  11. Brookings Institution. (2024). "The EU and US Diverge on AI Regulation: A Transatlantic Comparison." Available at: https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/
  12. White & Case. (2025). "AI Watch: Global Regulatory Tracker—China." Available at: https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china
  13. The White House. (2023). "Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." October 30, 2023. Available at: https://www.mintz.com/insights-center/viewpoints/2191/2023-10-31-bidens-executive-order-artificial-intelligence-ai
  14. Mayer Brown LLP. (2025). "President Trump Issues Executive Order on Ensuring a National Policy Framework for AI." December 2025. Available at: https://www.mayerbrown.com/en/insights/publications/2025/12/president-trump-issues-executive-order-on-ensuring-a-national-policy-framework-for-artificial-intelligence
  15. Pillsbury Law. (2024). "California SB 1047 and AI Laws Overview." Available at: https://www.pillsburylaw.com/en/news-and-insights/sb-1047-california-ai-laws.html
  16. Stack Cyber. (2025). "AI State Laws." Available at: https://stackcyber.com/posts/ai-state-laws
  17. New York Governor's Office. (2025). "Governor Hochul Signs Nation-Leading Legislation on AI Frameworks." December 19, 2025. Available at: https://www.governor.ny.gov/news/governor-hochul-signs-nation-leading-legislation-require-ai-frameworks-ai-frontier-models
  18. National Institute of Standards and Technology. (2024). "AI Risk Management Framework." Available at: https://www.nist.gov/itl/ai-risk-management-framework
  19. Brookings Institution. (2024). "AI Labor Displacement and the Limits of Worker Retraining." Available at: https://www.brookings.edu/articles/ai-labor-displacement-and-the-limits-of-worker-retraining/
  20. Government Executive. (2024). "Partnership for Public Service Plans AI Training Center for Federal Employees in 2025." October 2024. Available at: https://www.govexec.com/technology/2024/10/partnership-public-service-plans-ai-training-center-federal-employees-2025/400342/