Artificial Intelligence and the Future of the United States Economy: A 2030 Policy Assessment
Federal policy framework for managing AI's impact on workforce, competitiveness, and economic growth through 2030
1. Economic Exposure Assessment
Current National AI Position
The United States maintains the world's largest economy at approximately $28 trillion in annual GDP, with artificial intelligence emerging as a critical driver of future competitiveness and growth. The federal government has recognized AI's strategic importance through unprecedented investments, policy frameworks, and regulatory approaches that balance innovation with risk mitigation.
Small businesses comprise the backbone of the American economy. As of 2025, there are 36.2 million small businesses operating in the United States, representing 99.9% of all U.S. businesses. These enterprises employ 46% of the private sector workforce—approximately 61.6 million workers—and contribute 43.5% of U.S. GDP and 39% of private sector payroll.[1] The impact of AI on this small business ecosystem carries profound implications for national economic resilience.
Key Finding: Small businesses have dramatically accelerated AI adoption, rising from 39% in 2024 to 55% in 2025—a 41% year-over-year increase. According to the U.S. Chamber of Commerce, 58% of small businesses adopted generative AI in 2025, with 96% planning to adopt emerging technologies.
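The 41% figure is a relative, not absolute, change: adoption rose 16 percentage points on a 39% base. A minimal check of the arithmetic, using only the survey figures cited above:

```python
# Adoption figures from the U.S. Chamber of Commerce survey cited above.
adoption_2024 = 0.39
adoption_2025 = 0.55

absolute_gain = adoption_2025 - adoption_2024  # 16 percentage points
relative_gain = absolute_gain / adoption_2024  # ~0.41, the "41% year-over-year increase"

print(f"Absolute gain: {absolute_gain:.0%} points")   # 16% points
print(f"Relative YoY increase: {relative_gain:.0%}")  # 41%
```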
Federal AI Investment Architecture
The federal government has allocated substantial resources to maintain U.S. AI leadership across civilian and defense sectors:
- CHIPS and Science Act: $280 billion authorized, with $200 billion allocated specifically for AI, quantum computing, and robotics research, and $52 billion in subsidies for semiconductor manufacturing[2]
- National Science Foundation (NSF) AI Budget: $796 million in FY2024, representing a $133 million (20%) increase from FY2023[3]
- National AI Research Resource (NAIRR) Pilot: 14 federal agencies partnering with 28 private and nonprofit organizations to connect 400+ U.S. research teams[4]
- Pentagon AI Investment: $1.8 billion in FY2024, escalating from $1.1 billion in FY2023 and $874 million in FY2022[5]
- Project Maven (DOD/Palantir): Contract increased from $480 million (5-year IDIQ, May 2024) to $1.3 billion (May 2025)[6]
- Chief Digital and AI Officer (CDAO): Appropriations surged from $10.3 million (FY2022) to $320.4 million (FY2023)[7]
Federal AI R&D spending has grown roughly 6% annually from FY2021 through FY2025, reaching approximately $2.8 billion in annual investment. This trajectory reflects both bipartisan recognition of AI's strategic importance and acknowledgment of intensifying global competition from China and the European Union.
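As a rough consistency check, 6% annual growth compounded over the four fiscal-year steps from FY2021 to FY2025 implies a cumulative increase of about 26%; back-solving gives the implied starting point. The FY2021 baseline below is our back-calculation, not a figure from this report:

```python
# Back-of-the-envelope check: 6% annual growth, FY2021 -> FY2025 (4 steps),
# ending at ~$2.8B. The implied baseline is our inference, not a cited figure.
growth_rate = 0.06
steps = 4

fy2025_spend_bn = 2.8
fy2021_baseline_bn = fy2025_spend_bn / (1 + growth_rate) ** steps

print(f"Cumulative growth: {(1 + growth_rate) ** steps - 1:.1%}")  # ~26.2%
print(f"Implied FY2021 baseline: ${fy2021_baseline_bn:.2f}B")      # ~$2.22B
```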
Economic Growth Projections
McKinsey Global Institute research projects that AI will add $13 trillion to the global economy by 2030, with the United States positioned to capture significant value through early-stage innovation and deployment.[8] Current technology could in principle automate up to 57% of U.S. work hours; under wide adoption scenarios, roughly 30% of all work hours could actually be automated by 2030.
2. Workforce Impact by Sector
Job Displacement Projections
Goldman Sachs workforce analysis projects that 6-7% of the U.S. workforce faces exposure to AI-driven displacement under wide adoption scenarios, with a plausible range of 3-14% depending on implementation speed and geography.[9] This translates to direct impacts on millions of American workers, though the analysis emphasizes that displacement is likely transitory as new opportunities emerge.
Net Job Impact Estimate: By 2030, global AI deployment may create 170 million jobs while displacing 92 million positions, yielding a net gain of 78 million positions worldwide. However, this aggregate figure masks significant sectoral and geographic variation.
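The net figure is simple arithmetic on the two global estimates. To give the Goldman Sachs exposure percentages a sense of scale, the sketch below applies them to an assumed U.S. labor force of roughly 168 million; that labor-force figure is our assumption for illustration, not a number from either source:

```python
# Global estimates from the text.
jobs_created = 170_000_000
jobs_displaced = 92_000_000
print(f"Net global gain: {(jobs_created - jobs_displaced) / 1e6:.0f}M jobs")  # 78M

# Scale for the Goldman Sachs exposure range (3-14%, central 6-7%).
US_LABOR_FORCE = 168_000_000  # assumption for illustration only
for exposure in (0.03, 0.06, 0.07, 0.14):
    workers = US_LABOR_FORCE * exposure / 1e6
    print(f"{exposure:.0%} exposure -> ~{workers:.1f}M U.S. workers")
```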
High-Risk Occupations
Goldman Sachs identifies four occupational categories with elevated displacement risk:
- Computer Programmers: Generative AI assistants (GitHub Copilot, Amazon CodeWhisperer) are demonstrably increasing programmer productivity and accelerating code generation
- Accountants and Auditors: Routine tax preparation, reconciliation, and audit procedures are increasingly automated through AI-powered accounting platforms
- Legal and Administrative Assistants: Document review, contract analysis, and legal research are being displaced by AI tools that perform these functions in seconds rather than hours
- Customer Service Representatives: AI chatbots and voice agents are handling routine inquiries, complaints, and transactions at scale
The U.S. Bureau of Labor Statistics reported in 2025 that AI-related efficiency gains are already affecting employment trends in marketing, graphic design, office administration, and call center operations, though adoption speed remains gradual as organizations integrate new technologies.
Sectoral Workforce Adjustments
Unlike previous technological disruptions, AI automation is spreading across white-collar and professional services at the same time as blue-collar manufacturing. This discontinuity presents policy challenges: traditional retraining programs designed for manufacturing-sector displacement may not address the skill requirements of AI-adjacent roles, which typically demand more advanced education.
The unemployment rate during transition periods could increase by approximately 0.5 percentage points—modest in absolute terms but affecting hundreds of thousands of workers in vulnerable communities and demographic groups lacking AI-adjacent skills.[10]
3. Policy Options: Lessons from Peer Countries
The EU AI Act: Prescriptive Regulation
The European Union adopted a comprehensive, extraterritorial regulatory framework under the AI Act, effective in stages through 2026-2027. Because the Act is an EU regulation, it establishes binding rules with enforceable penalties that apply directly in member states, without requiring national implementing legislation.[11] The framework classifies AI systems into risk tiers:
- Unacceptable Risk (prohibited)
- High Risk (extensive compliance requirements)
- Limited Risk (transparency obligations)
- Minimal Risk (light-touch oversight)
- General-Purpose AI (disclosure and transparency)
- General-Purpose AI with Systemic Risk (additional safeguards)
The EU Act applies to any AI system marketed or used within EU borders, regardless of developer location. Penalties for non-compliance can reach 7% of annual global revenue for the most serious violations. This prescriptive approach prioritizes public safety and rights protection but imposes substantial compliance costs on developers.
China's State-Directed Innovation Model
China has adopted a layered regulatory approach emphasizing content control, national security, and technological sovereignty:[12]
- Interim Measures for Gen AI Services (August 2023): Content safety requirements for deployed models
- Basic Security Requirements Standard (February 29, 2024): Mandatory technical standards for all AI systems
- AI-Generated Content Labeling (September 1, 2025): Disclosure of synthetic content origin
- Generative AI Security Standards (November 1, 2025): Three new mandatory national standards
- AI Action Plan (July 2025): Accelerated innovation and global leadership initiative
Critically, China's approach combines mandatory security standards with aggressive government investment in domestic AI champions. State ownership and control mechanisms enable faster policy implementation but raise concerns about innovation stagnation and human rights implications.
United States: Decentralized Framework with Executive Dominance
The U.S. has rejected both the EU's prescriptive regulation and China's state-directed model, instead adopting a hybrid approach combining:
- Executive Orders: Presidential directives addressing federal agency governance and private-sector coordination
- Voluntary Standards: NIST risk management frameworks with no statutory enforcement
- Sector-Specific Guidance: Agency-led initiatives for healthcare, finance, and national security
- State-Level Experimentation: Fragmented state laws creating inconsistent compliance landscapes
- Industry Self-Regulation: Reliance on private sector governance with limited statutory penalties
Executive Order Evolution: 2023-2025
President Biden's Executive Order 14110, signed October 30, 2023, titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," established the initial federal framework.[13] Key provisions included:
- Mandated foundation model safety testing for companies developing models posing "serious risk"
- Required companies to share AI safety test results with the federal government
- Established federal agency AI governance through an OMB interagency council
- Directed NIST to establish AI guidelines and standards within 270 days
- Created a government-wide AI talent surge initiative
- Streamlined visa processes for AI expertise immigrants
President Trump's Executive Order 14179, signed January 23, 2025, titled "Removing Barriers to American Leadership in Artificial Intelligence," replaced Biden's order with a deregulation-focused approach emphasizing innovation promotion over risk mitigation.[14] Most significantly, Trump's December 11, 2025 executive order, "Ensuring a National Policy Framework for Artificial Intelligence," directed federal agencies to challenge state-level AI regulations inconsistent with national innovation policy, potentially leveraging federal broadband funding as an enforcement mechanism.
Federal-State Tension: The Trump administration's December 2025 order threatens to strip remaining Broadband Equity, Access, and Deployment Program funds from states with "onerous" AI laws, creating pressure for regulatory uniformity around innovation-friendly standards rather than safety-first approaches.
State-Level Policy Divergence
State legislatures have adopted varying regulatory approaches despite federal signals favoring deregulation:
California: Governor Newsom vetoed SB 1047 (Safe and Secure Innovation for Frontier AI Models Act) in 2024, which would have required risk assessments for models costing $100M+ to train. However, SB 53 (2025) requires frontier AI developers to publish safety testing transparency reports, representing a compromise approach. California has enacted 38 AI-related laws overall.[15]
Texas: HB 149 (Texas Responsible AI Governance Act), signed June 22, 2025, prohibits AI use for restricted purposes (self-harm, discrimination, child exploitation) with penalties of $10,000-$12,000 for curable violations and $80,000-$200,000 for uncurable violations.[16]
New York: Governor Hochul signed the RAISE Act on December 19, 2025, requiring large AI developers to publish safety protocols and report incidents within 72 hours, establishing the nation's most prescriptive incident reporting framework.[17]
4. Budget Implications and Federal Spending Trajectories
Five-Year Federal AI Investment Outlook
Federal AI spending through 2030 will likely reach $15-20 billion annually across civilian agencies and defense, with concentrations in:
- Defense AI (40%): $6-8 billion annually for Project Maven, autonomous systems, and intelligence applications
- Research & Development (35%): $5-7 billion across NSF, NAIRR, and national labs
- CHIPS Act Implementation (20%): $4-5 billion annually in semiconductor subsidies and matching investments
- Workforce Reskilling (5%): $750 million-$1.5 billion for WIOA Title I expansion and federal training programs
CHIPS and Science Act Economic Multiplier
The $280 billion CHIPS Act authorization represents the largest industrial policy investment since the Interstate Highway System. The $200 billion AI and advanced computing allocation assumes a federal matching share of 50-75%, requiring private sector co-investment of $100-150 billion. This leverage generates substantial economic activity in semiconductor manufacturing regions, primarily Arizona, Texas, New York, and legacy Intel manufacturing regions.
Estimated fiscal impact through 2030: $50-80 billion in federal outlays generating $300-500 billion in private investment and $1-2 trillion in downstream economic activity through innovation, manufacturing efficiency, and AI-enabled productivity gains.
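Read as simple ratios of the stated ranges, these figures imply the following leverage; this is arithmetic on the estimates above, not an economic model:

```python
# Leverage ratios implied by the fiscal-impact ranges above (all in $B).
federal_outlays = (50, 80)
private_investment = (300, 500)
downstream_activity = (1_000, 2_000)

# Conservative bound pairs low private/downstream with high federal outlays.
private_leverage = (private_investment[0] / federal_outlays[1],
                    private_investment[1] / federal_outlays[0])
downstream_multiplier = (downstream_activity[0] / federal_outlays[1],
                         downstream_activity[1] / federal_outlays[0])

print(f"Private co-investment leverage: "
      f"{private_leverage[0]:.1f}x-{private_leverage[1]:.1f}x")        # 3.8x-10.0x
print(f"Downstream activity multiplier: "
      f"{downstream_multiplier[0]:.1f}x-{downstream_multiplier[1]:.1f}x")  # 12.5x-40.0x
```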
NIST Risk Management Framework Implementation Costs
NIST released its AI Risk Management Framework (AI RMF 1.0) in January 2023, followed by a Generative AI Profile (NIST-AI-600-1) issued July 26, 2024.[18] The voluntary framework applies governance and risk management principles across four core functions: GOVERN (cross-cutting, applies to all stages), MAP (establishing system-specific context), MEASURE (analyzing and tracking risks), and MANAGE (prioritizing and mitigating risks).
Implementation costs for federal agencies, estimated at $500 million-$1 billion through 2027, include:
- Consultant and audit services for framework assessment
- AI governance infrastructure and tools
- Training for 50,000+ federal employees in AI risk management
- Documentation and compliance reporting systems
Workforce Safety Net Expansion
Current U.S. workforce retraining programs inadequately address AI-driven displacement. The Workforce Innovation and Opportunity Act (WIOA) Title I program, administered by the Department of Labor, provided $3.3 billion in FY2023 to serve 700,000 participants with job search assistance and training.[19] These programs were not designed for the extensive retraining required when AI displacement affects millions of workers simultaneously.
Proposed solutions include wage insurance programs, extended unemployment benefits for AI-displaced workers, and accelerated certification programs, with estimated costs of $700M annually under the proposed AAA (Adjustment and Advancement Act) model, scaling to $2-3 billion by 2030 if displacement accelerates.
Federal workforce training initiatives launched in 2024-2025 include:
- GSA AI Training Series (2024): 14,000+ participants from 200 government organizations across 21 sessions
- Center for Federal AI (2025): $10 million Google grant-backed center launched March 2025 to provide AI skill development for federal employees
- U.S. Tech Force: 1,000 annual Fellows program in AI, cybersecurity, data science, and software engineering
- DoD AI Upskilling: Comprehensive workshops and hands-on training for military and civilian personnel[20]
5. Six Policy Recommendations with Implementation Phases
Recommendation 1: Establish a Federal AI Safety Board with Enforcement Authority
Objective: Create a statutory body within NIST with authority to mandate safety testing for foundation models, establish binding technical standards, and levy penalties for non-compliance.
Rationale: Current reliance on voluntary NIST frameworks has been outpaced by state-level mandates (New York's RAISE Act, California's SB 53) and international pressure. A federal board would establish uniform standards, reduce compliance costs for multistate companies, and align U.S. approaches with international practice while remaining less prescriptive than the EU's framework.
Implementation Phases:
- Phase 1 (Months 0-6): Congressional authorization and budget appropriation ($50M startup funding). Recruit board chair (recommended: AI safety researcher with government experience) and appoint 7-member board including industry, academia, civil society, and government representatives.
- Phase 2 (Months 6-18): Establish mandatory safety testing standards for foundation models exceeding $100M training cost and 10^26 computing operations (a coverage-check sketch follows this list). Define breach notification requirements (72-hour reporting timeline). Create compliance certification pathway.
- Phase 3 (Months 18-36): Begin enforcement actions with penalties of $1M-$50M for violations, scaled to company revenue. Require transparency reports on model capabilities, limitations, and known risks.
- Phase 4 (Years 3-5): Expand jurisdiction to systemic risk models and sector-specific applications (healthcare, finance, voting systems, defense).
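A minimal sketch of how the Phase 2 coverage rule above might be codified. The function and field names are hypothetical, and treating the two thresholds as a conjunctive test follows the "and" in the phase description:

```python
from dataclasses import dataclass

TRAINING_COST_THRESHOLD_USD = 100_000_000  # $100M training cost
COMPUTE_THRESHOLD_OPS = 1e26               # 10^26 computing operations

@dataclass
class FoundationModel:
    name: str
    training_cost_usd: float
    training_compute_ops: float

def requires_safety_testing(model: FoundationModel) -> bool:
    """A model is covered if it exceeds BOTH Phase 2 thresholds (assumed conjunctive)."""
    return (model.training_cost_usd > TRAINING_COST_THRESHOLD_USD
            and model.training_compute_ops > COMPUTE_THRESHOLD_OPS)

print(requires_safety_testing(FoundationModel("frontier-scale", 3e8, 5e26)))  # True
print(requires_safety_testing(FoundationModel("lab-scale", 2e6, 1e23)))       # False
```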
Budget Estimate: $50M one-time startup, $100M annually for operations, testing infrastructure, and technical staff (50-100 FTE professionals).
Key Metrics: 90% of U.S.-developed foundation models reporting safety test results; zero critical safety incidents escaping federal awareness within 72 hours; $200M+ in compliance spending by AI companies (spillover investment in safety infrastructure).
Recommendation 2: Scale NAIRR Pilot to Full National Implementation with $2B Investment
Objective: Expand the National AI Research Resource pilot (currently connecting 400+ research teams through 14 federal agencies and 28 private/nonprofit partners) into a permanent, fully-funded national infrastructure serving 5,000+ research teams by 2030.
Rationale: The NAIRR pilot, authorized under the CHIPS and Science Act, demonstrates an effective model for democratizing access to AI compute resources and datasets. Current funding (estimated at $69M annually for the NSF National AI Institutes) is insufficient for national scale. China's government is investing $10B+ in equivalent infrastructure. A full-scale NAIRR would accelerate American AI competitiveness while supporting smaller institutions, underrepresented researchers, and rural universities currently excluded from advanced AI research.
Implementation Phases:
- Phase 1 (Months 0-12): Congressional authorization of $2B over 5 years through NSF. Establish governance structure including representatives from 27 NSF-supported institutes and private partners (AWS, Google, Meta, Microsoft, OpenAI).
- Phase 2 (Year 1-2): Double compute capacity from 400 to 1,000 research teams. Establish allocation prioritization framework favoring research in AI safety, interpretability, bias mitigation, and climate/health applications.
- Phase 3 (Years 2-4): Expand to 5,000 research teams. Create pathways for graduate student and postdoctoral researcher access through formal application processes. Establish data commons with 500+ high-quality datasets (healthcare, scientific, climate, financial).
- Phase 4 (Year 5): Transition to sustainable funding model with 60% federal, 40% private co-investment. Measure outcomes: IP generated, researcher diversity, time-to-publication, and downstream commercial applications.
Budget Estimate: $2B over 5 years ($400M annually by Year 5). Allocation: 50% compute infrastructure, 30% personnel and operations, 20% data and tool development.
Key Metrics: 5,000+ research teams with resource access by 2030; 50% growth in U.S. AI publications from researchers at non-R1 institutions; $500M in follow-on commercial funding from NAIRR-supported research; 25% increase in women and underrepresented minorities in AI research.
Recommendation 3: Establish Sectoral AI Readiness Assessments and Industry-Specific Compliance Pathways
Objective: Develop sector-specific AI governance frameworks for healthcare, finance, critical infrastructure, and defense, recognizing that uniform approaches are ineffective across industries with vastly different risk profiles and existing regulatory regimes.
Rationale: Healthcare AI applications present different risks (patient safety, liability) than financial AI (market manipulation, discrimination) or defense AI (operational security, civilian casualty prevention). Current federal approach lacks sector specificity. International models (EU sectoral risk classifications, China's targeted requirements) demonstrate that industry-tailored frameworks drive both innovation and safety.
Implementation Phases:
- Phase 1 (Months 0-9): Establish sector working groups for healthcare (HHS-led), finance (Treasury/SEC-led), infrastructure (DHS-led), defense (DoD-led), and election systems (DHS Cybersecurity and Infrastructure Security Agency-led). Convene industry representatives, researchers, and civil society.
- Phase 2 (Months 9-18): Publish draft sector-specific standards addressing: use case restrictions, transparency requirements, accountability mechanisms, liability frameworks, and testing protocols.
- Phase 3 (Months 18-36): Pilot implementation with volunteer institutions in each sector. Conduct compliance readiness assessments for critical providers (hospitals, banks, power utilities).
- Phase 4 (Years 3-5): Formalize standards through rulemaking, executive order, or legislation depending on sector. Establish compliance certification bodies (potentially private, audited by government).
Budget Estimate: $50M one-time for sector workgroups and standards development. $200M annually for compliance assessment infrastructure and auditing.
Key Metrics: Sector-specific standards published for all priority industries by Year 2; 80% of critical infrastructure operators achieving compliance by Year 4; zero high-harm AI incidents in regulated sectors.
Recommendation 4: Implement Wage Insurance and Comprehensive Retraining for AI-Displaced Workers
Objective: Establish a federally-funded wage insurance program and retraining system specifically addressing AI-driven workforce displacement, with particular focus on mid-career workers (ages 35-55) with obsolete skills and limited retraining capacity.
Rationale: Current WIOA programs ($3.3B annually serving 700,000 participants) are inadequate for potential AI displacement of millions. Wage insurance—providing partial income replacement when displaced workers transition to lower-wage roles—is more politically sustainable than direct income support and encourages labor force participation. Retraining investments require substantial co-funding from employers and workers (skin in the game), increasing program effectiveness.
Implementation Phases:
- Phase 1 (Months 0-12): Congressional authorization of $700M annual pilot program in states with highest AI displacement risk (California, Texas, New York, Pennsylvania, Florida). Establish administration through state labor departments under federal guidance (DOL lead).
- Phase 2 (Year 1-2): Enroll 50,000 displaced workers. Provide wage insurance replacing 80% of lost income (up to $25,000 annually; a worked example follows the budget estimate below) for 2-3 years during retraining. Couple with subsidized tuition at community colleges and bootcamps for high-demand fields (healthcare, skilled trades, software development, infrastructure maintenance).
- Phase 3 (Years 2-4): Scale to all states and regions. Establish regional skills boards convening employers, unions, educational institutions to align retraining with local labor demand. Introduce employer co-investment requirements (25% of costs) to ensure training matches job creation.
- Phase 4 (Year 5): Evaluate outcomes (re-employment rate, wage replacement trajectory, program costs) and adjust accordingly. Target: 70% of participants re-employed within 18 months at 90%+ of prior wage.
Budget Estimate: $700M annually for pilot (50,000 workers × $14,000 average cost), scaling to $3-5B annually by Year 5 as displacement accelerates.
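A minimal sketch of the Phase 2 benefit rule and the pilot cost arithmetic. We read "lost income" as the gap between prior and new annual wage; that reading, and all names below, are our assumptions rather than statutory definitions:

```python
def annual_benefit(prior_wage: float, new_wage: float,
                   replacement_rate: float = 0.80, cap: float = 25_000) -> float:
    """Wage insurance: replace a share of the annual wage gap, capped per year."""
    lost_income = max(prior_wage - new_wage, 0.0)
    return min(replacement_rate * lost_income, cap)

print(annual_benefit(80_000, 55_000))   # 0.8 * 25,000 = $20,000
print(annual_benefit(120_000, 70_000))  # 0.8 * 50,000 = $40,000, capped at $25,000

# Pilot budget arithmetic from the estimate above.
pilot_cost = 50_000 * 14_000  # workers x average per-worker cost
print(f"Pilot cost: ${pilot_cost / 1e6:,.0f}M")  # $700M
```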
Key Metrics: Re-employment rate within 18 months; average wage replacement ratio at 12 and 24 months; participant satisfaction scores; regional economic resilience (avoiding lasting poverty in AI-disrupted communities).
Recommendation 5: Accelerate AI Talent Immigration and Domestic Workforce Development
Objective: Streamline visa pathways for AI researchers, engineers, and specialists while simultaneously launching federal AI talent pipeline initiatives targeting K-12 education, community colleges, and underrepresented populations.
Rationale: The U.S. faces a critical shortage of AI expertise—particularly in AI safety, interpretability, and alignment research where few formal training programs exist. International migration of AI talent is both expensive (visa processing) and uncertain (visa rejection rates). Domestically, AI education remains concentrated in elite universities. Reducing visa barriers while investing in accessible training creates redundancy, increases innovation velocity, and reduces dependency on foreign talent.
Implementation Phases:
- Phase 1 (Months 0-6): Congressional action on AI-specific visa provisions: create a 10,000-person annual visa allocation for AI/ML researchers and engineers (outside standard H-1B caps). Establish a startup visa for AI founders and researchers. Streamline green card processing (a 3-year timeline versus the current 10+ years).
- Phase 2 (Months 0-18, concurrent): NSF investment of $500M over 5 years in K-12 AI education programs. Partner with state education departments to develop AI curriculum standards and teacher training. Target: AI literacy in 50% of U.S. high schools by 2030.
- Phase 3 (Months 6-36): Fund 500 community college AI certificate programs through $200M DOL/NSF partnership. Emphasize applied AI for local industries (agriculture in Midwest, healthcare in South, manufacturing in Rust Belt).
- Phase 4 (Years 2-5): Launch "AI Scholars" program providing full scholarships and stipends to 2,000 undergraduate and graduate students annually, prioritizing women and underrepresented minorities in STEM. Partner with industry (Google, Microsoft, Meta) for internships and mentorship.
Budget Estimate: $200M for immigration streamlining and visa processing. $1.5B over 5 years for educational programs ($300M annually). Total: $1.7B over 5 years ($340M annually).
Key Metrics: 10,000 AI-visa holders per year by Year 3; 50% of high schools with AI curriculum by 2030; 50% increase in U.S. AI/ML graduate degrees from underrepresented populations; reduction in H-1B dependency for AI roles from 60% to 40% by 2030.
Recommendation 6: Create Regulatory Sandbox for AI Innovation with Pre-Market Safety Certification
Objective: Establish controlled environments where startups and researchers can develop, test, and deploy AI applications with reduced regulatory burden if they demonstrate compliance with federal safety standards, reducing time-to-market from 3-5 years to 12-18 months for safety-certified systems.
Rationale: Regulatory uncertainty discourages U.S. innovation relative to more permissive jurisdictions (Dubai, Singapore, Estonia). However, fully deregulated innovation poses safety risks demonstrated in fintech failures and algorithmic discrimination cases. A regulatory sandbox provides controlled experimentation path while maintaining safety oversight. International models (UK Financial Conduct Authority, Abu Dhabi Global Market) demonstrate effectiveness.
Implementation Phases:
- Phase 1 (Months 0-12): Congressional authorization and budget ($100M startup, $50M annually). Establish Federal AI Sandbox Board (separate from but coordinating with Federal AI Safety Board under Recommendation 1). Publish application guidelines, safety requirements, and expedited review timeline (60 days maximum).
- Phase 2 (Year 1-2): Accredit 20-30 innovation hubs (universities, corporate labs, civic tech centers) in high-innovation regions. Each hub manages 10-20 sandbox projects, with federal oversight through quarterly audits. Focus sectors: healthcare AI, climate tech, education, criminal justice reform.
- Phase 3 (Years 2-4): Demonstrate successful models through published case studies. Facilitate transition of proven projects from sandbox to full deployment. Establish clear liability frameworks protecting sandbox participants from frivolous litigation while maintaining accountability for genuine harms.
- Phase 4 (Year 5): Evaluate outcomes (projects successfully commercialized, job creation, safety record). Expand to 100+ hubs if demonstrated effective. Integrate learning into sector-specific frameworks (Recommendation 3).
Budget Estimate: $100M startup, $50M annually. Total 5-year cost: $350M.
Key Metrics: Time-to-market reduction from 3-5 years to 12-18 months; 200+ sandbox projects by Year 5; 50 successfully commercialized applications; zero critical safety incidents within sandbox populations; 30%+ of U.S. AI startups participating in sandbox at some stage.
6. Comparative Scorecard: United States vs. Peer Nations
| Policy Dimension | United States (Current) | United States (Recommended) | European Union | China |
|---|---|---|---|---|
| Regulatory Approach | Voluntary guidance + state fragmentation | Federal standards + sectoral specificity | Comprehensive prescriptive regulation | State-directed with mandatory standards |
| Federal AI Budget | $2.8B annual (2025) | $5-8B by 2030 | €2B annually (est.) | $10B+ annually |
| Safety Testing | Voluntary NIST RMF | Mandatory for models >$100M | Mandatory (high-risk tiers) | Mandatory (all systems) |
| Workforce Retraining | $3.3B WIOA (inadequate) | $3-5B annually by 2030 | €5B+ (varied by member state) | State-directed job reallocation |
| Research Infrastructure | NAIRR pilot (400 teams) | Full NAIRR (5,000 teams, $2B) | European AI Initiative (~€1B) | State-owned supercomputing centers |
| Enforcement Mechanism | FTC authority (limited) | Federal AI Safety Board ($1M-$50M penalties) | EU regulators (fines up to €35M or 7% of global turnover) | Government agencies + penalties |
| Competitive Advantage | Innovation speed, low compliance cost | Balance of safety + speed | Consumer/rights protection, market size | Speed to implementation, coordination |
| Competitive Risk | Safety incidents, state fragmentation, talent shortage | Improved (mitigated) | Slower innovation, startup exodus | International sanctions, limited allies |
Conclusion: Strategic Imperatives for 2026-2030
Artificial intelligence will shape American economic competitiveness, workforce stability, and geopolitical positioning through 2030. The current U.S. policy approach—characterized by voluntary standards, executive order governance, and regulatory fragmentation—is increasingly inadequate relative to international precedents and emerging domestic demands for safety assurance.
The six recommendations above balance competing imperatives:
Safety vs. Speed: Federal AI Safety Board establishes binding standards (safety) while regulatory sandbox accelerates commercialization (speed). Sectoral frameworks permit risk-appropriate innovation in high-stakes domains without imposing startup-level compliance burdens on research institutions.
Domestic Investment vs. International Talent: Accelerated visa pathways and $1.5B domestic AI education initiatives reduce long-term dependency on foreign talent while expanding opportunity for American workers and underrepresented populations.
Innovation Incentives vs. Worker Protection: Wage insurance and comprehensive retraining for AI-displaced workers create social license for innovation while maintaining labor force participation and economic participation of disrupted communities.
Implementation of these recommendations requires sustained congressional attention, bipartisan coalition-building, and executive agency coordination. The fiscal commitment—approximately $8-10B annually by 2030 across safety, research, workforce, and immigration initiatives—is modest relative to the $28 trillion U.S. economy and justified by returns on innovation investment, competitive positioning relative to China and EU, and prevention of larger social costs from unmanaged workforce displacement.
The window for proactive policy design closes rapidly. Delaying action to 2027-2028 will force reactive approaches addressing crisis (mass displacement, safety incidents, talent brain drain) rather than opportunistic positioning. Congress should act in 2026 to authorize the Federal AI Safety Board, expand NAIRR funding, and establish sectoral frameworks, enabling implementation through 2027-2030.
References
- U.S. Small Business Administration. (2025). "United States Small Business Profile 2025." Office of Advocacy. Available at: https://advocacy.sba.gov/2025/06/30/
- Stanford University Human-Centered Artificial Intelligence (HAI). "What the CHIPS and Science Act Means for Artificial Intelligence." Available at: https://hai.stanford.edu/policy/what-the-chips-and-science-act-means-for-artificial-intelligence
- National Science Foundation. (2024). "NSF Announces Funding to Establish National AI Research Resource Pilot." AI Institutes Program. Available at: https://www.nsf.gov/focus-areas/ai/institutes
- National Science Foundation. (2024). "NSF Announces Funding to Establish National AI Research Resource Pilot." Available at: https://www.nsf.gov/news/nsf-announces-funding-establish-national-ai-research
- Defense Scoop. (2023). "Pentagon Requesting More Than $3B for AI, JADC2." Available at: https://defensescoop.com/2023/03/13/pentagon-requesting-more-than-3b-for-ai-jadc2/
- Defense Scoop. (2025). "DoD-Palantir Maven 'Smart System' Contract Increase." May 2025. Available at: https://defensescoop.com/2025/05/23/dod-palantir-maven-smart-system-contract-increase/
- U.S. Congress. (2024). "Congressional Research Service Report on Federal AI Spending." Congressional Budget Office tracking. Available at: https://www.congress.gov/crs-product/R47843
- McKinsey Global Institute. (2024). "Agents, Robots, and Us: Skill Partnerships in the Age of AI." Available at: https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai
- Goldman Sachs Economic Research. (2023). "How Will AI Affect the Global Workforce?" Available at: https://www.goldmansachs.com/insights/articles/how-will-ai-affect-the-global-workforce
- The Interview Guys. (2025). "The State of AI in the Workplace in 2025." Available at: https://blog.theinterviewguys.com/the-state-of-ai-in-the-workplace-in-2025/
- Brookings Institution. (2024). "The EU and US Diverge on AI Regulation: A Transatlantic Comparison." Available at: https://www.brookings.edu/articles/the-eu-and-us-diverge-on-ai-regulation-a-transatlantic-comparison-and-steps-to-alignment/
- White & Case. (2025). "AI Watch: Global Regulatory Tracker—China." Available at: https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-china
- The White House. (2023). "Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence." October 30, 2023. Available at: https://www.mintz.com/insights-center/viewpoints/2191/2023-10-31-bidens-executive-order-artificial-intelligence-ai
- Mayer Brown LLP. (2025). "President Trump Issues Executive Order on Ensuring a National Policy Framework for AI." December 2025. Available at: https://www.mayerbrown.com/en/insights/publications/2025/12/president-trump-issues-executive-order-on-ensuring-a-national-policy-framework-for-artificial-intelligence
- Pillsbury Law. (2024). "California SB 1047 and AI Laws Overview." Available at: https://www.pillsburylaw.com/en/news-and-insights/sb-1047-california-ai-laws.html
- Stack Cyber. (2025). "AI State Laws." Available at: https://stackcyber.com/posts/ai-state-laws
- New York Governor's Office. (2025). "Governor Hochul Signs Nation-Leading Legislation on AI Frameworks." December 19, 2025. Available at: https://www.governor.ny.gov/news/governor-hochul-signs-nation-leading-legislation-require-ai-frameworks-ai-frontier-models
- National Institute of Standards and Technology. (2024). "AI Risk Management Framework." Available at: https://www.nist.gov/itl/ai-risk-management-framework
- Brookings Institution. (2024). "AI Labor Displacement and the Limits of Worker Retraining." Available at: https://www.brookings.edu/articles/ai-labor-displacement-and-the-limits-of-worker-retraining/
- Government Executive. (2024). "Partnership for Public Service Plans AI Training Center for Federal Employees in 2025." October 2024. Available at: https://www.govexec.com/technology/2024/10/partnership-public-service-plans-ai-training-center-federal-employees-2025/400342/