The pro-startup policy agenda to power AI innovation
Policymakers need to support pro-startup policies if they want a world-leading AI ecosystem made up of U.S. companies building AI, building with AI, and using AI to improve everyday tasks. AI is a foundational technology implicated in a range of policy issues impacting the competitiveness of startups and the speed of innovation. Policy to power AI innovation, then, is not accomplished with one bill or one framework; it requires smart policies across all issues impacting AI developers, deployers, and users.
Startups are driving innovation in AI benefiting every corner of the economy—from agriculture to manufacturing, healthcare to education, finance to retail, and more. But policymakers’ approach to regulating AI will determine who is able to participate in the AI ecosystem and the speed at which innovations are disseminated to benefit the public. The U.S. has generally followed a model of permissionless innovation, enabling entrepreneurs to build beneficial new products unencumbered by strict, expensive regulatory regimes antithetical to invention, experimentation, and iteration. This approach has made the U.S. tech sector the envy of the world. But poorly conceived AI regulatory frameworks that are overbroad and over-reliant on ex ante approaches threaten to undermine U.S. startup competitiveness and innovative capacity.
To address potential harms without burdening innovation, any regulation that policymakers pursue should be outcome-focused, begin from a position of existing law, and target bad actors. Many potential harms associated with AI are already illegal, and enforcing or enhancing those protections is the most straightforward way to address those concerns without new, duplicative, and burdensome regulations and frameworks.
AI also doesn’t stop at state or even national borders. Policymakers must act to avoid a state patchwork of varying or conflicting AI rules and take steps to avoid a global patchwork of incongruous, competing approaches. States and localities can (and even should) take steps to encourage AI investment, research, and development, but must avoid enacting their own unique AI rules. A patchwork of varying rules will burden startups, slow down innovation, and undermine U.S. AI leadership.
Winning the AI race can be accomplished by pursuing a policy agenda that unleashes innovation, encourages investment, creates clear rules, and opens markets, guided by these realities:
Startups need clear rules to be competitive.
Policymakers need to be sure their approach to regulating AI is tailored to discrete harms and is not duplicative of existing law to avoid harming startup competitiveness. For example, employment discrimination is already unlawful—whether that discrimination involves the use of AI or not—meaning a new law specific to hiring and AI is likely to be duplicative. Thanks to existing law, firms offering those services already have market and legal incentives to ensure their products function properly—new rules would only add cost and burden innovation without delivering justifiable benefit. Moreover, if each state has its own AI rules, the overlapping obligations will create additional costs and further undermine startup competitiveness. Maintaining one set of rules—achieved through federal preemption, if necessary—is imperative to creating the clarity needed to enable startup success.
AI: Startups want to build socially beneficial AI tools and routinely look to standard-setting organizations and industry best practices for guidance. To bolster responsible AI innovation and enhance U.S. global influence, policymakers should support business-led development of voluntary standards. Leveraging safe harbors can help further incent adoption of best practices. Policymakers can also utilize regulatory sandboxes, which foster innovation while enabling startups and regulators alike to learn from each other and find a balanced approach. Sandbox programs must be set up so that it is beneficial for startups to participate, and they require sufficient time, staffing, and resources to properly function.
Capital: Policymakers should avoid forcing startups to expend their few resources following new, overbroad rules—especially where existing law already covers the motivating concern. Overbroad, imprecise definitions in regulatory frameworks can scope-in far too many activities, creating compliance obligations that strain startups’ limited budgets without cognizable benefit. For example, definitions of AI can scope-in common technologies like calculators or spreadsheets. Categorically defining risk creates the same obligations for AI models that diagnose diseases as those that help schedule appointments.
Capital: Poorly conceived regulatory frameworks foist disproportionate compliance burdens upon startups that drain their limited capital, discourage innovation, and undermine economic dynamism. The obligations created by rules themselves, like audit requirements, can be prohibitively expensive. Since development costs and barriers to market steer where startups innovate, these provisions could be net-negative if they discourage socially beneficial innovations.
Startups build off of foundation models.
Startups are building and leveraging AI in a few distinct ways, and to avoid stifling innovators, policy needs to account for all of them. Many startups are leveraging foundation models, often large language models (LLMs)—both open and closed source—and fine-tuning them to create unique services. This means AI policy needs to support open-source development and avoid policies that will increase costs or disincent open- and closed-source developers from making their models available for others to build upon.
Liability: Foundation models are suitable for a wide range of tasks, but end users often determine how the model will be used. AI rules that incorrectly assign liability to developers of foundation models rather than malign actors will restrict the availability of those models, because developers will not want to be liable for the actions of others they do not control. This disincentive will be particularly acute for open-source models, because open-source developers lack formal relationships with and awareness of those who use and build with the technology they make widely available.
Startups need data to build AI.
AI innovation is data-driven, and startups need data to build, train, and fine-tune AI models. Data acquisition can be expensive, and startups have few resources to acquire the vast data sets needed to build accurate and useful models. Policy must enable startups to access the data they need to build, test, and improve AI models.
IP: Policy should recognize that training models with data sets that include copyrighted content is permissible under law. This is imperative to support innovation, deter costly litigation, avoid gatekeeping by large entities, and prevent prohibitively expensive licensing requirements.
Privacy: Startups need one consistent nationwide framework that creates clarity, streamlines costs, and fosters data-driven innovation. At present, a patchwork of data privacy laws risks uncertainty around data use, creates duplicate requirements, and weighs on already-strapped startup budgets. A federal privacy law should ensure that startups can collect and process the data necessary to create new and beneficial products.
Government: Government possesses troves of data useful for AI, especially for tailored, specialized models. Policymakers should ensure that agencies make this data available in AI-ready formats, and where possible, without availability lags that can undermine its utility.
Startups need resources to innovate.
AI innovation can be technical and expensive, and policy should aim to facilitate investment in AI startups, strengthen skilled talent pools, and make resources directly available to startups.
Capital: Policy should enable and incent investment in AI startups. Measures to grow the pool of investors are essential to improve capital access for AI innovators—especially those located outside existing major hubs. Further, pursuit of adjacent policy goals—especially in the competition space—without fully considering downstream consequences will make it harder for startups to get investment or leverage existing investments. Policymakers must avoid restricting investment in AI startups and should enable—not limit—successful startup exits.
Tax: Tax policies should prompt investment in startups and in AI R&D. Preserving and expanding favorable tax treatment of qualified small business stock and introducing federal credits for angel investments will help to de-risk investment in AI startups, aid early-stage AI startups in attracting needed talent, and encourage positive cycles of reinvestment of returns. Restoring immediate expensing of R&D costs will increase startups’ capacity by letting them put more resources toward AI innovation.
Talent: To grow the AI talent pool, policymakers should fund and support AI skilling and upskilling programs, enhance STEM education, and attract and retain talented immigrants. Skilling, upskilling, and STEM education are essential long-term investments in building a strong base of domestic talent prepared to work at AI companies and with AI tools. Reforming and expanding immigration pathways is essential to winning the talent race. Streamlining the O-1 visa program and creating a startup visa will ensure founders can start and grow AI companies in the U.S. Expanding the H-1B visa program will help to fill present shortages in skilled talent. Finally, many foreign students receive advanced STEM degrees from U.S. institutions, but then return home where they end up competing with U.S. innovators. Instead, those graduates should have the option to remain in the U.S. with the stability—like permanent residency—needed to launch or work at a startup.
Capital: Government grants are important means of capital access that support American R&D, facilitate commercialization of new innovations, and de-risk investment in recipient startups. Flagship programs like Small Business Innovation Research grants should be made permanent, accessible to more startups, and improved to suit startup realities. Further, key resources like compute and data can be very expensive and steer how startups innovate. Providing compute or data sets directly to startups can lower barriers to entry and bolster AI R&D.
Startups can help to improve government.
Policymakers can use levers of government to speed AI adoption and support U.S. AI leadership around the globe.
Government: Government is a large buyer of software and serves many functions that could be made more efficient through the use of AI. Policymakers should ensure acquisition processes work for startups so they can help to improve the provision of government services through AI.