April 22, 2024
As artificial intelligence (AI) rapidly evolves, regulations struggle to keep pace, creating complex legal challenges. Despite growing calls for comprehensive regulatory oversight, federal legislation has yet to materialize. Natasha Allen, Co-Chair for Artificial Intelligence in the firm’s Innovative Technology Sector, shares insights on the legal and regulatory landscape AI startups face today and what the future may hold.
Natasha describes the current regulatory environment as a “patchwork quilt” rather than a cohesive framework. While states have begun addressing AI-related risks, federal guidelines remain undeveloped. Under the Biden administration, two key themes have emerged: responsible AI, and transparency and explainability, with an emphasis on the need for human oversight in AI-generated decisions.
As AI systems become more autonomous, concerns about liability and accountability for AI-driven decisions are on the rise. Natasha explains that traditional legal principles still apply, meaning the responsible party remains accountable, even when AI is involved. She underscores the importance of carefully selecting inputs and monitoring outputs to ensure responsible AI usage. To assess AI-related risks, Natasha points to the risk management frameworks published by the National Institute of Standards and Technology (NIST) as valuable resources.
AI startups should also focus on structuring their agreements to address key issues like the use of proprietary data and ownership of AI-generated content. Natasha notes that many companies are now including clauses to specify when proprietary information is used to train large language models (LLMs). It’s crucial for companies to proactively define how their data is used in agreements. Regarding copyright concerns, businesses should document whether content was created entirely by humans, through AI, or with AI assistance.
Given the fast-paced nature of AI advancements and the ever-changing legal landscape, Natasha stresses the importance of staying informed about new regulations and ensuring ongoing compliance. She advises startups to maintain close communication with legal counsel to navigate legislative updates. Additionally, Foley provides a comprehensive resource to track AI-related legislation passed in various states.
Looking ahead, Natasha predicts a heightened focus in 2024 on preventing the misuse of deepfake technology, particularly in influencing elections. She also anticipates further federal efforts to establish comprehensive AI regulations, as well as other countries finalizing their own AI laws. Striking a balance between encouraging innovation and implementing necessary regulations will be critical to ensuring responsible AI development.
For startups and companies deploying AI technologies, staying informed about legislative changes in the U.S. and worldwide will be essential to thriving in this rapidly evolving field.