There’s little doubt that GCs and legal teams will soon see AI-focused rules and requirements added to the mix of privacy regulations their organizations must already follow.

A recent group conversation turned to the coming wave of new rules and regulations governing the use of AI. Around the table, in-house lawyers sighed, groaned, and sat back with slumped shoulders.

“I’m not even close to ready,” one shared. “We’re still trying to get on top of GDPR and other privacy regulations.” Many others are in the same boat.

Nonetheless, there’s little doubt that GCs and legal teams will soon see AI-focused rules and requirements added to the mix of privacy regulations their organizations must already follow. And for many, participation won’t be optional, as their companies must continue experimenting with AI to remain competitive.

Let’s look at recent developments and several challenges AI regulations will create for corporate legal teams. The news is good: many lawyers already know a great deal about responsible AI use, putting them on a clear path to keeping clients on the right side of any future AI regulations.

AI Regulations Are A Moving Target, Coming Fast

The European Union and other institutions sat idly by as social networks destroyed citizens’ privacy. Now, they refuse to make the same passive mistake with AI. As Caterina Rodelli, EU policy analyst at the nonprofit organization Access Now, said, “The European Parliament must enter [future discussions] with the strongest possible position to protect the rights of all people inside and entering the EU.”

The EU’s draft AI Act creates stringent obligations for governments and companies that use AI tools such as facial recognition and biometric surveillance. The AI Act, along with other guidelines, regulations, and eventual litigation outcomes, will no doubt require companies to provide detailed information on how their AI systems work, including how the tools make decisions and what data they use. Many companies will find these disclosures a significant obstacle, especially those that rely heavily on proprietary algorithms and data sets.
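To make those disclosures concrete, here is a minimal sketch of the kind of machine-readable system documentation a legal team might ask engineering to maintain. Every field name and value is hypothetical; the AI Act’s actual documentation requirements are far more detailed, and no specific regulatory schema is implied.

```python
import json

# Hypothetical, illustrative documentation record for one AI system.
# Real AI Act technical documentation is far more extensive; this only
# sketches the general shape such a record might take.
system_documentation = {
    "system_name": "resume-screening-assistant",  # hypothetical system
    "intended_purpose": "rank job applications for human review",
    "decision_logic": "gradient-boosted classifier over structured fields",
    "training_data_sources": ["internal HR records (2018-2022)"],
    "personal_data_used": True,
    "human_oversight": "a recruiter reviews every ranked shortlist",
    "known_limitations": ["may underperform on non-English resumes"],
}

# Persisting the record as JSON lets compliance, audit, and engineering
# teams share a single documentation artifact.
with open("system_documentation.json", "w", encoding="utf-8") as f:
    json.dump(system_documentation, f, indent=2)
```

Keeping records like this current is far easier than reconstructing them after a regulator asks.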

In addition, the AI Act creates a new regulatory framework for high-risk AI applications, such as those used in healthcare, public services, and transportation. Companies that make or use these applications will need to adhere to rigorous transparency, accountability, and safety standards.

As with GDPR, violating AI regulations will likely lead to costly penalties. Any missteps by larger companies with higher profiles and resource levels will likely attract more negative attention from regulators, consumers, and the media. However, midsize and small companies can also face significant reputational and financial damage if found in violation.

Courts And Agencies Actively Shape AI Regulations

Italy initially banned ChatGPT, an AI-powered chatbot developed by OpenAI, after the Italian Data Protection Authority suspected ChatGPT collected and processed sensitive personal information without user consent. (ChatGPT has since been reinstated there.) Though users can opt out of some of ChatGPT’s data collection, qualms linger over the potential for generative-AI tools to collect personal data, spread misinformation, and be used for harmful activities.

Regulatory agencies across the globe have expressed concern over AI’s potential impact on privacy and security, driving some to take steps to regulate AI-powered chatbots and other AI applications. According to Stanford University’s 2023 AI Index, which analyzed legislative records in 127 countries, 37 AI-related laws passed in 2022, and most of the 123 AI-related bills passed since 2016 came in the last few years.

Meanwhile, generative-AI tools such as GitHub Copilot, Midjourney, and Stable Diffusion face copyright infringement lawsuits filed on behalf of original creators, including programmers, writers, and artists. IP and other cases will likely multiply as companies and individuals grapple with the implications of AI-powered tools that can generate new (but not necessarily original) text, code, images, and videos.

Are We Headed Toward AI Specialization?

The outcomes of lawsuits and regulatory actions are bound to affect whether and how companies build and adopt AI tools. Larger organizations may opt to focus AI efforts on niche use cases that are less risky and aligned with their core competencies. Some midsize and small companies are already priced out and will be hard-pressed to secure the talent to compete.

But you never know. Smaller companies and freelancers may find opportunities to fill in the gaps. This is especially likely in markets and industries considered too risky or not profitable enough for big players to invest. We already see more specialized AI tools tailored to the needs of specific industries, companies, groups, and use cases, broadening the AI market and potentially making AI more accessible to more people.

Don’t Wait And Get Left Behind. Explore AI Now.

Do not wait to explore AI; just be intentional. Implement safeguards that strike the right balance, reaping the benefits of AI while avoiding legal and reputational risks. Along with protecting user privacy and preventing bias, establish proper governance measures to monitor algorithmic performance, data collection, and usage.
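What might such a governance measure look like in practice? Below is a minimal sketch of an audit-log wrapper that records each call to a generative-AI model along with the categories of data involved, so usage can later be reviewed against policy. Every name here (the categories, the record fields, the model identifier) is hypothetical, and this illustrates the idea rather than serving as a compliance tool.

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

# Hypothetical data categories a policy team might approve for AI use.
ALLOWED_CATEGORIES = {"public", "internal", "personal_data"}

@dataclass
class AIUsageRecord:
    """One audit-log entry for a single model call."""
    model: str              # which AI system was used
    purpose: str            # business justification for the call
    data_categories: list   # kinds of data sent to the model
    record_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: float = field(default_factory=time.time)

def log_ai_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append the record as one JSON line, rejecting unreviewed data categories."""
    unknown = set(record.data_categories) - ALLOWED_CATEGORIES
    if unknown:
        raise ValueError(f"Unreviewed data categories: {unknown}")
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: record a marketing-copy request that used only public data.
log_ai_usage(AIUsageRecord(
    model="example-llm-v1",  # hypothetical model identifier
    purpose="draft marketing copy",
    data_categories=["public"],
))
```

An append-only log like this gives counsel something concrete to review when requirements change, which, as the next point notes, they will.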

Requirements will evolve as lawmakers, judges, and regulators learn more about AI and its impact. Approach AI with a mindset of responsible experimentation and a commitment to ethical, transparent practices, and you will start a step ahead.

By Olga V. Mack

Source: https://abovethelaw.com/2023/07/todays-responsible-ai-practices-help-legal-teams-meet-tomorrows-ai-regulations/
