AI Compliance: What It Is and Why You Should Care [2024 update]


 

As Artificial Intelligence (AI) continues to become more prevalent in the workplace – from recruiting to software development to marketing – organizations are increasingly looking for ways to ensure that their AI systems comply with relevant regulations and standards. The field of AI compliance is complex and ever-changing, and staying up-to-date with the latest developments is essential for any organization that uses artificial intelligence. As a professional – whether you are a manager, developer, security specialist, or simply an AI enthusiast – it is crucial to know what the challenges are in this field and how you can tackle them to save your company money and yourself many headaches.

 

What is AI compliance?

AI compliance is the process of ensuring that AI-powered systems comply with all applicable laws and regulations.
  • It includes checking that companies and individuals do not use AI-powered systems to break any laws or regulations;
  • It ensures that the data used to train AI systems is collected and used legally and ethically;
  • AI compliance guarantees that AI-powered systems are not used to discriminate against any particular group or individual and are not used to manipulate or deceive people in any way;
  • It involves verifying that nobody uses AI-powered systems to invade individuals’ privacy or cause any harm to them;
  • Finally, AI compliance also assures that AI-powered systems are employed responsibly and in a way that benefits society.

 

Why is AI compliance important? 

AI compliance matters for several reasons. First, it ensures that organizations use AI legally and ethically: AI-powered systems can make decisions that significantly impact individuals, and organizations must ensure those decisions comply with applicable laws and regulations. Second, it protects organizations from legal and financial risk: if authorities find an AI-powered system to be non-compliant, the organization may face fines, penalties, or other legal action. Finally, it protects the privacy and security of individuals: AI-powered systems can collect and process large amounts of personal data, and organizations that fail to handle this data legally and ethically may face hefty fines.

 

AI has not always been compliant – some examples from the past

Artificial Intelligence has been at the center of discussions about potential bias since the early 2000s. There are many examples in which artificial intelligence poses an ethical or security threat, such as:

  • AI-based hiring tools. In 2018, Amazon scrapped an internal AI hiring tool after discovering it was biased against women. Trained on resumes from a male-dominated tech industry, the model had learned to favor male candidates.
  • Deepfakes. According to Dr. Tim Stevens (Cyber Security Research Group at King’s College London), deepfakes (synthetic media in which one person’s likeness is swapped for another’s in an existing image or video) pose a severe threat to national security, since autocracies could use them to undermine public confidence in democratic institutions.
  • AI-powered photo editing and data protection. Several apps that use AI to enhance or transform real photos have handled user data dubiously, such as a Facebook app that leaked data to a Russian company in 2022. Another worrying example was the popular app FaceApp, whose oddly written privacy policy stated that any photographs shared by users effectively become the property of FaceApp.
  • In 2024, Clearview AI, a controversial U.S.-based facial recognition company that reportedly collaborates with government and law enforcement agencies, was fined over $30 million by the Netherlands’ data protection authority. The fine was issued for creating an illegal database containing billions of facial images scraped from social media and other websites, raising significant ethical and privacy concerns when building databases to train AI.

If you want to know more about the different types of bias in Artificial Intelligence – and in Machine Learning and Deep Learning – make sure to check out our blog article “Artificial Intelligence, Machine Learning, and Deep Learning: Addressing the Bias”.

So far, the legislation behind these cases has been relatively loose, with limited fines and enforcement. However, as laid out below, things are rapidly changing – and it’s better to be prepared.

 


 

High fines for those who fail to comply – an overview of EU and U.S. legislation

Using and developing AI in a way that is not fully compliant with applicable law may result in high fines and penalties. Let’s look at the most up-to-date legislation and how your company can avoid hefty fines by knowing the relevant laws in the EU and U.S.

The EU Artificial Intelligence Act (European AI Act)

In April 2021, the European Commission introduced the EU AI Act, the world’s first comprehensive regulatory framework for artificial intelligence. The Act classifies AI systems based on the risk they pose to users, from minimal to high-risk applications, ensuring that higher-risk systems face more stringent regulations. This landmark legislation aims to foster innovation while safeguarding users, setting the standard for responsible AI use in the EU.

2024 has been a pivotal year for AI regulation, given the rapid developments in the field and the leading role the EU is playing in data protection and AI-related legislation. The European AI Act, adopted by the European Parliament in March 2024 and officially approved by the Council in May 2024, marks a significant milestone as the first comprehensive law regulating artificial intelligence. Although full implementation will take place in 2026, some provisions come into effect earlier. For the latest updates, visit the official European Parliament website.
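The Act’s risk-based approach can be pictured as a simple tier-to-obligations mapping. The sketch below is illustrative only: the tier names reflect the Act’s commonly cited categories (minimal, limited, high, unacceptable), but the example systems and one-line obligation summaries are simplifications, not legal text.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the EU AI Act, from lowest to highest regulatory burden."""
    MINIMAL = "minimal"            # e.g. spam filters
    LIMITED = "limited"            # e.g. chatbots
    HIGH = "high"                  # e.g. hiring or credit-scoring tools
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring

def obligations(tier: RiskTier) -> str:
    """Return a one-line (simplified) summary of the obligations for a tier."""
    return {
        RiskTier.MINIMAL: "No additional obligations.",
        RiskTier.LIMITED: "Transparency obligations (disclose AI use to users).",
        RiskTier.HIGH: "Conformity assessment, risk management, human oversight, logging.",
        RiskTier.UNACCEPTABLE: "Deployment prohibited under Art. 5.",
    }[tier]

print(obligations(RiskTier.HIGH))
```

The practical takeaway: the first compliance question for any AI system is which tier it falls into, because that determines everything that follows.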

Fines for non-compliant companies under the EU AI Act can run as high as:

  • 35 million euros or 7% of the company’s total worldwide annual turnover (whichever is higher) – for the use of a prohibited AI system under Art. 5 of the AI Act;
  • 15 million euros or 3% of annual worldwide turnover (whichever is higher) – for failure to comply with obligations for operators, authorized representatives, importers, distributors, deployers, or notified bodies (Articles 16, 22–24, 26, 31, 33, 34, and 50);
  • 7.5 million euros or 1% of annual worldwide turnover (whichever is higher) – for supplying inaccurate, incomplete, or misleading information in response to a request from the competent authorities.
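The “whichever is higher” rule in the tiers above means the effective cap scales with company size. A minimal sketch of that arithmetic (the function name and the example turnover figure are ours, not from the Act):

```python
def eu_ai_act_fine(turnover_eur: float, fixed_cap_eur: float, turnover_pct: float) -> float:
    """Maximum fine under the EU AI Act's rule: the fixed amount or a
    percentage of worldwide annual turnover, whichever is higher."""
    return max(fixed_cap_eur, turnover_pct * turnover_eur)

# A company with EUR 2 billion turnover deploying a prohibited system (Art. 5):
fine = eu_ai_act_fine(2_000_000_000, 35_000_000, 0.07)
print(f"EUR {fine:,.0f}")  # 7% of 2B = EUR 140,000,000, well above the 35M floor
```

For large companies the percentage dominates, so the headline euro amounts understate the real exposure.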

Important ethical points in the European Artificial Intelligence Act (EU AI Act)

Since the act’s conception, the European Parliament has hashed out its stance on some important AI issues. Members expressed concerns about AI systems that can identify people in public spaces in real time or afterwards – with law enforcement as the only exception, in serious crime cases, and only with a judge’s approval. They’re also worried about systems that categorize people using sensitive attributes like gender or race, predictive policing that profiles individuals based on location or past behavior, systems that read emotions in contexts like law enforcement or schools, and the indiscriminate scraping of personal data from social media or CCTV to build facial recognition databases. All of this, they worry, could breach people’s rights and privacy.

Parliament members also decided that they need to be extra careful about AI that could harm people’s health, safety, fundamental rights, or the environment, as well as AI that could manipulate voters or affect the content folks see on big social media platforms. Companies behind AI technologies like GPT, which is a model that can generate language like a human, need to make sure their tech respects human rights, health, safety, and our environment. These companies need to show that they’re doing everything they can to prevent problems, meet design and information standards, and register in the EU’s database. They also need to be transparent about the fact their content is AI-generated and not created by humans. To support the growth of AI technology while protecting people’s rights, parliament members have proposed exceptions for research and open-source AI components. They’re encouraging the use of “regulatory sandboxes” where AI can be tested in a safe, controlled environment before being released into the wild.

Lastly, they want to give people more power to complain about AI systems and get explanations when these high-risk AI systems have a significant impact on their rights. They also want to strengthen the role of the EU AI Office to ensure these AI rules are followed correctly.

 

U.S. AI legislation and principles

The U.S. has implemented several key legislative measures aimed at regulating AI to ensure safety, fairness, and accountability. While these initiatives are in various stages of development and enactment, they provide a framework for organizations to align with emerging AI regulations.

By adhering to the following legislative measures and principles, organizations can better ensure that their AI systems remain safe, fair, and compliant with U.S. regulations. Here’s an overview of the main pertinent laws.

National Artificial Intelligence Initiative Act of 2020

  • Purpose: Directs AI research, development, and assessment across federal science agencies.
  • Impact: Established the American AI Initiative to coordinate AI activities across federal institutions.

AI in Government Act & Advancing American AI Act

  • Purpose: Direct federal agencies to advance AI programs and policies.
  • Impact: Promotes AI adoption within government operations.

Blueprint for an AI Bill of Rights Principles

  1. Safe and effective systems: Systems must be rigorously tested and monitored.
  2. Protection against algorithmic discrimination: Safeguards against unjustified bias and discrimination.
  3. Protection against abusive data practices: Empowers users with control over their data.
  4. Transparency: Ensures users are informed about AI use and its effects.
  5. Opt-out and human review: Provides options to opt-out of AI systems and seek human intervention.

State-Level Legislation

States like Maryland and California have introduced AI-related regulations to enhance oversight of AI systems. More information about the legislation above can be found at the end of this blog article.

International standards regulating AI: an overview of ISO standards

What is ISO?

The International Organization for Standardization (ISO) brings together experts worldwide to define best practices for a variety of processes, from product creation to management techniques. Founded in 1947, ISO is one of the oldest non-governmental organizations, facilitating international trade and cooperation. ISO’s standards aim to improve lives by making processes safer, easier, and more efficient.

Importance of ISO compliance for AI

ISO standards are highly respected across the global business community. Achieving ISO compliance boosts an organization’s reputation, symbolizing quality and regulatory adherence in business practices. Although ISO compliance is not a legal requirement, it provides essential guidance that naturally aligns with industry regulations, helping companies improve and meet global standards.

Is ISO compliance a legal requirement?

No, ISO compliance is voluntary. However, ISO standards offer valuable guidance for businesses looking to enhance their processes. They often align with various legal regulations across different industries. If an organization struggles with paying the relatively small fees associated with ISO/IEC standards, it may indicate a lack of readiness for more advanced processes, such as AI implementation.

 

Overview of relevant ISO standards for AI implementation in business

  • ISO/IEC 5338
    Co-developed by Software Improvement Group, ISO/IEC 5338 is a new global standard for AI lifecycle management. It defines processes to manage, control, and improve AI systems at every stage of their lifecycle. This standard can also be applied to projects or organizations developing or acquiring AI systems. For non-AI software elements, ISO/IEC/IEEE 12207 and ISO/IEC/IEEE 15288 lifecycle processes are available to manage traditional software or system components. Get to know the global standard on AI systems and make sure your AI systems are fully compliant.
  • ISO/IEC 27001
    With the growing threat of cybercrime, ISO/IEC 27001 helps organizations become aware of risks, proactively identify vulnerabilities, and address them effectively. This standard encourages a holistic approach to information security, assessing people, policies, and technology. Implementing this standard supports risk management, cyber-resilience, and operational excellence.
  • ISO/IEC 31700
    This standard sets high-level privacy-by-design requirements to ensure the protection of consumer data throughout a product’s lifecycle, covering all data processed by consumers. It is essential for safeguarding personal data in an increasingly digital world.
  • ISO/IEC 42001
    The first global standard focused on establishing, implementing, maintaining, and improving an Artificial Intelligence Management System (AIMS). This standard is crucial for organizations providing or utilizing AI products and services, ensuring ethical and responsible AI development.
    ISO/IEC 42001 addresses unique challenges such as transparency, ethical considerations, and continuous learning, helping organizations manage AI-related risks and opportunities while balancing innovation with effective governance.

 

How do you ensure AI compliance? It’s simple with the AI readiness guide and the right AI certifications

To ensure full AI compliance, organizations should take into account the following best practices:

  1. Establish clear policies and procedures for AI use.
  2. Develop a comprehensive compliance program.
  3. Monitor AI systems for compliance with applicable laws and regulations.
  4. Create an AI governance framework.
  5. Ensure data privacy and security.
  6. Establish an audit process for AI systems.
  7. Develop a process for reporting and responding to compliance issues.
  8. Implement a risk management program.
  9. Train personnel on AI compliance requirements.
  10. Utilize automated tools to monitor AI compliance.
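The ten practices above boil down to a tracked, owned, and regularly reviewed set of checks. As a purely illustrative sketch (the class names, owners, and check names are hypothetical, not part of any standard), a minimal compliance register might look like this:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class ComplianceCheck:
    """One item from the best-practices list, with its review status."""
    name: str
    owner: str
    passed: bool = False
    last_reviewed: Optional[date] = None

@dataclass
class ComplianceProgram:
    checks: list = field(default_factory=list)

    def outstanding(self) -> list:
        """Names of checks that have not yet passed review."""
        return [c.name for c in self.checks if not c.passed]

program = ComplianceProgram([
    ComplianceCheck("Clear AI use policies", owner="Legal",
                    passed=True, last_reviewed=date(2024, 5, 1)),
    ComplianceCheck("Data privacy and security review", owner="Security"),
    ComplianceCheck("AI system audit process", owner="Compliance"),
])
print(program.outstanding())
```

Even a spreadsheet serving the same purpose is fine; the point is that every check has a named owner and a visible pass/fail state.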

This list might look easy if you’re an AI and data protection expert. But what if you’re not? What if you need to learn how to establish clear policies and procedures – and develop a comprehensive compliance program?

No worries, we’ve got you covered: check out this free AI readiness guide for organizations, written by Rob van der Veer – leading author of AI standards like ISO/IEC 5338 and the EU AI Act Security Standard, and Senior Principal Expert at Software Improvement Group, a company leading in AI and software excellence. It is a simple, pragmatic, and comprehensive guidance document for business and IT leaders to fully prepare their organization for AI, covering the key aspects of:

  • AI Governance
  • AI Risk Management
  • AI Software Development
  • AI Security

Furthermore, a simple and straightforward way to become an expert in AI compliance is to obtain an AI and Data Protection certification from an internationally recognized organization. Together, these certifications cover many topics – including data privacy and security – and help develop a full grasp of artificial intelligence’s nuances and the relevant regulatory standards. They help professionals and their organizations understand the implications of using AI in various operations and ensure that their systems comply with relevant regulations and standards. Curious how to get one? Keep on reading!

 

Become an expert in AI compliance today with the proper tools

EXIN certifications span the domains of privacy, data protection, and AI – making them an invaluable tool for lawyers, developers, managers, and many other roles in the digital space to give their organizations and clients the guidance they need to ensure that their AI systems comply with relevant regulations and standards. By obtaining an internationally valid certification, you can help protect your organization and clients from potential legal and financial risks.

In this article, we’ve seen how important it is to be prepared to be fully compliant when it comes to AI. Are you ready to face the challenges on this topic and save your company money and legal expenses? Discover more about our AI & Data Protection certifications and ensure full AI compliance in your work and your company by checking the links below!