Yesterday, the California Department of Justice, Attorney General's Office (AGO), issued an advisory to provide guidance to consumers and to entities that develop, sell, and use AI about their rights and obligations under California law. The "Legal Advisory on the Application of Existing California Laws to Artificial Intelligence" outlines:

1) Unfair Competition Law (Bus. & Prof. Code, § 17200 et seq.): Prohibits deceptive AI practices such as false advertising of capabilities and unauthorized use of personal likeness, and makes violations of related state, federal, or local laws independently actionable under this statute.
2) False Advertising Law (Bus. & Prof. Code, § 17500 et seq.): Prohibits misleading advertisements about AI products' capabilities, emphasizing the need for truthfulness in promoting AI tools and services.
3) Competition Laws (Bus. & Prof. Code, §§ 16720, 17000 et seq.): Guard against anti-competitive practices facilitated by AI, ensuring that AI does not harm market competition or consumer choice.
4) Civil Rights Laws (Civ. Code, § 51; Gov. Code, § 12900 et seq.): Protect individuals from discrimination by AI in sectors including employment and housing.
5) Election Misinformation Prevention Laws (Bus. & Prof. Code, § 17941; Elec. Code, §§ 18320, 20010): Regulate the use of AI in elections, specifically prohibiting the use of AI to mislead voters or impersonate candidates.
6) Data Protection Laws: The California Consumer Privacy Act (CCPA) and the California Invasion of Privacy Act (CIPA) set strict requirements for transparency and the secure handling of personal and sensitive information; these protections extend to educational and healthcare settings through the Student Online Personal Information Protection Act (SOPIPA) and the Confidentiality of Medical Information Act (CMIA).

In addition, California has enacted several new AI laws, effective January 1, 2025:

Disclosure Requirements for Businesses:
- AB 2013: Requires AI developers to disclose training data information on their websites by January 1, 2026 (a sketch of such a disclosure follows below).
- AB 2905: Mandates disclosure of AI use in telemarketing calls.
- SB 942: Obligates AI developers to provide tools to identify AI-generated content.

Unauthorized Use of Likeness:
- AB 2602: Ensures contracts for digital replicas include detailed use descriptions and legal representation.
- AB 1836: Bans the use of deceased personalities' digital replicas without consent, with substantial fines.

AI in Elections:
- AB 2355: Requires disclosure for AI-altered campaign ads.
- AB 2655: Directs platforms to identify and remove deceptive election content.

Prohibitions on Exploitative AI Uses:
- AB 1831 & SB 1381: Expand prohibitions on AI-generated child pornography.
- SB 926: Extends criminal penalties to creating nonconsensual pornography using deepfake technology.

AI in Healthcare:
- SB 1120: Requires licensed-physician oversight of AI healthcare decisions.
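For developers facing AB 2013's disclosure requirement, the posted documentation is ultimately structured information about training data. Below is a minimal, illustrative sketch of what such a disclosure might look like as machine-readable JSON; the field names, model, and dataset are hypothetical assumptions, not categories mandated by the statute.

```python
import json

# Illustrative sketch only: AB 2013 requires developers to post documentation
# about training data on their websites. These fields are assumptions for
# illustration, not the statute's required categories.
training_data_disclosure = {
    "model_name": "example-model-v1",      # hypothetical model
    "developer": "Example AI Co.",         # hypothetical developer
    "datasets": [
        {
            "name": "public-web-corpus",   # hypothetical dataset
            "source": "publicly available web pages",
            "contains_personal_information": True,
            "contains_copyrighted_material": True,
            "collection_period": "2020-01 to 2024-06",
        }
    ],
    "disclosure_posted": "2026-01-01",     # statutory deadline per AB 2013
}

# Serialize for publication on the developer's website.
print(json.dumps(training_data_disclosure, indent=2))
```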
Compliance Requirements for AI Developers
Explore top LinkedIn content from expert professionals.
Summary
Compliance requirements for AI developers are the legal and ethical rules that govern how artificial intelligence systems are built, deployed, and managed to protect users, ensure fairness, and satisfy local and international law. They include specific regulations on data privacy, transparency, and risk management, with standards that continue to evolve across the EU, California, and other jurisdictions.
- Understand regional rules: Make sure you study and follow the unique compliance and data protection laws for every location where your AI system is used, such as the EU AI Act, GDPR, CCPA, and others.
- Document and disclose: Keep clear records about how your AI models are trained, what data is used, and the intended purposes, and be transparent about any AI-generated content or decisions.
- Prioritize oversight and security: Build in ways for humans to supervise critical AI functions, test for fairness and bias, and put cybersecurity controls in place to prevent misuse or unauthorized access.
-
𝐀𝐈 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞 & 𝐃𝐚𝐭𝐚 𝐏𝐫𝐨𝐭𝐞𝐜𝐭𝐢𝐨𝐧 𝐋𝐚𝐰𝐬 𝐟𝐨𝐫 𝐆𝐞𝐧𝐀𝐈 𝐀𝐩𝐩𝐬

Building GenAI apps for a global audience? Understanding regional data protection and AI laws is not optional; it is foundational. Here is what you need to know:

1. UNDERSTANDING GLOBAL REGULATORY VARIANCE
Key regulations by region:
• EU AI Act: Risk-based obligations for AI systems, plus transparency duties for certain use cases
• GDPR (EU): Transparency & consent
• DPDP (India): Digital personal data protection
• PIPL (China): Strict data localization
• CCPA (California): Data access & opt-out
• LGPD (Brazil): Local compliance rules

2. IMPACT OF THESE REGULATIONS ON YOUR AI TRAINING DATA
To build compliant GenAI apps, ensure that data used to train AI models follows the regional rules at every stage: Data Collection → Processing → Model Training → Deployment.
Three core requirements (a minimal sketch follows this post):
a. User Consent: Obtain explicit consent for data collection and use
b. Data Minimization: Collect only the data necessary for the intended purpose
c. Anonymization: Remove personally identifiable information from training data

3. MITIGATING AI ETHICS AND BIAS RISKS
AI systems must be fair and ethical, particularly in high-risk areas:
a. Fairness: Ensure your models don't discriminate, especially in areas like recruitment or finance
b. Bias Mitigation: Regularly test and adjust your models to reduce bias in outputs

4. ENSURING TRANSPARENCY IN AI MODEL DEVELOPMENT
Transparency is a cornerstone of compliance, especially when your AI affects users directly:
a. Explainability: Document how your models reach their outputs so decisions can be explained
b. Consent Management: Collect, track, and manage user consent
c. Privacy by Design: Embed privacy into every system layer

5. MANAGING CROSS-BORDER DATA FLOW
GenAI apps often rely on data from multiple regions, so it's critical to understand data sovereignty laws:
a. Data Sovereignty: Follow local laws on where data is stored and processed
b. Data Transfer Agreements: Use SCCs or BCRs for compliant cross-border transfers

THE COMPLIANCE CHECKLIST
Before launching GenAI globally, verify:
1. Regional compliance: GDPR for the EU (transparency & consent)? DPDP for India (data protection)? PIPL for China (data localization)? CCPA for California (access & opt-out)? LGPD for Brazil (local rules)?
2. Training data: User consent obtained? Data minimized? PII anonymized?
3. Ethics & bias: Fairness tested? Bias mitigation in place?
4. Transparency: Explainability documented? Consent management system? Privacy by design?
5. Cross-border: Data sovereignty compliance? Transfer agreements (SCCs/BCRs)?

Each region has different requirements. Build for the strictest, adapt for the rest. Which regulation applies to your GenAI app?
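The consent → minimization → anonymization pipeline in point 2 can be made concrete in a few lines. This is a minimal sketch under stated assumptions: the record schema is invented, and the two regexes stand in for the dedicated PII-detection tooling a production system would use.

```python
import re

# Toy PII patterns; real systems use dedicated PII-detection tooling.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def prepare_training_record(record: dict) -> str | None:
    """Keep a record only with consent, drop unneeded fields, scrub PII."""
    if not record.get("user_consented"):   # 1. explicit user consent
        return None
    text = record["text"]                  # 2. minimization: keep only the text field
    text = EMAIL.sub("[EMAIL]", text)      # 3. anonymization: redact identifiers
    text = PHONE.sub("[PHONE]", text)
    return text

sample = {"user_consented": True,
          "text": "Contact me at jane@example.com or +1 555 010 9999."}
print(prepare_training_record(sample))  # Contact me at [EMAIL] or [PHONE].
```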
-
Yesterday, the AI Office published the third draft of the General-Purpose AI Code of Practice, a key regulatory instrument for AI providers seeking to align with the EU AI Act. Developed with input from 1,000 stakeholders, the draft refines previous versions by clarifying compliance requirements and introducing a structured approach to regulation. GPAI providers must meet baseline obligations on transparency and copyright compliance, while models classified as having systemic risk face additional commitments under Article 51 of the AI Act. The final version, expected in May 2025, aims to facilitate compliance while ensuring AI models adhere to safety, security, and accountability standards.

The Code introduces the Model Documentation Form, requiring AI providers to disclose key details such as model architecture, parameter size, training methodologies, and data sources. Transparency obligations include specifying the provenance of training data, documenting measures to mitigate bias, and reporting compute power and energy consumption. GPAI providers must also outline their models' intended uses, with additional requirements for systemic-risk models, including adversarial testing and evaluation strategies. Documentation must be retained for twelve months after a model is retired.

Copyright compliance is mandatory for all providers, including open-source AI. GPAI providers must establish formal copyright policies and comply with strict data collection rules: web crawlers cannot bypass paywalls, access piracy sites, or ignore the Robot Exclusion Protocol (a sketch of a robots.txt check follows this post). The Code also requires providers to prevent AI-generated copyright infringement, mandate compliance in acceptable use policies, and implement mechanisms for rightsholders to submit copyright complaints. Providers must maintain a point of contact for copyright inquiries and ensure their policies are transparent.

For AI models with systemic risk, the Code introduces a Safety and Security Framework, aligning with the AI Act's high-risk requirements. Providers must assess risks in areas such as cyber threats, manipulation, and autonomous AI behaviours. They must define risk acceptance criteria, anticipate risk escalations, and conduct assessments at key development milestones. If risks are identified, development may need to be paused while safeguards are implemented. GPAI providers must introduce technical safeguards, including input filtering, API access controls, and security measures meeting at least the RAND SL3 standard.

From 2 November 2025, systemic-risk models must undergo external risk assessments before release. Providers must maintain a Safety and Security Model Report, report AI-related incidents within strict timeframes, and implement governance structures ensuring responsibility at all levels. Whistleblower protections are also required. With the final version expected in May 2025, AI providers have a short window to prepare before the AI Act takes full effect in August.
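The crawler rules above are directly testable in code. Here is a minimal sketch of honoring the Robot Exclusion Protocol using Python's standard-library robotparser; the user agent and URLs are illustrative, and a real GPAI crawler would also need the paywall and piracy-site checks the Code requires.

```python
from urllib import robotparser
from urllib.parse import urlsplit

def may_fetch(url: str, user_agent: str = "example-gpai-crawler") -> bool:
    """Check the site's robots.txt before collecting training data.

    Note: RobotFileParser.read() fetches robots.txt over the network.
    """
    parts = urlsplit(url)
    rp = robotparser.RobotFileParser(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()
    return rp.can_fetch(user_agent, url)

# Illustrative usage with a hypothetical target URL.
if may_fetch("https://example.com/articles/1"):
    print("robots.txt permits crawling this URL")
else:
    print("skipping: disallowed by robots.txt")
```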
-
A new paper dropped today that deserves serious attention from anyone building or deploying AI agents in Europe. Nannini, Smith, Tiulkanov and colleagues have produced the first systematic regulatory mapping for AI agent providers under EU law. Not a policy commentary: an actual compliance architecture, integrating the draft harmonised standards under M/613, the GPAI Code of Practice, the CRA standards programme, and the Digital Omnibus proposals.

The core insight is deceptively simple: the regulatory trigger for an AI agent is determined by what the agent does externally, not by its internal architecture. The same LLM with tool-calling generates radically different compliance obligations depending on deployment.
→ Screen CVs? Annex III high-risk, full Chapter III.
→ Summarise meeting notes? Article 50 transparency only.
The technology is identical; the regulatory consequence diverges completely.

The paper identifies four agent-specific compliance challenges that current frameworks address in principle but not yet in practice.
1️⃣ 𝗖𝘆𝗯𝗲𝗿𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆: a system prompt telling the model "do not delete files" is not a security control. Article 15(4) compliance requires privilege enforcement at the API level, outside the generative model (a sketch follows this post).
2️⃣ 𝗛𝘂𝗺𝗮𝗻 𝗼𝘃𝗲𝗿𝘀𝗶𝗴𝗵𝘁: LLMs trained via RL may have learned to evade oversight as an emergent strategy. Oversight must take the form of external constraints, not internal instructions.
3️⃣ 𝗧𝗿𝗮𝗻𝘀𝗽𝗮𝗿𝗲𝗻𝗰𝘆: when an agent sends an email, the recipient is an affected person who may not know they are interacting with AI.
4️⃣ 𝗥𝘂𝗻𝘁𝗶𝗺𝗲 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝗿𝗮𝗹 𝗱𝗿𝗶𝗳𝘁: agents that accumulate memory or discover novel tool-use patterns may leave their conformity-assessment boundaries undetected.

The paper's conclusion is stark: high-risk agentic systems with untraceable behavioral drift cannot currently be placed on the EU market. Not a future risk, but the current legal position.

For anyone building AI governance infrastructure, this confirms what we have been arguing at Modulos: compliance for agentic AI must be continuous and architectural, not periodic and checklist-based. The provider's foundational task is an exhaustive inventory of the agent's external actions, data flows, connected systems, and affected persons: that inventory is the regulatory map.

👉 https://lnkd.in/e_zk3R6B
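Point 1️⃣ translates naturally into an enforcement layer that sits between the model and its tools. The sketch below shows one way such an API-level privilege gate might look; the tool names, allowlist, and path scope are illustrative assumptions, not the paper's implementation.

```python
# Privilege enforcement outside the generative model: every tool call the
# agent emits passes through this gate before execution. Tool names and
# scopes below are illustrative assumptions.
ALLOWED_TOOLS = {"search_documents", "summarize"}   # per-deployment allowlist
READ_ONLY_PATHS = ("/data/public/",)                # paths this agent may read

def execute_tool_call(name: str, args: dict) -> None:
    """Enforce the agent's privileges, then dispatch to the real tool."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{name}' not granted to this agent")
    if name == "search_documents":
        path = str(args.get("path", ""))
        if not path.startswith(READ_ONLY_PATHS):
            raise PermissionError("path outside the agent's read-only scope")
    print(f"executing {name}({args})")  # real tool dispatch would go here

execute_tool_call("search_documents", {"path": "/data/public/reports"})
try:
    # A prompt saying "do not delete files" is not a control; this gate is.
    execute_tool_call("delete_file", {"path": "/data/public/reports"})
except PermissionError as exc:
    print("blocked:", exc)
```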
-
Europe just made AI governance non-negotiable. prEN 18286 (the EU AI Act quality management system standard) is out; once cited in the Official Journal, it will grant presumption of conformity. Reality check: ISO/IEC 42001 ≠ EU AI Act compliance. Translation: for high-risk AI providers, you'll need evidence, not promises. That means design controls, data governance, risk management, and post-market monitoring that auditors can verify.

Do these 5 moves now:
- Map every AI system to EU AI Act risk tiers (a minimal sketch follows this post).
- Implement controls aligned to the new harmonized standards.
- Show your work: tech docs, eval evidence, audit trails.
- Challenge vendors: demand model cards, data lineage, red-team results.
- Monitor in production like safety-critical software.

Simplified, your fast path: risk-map → standardize controls → prove with evidence → vendor due diligence → live monitoring. Simple to say, hard to fake.

If you're "waiting to see," you're already late. Presumption of conformity will favor the prepared.

#EUAIAct #AICompliance #AIStandards #CENCENELEC #ISO42001 #GPAI #ResponsibleAI #AIGovernance #RiskManagement
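The first move, risk-tier mapping, can start as a classified inventory. A hedged sketch follows; the tier rules are a crude simplification of Article 5 and Annex III for illustration only, and the system names are invented.

```python
# Simplified EU AI Act tiering for illustration only; the real classification
# depends on Article 5, Annex III, and legal analysis of each use case.
PROHIBITED_USES = {"social_scoring"}
HIGH_RISK_USES = {"cv_screening", "credit_scoring", "exam_grading"}  # Annex III-style

def risk_tier(use_case: str) -> str:
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if use_case in HIGH_RISK_USES:
        return "high-risk"
    return "limited/minimal risk"

# Hypothetical enterprise inventory mapping systems to use cases.
inventory = {
    "hr-screening-bot": "cv_screening",
    "meeting-notes-ai": "summarization",
}
for system, use in inventory.items():
    print(f"{system}: {risk_tier(use)}")
```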
-
The EDPS - European Data Protection Supervisor has issued a new "Guidance for Risk Management of Artificial Intelligence Systems." The document provides a framework for EU institutions acting as data controllers to identify and mitigate data protection risks arising from the development, procurement, and deployment of AI systems that process personal data, focusing on fairness, accuracy, data minimization, security, and data subjects' rights. Based on ISO 31000:2018, the guidance structures the process into risk identification, analysis, evaluation, and treatment, emphasizing tailored assessments for each AI use case.

Some highlights and recommendations (a risk-register sketch follows this post):
- Accountability: AI systems must be designed with clear documentation of risk decisions, technical justifications, and evidence of compliance across all lifecycle phases. Controllers are responsible for demonstrating that AI risks are identified, monitored, and mitigated.
- Explainability: Models must be interpretable by design, with outputs traceable to underlying logic and datasets. Explainability is essential for individuals to understand AI-assisted decisions and for authorities to assess compliance.
- Fairness and bias control: Organizations should identify and address risks of discrimination or unfair treatment in model training, testing, and deployment. This includes curating balanced datasets, defining fairness metrics, and auditing results regularly.
- Accuracy and data quality: AI must rely on trustworthy, updated, and relevant data.
- Data minimization: The use of personal data in AI should be limited to what is strictly necessary. Synthetic, anonymized, or aggregated data should be preferred wherever feasible.
- Security and resilience: AI systems should be secured against data leakage, model inversion, prompt injection, and other attacks that could compromise personal data. Regular testing and red teaming are recommended.
- Human oversight: Meaningful human involvement must be ensured in decision-making processes, especially where AI systems may significantly affect individuals' rights. Oversight mechanisms should be explicit, documented, and operational.
- Continuous monitoring: Risk management is a recurring obligation; institutions must review, test, and update controls to address changes in system performance, data quality, or threat exposure.
- Procurement and third-party management: Contracts involving AI tools or services should include explicit privacy and security obligations, audit rights, and evidence of upstream data protection compliance.

The guidance establishes a practical benchmark for embedding data protection into AI governance, emphasizing transparency, proportionality, and accountability as the foundation of lawful and trustworthy AI systems.
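The ISO 31000 cycle the guidance builds on (identify → analyse → evaluate → treat) is easy to operationalize as a risk register. The sketch below is illustrative: the 1-5 scales, the treatment threshold, and the example risks are assumptions, not values from the EDPS document.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One entry in an ISO 31000-style register (identification step)."""
    description: str
    likelihood: int          # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int              # 1 (negligible) .. 5 (severe)   -- assumed scale
    treatment: str = "TBD"

    @property
    def score(self) -> int:  # analysis step: simple likelihood x impact
        return self.likelihood * self.impact

# Hypothetical risks for an AI system processing personal data.
register = [
    Risk("model inversion exposes training data", likelihood=2, impact=5),
    Risk("biased outputs in eligibility decisions", likelihood=3, impact=4),
]

# Evaluation step: rank and compare against an (assumed) treatment threshold.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    action = "treat now" if risk.score >= 12 else "monitor"
    print(f"[{risk.score:2}] {risk.description} -> {action}")
```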
-
The Oregon Department of Justice released new guidance on legal requirements when using AI. Here are the key privacy considerations, and four steps for companies to stay in line with Oregon privacy law. ⤵️

The guidance details the AG's views on how uses of personal data in connection with AI, or to train AI models, trigger obligations under the Oregon Consumer Privacy Act, including:
🔸Privacy Notices. Companies must disclose in their privacy notices when personal data is used to train AI systems.
🔸Consent. Updated privacy policies disclosing uses of personal data for AI training cannot justify the use of previously collected personal data for AI training; affirmative consent must be obtained.
🔸Revoking Consent. Where consent is provided to use personal data for AI training, there must be a way to withdraw consent, and processing of that personal data must end within 15 days (a deadline sketch follows this post).
🔸Sensitive Data. Explicit consent must be obtained before sensitive personal data is used to develop or train AI systems.
🔸Training Datasets. Developers purchasing or using third-party personal data sets for model training may be personal data controllers, with all the obligations that data controllers have under the law.
🔸Opt-Out Rights. Consumers have the right to opt out of AI uses for certain decisions like housing, education, or lending.
🔸Deletion. Consumer #PersonalData deletion rights must be respected when using AI models.
🔸Assessments. Using personal data in connection with AI models, or processing it in connection with AI models that involve profiling or other activities with a heightened risk of harm, triggers data protection assessment requirements.

The guidance also highlights a number of scenarios where sales practices using AI, or misrepresentations due to AI use, can violate the Unlawful Trade Practices Act.

Here are a few steps to help stay on top of #privacy requirements under Oregon law and this guidance:
1️⃣ Confirm whether your organization or its vendors train #ArtificialIntelligence solutions on personal data.
2️⃣ Validate that your organization's privacy notice discloses AI training practices.
3️⃣ Make sure organizational individual-rights processes are scoped for personal data used in AI training.
4️⃣ Set assessment protocols where required to conduct and document data protection assessments that address the requirements under Oregon and other states' laws, and that are maintained in a format that can be provided to regulators.
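The 15-day revocation rule is one of the few items here with a hard numeric deadline, which makes it a natural candidate for automation. A minimal sketch follows; the record layout and the wind-down interpretation are illustrative assumptions, not legal advice.

```python
from datetime import date, timedelta

# Per the guidance: processing must end within 15 days of consent withdrawal.
REVOCATION_WINDOW = timedelta(days=15)

def processing_deadline(revoked_on: date) -> date:
    """Date by which all processing of the revoked data must have stopped."""
    return revoked_on + REVOCATION_WINDOW

def may_process(record: dict, today: date) -> bool:
    """Gate AI-training use of one consent record (hypothetical schema)."""
    if record.get("consent_revoked_on") is None:
        return record.get("consented", False)
    # Assumed interpretation: processing may only wind down inside the window.
    return today <= processing_deadline(record["consent_revoked_on"])

record = {"consented": True, "consent_revoked_on": date(2025, 3, 1)}
print(may_process(record, date(2025, 3, 10)))  # True: within the wind-down window
print(may_process(record, date(2025, 3, 20)))  # False: past the 15-day deadline
```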
-
The 𝗔𝗜 𝗗𝗮𝘁𝗮 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 guidance from 𝗗𝗛𝗦/𝗡𝗦𝗔/𝗙𝗕𝗜 outlines best practices for securing data used in AI systems. Federal CISOs should focus on implementing a comprehensive data security framework that aligns with these recommendations. Below are the suggested steps, along with a schedule for implementation.

𝗠𝗮𝗷𝗼𝗿 𝗦𝘁𝗲𝗽𝘀 𝗳𝗼𝗿 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻
1. Establish Governance Framework
- Define AI security policies based on DHS/CISA guidance.
- Assign roles for AI data governance and conduct risk assessments.
2. Enhance Data Integrity
- Track data provenance using cryptographically signed logs (a sketch follows this post).
- Verify AI training and operational data sources.
- Implement quantum-resistant digital signatures for authentication.
3. Secure Storage & Transmission
- Apply AES-256 encryption for data security.
- Ensure compliance with NIST FIPS 140-3 standards.
- Implement Zero Trust architecture for access control.
4. Mitigate Data Poisoning Risks
- Require certification from data providers and audit datasets.
- Deploy anomaly detection to identify adversarial threats.
5. Monitor Data Drift & Security Validation
- Establish automated monitoring systems.
- Conduct ongoing AI risk assessments.
- Implement retraining processes to counter data drift.

𝗦𝗰𝗵𝗲𝗱𝘂𝗹𝗲 𝗳𝗼𝗿 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻
Phase 1 (Months 1-3): Governance & Risk Assessment
• Define policies, assign roles, and initiate compliance tracking.
Phase 2 (Months 4-6): Secure Infrastructure
• Deploy encryption and access controls.
• Conduct security audits on AI models.
Phase 3 (Months 7-9): Active Threat Monitoring
• Implement continuous monitoring for AI data integrity.
• Set up automated alerts for security breaches.
Phase 4 (Months 10-12): Ongoing Assessment & Compliance
• Conduct quarterly audits and risk assessments.
• Validate security effectiveness using industry frameworks.

𝗞𝗲𝘆 𝗦𝘂𝗰𝗰𝗲𝘀𝘀 𝗙𝗮𝗰𝘁𝗼𝗿𝘀
• Collaboration: Align with federal AI security teams.
• Training: Conduct AI cybersecurity education.
• Incident Response: Develop breach-handling protocols.
• Regulatory Compliance: Adapt security measures to evolving policies.
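Step 2's cryptographically signed provenance logs can be prototyped with the standard library. The sketch below uses an HMAC as a stand-in for the digital-signature scheme (the guidance points toward quantum-resistant signatures, which this is not); the key handling and the toy dataset are illustrative.

```python
import hashlib
import hmac
import json

# Illustrative only: a real deployment would hold this key in an HSM/KMS and
# use an actual (ideally quantum-resistant) signature scheme, not an HMAC.
SIGNING_KEY = b"replace-with-managed-secret"

def record_provenance(dataset_path: str, source: str) -> dict:
    """Hash a dataset file and sign the provenance entry."""
    with open(dataset_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    entry = {"dataset": dataset_path, "source": source, "sha256": digest}
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return entry

def verify(entry: dict) -> bool:
    """Recompute the MAC to detect tampering with the log entry."""
    sig = entry.pop("signature")
    payload = json.dumps(entry, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    entry["signature"] = sig
    return hmac.compare_digest(sig, expected)

# Demo with a toy dataset file.
with open("train.csv", "w") as f:
    f.write("id,text\n1,hello\n")
entry = record_provenance("train.csv", "vendor: example-data-co")
print(verify(entry))  # True unless the entry or file hash was altered
```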
-
New Dutch DPA Guidance on Generative AI: A GDPR Wake-Up Call for AI Innovators

The Dutch Data Protection Authority (Autoriteit Persoonsgegevens) just dropped its May 2025 consultation on GDPR Preconditions for Generative AI, offering a direct and detailed regulatory vision for how foundational AI models can (and cannot) comply with EU data law. Why this matters for AI legal professionals and industry leaders:

🔍 Key Finding: The AP estimates that the vast majority of existing foundation models are currently noncompliant with GDPR due to unlawful scraping, especially of sensitive data. Yet it offers a structured path toward responsible, lawful deployment, if actors take concrete steps.

📊 Core GDPR Precondition Areas:
✅ Lawful data collection, not just post hoc curation
🔄 Differentiation between controllers A (foundation model developers) and controllers B (fine-tuners & deployers)
🚫 Strict scrutiny of "special categories" (e.g. health, political beliefs) under Article 9
📤 Requirements for purpose limitation, transparency, and facilitation of data subject rights
❌ Regurgitation of personal data is a GDPR issue; hallucination is a technical failure

🔄 How does this compare to US regimes?
🇺🇸 California's ADMT regulation (still in draft) focuses on automated decision-making tools, requiring pre-deployment notices and opt-outs for consumers. But it lacks the training-data legality thresholds now demanded in the Netherlands.
📍 Colorado's AI Act creates AI risk tiers and imposes documentation duties, but, unlike the Dutch AP, it doesn't draw sharp legal lines between model developers and downstream deployers, nor does it impose a comparable prior compatibility analysis under GDPR Article 6(4).
🇪🇺 Meanwhile, this Dutch guidance aligns tightly with EDPB Opinion 28/2024 and anticipates harmonization with the AI Act. It's part GDPR enforcement, part AI compliance blueprint.

💡 Why this should spark real dialogue:
If you're fine-tuning a model on data you didn't collect: are you liable for past scraping? (Answer: requires analysis!)
Can you "untrain" unlawfully obtained data? (Not yet.)
Does anonymization truly exempt you from GDPR? (Only if it's verifiable, says the CJEU.)

🔗 This is more than guidance. It's a playbook for AI governance.
📬 Open for consultation until June 27, 2025. If you're advising, deploying, or building AI in Europe, you should be responding.

#AIRegulation #GDPR #GenerativeAI #DataPrivacy #AIGovernance #LegalTech #ColoradoAI #CaliforniaPrivacy #EDPB #AICompliance #FoundationModels #PrivacyLaw #AIEthics #DutchDPA #AIAudit #ResponsibleAI
-
🚨 𝐁𝐢𝐠 𝐀𝐈 𝐆𝐨𝐯𝐞𝐫𝐧𝐚𝐧𝐜𝐞 𝐍𝐞𝐰𝐬 𝐟𝐫𝐨𝐦 𝐂𝐚𝐥𝐢𝐟𝐨𝐫𝐧𝐢𝐚 🚨

Governor Gavin Newsom has just signed SB 53, a landmark law requiring large AI companies (>$𝟓𝟎𝟎𝐌 𝐫𝐞𝐯𝐞𝐧𝐮𝐞) to:
✅ Disclose safety protocols
✅ Report incidents & risk mitigation plans
✅ Provide whistleblower protections
💡 𝐏𝐞𝐧𝐚𝐥𝐭𝐢𝐞𝐬? Up to $1M per violation. Voluntary "best practices" are no longer enough.

𝐖𝐡𝐲 𝐢𝐭 𝐦𝐚𝐭𝐭𝐞𝐫𝐬 🌍
This is one of the first state-level AI safety laws in the U.S. It could set a blueprint for nationwide (or even global) governance. Even if your org isn't based in California, the ripple effects will be hard to ignore.

𝐖𝐡𝐚𝐭 𝐭𝐡𝐢𝐬 𝐦𝐞𝐚𝐧𝐬 𝐟𝐨𝐫 𝐑𝐢𝐬𝐤 & 𝐂𝐨𝐦𝐩𝐥𝐢𝐚𝐧𝐜𝐞 𝐋𝐞𝐚𝐝𝐞𝐫𝐬 🛡️
From a risk, compliance, and audit perspective, here's what we should be thinking about now:
🔍 𝐀𝐈 𝐑𝐢𝐬𝐤 𝐌𝐚𝐩𝐩𝐢𝐧𝐠: inventory where AI is used, trained, or embedded across the enterprise
📑 𝐃𝐨𝐜𝐮𝐦𝐞𝐧𝐭𝐚𝐭𝐢𝐨𝐧 𝐃𝐢𝐬𝐜𝐢𝐩𝐥𝐢𝐧𝐞: maintain transparent records of testing, bias checks, and safety reviews
⚠️ 𝐈𝐧𝐜𝐢𝐝𝐞𝐧𝐭 𝐑𝐞𝐚𝐝𝐢𝐧𝐞𝐬𝐬: treat AI misuse or failures like cyber incidents, with clear escalation, reporting timelines, and accountability
👥 𝐖𝐡𝐢𝐬𝐭𝐥𝐞𝐛𝐥𝐨𝐰𝐞𝐫 𝐏𝐚𝐭𝐡𝐰𝐚𝐲𝐬: ensure employees can flag AI risks safely before regulators do it for us
📊 𝐁𝐨𝐚𝐫𝐝-𝐋𝐞𝐯𝐞𝐥 𝐎𝐯𝐞𝐫𝐬𝐢𝐠𝐡𝐭: integrate AI risk into enterprise risk management and audit committee agendas

𝐌𝐲 𝐓𝐚𝐤𝐞 🎯
This isn't just about compliance. It's about 𝐛𝐮𝐢𝐥𝐝𝐢𝐧𝐠 𝐭𝐫𝐮𝐬𝐭. Clients, regulators, and the public want assurance that AI is safe, ethical, and transparent. Risk officers are uniquely positioned to help organizations get ahead of the curve.

👉 𝐐𝐮𝐞𝐬𝐭𝐢𝐨𝐧 𝐟𝐨𝐫 𝐲𝐨𝐮: 𝐒𝐡𝐨𝐮𝐥𝐝 𝐨𝐭𝐡𝐞𝐫 𝐬𝐭𝐚𝐭𝐞𝐬 𝐨𝐫 𝐭𝐡𝐞 𝐟𝐞𝐝𝐞𝐫𝐚𝐥 𝐠𝐨𝐯𝐞𝐫𝐧𝐦𝐞𝐧𝐭 𝐟𝐨𝐥𝐥𝐨𝐰 𝐂𝐚𝐥𝐢𝐟𝐨𝐫𝐧𝐢𝐚'𝐬 𝐥𝐞𝐚𝐝?

#AISafety #RiskManagement #Compliance #Governance #ArtificialIntelligence #SB53
𝐅𝐨𝐫 𝐦𝐨𝐫𝐞 𝐢𝐧𝐟𝐨: https://lnkd.in/eRYav_Nm