Trends in Data Protection


  • View profile for James Dempsey

    Managing Director, IAPP Cybersecurity Law Center, and Senior Policy Advisor, Stanford Program on Geopolitics, Technology and Governance

    5,861 followers

    Privacy isn't just about privacy anymore (and maybe never was). That's my takeaway from a fascinating new report from IAPP - International Association of Privacy Professionals. As regulations related to privacy, AI governance, cybersecurity, and other areas of digital responsibility rapidly expand and evolve around the globe, organizations are taking a more holistic approach to their values and strategies related to data. One indicator: over 80% of privacy teams now have responsibilities that extend beyond privacy. Nearly 70% of chief privacy officers surveyed by IAPP have acquired additional responsibility for AI governance, 69% are now responsible for data governance and data ethics, 37% for cybersecurity regulatory compliance, and 20% for platform liability. And, in my opinion, if privacy teams don't have official responsibility for other areas of data governance (AI, data ethics, cybersecurity), they should surely be coordinating with those other teams. https://lnkd.in/gM8WGx9T

  • View profile for Katharina Koerner

    AI Governance | Digital Consulting | Trace3: All Possibilities Live in Technology | Innovating with risk-managed AI: Strategies to Advance Business Goals through AI Governance, Privacy & Security

    44,177 followers

    This new white paper by the Stanford Institute for Human-Centered Artificial Intelligence (HAI), titled "Rethinking Privacy in the AI Era," addresses the intersection of data privacy and AI development, highlighting the challenges and proposing solutions for mitigating privacy risks. It outlines the current data protection landscape, including the Fair Information Practice Principles (FIPs), GDPR, and U.S. state privacy laws, and discusses the distinction between predictive and generative AI and its regulatory implications.

    The paper argues that AI's reliance on extensive data collection presents unique privacy risks at both the individual and societal levels. It notes that existing laws are inadequate for the emerging challenges posed by AI systems, because they neither fully address the shortcomings of the FIPs framework nor concentrate adequately on the comprehensive data governance measures needed to regulate data used in AI development.

    According to the paper, FIPs are outdated and ill-suited to modern data and AI complexities, because they:
    - Do not address the power imbalance between data collectors and individuals.
    - Fail to enforce data minimization and purpose limitation effectively.
    - Place too much responsibility on individuals for privacy management.
    - Allow data collection by default, putting the onus on individuals to opt out.
    - Focus on procedural rather than substantive protections.
    - Struggle with the concepts of consent and legitimate interest, complicating privacy management.

    It emphasizes the need for new regulatory approaches that go beyond current privacy legislation to effectively manage the risks associated with AI-driven data acquisition and processing. The paper suggests three key strategies to mitigate the privacy harms of AI:

    1. Denormalize data collection by default: Shift from opt-out to opt-in data collection models to facilitate true data minimization. This approach emphasizes "privacy by default" and the need for technical standards and infrastructure that enable meaningful consent mechanisms.
    2. Focus on the AI data supply chain: Enhance privacy and data protection by ensuring dataset transparency and accountability throughout the entire data lifecycle. This includes a call for regulatory frameworks that address data privacy comprehensively across the data supply chain.
    3. Flip the script on personal data management: Encourage the development of new governance mechanisms and technical infrastructures, such as data intermediaries and data permissioning systems, to automate and support the exercise of individual data rights and preferences. This strategy aims to empower individuals by making it easier to manage and control their personal data in the context of AI.

    By Dr. Jennifer King and Caroline Meinhardt. Link: https://lnkd.in/dniktn3V

  • View profile for Shea Brown

    AI & Algorithm Auditing | Founder & CEO, BABL AI Inc. | ForHumanity Fellow & Certified Auditor (FHCA)

    21,518 followers

    The Future of Privacy Forum (FPF) analyzes trends in U.S. state legislation regulating AI in areas that affect individuals' livelihoods, such as healthcare, employment, and financial services.

    🔎 Consequential decisions: Many state laws target AI systems used in "consequential decisions" that affect essential life opportunities, in sectors such as education, housing, and healthcare.
    🔎 Algorithmic discrimination: Legislators are concerned about AI systems leading to discrimination. Some proposals outright ban discriminatory AI use, while others impose a duty of care to prevent such bias.
    🔎 Developer and deployer roles: Legislation often assigns different obligations to AI developers (those who create AI systems) and deployers (those who use them). Both may be required to ensure transparency and conduct risk assessments.
    🔎 Consumer rights: Commonly proposed rights include notice, explanation, correction of errors, and appeals against automated decisions.
    🔎 Technology-specific regulations: Some laws focus on specific AI technologies such as generative AI and foundation models, requiring transparency and safety measures, including labeling of AI-generated content.

    Companies can treat the obligations this report identifies as trends and use them to forecast future requirements, e.g.:

    🔹 Obligations 🔹
    👉 Transparency: Developers and deployers are often required to provide clear explanations of how AI systems work.
    👉 Assessments: Risk assessments and audits are used to evaluate potential AI biases and discrimination risks.
    👉 Governance programs: AI governance programs are encouraged to oversee AI systems, ensuring they meet legal and ethical standards.

    #airegulation #responsibleai Future of Privacy Forum, Ryan Carrier, FHCA, Khoa Lam, Jeffery Recker, Jovana Davidovic, Borhane Blili-Hamelin, PhD, Dr. Cari Miller, Heidi Saas, Patrick Sullivan
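One concrete screening test used in algorithmic discrimination audits (a common practice in this space, though not prescribed by the FPF report itself) is the four-fifths rule from U.S. employment law: if a group's selection rate falls below 80% of the highest group's rate, that is treated as initial evidence of adverse impact. A minimal sketch, with hypothetical group names and counts:

```python
def selection_rates(outcomes: dict) -> dict:
    """outcomes maps group -> (selected, total); returns group -> selection rate."""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_check(outcomes: dict, threshold: float = 0.8) -> dict:
    """Return group -> bool: True if the group's rate is at least
    `threshold` times the highest group's rate (i.e. passes the screen)."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top >= threshold for g, rate in rates.items()}

# group_b's rate is 0.30 vs. group_a's 0.50; 0.30/0.50 = 0.6 < 0.8, so it is flagged.
result = four_fifths_check({"group_a": (50, 100), "group_b": (30, 100)})
assert result == {"group_a": True, "group_b": False}
```

A failed screen is not proof of discrimination, only a trigger for the deeper assessments and audits the state bills describe.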

  • View profile for Dev Stahlkopf

    Executive Vice President and Chief Legal Officer at Cisco

    7,639 followers

    In today's digital age, the importance of safeguarding personal information cannot be overstated. In recognition of #DataPrivacyDay, here are three key insights from Cisco's Consumer Privacy Survey, highlighting the global trends in privacy, trust, and data that are shaping consumer behavior.

    🔍 Privacy awareness is growing: 53% of respondents are now aware of their local privacy laws. Notably, this awareness leads to a significant boost in consumer confidence: 81% of respondents who are aware of their country's privacy law report feeling more confident in their ability to protect their data.
    🔍 Privacy impacts buying decisions: 75% of respondents will not purchase from organizations they do not trust with their data.
    🔍 Governance builds trust in the AI era: 78% of respondents believe organizations have a responsibility to use AI ethically, and 59% feel more comfortable sharing information in AI tools with strong privacy laws in place.

    As legal and policy professionals, we play a critical role in ensuring data privacy remains foundational to responsible innovation. Let's continue to champion thoughtful governance, embedding trust and safety in everything we do.

    Explore these insights and more ➡️ https://lnkd.in/gGNqV8nj

    #DataPrivacyDay #PrivacyAwareness #AI #Governance #Trust

  • View profile for Glen Cathey

    Advisor, Speaker, Trainer; AI, Human Potential, Future of Work, Sourcing, Recruiting

    66,232 followers

    Check out this massive global research study into the use of generative AI, involving over 48,000 people in 47 countries - excellent work by KPMG and the University of Melbourne! Key findings:

    𝗖𝘂𝗿𝗿𝗲𝗻𝘁 𝗚𝗲𝗻 𝗔𝗜 𝗔𝗱𝗼𝗽𝘁𝗶𝗼𝗻
    - 58% of employees intentionally use AI regularly at work (31% weekly/daily)
    - General-purpose generative AI tools are most common (73% of AI users)
    - 70% use free public AI tools vs. 42% using employer-provided options
    - Only 41% of organizations have any policy on generative AI use

    𝗧𝗵𝗲 𝗛𝗶𝗱𝗱𝗲𝗻 𝗥𝗶𝘀𝗸 𝗟𝗮𝗻𝗱𝘀𝗰𝗮𝗽𝗲
    - 50% of employees admit uploading sensitive company data to public AI
    - 57% avoid revealing when they use AI or present AI content as their own
    - 66% rely on AI outputs without critical evaluation
    - 56% report making mistakes due to AI use

    𝗕𝗲𝗻𝗲𝗳𝗶𝘁𝘀 𝘃𝘀. 𝗖𝗼𝗻𝗰𝗲𝗿𝗻𝘀
    - Most report performance benefits: efficiency, quality, innovation
    - But AI creates mixed impacts on workload, stress, and human collaboration
    - Half use AI instead of collaborating with colleagues
    - 40% sometimes feel they cannot complete work without AI help

    𝗧𝗵𝗲 𝗚𝗼𝘃𝗲𝗿𝗻𝗮𝗻𝗰𝗲 𝗚𝗮𝗽
    - Only half of organizations offer AI training or responsible use policies
    - 55% feel adequate safeguards exist for responsible AI use
    - AI literacy is the strongest predictor of both use and critical engagement

    𝗚𝗹𝗼𝗯𝗮𝗹 𝗜𝗻𝘀𝗶𝗴𝗵𝘁𝘀
    - Countries like India, China, and Nigeria lead global AI adoption
    - Emerging economies report higher rates of AI literacy (64% vs. 46%)

    𝗖𝗿𝗶𝘁𝗶𝗰𝗮𝗹 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻𝘀 𝗳𝗼𝗿 𝗟𝗲𝗮𝗱𝗲𝗿𝘀
    - Do you have clear policies on appropriate generative AI use?
    - How are you supporting transparent disclosure of AI use?
    - What safeguards exist to prevent sensitive data leakage to public AI tools?
    - Are you providing adequate training on responsible AI use?
    - How do you balance AI efficiency with maintaining human collaboration?

    𝗔𝗰𝘁𝗶𝗼𝗻 𝗜𝘁𝗲𝗺𝘀 𝗳𝗼𝗿 𝗢𝗿𝗴𝗮𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻𝘀
    - Develop clear generative AI policies and governance frameworks
    - Invest in AI literacy training focusing on responsible use
    - Create psychological safety for transparent AI use disclosure
    - Implement monitoring systems for sensitive data protection
    - Proactively design workflows that preserve human connection and collaboration

    𝗔𝗰𝘁𝗶𝗼𝗻 𝗜𝘁𝗲𝗺𝘀 𝗳𝗼𝗿 𝗜𝗻𝗱𝗶𝘃𝗶𝗱𝘂𝗮𝗹𝘀
    - Critically evaluate all AI outputs before using them
    - Be transparent about your AI tool usage
    - Learn your organization's AI policies and follow them (if they exist!)
    - Balance AI efficiency with maintaining your unique human skills

    You can find the full report here: https://lnkd.in/emvjQnxa

    All of this is a heavy focus for me within Advisory (AI literacy/fluency, AI policies, responsible & effective use, etc.). Let me know if you'd like to connect and discuss. 🙏

    #GenerativeAI #WorkplaceTrends #AIGovernance #DigitalTransformation
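The "monitoring systems for sensitive data protection" action item above could start as a simple pre-submission check run before text is pasted into a public AI tool. The sketch below uses illustrative regex patterns only; real DLP tooling is far more sophisticated, and the pattern names here are hypothetical:

```python
import re

# Illustrative patterns only - real detection needs validation logic, not just regexes.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of the pattern categories detected in `text`."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

assert flag_sensitive("contact jane.doe@example.com re: SSN 123-45-6789") == ["email", "ssn"]
assert flag_sensitive("nothing sensitive here") == []
```

Even a crude gate like this gives employees a moment of friction - and the organization a log - before company data leaves the building.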

  • View profile for Jeffrey Cohen

    Chief Business Development Officer at Skai | Ex-Amazon Ads Tech Evangelist | Commerce Media Thought Leader

    27,203 followers

    To cookie or not to cookie - we no longer have to ask that question! Before moving on, we must thank Google for a few things. Thank you, Google, for finally resetting this debate; it only took you several years and multiple delays to arrive at an answer. Whether the cookie was to stay or go, the concept of consumer privacy isn't going away. Users still have the ability to opt in or out of cookies, and many choose to opt out. Sharing data is all about trust; we can't expect our customers to trust us with their data blindly, and as marketers, we should remember that trust is hard to build and easy to lose.

    This leads us back to the fact that first-party data strategies remain crucial and will define success in the future. Retail Media Networks that can connect data signals and match them to brand signals provide a new path forward to reach new audiences and measure the impact of advertising. In the end, we all owe the cookie a big thank you! The cookie served us well for many years. As we look to the future, here are a few things to remember:

    Invest in first-party data: Focus on collecting and utilizing first-party data through direct customer interactions. This can include data from email subscriptions, website analytics, CRM systems, and customer feedback.
    Explore alternative targeting methods: Experiment with contextual advertising, which targets ads based on the webpage's content rather than user behavior. This method respects privacy while still delivering relevant ads.
    Collaborate with industry partners: Work closely with industry partners to stay ahead of trends and share best practices. Collaboration can lead to innovative solutions and more effective strategies.
    Stay agile and adaptable: The digital advertising landscape is rapidly evolving. Stay agile and be prepared to adapt your strategies as new technologies and regulations emerge. Continuous learning and flexibility will be key to success.

  • 🤔 Midweek Reflection 🔍 Why We Need to Broaden the Data Governance Conversation and Toolbox

    A few years ago, we developed the 4Ps of Data Governance framework: ➡️ Purpose; ➡️ Principles; ➡️ Processes; ➡️ Practices. Since then, we've seen meaningful progress:

    ✅ There is growing convergence around shared principles, such as those outlined in our recent paper on Universal Principles for Data Governance. 💻 Read: https://lnkd.in/ezuKbqJD
    ✅ The recognition of data stewardship as a key role has helped build the necessary people infrastructure within institutions and governments. 💻 Read: https://lnkd.in/ewPXMA5U

    ➡️ But when it comes to practices - how we actually implement principles across the lifecycle of data - the conversation remains far too narrow. Most dialogues default to legal mechanisms, particularly data protection laws.
    ➡️ That's why, in recent conversations with policymakers, we encouraged them to think more expansively.

    📊 Below is a framework of 10 Data Governance Mechanisms that can be used to determine the portfolio of data governance practices (note that no single mechanism is sufficient on its own):

    1️⃣ Contractual Mechanisms: Legally binding agreements defining access, use, and third-party responsibilities. Examples: Data Sharing Agreements, SLAs, API Terms of Use
    2️⃣ Policies & Guidelines: Institutional or governmental rules that operationalize principles. Examples: Open Data Policies, AI Ethics Guidelines
    3️⃣ Technology & Governance by Design: Embedding governance into digital systems and infrastructure. Examples: Differential privacy, federated learning, access controls
    4️⃣ Standards and Vocabulary: Shared protocols and terminologies for interoperability and quality. Examples: ISO 27001, DCAT, FAIR principles
    5️⃣ Codes of Conduct: Agreed-upon norms for ethical and responsible data use. Examples: EU Code of Practice on Disinformation
    6️⃣ Procurement & Vendor Management: Ensuring governance requirements are built into procurement processes. Examples: Data clauses in RFPs, public sector data-sharing mandates
    7️⃣ Licensing: Setting clear conditions for data reuse and redistribution. Examples: Creative Commons Licenses, Social Licenses
    8️⃣ Data Stewardship & Institutional Arrangements: Roles and structures that enable accountable data use. Examples: Chief Data Stewards, Data Commons, Independent Auditors
    9️⃣ Audit & Compliance Mechanisms: Methods for monitoring and enforcing governance rules. Examples: Algorithmic Impact Assessments, Transparency Reports
    🔟 Training & Cultural Change Initiatives: Developing literacy and a governance-minded culture within organizations. Examples: Privacy trainings, data ethics workshops

    ➡️ Any mechanisms that should be added?

    🙏 Thanks to Begoña Glez. Otero for review of an earlier list.

    #DataGovernance #DataStewardship #ResponsibleAI #DigitalGovernance #DataPolicy #OpenData #SocialLicense #DataForGood
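Mechanism 3️⃣ above cites differential privacy as an example of governance by design. As a rough illustration of the idea (my addition, not part of the original framework), a counting query can be released with calibrated Laplace noise so the output never depends too strongly on any single record; the function name and parameters are hypothetical:

```python
import random

def dp_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace(0, 1/epsilon) noise.
    A counting query has sensitivity 1, so scale = 1/epsilon gives epsilon-DP."""
    scale = 1.0 / epsilon
    # A Laplace draw equals the difference of two i.i.d. exponential draws of that scale.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise

random.seed(0)
samples = [dp_count(42, epsilon=1.0) for _ in range(10_000)]
print(sum(samples) / len(samples))  # average hovers near 42, but no single release is exact
```

Smaller epsilon means more noise and stronger privacy; the governance decision is choosing where on that trade-off a given release should sit.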

  • View profile for Prukalpa ⚡

    Founder & Co-CEO at Atlan | Forbes30, Fortune40, TED Speaker

    45,601 followers

    Data governance is hitting a critical tipping point - and there are three big problems (and solutions) you can't ignore:

    1️⃣ Governance is Always an Afterthought: Often, governance only becomes important once it's too late. Fix: Embed governance from the start. Show quick wins so it's viewed as an enabler, not just cleanup.
    2️⃣ AI Exposes - and Amplifies - Flaws: AI governance introduces exponential complexity. Fix: Proactively manage risks such as bias and black-box decisions. Automate data lineage and compliance checks.
    3️⃣ Nobody Wants to 'Do' Governance: Mention "governance" and expect resistance. Fix: Make it invisible. Leverage AI to auto-document metadata and embed policies directly into everyday workflows, allowing teams to confidently consume data without friction.

    Bottom Line:
    → Plan governance early - late-stage fixes cost significantly more.
    → Use AI to do the heavy lifting - ditch manual spreadsheets.
    → Tie governance clearly to business outcomes like revenue growth and risk mitigation so it's championed by leaders.

    Governance done right isn't just compliance; it's your strategic advantage in the AI era.
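The "automate data lineage" fix can be sketched as a decorator that records each transformation's inputs and outputs into a lineage log. All names here (`track_lineage`, the dataset identifiers) are hypothetical, and a real system would write to a metadata catalog rather than an in-memory list:

```python
import functools
import time

LINEAGE_LOG = []  # stand-in for a metadata catalog

def track_lineage(source: str, output: str):
    """Decorator that records which dataset a transformation read,
    which it wrote, and when it ran - without the author doing anything extra."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            result = fn(*args, **kwargs)
            LINEAGE_LOG.append({
                "step": fn.__name__,
                "reads": source,
                "writes": output,
                "ts": time.time(),
            })
            return result
        return inner
    return wrap

@track_lineage(source="raw.orders", output="mart.daily_revenue")
def daily_revenue(rows):
    return sum(r["amount"] for r in rows)

total = daily_revenue([{"amount": 10}, {"amount": 5}])
assert total == 15
assert LINEAGE_LOG[0]["reads"] == "raw.orders"
```

Because the bookkeeping rides along with the code, governance becomes "invisible" in exactly the sense the post describes: teams just write transformations, and lineage accumulates as a side effect.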

  • View profile for Colin Levy

    General Counsel @ Malbek - CLM for Enterprise | Adjunct Professor and Author of The Legal Tech Ecosystem | Legal Tech Speaker, Advisor, and Investor | Fastcase 50 2022 Winner

    44,511 followers

    As a lawyer who often dives deep into the world of data privacy, I want to delve into three critical aspects of data protection:

    A) Data Privacy
    This fundamental right has become increasingly crucial in our data-driven world. Key features include:
    - Consent and transparency: Organizations must clearly communicate how they collect, use, and share personal data. This often involves detailed privacy policies and consent mechanisms.
    - Data minimization: Companies should only collect data that's necessary for their stated purposes. This principle not only reduces risk but also simplifies compliance efforts.
    - Rights of data subjects: Under regulations like GDPR, individuals have rights such as access, rectification, erasure, and data portability. Organizations need robust processes to handle these requests.
    - Cross-border data transfers: With the invalidation of Privacy Shield and complexities around Standard Contractual Clauses, ensuring compliant data flows across borders requires careful legal navigation.

    B) Data Processing Agreements (DPAs)
    These contracts govern the relationship between data controllers and processors, ensuring regulatory compliance. They should include:
    - Scope of processing: DPAs must clearly define the types of data being processed and the specific purposes for which processing is allowed.
    - Subprocessor management: Controllers typically require the right to approve or object to any subprocessors, with processors obligated to flow down DPA requirements.
    - Data breach protocols: DPAs should specify timeframes for breach notification (often 24-72 hours) and outline the required content of such notifications.
    - Audit rights: Most DPAs now include provisions for audits and/or acceptance of third-party certifications like SOC 2 Type II or ISO 27001.

    C) Data Security
    These measures include:
    - Technical measures: This could involve encryption (both at rest and in transit), multi-factor authentication, and regular penetration testing.
    - Organizational measures: Beyond technical controls, this includes data protection impact assessments (DPIAs), appointing data protection officers where required, and maintaining records of processing activities.
    - Incident response plans: These should detail roles and responsibilities, communication protocols, and steps for containment, eradication, and recovery.
    - Regular assessments: This often involves annual security reviews, ongoing vulnerability scans, and updating security measures in response to evolving threats.

    These aren't just compliance checkboxes - they're the foundation of trust in the digital economy. They're the guardians of our digital identities, enabling the data-driven services we rely on while safeguarding our fundamental rights. Remember, in an era where data is often called the "new oil," knowledge of these concepts is critical for any organization handling personal data.

    #legaltech #innovation #law #business #learning
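The data subject rights listed under A) ultimately need robust processes behind them. As a hedged sketch of what the core of an access/erasure workflow might look like (the in-memory store, field names, and response shapes are all hypothetical stand-ins for a real user database and ticketing system):

```python
from enum import Enum

class Right(Enum):
    ACCESS = "access"
    ERASURE = "erasure"

# Hypothetical in-memory store standing in for a real user database.
STORE = {"u1": {"email": "u1@example.com", "plan": "pro"}}

def handle_request(user_id: str, right: Right) -> dict:
    """Minimal handler for GDPR-style access and erasure requests."""
    if user_id not in STORE:
        return {"status": "not_found"}
    if right is Right.ACCESS:
        # Return a copy of everything held about the data subject.
        return {"status": "ok", "data": dict(STORE[user_id])}
    # Right.ERASURE: remove the record entirely.
    del STORE[user_id]
    return {"status": "erased"}

assert handle_request("u1", Right.ACCESS)["data"]["plan"] == "pro"
assert handle_request("u1", Right.ERASURE)["status"] == "erased"
assert handle_request("u1", Right.ACCESS)["status"] == "not_found"
```

A production version would also cover rectification and portability, verify the requester's identity, propagate erasure to processors and backups, and log each request for the accountability records DPAs require.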
