The Future of Privacy and AI Risks: A Q&A with Peter Lefkowitz
Peter Lefkowitz, Kris Comeaux, and Mihran Yenikomshian — 4 min read
What privacy and data risks do companies need to account for in the new age of AI?
When Peter Lefkowitz began his career in data privacy and digital risk in 2003, Congress was rolling out regulations for unsolicited commercial emails, California had started requiring companies to publicly disclose data breaches, and the social networking platform MySpace had just launched.
In the decades since, Mr. Lefkowitz has watched the integration of AI into daily life, followed Europe’s implementation of the General Data Protection Regulation (GDPR), and experienced the evolution of his own role from advisor to crisis responder to risk officer to data privacy executive. In a conversation with Managing Principals Kris Comeaux and Mihran Yenikomshian, Mr. Lefkowitz shared some of his expert insights on the future of privacy and AI risks.
In one of your roles, you regularly brief corporate boards and C-suite executives on data privacy and data protection. How do you translate abstract AI risk into metrics they can follow?
What I’ve heard from clients in the last couple of years is that CEOs want to be able to say they’re an “AI company” – not just that their products or services include some AI or generative AI element, but that AI is used and deployed across teams and departments. For those companies trying to get their arms around AI, I suggest starting with first principles. What are you actually trying to do with AI and can you explain those plans clearly and concisely to stakeholders? What data are going into and coming out of the AI engine? What are the risks, particularly in the areas of privacy, security, IP [intellectual property], and bias? This exercise often helps hone the business model as well as set up companies to manage risk. When thinking about selling this externally, consider setting up a Trust Center, which is a central web portal to store information about digital products and services, including security and governance, privacy, data location, and compliance with laws such as the GDPR and the EU AI Act. Importantly, set up the Trust Center after the standards are in place, and risk assessments and remediation are underway.
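As an illustration of the first-principles inventory Mr. Lefkowitz describes, here is a minimal sketch of how a company might record each AI use case (purpose, data in and out, risk areas) before standards are set and a Trust Center is published. The field names and the example entry are illustrative assumptions, not a standard schema.

```python
from dataclasses import dataclass, field

# Hypothetical inventory record for the first-principles exercise described above;
# field names are illustrative, not a standard or required schema.
@dataclass
class AIUseCase:
    name: str
    business_purpose: str                                   # what you are actually trying to do with AI
    data_inputs: list[str] = field(default_factory=list)    # data going into the AI engine
    data_outputs: list[str] = field(default_factory=list)   # data coming out of the AI engine
    risk_areas: list[str] = field(default_factory=list)     # e.g., privacy, security, IP, bias
    remediation_status: str = "not started"

inventory = [
    AIUseCase(
        name="support-ticket summarization",
        business_purpose="Shorten agent response times",
        data_inputs=["customer messages (may contain personal data)"],
        data_outputs=["summaries shown to support agents"],
        risk_areas=["privacy", "security"],
        remediation_status="assessment underway",
    ),
]

for use_case in inventory:
    print(use_case.name, "->", ", ".join(use_case.risk_areas))
```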
When a company is developing new AI tools, what are the risk and privacy checkpoints you advise putting in place when going from pilot to launch?
One of the first things to do is set policies and technical limits for employees’ use of AI. Whether the AI tools are private or public, the company must clearly specify the tools’ approved applications, and the type and amount of data that can be put into the AI tools – particularly if the data are highly sensitive, such as personal data, proprietary R&D and code, and confidential financial data. Policies and processes must be in place for the vetting and use of output generated from AI tools. Companies also need to ensure that the tools and the systems surrounding them are deployed securely. Finally, companies have to provide training and resources, and establish a feedback system that encourages employees to share their experiences using the tools – particularly if the company hopes to embed AI deeply in its operations, products, or services.
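One way to picture the “technical limits” described above is a simple gate that checks prompts against a few sensitivity categories before they reach an approved AI tool. The sketch below is a minimal, hypothetical example; the categories, patterns, and function names are assumptions, and a real deployment would rely on the company’s own data-classification policy and tooling.

```python
import re

# Illustrative sensitivity checks only; real categories, patterns, and tooling
# would come from the company's own data-classification policy (hypothetical here).
BLOCKED_PATTERNS = {
    "personal_data": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),      # US-SSN-like identifiers
    "financial_data": re.compile(r"\b\d{13,16}\b"),             # card-number-like digit runs
    "proprietary_code": re.compile(r"\b(def|class|import)\s"),  # crude source-code heuristic
}

def policy_violations(prompt: str) -> list[str]:
    """Return the policy categories this prompt appears to trigger."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

def gate_prompt(prompt: str) -> bool:
    """Allow a prompt through to an approved AI tool only if no category is triggered."""
    violations = policy_violations(prompt)
    if violations:
        # Surface the block so the feedback process (training, policy updates) can learn from it
        print(f"Blocked by AI-use policy: {', '.join(violations)}")
        return False
    return True

print(gate_prompt("Summarize the themes in last quarter's customer feedback"))  # allowed
print(gate_prompt("My SSN is 123-45-6789, please draft a benefits letter"))     # blocked
```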
The FTC [Federal Trade Commission] has declared a new focus on “facilitating” the growth of AI and not putting up roadblocks to innovation. What does this mean for companies developing AI-based products and platforms in the US?
Taking a lesson from privacy deliberations over the past 10–15 years, I don’t think we’re going to have a comprehensive federal AI law just yet. But we do have laws in place and under development that affect AI at the state level. These include laws addressing deepfakes, bias, and discrimination; the collection and use of sensitive PII [personally identifiable information], including biometrics and personal genetic data; unfairness and deception; and the collection and use of data across specific industries like banking, insurance, and health care. It’s a complex scenario because all of these laws, even within one state, may be written differently and use different terminology. For example, what is the definition of the “sale” of data? Who is a “third party”? How does one ensure that AI models have properly accounted for bias? Companies need to figure out which laws apply to them and how they interlace. Companies that operate beyond the US will also need to evaluate international laws, including the EU AI Act and the GDPR.
How do the regulatory models in the EU differ from the US?
The US and the EU follow broadly different paths with respect to data privacy and risk. The US model is sectoral and tends to look backward, asking what has gone wrong and how to prevent it from happening again. By contrast, EU privacy and data laws tend to be comprehensive, establishing layers of risk categorization, assessment, and reporting obligations from the outset. For example, the EU AI Act uses a scale of risk levels based upon the sensitivity of data and intended uses, and the amount of computing power involved in the application. While a subset of highly sensitive uses is prohibited, many more fall into a “high-risk” bucket that requires formal assessment and documentation, as well as registration with regulatory authorities.
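To ground the tiering idea, the sketch below shows how a compliance team might triage internal use cases against the Act’s risk levels. The use-case mapping and obligation lists are illustrative simplifications drawn from commonly cited examples; the authoritative classification comes from the Act itself and its annexes, not from this sketch.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"   # e.g., social scoring by public authorities
    HIGH_RISK = "high_risk"     # e.g., hiring screens, credit scoring, critical infrastructure
    LIMITED = "limited"         # transparency duties, e.g., chatbots, AI-generated content labels
    MINIMAL = "minimal"         # most other uses

# Hypothetical internal triage map; actual classification depends on the Act's annexes and guidance.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "cv_screening": RiskTier.HIGH_RISK,
    "credit_scoring": RiskTier.HIGH_RISK,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(use_case: str) -> list[str]:
    """Rough, simplified obligations attached to each tier, for internal triage only."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    if tier is RiskTier.PROHIBITED:
        return ["do not deploy"]
    if tier is RiskTier.HIGH_RISK:
        return ["formal risk assessment", "technical documentation", "registration", "human oversight"]
    if tier is RiskTier.LIMITED:
        return ["transparency notices"]
    return ["voluntary codes of practice"]

print(obligations("cv_screening"))
```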
What new skills must tomorrow’s privacy leaders master?
In the age of AI, companies have come to realize that their systems hold and process valuable data – not just personal data, but a great deal of other sensitive information as well. To keep up, privacy professionals will need sufficient fluency across disciplines to establish baselines for assessing and remediating a range of risks. The good news is that the skills honed as a privacy professional – including assessing privacy risk, managing incidents, and regulatory reporting – supply the baseline for this broader analysis. With this broadening, I expect to see more “digital risk” and “privacy, security, and AI” officers whose roles are steeped in privacy risk management practices and policies, but with broader application to the AI world.
“What are you actually trying to do with AI and can you explain those plans clearly and concisely to stakeholders? What data are going into and coming out of the AI engine? What are the risks, particularly in the areas of privacy, security, IP, and bias?”
Peter Lefkowitz
Principal, Amity Digital Risk, LLC
Kris Comeaux
Managing Principal, Analysis Group
Mihran Yenikomshian
Managing Principal, Analysis Group