Privacy issues sit at the forefront of online activity, business actions, and government decisions. This is largely in response to the breaches, scandals, and personal data leaks that have eroded confidence in technology and information systems.
The National Security Telecommunications Advisory Committee’s (NSTAC) Report to the President on a Cybersecurity Moonshot says that privacy is a crucial component of cybersecurity and that we must flip the narrative to restore the trust Americans place in information systems. To achieve this, by 2028, Americans need to be “guaranteed” that technological advancements will no longer threaten privacy but will instead enhance privacy assurance through the safety and security of their personal data.
One critical element in future technology advancements and online security is the continued development of artificial intelligence (AI). However, privacy principles must be considered early in the AI development process to balance technological benefits with privacy preservation.
The 2019 Gartner Security and Risk Survey, conducted in March and April 2019, predicted that more than 40% of privacy compliance technology would rely on AI by 2023, up from 5% in 2019.
As a result, privacy leaders are under pressure to ensure that all personal data processed is brought into scope and under control, which is difficult and expensive to manage without technological aid.
In fact, it is these very considerations that are driving enterprise leaders to act and adopt AI. Speed, scale, and automation are the key reasons AI has become attractive to businesses and customers, said Ben Hartwig, chief security officer at InfoTracer. AI can sift through far more data than human analysts are capable of; it is the only practical way to process big data in a reasonable time frame. "One of the reasons why privacy is a big concern here is the fact that people are not familiar with the measures they can use to protect it even if there are some principles that can help with protecting ourselves," he said.
With data being the lifeblood of the modern enterprise, a solid data protection strategy is not a nice-to-have but a must-have. Especially with laws like the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and the more recent Washington Privacy Act, it is more critical than ever for IT teams to have a clear understanding of their organization's backup and recovery plan. What can IT teams do to curb the impact and confusion that AI is having on data protection and privacy strategies?
According to Geoff Webb, VP of strategy at PROS, there are three areas where AI is impacting privacy and data governance, now and in the future.
Privacy Concierge: First, AI bots can serve a "privacy concierge" function, recognizing, routing, and servicing privacy data requests faster and more cheaply than humans, in much the same way that other AI bots handle increasingly complex requests today.
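The triage step behind such a concierge can be sketched in a few lines. This is a minimal illustrative stand-in, not any vendor's implementation: the request categories and keyword lists are hypothetical, and a real bot would use a trained language model rather than phrase matching.

```python
# Illustrative sketch of a "privacy concierge" that triages incoming
# privacy requests by type. Categories and phrases are hypothetical.
from dataclasses import dataclass

ROUTES = {
    "access":   ["copy of my data", "what data do you hold", "access request"],
    "deletion": ["delete my data", "erase my", "right to be forgotten"],
    "opt_out":  ["do not sell", "opt out", "stop sharing"],
}

@dataclass
class PrivacyRequest:
    text: str
    category: str = "manual_review"  # default: escalate to a human

def triage(request_text: str) -> PrivacyRequest:
    """Route a free-text privacy request to the matching workflow queue."""
    lowered = request_text.lower()
    for category, phrases in ROUTES.items():
        if any(phrase in lowered for phrase in phrases):
            return PrivacyRequest(request_text, category)
    return PrivacyRequest(request_text)  # unmatched: send to human review

print(triage("Please delete my data under GDPR Article 17").category)
```

The point of the default `manual_review` category is that anything the bot cannot confidently classify still reaches a human, so automation speeds up the common cases without dropping the rare ones.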
Data Classification: AI has already shown itself to be highly effective at identifying and classifying data that would take a human operator significant time and effort to review. This means that much of the existing data businesses hold that could fall within privacy regulations (and therefore need to be available to consumers on request) can be identified and aggregated by AIs doing continual sweeps through disparate data stores. "We already see AIs performing the role of central manager, consumer, and analyzer of siloed data stores in other parts of the business, so the AI 'data bridge' is a natural fit for privacy and compliance tasks," he said.
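The "continual sweep" idea can be sketched as a scanner that walks records from several stores and flags anything that looks like regulated personal data. Plain regexes stand in here for the trained classifiers a production system would use; the field names and data stores are hypothetical.

```python
# Simplified sketch of a privacy-data sweep: flag records containing
# fields that look like regulated personal data. Real systems would use
# trained classifiers; regexes are an illustrative stand-in.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify_record(record: dict) -> set:
    """Return the set of PII types detected anywhere in a record."""
    found = set()
    for value in record.values():
        for label, pattern in PII_PATTERNS.items():
            if pattern.search(str(value)):
                found.add(label)
    return found

def sweep(stores):
    """Collect in-scope records, with their PII labels, from siloed stores."""
    return [(rec, classify_record(rec))
            for store in stores
            for rec in store
            if classify_record(rec)]
```

A sweep like this yields the inventory of in-scope records that consumer access and deletion requests can then be served from.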
Managing Sensitive Data: AI can also play a role in handling sensitive data itself, specifically in tasks where sensitive data might otherwise be exposed to a human operator unnecessarily. For example, routing requests for healthcare records between providers when data must be aggregated but an additional layer of privacy is desired. AIs are extremely effective at consuming and analyzing data yet are essentially impervious to the implications of the information they see. It is simply not possible to bribe an AI into leaking a celebrity's healthcare records, as an obvious example. "This means that AIs could, in the near future, be used to handle much larger amounts of sensitive data in ways that remove humans from the chain, and thus simplify the process of keeping that data secure," he added.
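The "remove humans from the chain" pattern amounts to aggregating records automatically and stripping identifiers before any person sees the output. A hedged sketch, with hypothetical field names, might look like this:

```python
# Illustrative sketch: records are de-identified and aggregated by software,
# so only summary statistics ever reach a human operator.
# Field names and the identifier list are hypothetical assumptions.
from collections import Counter

DIRECT_IDENTIFIERS = {"name", "ssn", "address", "phone"}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers, keeping only analytic fields."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

def aggregate_diagnoses(records) -> Counter:
    """Count diagnoses across providers without exposing who is who."""
    return Counter(deidentify(r).get("diagnosis") for r in records)
```

In this design the raw records exist only inside the pipeline; the human-facing output is an aggregate that cannot be traced back to an individual, which is the property the quote above is pointing at.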