Getting to Know You: Privacy and Artificial Intelligence

Artificial Intelligence (AI) has great potential to improve our lives and make them more efficient. It can regulate the temperature in our homes, help banks spot fraudulent activity, diagnose cancer and make vehicles autonomous. AI can also automate decision-making, such as whether we are approved for a loan, what newsfeeds we see, and what our auto insurance premiums will be.

To achieve accurate results, AI requires a huge volume of data. By analyzing the data and recognizing patterns, algorithms improve themselves and get better at achieving the desired outcome without human intervention. This is known as machine learning. The only way for the system to improve is to feed more data, often personal data, into it.

As the algorithms process more data, the neural networks evolve and become more complex. We continue to rely on AI systems to make decisions on our behalf even though we know very little about how these decisions are made and whether they are accurate.

There are various concerns with the development of AI. One concern is the protection of personal data, on which algorithms feed. Personal data is any information that can identify an individual, whether directly or indirectly. Although most personal data is anonymized, sophisticated algorithms can re-identify individuals. It is important to safeguard privacy rights and allow individuals to determine how they share their personal information and for what purpose.
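To illustrate how re-identification can happen, here is a toy linkage attack in Python (every record, name and field choice below is hypothetical): quasi-identifiers left in an “anonymized” data set, such as postal code, birth date and sex, are joined against a public data set that includes names.

```python
# Toy linkage attack: joining an "anonymized" data set to a public one
# via quasi-identifiers. All records and field names are invented.

anonymized_health = [
    {"postal": "M5V 2T6", "birth": "1984-03-12", "sex": "F", "diagnosis": "diabetes"},
]

public_voter_roll = [
    {"name": "Jane Doe", "postal": "M5V 2T6", "birth": "1984-03-12", "sex": "F"},
    {"name": "John Roe", "postal": "M4C 1B5", "birth": "1979-07-01", "sex": "M"},
]

def link(records, roll):
    """Re-identify records whose quasi-identifiers match exactly one voter."""
    for rec in records:
        matches = [v for v in roll
                   if (v["postal"], v["birth"], v["sex"]) ==
                      (rec["postal"], rec["birth"], rec["sex"])]
        if len(matches) == 1:  # a unique match defeats the anonymization
            yield matches[0]["name"], rec["diagnosis"]

print(list(link(anonymized_health, public_voter_roll)))
# [('Jane Doe', 'diabetes')]
```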

Another concern is AI's lack of transparency in decision-making. It is difficult to determine whether a system reached an unbiased and accurate decision if we do not know how it reached its outcome in the first place.

The more we rely on AI, the more we need to ensure that our systems are trustworthy and protect personal data.

Although Canada lacks explicit AI privacy legislation, we can rely on general privacy principles and seek further guidance through the European Union’s General Data Protection Regulation (GDPR).

Collecting data with a purpose

Before collecting personal data, organizations need to inform data subjects why they are collecting the data and what purpose it will serve. For example, Facebook should not use personal data collected through its algorithms to determine whether someone is approved for a mortgage. If the original purpose of collecting personal data changes, new consent should be obtained; otherwise, there is no informed consent.

Minimizing data collection

Only adequate and relevant data should be collected to achieve the system's intended purpose. It is difficult, however, to reconcile the need for a huge data set to develop accurate AI with the data minimization privacy principle.

Data minimization is not only about limiting the quantity of data but also its nature. Even a limited data set may still reveal personal information. Pseudonymization or encryption techniques can be used to protect individuals' identities, and the risk of retaining irrelevant information must be assessed continuously.
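As a minimal sketch of pseudonymization (the key and record below are illustrative, not a production design), a keyed hash can replace a direct identifier with a stable pseudonym that only the key holder can link back to the individual:

```python
import hmac
import hashlib

# Hypothetical secret held by the data controller; in practice it would
# live in a key-management system, never in source code.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(identifier: str) -> str:
    """Replace an identifier with a stable, keyed pseudonym.

    A keyed hash (HMAC) rather than a plain hash prevents anyone without
    the key from re-computing pseudonyms from guessed inputs.
    """
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()[:16]

record = {"email": "jane@example.com", "purchases": 7}
record["email"] = pseudonymize(record["email"])
print(record)  # the analyst sees a pseudonym, not the raw email
```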

Transparency in AI

Transparency is one of the most important privacy principles, yet it is so difficult to achieve with AI. First, individuals have the right to know how their data is being processed. Second, individuals have the right to know how AI systems, through their automated decision-making processes, reach their outcomes.

Gaining transparency into the process will provide transparency into the outcome. The “black box” of AI is nearly impossible to explain because the algorithms evolve beyond the knowledge of their developers. Machine learning, as opposed to rule-based algorithms, yields greater accuracy but less transparency.

The GDPR does not require one to dissect the black box, but rather to explain “why” the decision was reached. The data subject should have enough information to exercise his or her right to challenge the system’s outcome.

Is there a middle ground?

When organizations are entrusted with personal data, they must be held accountable to handle it in a responsible and ethical manner. Organizations developing AI must be bound by the same data protection regulations as any other organization collecting and processing personal data.

Organizations developing AI should implement privacy by design, ensuring that privacy protection is built into their systems. They also need to conduct ongoing audits to ensure that personal data is protected and used lawfully. Audits should also verify that the algorithm’s outcomes are correct and non-discriminatory; as noted above, this is a difficult task.

Differential privacy, currently used by Apple, Google and Uber, is a newer technique for protecting privacy without reducing the data set. Statistical “noise” is injected into the personal data so that sensitive information is obscured. The AI system picks up on patterns and learns about a group rather than an individual; it cannot extract personal information about a particular person, which also makes for a more secure network. While differential privacy provides security and protection for personal data, it may encourage data collectors to obtain even more data to analyze.
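A minimal sketch of the underlying idea, assuming a simple count query and an illustrative epsilon value (this is the textbook Laplace mechanism, not any vendor's actual implementation):

```python
import math
import random

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count using the Laplace mechanism.

    One person changes a count by at most 1 (sensitivity 1), so Laplace
    noise with scale 1/epsilon masks any individual's presence.
    """
    true_count = sum(1 for v in values if predicate(v))
    u = random.random() - 0.5                    # Uniform(-0.5, 0.5)
    noise = -(1.0 / epsilon) * math.copysign(1, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

ages = [34, 41, 29, 52, 47, 38, 61, 45]
print(dp_count(ages, lambda a: a > 40))  # noisy count of people over 40
```

The published answer still reflects the group pattern (roughly half the people are over 40), but no single record can be inferred from it.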

Ironically, one way to solve the privacy and transparency problem with AI is by using AI itself. In a recent Globe and Mail article, Dr. Ann Cavoukian, former privacy commissioner of Ontario, suggested that if we each had our own AI personal agent containing our personal data, the agent could learn our preferred privacy settings and conditions. If an application or device needed access to our personal data, our AI agent would transfer only the data required for that particular purpose, thereby minimizing the data collected. If our data were used for a different purpose, our AI agent would retrieve the data and a new request would have to be negotiated with the agent.
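The mechanics of such an agent could look something like the following hypothetical sketch, in which every purpose, field and policy entry is invented for illustration: each request states a purpose, and the agent releases only the fields its owner's policy permits for that purpose.

```python
# Toy sketch of a personal privacy agent; all data and policy entries
# below are hypothetical illustrations.

PERSONAL_DATA = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "location": "Toronto",
    "heart_rate": 72,
}

# Owner-approved policy: which fields may be released for which purpose.
POLICY = {
    "fitness_tracking": {"heart_rate"},
    "shipping": {"name", "location"},
}

def handle_request(purpose: str, requested_fields: set) -> dict:
    """Release only the fields the policy allows for the stated purpose."""
    allowed = POLICY.get(purpose, set())
    granted = requested_fields & allowed
    denied = requested_fields - allowed
    if denied:
        print(f"Denied for '{purpose}': {sorted(denied)} (renegotiation required)")
    return {field: PERSONAL_DATA[field] for field in granted}

print(handle_request("shipping", {"name", "location", "email"}))
# Denied for 'shipping': ['email'] (renegotiation required)
# {'name': 'Jane Doe', 'location': 'Toronto'}
```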

To increase public confidence in AI decision-making systems, perhaps what we need are algorithms that can engage in visible decision-making and promote transparency.

Regulating AI need not limit innovation. Instead, it can increase public trust in the system. AI has allowed us to fly drones, engage in online banking and have a personal assistant in our own home. Surely, we can use AI to find a solution that will encourage innovation and protect privacy.

This article was originally published on The Lawyer’s Daily.
