Use of AI in the Workplace
With the increasing use of AI in the workplace and AI-specific legislation on the horizon, there is a risk to your business if you do not regulate how your employees use it, irrespective of whether any legislation governs the use of AI right now.
Employees are using AI tools for a variety of tasks, from simple automation and efficient completion of deliverables to advanced Large Language Models (LLMs) and generative AI techniques (for example, suggesting or producing code from user-provided input specifying the desired functionality). There is a risk that some input into AI Tools may contain customer or employee personal information, or even company-sensitive confidential information (intellectual property, company secrets, etc.).
With these use cases in mind, many organizations are revisiting internal policies and procedures relating to privacy and the use of AI. A good AI Policy mandates responsible, legal, and ethical use of AI by employees and regulates any personal information used within those tools. Given the rapid advancement of the technology, organizations accept that AI Tools are being used, directly or indirectly, to supplement daily tasks as well as the more complex ones mentioned above. Rather than approving or promoting any particular AI Tool within the workplace, many organizations have elected to govern the general use of AI across the organization, permitting simple AI Tools provided they are used in compliance with AI Policies that ensure the confidentiality, privacy, and protection of customer and employee data.
Canada has proposed the Artificial Intelligence and Data Act (AIDA), introduced as part of the Digital Charter Implementation Act, 2022, which would set the foundation for the responsible design, development, and deployment of AI systems that impact the lives of Canadians. AIDA is intended to ensure that AI systems are safe and non-discriminatory and to give the regulator the powers it needs to hold businesses accountable for how they develop and use these technologies. Interestingly, Canada was among the first countries to propose a law to regulate AI. However, AIDA was bundled with other amendments to data protection legislation, which appears to have delayed its coming into force. The EU AI Act, on the other hand, has been published with a phased compliance timeline within which companies must comply.
While the use of AI is not currently governed by AI-specific legislation in Canada, privacy legislation, both federal and provincial, still governs the use of personal information, including within AI Tools. Organizations should therefore look to leverage existing privacy programs and regulate the use of AI in line with the organization's risk appetite.
When considering an AI program, organizations should also be wary of complete reliance on AI Tools. Tesla and SpaceX founder Elon Musk was among the signatories of a 2023 open letter calling for a pause on large AI experiments, warning that the use of AI technology can “pose profound risks to society and humanity.” Earlier this year, Microsoft also delayed a Copilot-related release due to security concerns. There are several guardrails which need to be put in place when using more complex AI Tools, and organizations need to properly evaluate their use of AI to determine how to manage the risks.
When implementing a good AI Program, organizations would place themselves in a better position by considering how employees are using AI tools, how sophisticated that use is, and whether an assessment (like a Privacy Impact Assessment) may be required. In addition, organizations should establish boundaries for the use of AI that the organization is comfortable with. For example, some organizations will only allow simple automation with no use of personal information, while others are comfortable with tools like Copilot, or have already started to embed GPT-4 within their technology, but require approval for these more complex AI Tools. From an AI governance perspective, the first step is assessing the current landscape and the organizational roadmap for AI. More importantly, irrespective of the maturity of AI within the organization, unless you have banned the use of AI completely, there should at the very least be an AI Policy setting out the parameters of use.
An Acceptable Use of Artificial Intelligence Tools and Software Policy (“AI Policy”) should aim to ensure the responsible and ethical use of AI Tools within the workplace while promoting productivity and safeguarding the interests of the organization. The policy should include:
the types of acceptable AI Tools or permissions required;
accountability for the use of AI;
principles of fairness and non-harm when using AI;
transparency when using AI;
prohibitions on data privacy violations (including reference to other legislative prohibitions);
reliability;
information security practices;
how intellectual property in AI Tools created by staff is managed;
explainability and human supervision required when developing or using AI Tools;
strategies to mitigate risks associated with the use of AI (e.g., when a Privacy Impact Assessment (PIA) may be required);
tiered risk tolerance demonstrating permissions required for different levels of AI Tools; and
internal contact point responsible for handling inquiries concerning AI compliance topics.
When implementing a successful AI Program, there are several tools to utilize:
Designating leads to create awareness within the organization and ensure that employees are systematically informed of the implications and consequences of using generative AI tools in the workplace;
Leveraging your existing privacy program (if available);
Providing training and resources on responsible use, risk, ethics, and bias of AI Tools;
Providing employees with clear guidance on when and whether to use organizational accounts for generative AI tools, as well as policies regarding permitted and prohibited uses of those tools in the workplace;
Creating a quality assurance process internally to vet and validate outputs when generative AI is used for coding (see the testing sketch after this list);
Assessing the need to obtain consent for the use of personally identifiable data within the AI Tools (including the possibility of automated decision-making);
Assessing the need to notify customers and the public of the use of AI Tools and how they may be affected;
Where possible, encouraging the use of de-identified or anonymized data to limit the collection, use, and disclosure of personal information, which will also minimize the need to seek consent and provide notice (see the pseudonymization sketch after this list);
Organizing your data to ensure that permissions, data classification labels, and sensitivity label management can be established and effectively managed;
Ensuring regular auditing practices;
Reviewing policies regularly; and
Conducting proper vendor due diligence where AI vendors are used.
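To make the quality assurance point above concrete, here is a minimal sketch of one possible review gate: AI-generated code is only accepted once it passes human-written unit tests. The helper and test names are hypothetical, and a real QA process would add code review, security scanning, and license checks.

```python
# Minimal sketch of a QA gate for AI-generated code (hypothetical example).
# The generated helper must pass human-written tests before it is merged;
# a real process would add code review, security scanning, and license checks.
import unittest


def normalize_email(raw: str) -> str:
    """Hypothetical AI-generated helper under review."""
    return raw.strip().lower()


class TestGeneratedCode(unittest.TestCase):
    """Human-written tests that the generated code must pass."""

    def test_strips_whitespace(self):
        self.assertEqual(normalize_email("  User@Example.com "), "user@example.com")

    def test_lowercases(self):
        self.assertEqual(normalize_email("ADMIN@EXAMPLE.COM"), "admin@example.com")


if __name__ == "__main__":
    unittest.main()
```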
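On de-identification, the sketch below shows one common technique, pseudonymization by salted hashing, applied before a record reaches an AI tool. The field names and salt handling are assumptions for illustration; note that pseudonymized data may still be personal information under applicable privacy law, so this reduces rather than removes risk.

```python
# Minimal sketch: pseudonymize direct identifiers before a record reaches an AI tool.
# Field names and salt handling are hypothetical; pseudonymized data may still be
# personal information under applicable privacy law.
import hashlib

PII_FIELDS = {"name", "email", "phone"}  # assumed direct identifiers


def pseudonymize(record: dict, salt: str) -> dict:
    """Replace direct identifiers with salted hash tokens; pass other fields through."""
    out = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            digest = hashlib.sha256((salt + str(value)).encode("utf-8")).hexdigest()
            out[key] = digest[:12]  # short token; not reversible without the salt
        else:
            out[key] = value
    return out


record = {"name": "Jane Doe", "email": "jane@example.com", "plan": "enterprise"}
print(pseudonymize(record, salt="rotate-this-salt"))
```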
Questions to ask yourself:
Have you started evaluating AI tools for your business?
Conducting a Privacy Impact Assessment of your AI model or tool is a good way to assess any privacy risks that may arise from the use of AI within your organization.
Have you looked at AI Threat Modelling?
In some ways similar to Privacy by Design, AI threat modelling allows for proactive risk management by identifying potential threats to your AI systems in the early phases, enabling mitigation strategies to be implemented before issues arise. AI threat modelling identifies threats in all phases of the development life cycle (i.e., design, input, modelling, and output).
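As a rough illustration of what a phase-by-phase review might track, the sketch below maps the four life-cycle phases named above to example threat categories drawn from common AI security discussions; the specific threats listed are illustrative assumptions, not an exhaustive taxonomy.

```python
# Illustrative only: maps the life-cycle phases named above to example threats.
# The threat entries are common categories from AI security discussions, not a taxonomy.
LIFECYCLE_THREATS = {
    "design": ["unclear data provenance", "objectives open to misuse"],
    "input": ["training-data poisoning", "prompt injection", "PII leaking into prompts"],
    "modelling": ["supply-chain risk in pretrained components", "memorization of sensitive records"],
    "output": ["model inversion / data extraction", "harmful or biased generations"],
}


def unreviewed_phases(reviewed: set) -> list:
    """Return life-cycle phases that still lack a documented threat review."""
    return [phase for phase in LIFECYCLE_THREATS if phase not in reviewed]


print(unreviewed_phases({"design", "input"}))  # -> ['modelling', 'output']
```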
Have you published an Acceptable Use of Artificial Intelligence Tools and Software Policy (Use of AI Policy)?
This is a good way to regulate the use of AI early on, avoiding bad habits and unnecessary risks to your business. Your Use of AI Policy should include general guidelines covering compliance and acceptable use, ethical considerations, transparency and accountability, privacy, security, and the interaction with and use of vendor tools. It should also set out prohibited activities, which may include invasion of privacy, manipulation and deception, discriminatory practices, unethical monitoring, and certain types of AI system deployment.