AI must be treated with care and significant thought. First, ensure you are fully versed in the general use policy and guidance.
Today's AI models can learn and adapt based on the text and content fed in by millions of daily users. Because of this, special care must be taken to understand how each AI product works, where the data might be located, whether input data can ever appear in output again, and what management controls might exist.
Care must also be taken with AI strategy. The best AI may not be the one that performs best at a given task, but the one that comes out on top across a broader set of capabilities, such as integrating with existing AI models used within SSAFA so as to avoid AI silos and skewed results.
This policy sets out the minimum due diligence required when seeking out software, services, platforms or websites with modern AI or machine learning capabilities.
Note: From here on, the term "system" can mean:
- a software application that must be downloaded and installed on a device;
- online software run from a web browser, such as Adobe Express;
- an online SaaS tool such as Mentimeter, SurveyMonkey or Kahoot!;
- a platform delivered as a service (PaaS) such as Office 365, Asset Bank content management, or a mass mailer.
1. Scope
Not all AI is created or designed equally. As AI is quite a broad term, the systems currently in scope for this policy are defined as:
- Systems where the data fed in by users can be used to train the system, perhaps becoming part of a learning set.
- Systems where the data fed in by users might be regurgitated, in whole or in part, to other users of the system, who may or may not be SSAFA users.
- Systems where SSAFA content needs to be uploaded for analysis and pushed through one or more AI models.
- Systems where SSAFA content is analysed either remotely through integration (API) or via a URL, like a Tweet or YouTube hosted video.
1.1. Out of scope examples
- Real-time green-screen or chroma keying software.
- Where AI features are part of pre-existing systems and cannot be disabled or otherwise restricted.
2. Policy
When it comes to AI systems with one or more AI models, it is SSAFA policy that:
- We seek to understand and document in proportionate detail the data protection and privacy implications of the AI system:
- The legal and regulatory requirements of the jurisdiction or jurisdictions in which the AI system operates.
- The AI provider's role in relation to relevant data under the applicable regime - controller or processor.
- The type of data that might be involved in the AI system - whether the data is personal or anonymised (and there is low risk of re-identification).
- The extent to which the AI system will accumulate and aggregate data - the system may be incentivised to increase the volume of data it processes, with the risk of violating privacy in the process.
- A mandatory DPIA and PIA will take place before purchasing or subscribing to any AI system.
- Those wishing to procure must spend considerable time documenting valid and invalid use cases and scenarios for the system, providing users with clear examples of how to use or interact with it, as well as what must be avoided, before the system is made available.
- Where personal data may be used in conjunction with the AI system, relevant privacy policies and consent documents will be reviewed and updated, ensuring a transparent and fair process.
- All reasonable steps are taken to protect confidential data and to minimise the use of personal data, or the extent to which personal data can be attributed to particular persons.
- Strategy is carefully defined and evaluated: data flows are visualised and sample questions users might ask are highlighted, in order to surface the level of integration and connection to:
- existing AI models
- data that might sit outside the confines of the procured system
2.1. Strategy and its importance in the procurement process
New systems with embedded AI often see only their own data. Copilot, for example, sees data stored in Microsoft 365; other tools see only their own world, so cross-visibility is limited. Ask Copilot something, and it might not be able to take into account relevant data stored in a new system, leading to skewed responses. The reverse is also true and even more dangerous: because the vast majority of SSAFA data resides in the 365 cloud, responses based only on data held in a single other system can be very biased and skewed. Blind spots lead to "dumb" answers and poor trust.
- Always assess data visibility: where will the platform store data and can enterprise AI access it?
- Prefer platforms with robust connectors/APIs and clear information governance.
- Avoid tool sprawl; design for integration and shared knowledge.
- Include AI access & data portability in procurement checklists.
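As a purely illustrative sketch of how the checklist items above could be captured consistently during procurement (the field names, schema and blocking rules here are assumptions for illustration, not an official SSAFA standard):

```python
# Hypothetical procurement checklist sketch; the fields and blocking
# rules are illustrative assumptions, not an official SSAFA schema.
from dataclasses import dataclass


@dataclass
class AIProcurementCheck:
    system_name: str
    data_storage_location: str      # e.g. "UK", "EU", "US"
    enterprise_ai_can_access: bool  # can existing enterprise AI read its data?
    has_connectors_or_api: bool     # robust connectors/APIs available?
    dpia_completed: bool            # mandatory DPIA/PIA done before purchase

    def open_issues(self) -> list[str]:
        """Return the checklist items that still block procurement."""
        issues = []
        if not self.dpia_completed:
            issues.append("DPIA/PIA not completed")
        if not self.enterprise_ai_can_access:
            issues.append("data not visible to enterprise AI (silo risk)")
        if not self.has_connectors_or_api:
            issues.append("no connectors/APIs for integration")
        return issues


check = AIProcurementCheck(
    system_name="ExampleTool",          # hypothetical system
    data_storage_location="UK",
    enterprise_ai_can_access=False,     # silo risk flagged below
    has_connectors_or_api=True,
    dpia_completed=True,
)
print(check.open_issues())
```

Recording each answer against a fixed structure like this makes it harder for a review to silently skip the data visibility or DPIA questions.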
3. Considerations and risk mitigation examples
- Risk can be reduced if controls are available. For example, ChatGPT learns from interactions with users. A document uploaded by one person for summarising might be displayed days, weeks or months later to another user who is researching the same subject and wants a detailed response. A control in this case might be the ability to administer your own cohort of users and specify that uploads and aspects of conversations cannot be reused, thus reducing the risk.
- Risk can be mitigated if the system or the manner in which it is implemented permits the creation of rules on data access, generation, retention, and destruction.
- Risk can be reduced by privacy policy. Perhaps few or no controls exist for a cohort of users, but the system's privacy policy explains in detail that the AI model does not learn from or otherwise consume user input.
- Risk is reduced when the AI model's servers and technology stack are UK or EU based and compliant with GDPR.
- Risk is considered as "managed" if the AI processing occurs within the Microsoft 365 and/or Microsoft Azure cloud. Processing ring-fenced to the EU or UK is preferred.
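To illustrate how the mitigation examples above might be weighed against each other (the factors, thresholds and rating labels below are assumptions for illustration, not a formal SSAFA risk methodology), a minimal sketch:

```python
# Hypothetical risk rating sketch; the factors and thresholds are
# illustrative assumptions, not a formal SSAFA methodology.
def risk_rating(has_admin_controls: bool,
                has_data_rules: bool,
                no_training_on_input: bool,
                uk_eu_hosted: bool,
                in_microsoft_cloud: bool) -> str:
    """Map the section 3 mitigation examples to a coarse rating."""
    if in_microsoft_cloud:
        # Processing inside Microsoft 365/Azure is treated as "managed".
        return "managed"
    mitigations = sum([has_admin_controls, has_data_rules,
                       no_training_on_input, uk_eu_hosted])
    if mitigations >= 3:
        return "reduced"
    if mitigations >= 1:
        return "partially mitigated"
    return "unmitigated"


print(risk_rating(True, True, True, False, False))  # prints "reduced"
```

The point of the sketch is only that mitigations stack: each available control (cohort administration, data rules, a no-training privacy policy, UK/EU hosting) moves the rating down, while Microsoft-cloud processing is treated as the managed baseline.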
4. Non-compliance
- If a system is found to be non-compliant, it may be taken offline or access prohibited "in writing" with little to no notice.
- If a system is purchased, subscribed to or otherwise made live without proper process and due diligence being followed, access to the system may be revoked and the relevant HR or disciplinary process started.