Artificial Intelligence (AI) technologies are evolving and being deployed at scale. These technologies have increasingly sophisticated capabilities, some of which can directly impact people or influence their behaviours, opinions, and choices. These Principles are a public commitment, and they set out the requirements that apply to all Spark people whenever Spark AI technologies are designed, deployed, and operated within our business.
Spark will take a responsible and ethical approach to the design and operation of AI technologies. We will continue to monitor and evolve these Principles as new tools and processes related to the use of AI technologies develop.
When using AI technologies, we will put people at the heart of our thinking – considering people’s needs and respecting the human rights, autonomy and diversity of individuals.
All Spark employees involved in the creation and design of AI technologies, including technologies that leverage third-party software, must:
AI technologies rely on data sets to detect patterns and train machine learning algorithms to predict outcomes. Data sets and algorithms that contain intentional and unintentional biases can lead to outcomes that impact people unfairly. We will implement processes to help identify and minimise bias in the data sets, AI technologies and algorithms we use. This includes overlaying a human lens to review output for potential bias before use.
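To illustrate the kind of check this commitment implies, the sketch below compares a model's positive-outcome rate across groups and flags large gaps for human review before use. It is a minimal, hypothetical example: the column names, threshold, and pandas-based approach are assumptions for illustration, not Spark's actual tooling or process.

```python
# Illustrative sketch only: one possible way to surface potential bias in
# model output before use. Column names and the threshold are hypothetical.
import pandas as pd

def flag_outcome_disparity(results: pd.DataFrame,
                           group_col: str = "demographic_group",
                           outcome_col: str = "approved",
                           max_gap: float = 0.10) -> pd.DataFrame:
    """Compare positive-outcome rates across groups and flag large gaps."""
    rates = results.groupby(group_col)[outcome_col].mean()
    gap = rates.max() - rates.min()
    report = rates.to_frame(name="positive_rate")
    report["flag_for_human_review"] = gap > max_gap
    return report

# Example: outputs from a hypothetical eligibility model, reviewed by a
# person before any customer-facing use.
sample = pd.DataFrame({
    "demographic_group": ["A", "A", "B", "B", "B"],
    "approved": [1, 1, 1, 0, 0],
})
print(flag_outcome_disparity(sample))
```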
Deployers of AI technologies are accountable for ensuring that the AI technology is fit for purpose and operating safely, responsibly, reliably and effectively, at implementation and throughout the AI technology lifecycle.
Users of AI technologies are accountable for ensuring that they:
When designing AI and automated solutions we will adhere to Spark’s Privacy Policy and our Privacy Values. See the Your Privacy at Spark section on our website for more information.
As with all decisions at Spark, when our AI technologies relate to sensitive or impactful topics, an appropriately skilled human must be accountable for informed and responsible decision making. In these situations the limits of our solution must be clearly understood and articulated to the user. The data used, the automated decision-making process, and the automated logic behind the recommendation should be clear and transparent to the individual responsible for the human decision and action.
Our AI technologies and algorithms must be transparent and explainable by humans. We must be able to clearly articulate and explain how a solution works and how any automated decisions are made. Spark AI technologies must have a means of logging or capturing the data sets used, along with automated decisions and actions, so that humans can review and understand the automated process and actions that led to an outcome.
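As an illustration of the logging this requirement calls for, the sketch below captures a minimal audit record for an automated decision: the technology involved, the data sets used, the inputs, the decision, and the rationale behind it. The field names, example values, and the use of Python's standard logging module are assumptions for illustration, not a prescribed Spark schema.

```python
# Illustrative sketch only: a minimal audit record for an automated decision,
# covering the kinds of information the Principles say must be capturable.
# Field names and example values are hypothetical.
import json
import logging
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_decision_audit")

@dataclass
class DecisionAuditRecord:
    technology: str           # which AI technology made the decision
    dataset_ids: list[str]    # data sets used to train or score
    inputs: dict              # inputs supplied at decision time
    decision: str             # the automated decision or action taken
    rationale: dict           # e.g. rule fired or top feature weights
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def record_decision(record: DecisionAuditRecord) -> None:
    """Write the decision and its rationale so a human can review it later."""
    audit_log.info(json.dumps(asdict(record)))

# Hypothetical usage
record_decision(DecisionAuditRecord(
    technology="churn-propensity-model",
    dataset_ids=["customer_profile_2024Q4"],
    inputs={"tenure_months": 18, "open_faults": 2},
    decision="offer_retention_call",
    rationale={"top_features": {"open_faults": 0.41, "tenure_months": 0.22}},
))
```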
We will inform customers in advance if they are communicating directly with an AI technology (rather than a real person), and enable customers to easily request contact with a human if needed.
We will inform customers if content sent to them has been generated by AI without human review.
Training and support
We will ensure that Spark people using AI technologies have appropriate training and support to use the AI technologies available to their role competently, for the purposes intended, and in alignment with these AI Principles as well as Spark’s values, policies and legal obligations.
Review and oversight
We regularly review AI technologies and their prediction and decision-making processes to ensure that they are functioning as designed and expected, including reviewing for bias, accuracy and performance, and intervening to address any issues found. Where any Spark employee feels that an AI technology is not delivering the expected outcomes, or is not operating in accordance with these Principles, they should raise their concerns with their manager, the appropriate accountable team, or through the anonymous Spark ‘Honesty Box’ whistleblowing process.
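As a simple illustration of the kind of regular review described here, the sketch below compares a model's recent accuracy against an agreed baseline and signals when human intervention is needed. The metric, threshold, and function names are hypothetical, not Spark's actual monitoring process.

```python
# Illustrative sketch only: a periodic performance check that escalates
# when accuracy falls below an agreed baseline. Threshold is hypothetical.
def review_model_performance(recent_accuracy: float,
                             baseline_accuracy: float,
                             tolerated_drop: float = 0.05) -> bool:
    """Return True if performance has degraded enough to need intervention."""
    degraded = (baseline_accuracy - recent_accuracy) > tolerated_drop
    if degraded:
        print("Performance below agreed baseline - escalate to the accountable team")
    return degraded

# Example review run
needs_intervention = review_model_performance(recent_accuracy=0.81,
                                              baseline_accuracy=0.90)
```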