Salesforce’s Adam Spearing on building trustworthy technology through an “ethics-by-design” approach
COVID-19 has accelerated the digital transformation of every aspect of our society. Companies have had to fundamentally rethink their operating models in response, and many have used automation and AI-powered technologies to help them become more resilient and competitive. AI has the power to enhance the customer experience by solving problems faster and more efficiently, to facilitate remote working, and to empower employees to take on the more strategic roles that the digital economy demands.

As we approach the Fifth Industrial Revolution in this post-pandemic world, the surge in demand for AI technologies is joined by the need for companies to ensure the development of responsible technology. The costs of creating, selling and implementing technology without a holistic understanding of its implications are far too great to ignore. We have seen this, for example, in instances where voice recognition software has performed worse on female voices, or where crime-prediction tools have reinforced discriminatory policing. AI cannot be implemented without ethics. Businesses that want to build and deploy AI with long-lasting confidence must focus on inclusive measures and ethical intent. This means taking steps to transparently explain the impact and rationale of the AI's actions and recommendations.
As ethical and responsible technology becomes an organisational imperative, here are three ways organisations can earn trust while ensuring accountability, transparency, and fairness.

Using AI at scale

Ironically, investment in ethical AI means investment in creating and sustaining a culture of critical thinking among employees. It is not feasible for a single group to take sole responsibility for identifying ethical risks during development, and AI cannot police itself. Instead, ethics-by-design requires input from diverse perspectives: different cultures, ethnicities, gender identities, and areas of expertise. Creating an environment that embraces input from a broader audience can help organisations eliminate the blind spots that lead to bias.

By offering training programmes that help employees put ethics at the core of their respective workflows, organisations can also empower their workforce to identify potential risks more critically. One measure, for example, is training new hires from day one to understand their role in the process, so that they develop an ethics-by-design mindset. This gives every employee a sense of responsibility to colleagues and customers alike. Cultivating this mindset requires systematic engagement, with all employees serving as advisors to product and data science teams on practical ways to identify and address the ethical issues associated with their projects. It also means that datasets can become increasingly representative and useful for organisations.
Understanding the degree of bias

Although there is a lot of potential for AI to make a positive impact on businesses and society, we must also be mindful of how these technologies can be problematic, particularly in relation to reinforcing biases. It is one thing to build AI in a lab, but it is another to accurately predict how it will perform in the real world. Throughout the product lifecycle, questions of accountability should be top of mind. Teams need to understand the nature and degree of bias associated with the datasets they are using and the models trained on those datasets, as well as the biases they themselves bring. It is essential that ethical AI teams facilitate questions about how to make AI models more explainable, transparent or auditable. Establishing well-defined, externally validated methods and practices for supporting decision-making will ensure clarity for everyone involved.

Ethics doesn't stop after development. While developers provide the AI platforms, AI users effectively own, and are responsible for, their data. And whilst developers can provide customers with training and resources to help identify bias and mitigate harm, algorithms that are retrained inadequately or left unchecked can perpetuate harmful stereotypes. This is why it is important that organisations provide customers and users with the right tools to use these technologies safely and responsibly, and to know how to identify problems and address them. With appropriate guidance and training, customers will better understand the impact of their data handling.

Making transparency the priority

Being open about the process, and gathering feedback on how teams collect data, can help avoid the unintended consequences of algorithms, both in the lab and in future real-world scenarios. It also gives the end user a better sense of the safeguards in place to minimise bias.
This can be done, for example, by publishing model cards that describe the intended use and users, performance metrics, and any other ethical considerations. This will help build trust not just among prospective and existing customers, but also among regulators and wider society.

Ultimately, to trust AI, relevant audiences need to understand why it makes certain recommendations or predictions. AI users approach these technologies with different levels of knowledge and expertise. Data scientists or statisticians, for instance, will want to see all the factors used in a model. Sales reps without a background in data science or statistics, on the other hand, might be overwhelmed by this level of detail. To inspire confidence and avoid confusion, teams need to understand how to communicate these themes and explanations appropriately for different users.

Making sure that ethics are considered in every part of the digital transformation journey will require focus from organisations and input at every level. This is not a one-size-fits-all tick-box exercise; rather, it involves a cultural shift, evolving processes, increased engagement with employees, customers and stakeholders, and equipping users with the tools they need to use the technology responsibly. By putting these core ideas at the heart of the process, ensuring your AI is ethical becomes second nature.
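To make the model-card idea concrete, the sketch below shows what a minimal card might look like in code, alongside one simple, widely used way to quantify the "degree of bias" mentioned above: the demographic parity gap, the difference in positive-prediction rates between groups. The model name, fields, and all values here are invented for illustration and do not describe any Salesforce product.

```python
# Illustrative sketch only: a toy model card plus a simple bias metric.
# All names and numbers are hypothetical, not from any real system.
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Minimal card covering the elements named in the article:
    intended use and users, performance metrics, ethical considerations."""
    model_name: str
    intended_use: str
    intended_users: str
    performance_metrics: dict = field(default_factory=dict)
    ethical_considerations: list = field(default_factory=list)


def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-prediction rate
    across groups. 0.0 means equal rates; larger values mean more bias."""
    by_group = {}
    for pred, group in zip(predictions, groups):
        by_group.setdefault(group, []).append(pred)
    rates = {g: sum(p) / len(p) for g, p in by_group.items()}
    return max(rates.values()) - min(rates.values())


# Toy audit: 1 = model recommended the lead, split by a sensitive attribute.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_difference(preds, groups)  # 0.75 - 0.25 = 0.5

card = ModelCard(
    model_name="lead-scoring-v1",  # hypothetical model
    intended_use="Rank inbound sales leads for follow-up prioritisation",
    intended_users="Sales reps; not for employment or credit decisions",
    performance_metrics={"demographic_parity_gap": gap},
    ethical_considerations=["Audit the parity gap after every retraining"],
)
print(card.performance_metrics["demographic_parity_gap"])
```

Publishing even a lightweight card like this, with the bias measurement recorded as a metric rather than a footnote, gives customers, regulators and internal reviewers a shared artefact to question and audit.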