Artificial intelligence (AI) is already impacting our daily lives in ways we never imagined just a few years ago, and in ways we are unaware of now. From self-driving cars to voice-assisted devices to predictive text messaging, AI has become a necessary and inevitable part of our society, including in the workplace.
Data shows that the use of AI in business is increasing. In 2019, a Gartner report stated that 37% of organizations had implemented AI in some form. More recently, Gartner predicted that the global market for artificial intelligence software would reach $62.5 billion by the end of this year, a 21% increase over the previous year.
While the impact of AI is undeniable, consumer concerns about the ethics and safety of AI technology persist. Because of this, companies should strive to alleviate these concerns by always protecting customer data when employing AI-enabled technology.
The need for responsible AI
Any consumer-facing organization that employs AI technology must act responsibly, especially when it comes to customer data. Technology leaders using AI must pay equal attention to two responsibilities at all times: reducing model biases and preserving data confidentiality and privacy.
In addition to ensuring data security, responsible AI practices must remove embedded biases in the models that drive it. Companies should regularly assess biases that may be present in their vendors’ models and then advise customers on the most appropriate technology for them. This monitoring should also correct for biases with pre- and post-processing rules.
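To make the idea of post-processing rules concrete, here is a minimal sketch of one such correction: comparing positive-outcome rates across groups and relaxing per-group decision thresholds until they match. The function names, groups, and scores are hypothetical illustrations, not any particular vendor's API or a complete fairness method.

```python
# Sketch of a post-processing bias correction: compare positive-outcome
# rates across groups and lower lagging groups' decision thresholds until
# they match the best-served group (a simple demographic-parity repair).
# All names and data are hypothetical.

def positive_rate(scores, threshold):
    """Fraction of scores that clear the decision threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def calibrate_thresholds(scores_by_group, base_threshold=0.5, step=0.01):
    """Return a per-group threshold that equalizes positive rates."""
    target = max(positive_rate(s, base_threshold)
                 for s in scores_by_group.values())
    thresholds = {}
    for group, scores in scores_by_group.items():
        t = base_threshold
        while positive_rate(scores, t) < target and t > 0:
            t = round(t - step, 10)  # round to avoid float drift
        thresholds[group] = t
    return thresholds

scores = {
    "group_a": [0.9, 0.7, 0.6, 0.4],    # 3 of 4 pass at 0.5
    "group_b": [0.55, 0.45, 0.4, 0.3],  # only 1 of 4 passes at 0.5
}
print(calibrate_thresholds(scores))
```

A real system would balance this against accuracy and use an established toolkit rather than hand-rolled rules, but the shape of the correction is the same: measure the disparity, then adjust at the output stage.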
While companies can’t eliminate the biases inherent in AI systems trained on big data, they can work to minimize the adverse effects. Here are some best practices:
1. Put people first
AI can be beneficial in reducing the amount of repetitive work done by humans, but humans should still come first. Create a culture that does not pit AI against people: harness the creativity, empathy, and dexterity of human teams, and let AI create efficiencies around them.
2. Consider data and privacy goals
Once the goals, long-term vision, and mission are established, ask yourself: what data does the company own? Many baseline models and solutions can be used without your own training data, but in some cases training on company data can deliver much higher accuracy.
Tailoring AI systems to company goals and data will produce the best results. Done correctly, the data preparation and cleaning stage is also where biases can be removed, which is key to developing responsible AI solutions. For example, you can drop features that skew the overall outcome and perpetuate existing biases.
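The feature-removal step described above can be sketched in a few lines. The feature names below (and the idea of treating `zip_code` as a proxy for a protected attribute) are hypothetical examples, not a prescription for which fields to drop in any given dataset.

```python
# Minimal sketch of the data-preparation step: drop features that encode
# protected attributes, or obvious proxies for them, before training.
# Feature names are hypothetical.

SENSITIVE_FEATURES = {"gender", "ethnicity", "zip_code"}  # zip_code as a proxy example

def strip_sensitive(records, sensitive=SENSITIVE_FEATURES):
    """Return copies of the records without the sensitive columns."""
    return [{k: v for k, v in r.items() if k not in sensitive}
            for r in records]

raw = [
    {"age": 34, "income": 52000, "gender": "F", "zip_code": "94110"},
    {"age": 41, "income": 61000, "gender": "M", "zip_code": "10001"},
]
print(strip_sensitive(raw))
# Each record now contains only age and income.
```

Note that dropping columns alone rarely eliminates bias, since other features can correlate with the removed ones; it is one step in the broader monitoring described earlier.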
On the privacy front, commit to protecting all data you collect, no matter how massive the amount. One way to do this is to only work with third-party providers that strictly follow the stipulations within crucial laws, such as GDPR, and maintain critical security certifications, such as ISO 27001. Complying with these regulations and obtaining these certifications takes time and effort, but it demonstrates that the organization is qualified to protect customer data.
3. Implement active learning
Once a system is in production, gather human feedback on the technology's performance and biases. If users find that the output differs unfairly depending on the scenario, create guidelines for reporting and fixing those issues. These fixes can be applied on top of the core AI system as corrections to its output.
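One lightweight way to apply such fixes is a thin rules layer that sits after the model and overrides its output for cases human reviewers have flagged. The wrapper class, the toy spam classifier, and the example inputs below are all hypothetical stand-ins for a real reporting pipeline.

```python
# Sketch of "fixing at the output": a wrapper that checks human-reported
# corrections before falling back to the model's own prediction.
# The model and rules here are hypothetical.

class ReviewedModel:
    """Wraps a predict function; human-reported fixes take precedence."""

    def __init__(self, predict_fn):
        self.predict_fn = predict_fn
        self.overrides = {}  # input -> corrected output

    def report_fix(self, example, corrected_output):
        """Record a human reviewer's correction for a known-bad case."""
        self.overrides[example] = corrected_output

    def predict(self, example):
        if example in self.overrides:
            return self.overrides[example]
        return self.predict_fn(example)

model = ReviewedModel(lambda text: "spam" if "offer" in text else "ok")
model.report_fix("limited offer from your bank", "ok")  # reviewer flags a false positive
print(model.predict("limited offer from your bank"))  # corrected output
print(model.predict("special offer today"))           # model output, unchanged
```

Logged overrides also double as labeled training data: periodically folding them back into the model is what turns this reporting loop into active learning rather than a growing pile of patches.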
In recent years, some of the largest organizations in the world, including Google, Microsoft, and the European Commission, have created frameworks and shared knowledge about their responsible AI guidelines. As more organizations embrace the business language around responsible AI, it will become the expectation of partners and customers.
When a mistake could cost your brand millions of dollars or ruin your reputation and relationships with employees and customers, these safeguards matter. No one wants to work with an organization that doesn't care about its customer data or uses biased AI solutions. The sooner your organization addresses these issues, the sooner consumers will trust you and the benefits of using AI will begin to appear.
[ Check out our primer on 10 key artificial intelligence terms for IT and business leaders: Cheat sheet: AI glossary. ]