As technology continues to become a larger part of our daily lives, the amount of data collected on individuals (customers) increases in scope and depth. Data analytics is all about analyzing and interpreting this data with the ultimate goal of enhancing business growth and revenue. While data analytics offers significant opportunities, the misuse and mishandling of data present serious ethical concerns and potentially devastating legal risk.
“Ethics and law shouldn’t be afterthoughts in your analytics initiatives. They should be considerations throughout your development and implementation process,” says Kevin Werbach, professor of legal studies and business ethics at the Wharton School, who teaches in the Advanced Business Analytics Program.
While organizations are becoming more aware of these issues, ethical responsibility in analytics is still not clearly understood or implemented uniformly. Moreover, laws governing data collection, privacy, and storage are still in a nascent phase. This is where responsible analytics comes into the picture.
Even big companies that lead in the artificial intelligence (AI) and analytics space face challenges. Google found that its Vision tools were more accurate for some skin tones than for others, while Amazon had to scrap its AI-enabled recruiting tool because it discriminated against female applicants.
According to a 2020 global survey by the Capgemini Research Institute, 59 percent of executives say they have experienced legal scrutiny of their AI systems and data-handling procedures in just the last two to three years, and 22 percent have already faced customer backlash over ethical or legal concerns about their practices.
Data ethics is key for responsible analytics
For responsible analytics to prevail, data ethics must be at the forefront. It covers the moral obligations of collecting, protecting, and using personally identifiable information. At every stage, business leaders should be asking the question: “Is this the right thing to do?”
According to ADP, a global provider of cloud-based human capital management (HCM) solutions, there are five basic principles for using data and AI ethically.
- Transparency — disclose what data is being collected, how it is being used, and what decisions are being made with the assistance of AI. Obtaining express and informed consent is vital for data collection.
- Fairness — look for bias in the data being collected and find ways to reduce and manage it during decision-making.
- Accuracy — data should be accurate and up to date, and there should be ways to correct it.
- Privacy — protect individual privacy by ensuring data is anonymized and stored in a secure database.
- Accountability — understand and evaluate risks of using data and AI, and implement processes to make sure that new analytics systems are created ethically.
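As a small illustration of the privacy principle above, here is a minimal Python sketch of pseudonymizing a direct identifier with a salted hash before records enter an analytics pipeline. The `customers` records and field names are hypothetical, used only for illustration.

```python
import hashlib
import secrets

# Salt for the hash. Assumption: in practice this would be stored in a
# secrets manager, separate from the analytics data store, not in code.
SALT = secrets.token_hex(16)

def pseudonymize(value: str, salt: str = SALT) -> str:
    """Replace a direct identifier with a salted SHA-256 digest."""
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()

# Hypothetical customer records containing PII.
customers = [
    {"email": "ada@example.com", "spend": 120.0},
    {"email": "grace@example.com", "spend": 85.5},
]

# Drop the raw identifier but keep a stable pseudonym, so analysts can
# still join and aggregate records without seeing the email address.
anonymized = [
    {"customer_id": pseudonymize(c["email"]), "spend": c["spend"]}
    for c in customers
]
```

Note that salted hashing is pseudonymization rather than full anonymization: quasi-identifiers (ZIP code, birth date, and the like) may still allow re-identification and would also need to be removed or generalized.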
Responsible leadership in the age of AI
Analytics has played an important role in various areas of business — from digital advertising to content recommendations to supply chain decisions. Increasingly, it is moving into new areas like human resources, health care, and real estate. So, what is different this time and why is data analytics becoming such an important leadership issue?
According to Professor Werbach, responsible analytics means three things:
- Know the applicable legal and regulatory requirements
- Understand the implications of your technical choices, which can range from data collection practices to how you specify the objectives for an algorithm
- Implement measures such as ethics principles, review mechanisms, and auditing processes to evaluate legal and ethical risks in your analytics initiatives throughout their life cycles
The AI-enabled systems that companies are building now generate decisions, not just data — decisions that impact people directly. Designing these analytics models effectively requires leadership that is accountable and has a holistic view of the organization. Analytics, AI, and big data can be incredibly powerful tools. If something goes wrong, it is not enough to say that the algorithm did it. As a business leader, you need to consider possible problems and mitigation strategies as you would with any other business risk.
How to build an ethically robust analytics system
Ethically sound analytics requires a strong foundation of leadership and internal practices around audits, training, and the operationalization of ethics. In 2019, the European Commission put forward seven key requirements that AI systems should meet in order to be deemed trustworthy. Drawing on these requirements, the Capgemini report highlights seven actions that organizations can take when building their AI systems:
- Clearly outline the intended purpose of AI systems, and assess their overall potential impact
- Proactively deploy AI for the benefit of society and the environment
- Embed diversity and inclusion principles proactively throughout the life cycle of AI systems
- Enhance transparency with the help of technology tools
- Humanize the AI experience, and ensure human oversight of AI systems
- Ensure technological robustness of AI systems
- Protect people’s individual privacy by empowering them and putting them in charge of AI interactions
Responsible analytics means that businesses need to consider the ethical and legal aspects of the collection and storage of data and the use of analytics and AI. The Advanced Business Analytics Program from Wharton Executive Education explores these concerns surrounding analytics and the limitations of algorithmic decision-making systems. It also examines how companies can put in place a framework for ethically sound analytics models.
This article first appeared on the Wharton Executive Education Advanced Business Analytics blog.