As technology evolves at a rapid pace, especially technology that incorporates artificial intelligence (AI), so too does the potential for bias, disconnection, misuse of data, and the automation of impersonal actions or decisions. With the vast amounts of data collected, stored, and exchanged, capitalist societies risk commoditizing personal data at the expense of the individual, instead of using personal data to foster valuable individual and societal relationships.
In business, AI and machine learning increasingly power smart systems that analyze large amounts of data to identify trends that benefit the business, such as capturing more consumers and increasing profits, rather than building long-lasting relationships. But AI shouldn’t focus only on the business’s bottom line. In fact, a recent AI and empathy survey our company conducted of 6,000 consumers across North America, the United Kingdom, Australia, Japan, Germany, and France found that 69% of consumers think businesses have a moral obligation to do what’s right for the consumer, beyond what is legally required. Doing “what’s right” means taking the individual or community into consideration. This can involve using AI to recognize that a customer carrying large amounts of high-interest debt should not be aggressively marketed a new high-interest credit card, or leveraging AI-based quality analysis to identify production-line flaws that must be fixed so defective products don’t go to market.
AI makes decisions based on hundreds or thousands of propensity models and algorithms, but that data-centric decision making, especially when put into action, should be informed by human-centric considerations. Operating this way requires empathy for the people and communities connected to a business. By building empathy into AI-based tools and operating them within an ethical framework, we can generate powerful insights focused on the person, not just the person’s data.
An ethical framework is one created and guided by responsible humans, in which AI-based analyses and decisions are transparent and understandable. It is a framework that values human input as much as data analysis and may use human decisions as fail-safes. This kind of human-machine partnership lets businesses leverage vast sources of big data while ensuring that the knowledge extracted from that data is understandable and used in an appropriate context. It is an empathetic approach that benefits individuals, businesses, and society.
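The "human decisions as fail-safes" idea above can be sketched in code. This is a minimal, hypothetical illustration, not any real system described in the article: the class names, threshold, and rationale field are assumptions. The pattern is that an automated decision is only acted on when the model is confident and can explain itself; otherwise it is escalated to a human reviewer.

```python
from dataclasses import dataclass

# Illustrative sketch only: the names and the 0.85 threshold are
# hypothetical, not from the article. It demonstrates the
# "human decisions as fail-safes" pattern: low-confidence or
# unexplained automated decisions are routed to a human reviewer.

@dataclass
class Decision:
    outcome: str        # e.g. "approve" / "decline"
    confidence: float   # model confidence in [0, 1]
    rationale: str      # human-readable explanation, for transparency

def act_on(decision: Decision, confidence_floor: float = 0.85) -> str:
    """Act automatically only when the model is confident AND can
    explain its decision; otherwise escalate to human review."""
    if decision.confidence >= confidence_floor and decision.rationale:
        return f"auto:{decision.outcome}"
    return "escalate:human_review"

print(act_on(Decision("approve", 0.93, "income and history support approval")))
print(act_on(Decision("decline", 0.55, "sparse data")))
```

In this sketch the rationale requirement enforces the transparency the framework calls for, and the escalation path is the human fail-safe.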
For example, one concern leaders in business and government continue to address is how to identify, understand, and improve circumstances for underserved populations. A 2018 Pew Research Center study on “Artificial Intelligence and the Future of Humans” suggests that one way to help AI-based technology enhance human capacities, rather than diminish them, is to build empathy into AI-based systems so technology stays in line with society’s social and ethical responsibilities.
An emerging application of such an approach in the U.S. is using AI to analyze disparate sources of social determinants of health (SDoH) data. Public and private organizations are analyzing contextual data such as neighborhood, environment, education, economic stability, and access to health and healthcare to identify communities with higher health-related risks. The data is then used to determine the best way to engage with these community members to improve individual and population health.
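A simple way to picture this kind of SDoH analysis is a composite risk score over contextual indicators. The sketch below is purely illustrative: the field names, weights, and numbers are invented assumptions, not from the article or any real SDoH model. It shows the general idea of combining normalized community-level factors into one score used to prioritize outreach.

```python
# Hypothetical sketch: indicator names and weights are invented for
# illustration. Each indicator is a rate normalized to [0, 1].
SDOH_WEIGHTS = {
    "unemployment_rate": 0.30,   # economic stability
    "no_hs_diploma_rate": 0.25,  # education
    "uninsured_rate": 0.25,      # access to healthcare
    "food_desert_share": 0.20,   # neighborhood environment
}

def community_risk(indicators: dict) -> float:
    """Weighted sum of normalized SDoH indicators (higher = higher risk)."""
    return sum(SDOH_WEIGHTS[k] * indicators.get(k, 0.0) for k in SDOH_WEIGHTS)

# Toy data for two hypothetical communities.
communities = {
    "A": {"unemployment_rate": 0.12, "no_hs_diploma_rate": 0.20,
          "uninsured_rate": 0.18, "food_desert_share": 0.40},
    "B": {"unemployment_rate": 0.05, "no_hs_diploma_rate": 0.08,
          "uninsured_rate": 0.06, "food_desert_share": 0.10},
}

# Rank communities so outreach can focus on the highest-risk first.
ranked = sorted(communities, key=lambda c: community_risk(communities[c]),
                reverse=True)
print(ranked)  # community "A" ranks above "B" with these toy numbers
```

Real SDoH work involves far richer data and validated models; the point here is only the shape of the pipeline, from contextual indicators to a prioritized engagement list.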
Societal and ethical applications of empathetic AI are also being explored by research institutions such as MIT, whose Deep Empathy project studies the use of AI-based tools to help people visualize and contextually understand suffering in other parts of the world affected by a range of disasters.
Using AI within an ethical and empathetic framework is also important for establishing trust in our increasingly digital and tech-powered economy, and for growing trust in the business and government organizations that use these evolving technologies. In our aforementioned AI and empathy study, for example, 54% of those surveyed believe it’s possible for AI to be biased in decision making, and 27% were concerned about the rise of robots and the potential enslavement of humanity. Both concerns illustrate different types of distrust in AI. By operating in a transparent, empathetic, and ethical framework, where humans help guide the application and evolution of AI technology, organizations can help allay fears of robot overlords and concerns about bias. The human element in the empathetic and ethical framework exists to help reduce bias and build inclusive analytical models. It also helps foster understanding and trust between humans and machines.
Being empathetic doesn’t mean compromising the evolution or application of technology, nor does it mean sacrificing the bottom line. Greater, longer-term economic success can be created by looking beyond the blinders of shareholder profit. If we are concerned about accelerating entrepreneurial, educational, and employment opportunities in the fourth industrial revolution (4IR) economy, we need to ensure that the person is valued as much as their data.