Artificial Intelligence: The fallacy of neutral technology and the responsibility of tech companies

Artificial Intelligence shapes our everyday lives. AI-powered algorithms manage tools, software, and products that influence how we communicate, who we listen to, whose voice gets heard, which products we buy, and how and where we buy them. They control our social media feeds, our Amazon recommendations, and almost all targeted advertising. And yet, ensuring that ethical considerations drive the development of AI systems is still a major challenge.

Why? Because, as we all know, technology is neutral. 

Or is it?

Some of the most common narratives include: 

(1)  Technology is neutral (you can replace technology with AI, algorithms, social media…)

(2)  It’s the use of technology that raises ethical questions. It’s people who weaponize technology or misuse it.

These narratives are particularly interesting for two reasons: first, because when applied to Artificial Intelligence, they are far from true. Second, because they shift the debate away from the companies that design the systems and onto the people who use AI-powered algorithms. This clever trick allows companies to put the burden of responsibility on the users of the technology.

But artificial intelligence is not neutral, and technology companies do have a responsibility.

The academic literature is unanimous: AI is value-laden, and thus far from neutral. Artificially intelligent systems are developed to achieve specific outcomes, and reaching those outcomes requires making a number of choices. Developers shape algorithms through these choices, and the resulting systems reflect the particular set of values of the individuals and companies involved in their design.

Let’s take Facebook’s news feed algorithm as an example. Before Brexit and the 2016 US election, the algorithm put a heavy emphasis on news articles and on content from pages that an individual user had liked. In early 2018, following growing calls from regulators to treat Facebook as a news distributor, and in an apparent attempt to curtail polarization, Facebook changed the parameters of its news feed algorithm to prioritize content from users’ friends and groups. Facebook made a design choice here: it changed the parameters of its main algorithm, choosing to favor content from specific sources.

While this choice was very visible to users, it is only one of the countless choices Facebook has made while developing its AI-powered news feed algorithm. Facebook chose which metrics to maximize: the number of ads served, the time spent on the platform, the number of likes, shares, and views. It chose which types of content to promote and which to demote, and it chose the moderation mechanisms… Facebook’s news feed AI isn’t neutral: it reflects all the choices made by the developer of the system. And these choices give Facebook, and all developers of AI systems, responsibilities that cannot simply be shifted onto the users of those systems. This is not to say users have no responsibility. They do, but developers design every parameter of the news feed and have a direct influence on the kind of content that gets promoted.
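To see how such choices translate into code, here is a minimal, purely illustrative sketch in Python. It is not Facebook’s actual ranking system; the feature names and weights are hypothetical, invented for this example. The point is simply that every weight in a ranking function is a value judgment made by its developers about what users will see.

```python
# Hypothetical example only: a toy feed-ranking function.
# The features and weights below are invented; they illustrate that
# "what the algorithm optimizes for" is a design choice, not a given.

WEIGHTS = {
    "predicted_time_spent": 0.5,    # rewards engagement (and ad exposure)
    "likes_and_shares": 0.3,        # rewards viral content
    "from_friends_or_groups": 0.2,  # the kind of source boosted in the 2018 change
}

def rank_post(post_features: dict) -> float:
    """Return a ranking score; higher-scoring posts appear earlier in the feed."""
    return sum(weight * post_features.get(feature, 0.0)
               for feature, weight in WEIGHTS.items())

# Nudging a single weight up or down changes which posts surface first
# for millions of users, without any visible change to the interface.
example_post = {"predicted_time_spent": 0.9,
                "likes_and_shares": 0.4,
                "from_friends_or_groups": 1.0}
print(rank_post(example_post))
```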

Some companies have actively chosen to integrate ethical considerations into the development of AI systems. Microsoft has developed a set of tools for AI governance, and IBM has created some of the first guidelines for ethical AI design. Other companies have actively chosen not to, whether by discouraging internal discussions, preventing external researchers from accessing key datasets, or simply disregarding the research and recommendations of their own teams.

But why would companies choose to include ethical considerations in the development of AI systems in the first place? Should we consider that, because the AI systems they create shape our societies, they have a responsibility to think about the consequences of their design choices? It would probably be wise to argue so, but it would also be foolish to simply state that tech companies should consider ethical questions because it’s good for society. Even with the best of goodwill, the current system dictates that companies’ main responsibility is not to society but to their shareholders. For all the discussion about the stakeholder economy, companies ultimately have to answer to their shareholders. And that means their incentives are not automatically aligned with what is best for society.

In 2020, Facebook’s own research team found that the 2018 modification to its news feed algorithm didn’t help decrease polarization: it actually increased it, contributing to the wave of polarization seen across the globe. Yet Facebook’s leadership decided against changing the algorithm again, because doing so was likely to decrease engagement on the platform (and thus revenue). In other words, Facebook’s interests are misaligned with the interests of society.

Where do we go from here? How do we realign these incentives? First, a whole chain of accountability is needed. Companies need to be held accountable for the design choices they make. The time when companies could put all the responsibility on the shoulders of users is gone.

But to really see change, we need a societal shift. It will take governments, employees, citizens, and users calling for change to get all businesses to include ethical questions in the development of AI systems. We need large-scale societal awareness to align financial incentives.

For a long time, the chain of accountability was broken. Regulators didn’t understand the technology well enough to hold tech companies accountable, the wider societal impact of these systems wasn’t clear, and the public wasn’t educated enough. And to be clear, there is no easy path to creating ethical AI systems and no global playbook on AI. But it is time to open the conversation and to get citizens, regulators, and companies to ask these questions. That journey has already started. Lately, producers of AI systems have come under intense scrutiny, with growing calls from regulators, the general public, and even their own employees to ensure that their systems take ethical considerations into account. It will take a societal shift to bring everyone on board. And while it might seem overwhelming, it is worth remembering that such shifts have happened in the past: the climate movement is a great example of the power of citizens to call for change.

It’s time to accelerate the movement and call for the sustainable development of technology. It’s time to ask regulators to do their job. It’s time to hold tech companies accountable for the systems they create. It’s time to ask companies to integrate ethical questions in the development of AI systems. And it’s time to open the conversation on what that would look like.

We hope you will join the conversation.

About the Author 

Julia Guillemot is an advocate for the sustainable development of technology, and a co-founder of the Better Tech Network. Her work focuses on artificial intelligence governance and the integration of ethical concerns in the development and deployment of artificially intelligent systems. She is currently working at Ubisoft on fostering better connections in online ecosystems.

Julia graduated from IÉSEG School of Management, where she wrote her graduate thesis on AI ethics, and has previously studied business, computer science, and philosophy at Harvard, Shanghai Jiao Tong University, and Yale. She is also an early admit at Stanford GSB.

Besides her work in technology, Julia serves as a Senior Advisor to the Global Schools Program. As a firm believer in the importance of youth empowerment and education for sustainable development, she has been working since 2017 to create and scale the program into a global network of 1,000+ schools in 85+ countries, working locally to integrate the SDGs into school settings for almost a million students.
