Do we need algorithm police?

Think about the digital services you use personally and those your business could not operate without. From collaboration tools to recommendations in online stores, they all rely on complex algorithms to work – and those algorithms are about to take a dramatic leap forward in sophistication.

With massive datasets now available and intelligent systems able to interrogate them, do we need to pay far closer attention to how algorithms are regulated, both to avoid unfairness and to address the ethical issues that arise when algorithms are applied to personal data?

Algorithms are increasingly entrenched in every aspect of our lives, to an extent people may not even realise; McDonald’s, for example, recently purchased AI company Dynamic Yield to analyse the habits of its customers, yet many of those customers will be completely unaware this is happening. With concerns that the widespread use of algorithms and AI could amplify problems such as inherent bias and discrimination, there are growing calls for more visible regulation.

Some countries and regions have already started to take action. Late last year, Denmark began developing a labelling scheme that will award a seal of approval to IT systems that use data ethically.

Not everyone believes such steps are necessary, however. Martin Schallbruch, deputy director of the Digital Society Institute at German business school ESMT Berlin, says trying to regulate the use of algorithms is unnecessary and quite possibly pointless.

“We do not need algorithm police,” he tells IT Pro. “A general algorithm regulator would fail. Algorithms are used in railways as well as in pacemakers, by the police as well as by the education system. A single regulator cannot set the framework for innovation and the use of algorithms for all areas of life.

“We already have a high degree of overlap between data protection authorities, information security agencies and various sectoral regulators. An algorithm regulator would overlap massively with all these bodies – either the effectiveness would evaporate, or the innovation would be stifled by overregulation.”

However, the rapid development of AI is driving much of the debate around how algorithms are used. Indeed, across the EU, concerns over algorithm accountability are pushing regulators to consider the concept of “Trustworthy Artificial Intelligence”, under which businesses using these algorithms would self-regulate once they had obtained approval from a future EU regulator.

Transparent code

The use of algorithms will only expand, becoming ever more deeply embedded in the business processes and services used by enterprises and individuals alike. As DARQ – distributed ledger, AI, extended reality and quantum computing – develops, will the need for some form of regulation become impossible to ignore?

Felix Hufeld, president of BaFin, Germany’s Federal Financial Supervisory Authority, says: “What happens if something goes wrong and errors are made? Can a board member say: ‘it wasn’t us, it was the algorithm’? I say no. Ultimate responsibility must remain with management, meaning people.”

Automation is clearly on the development roadmap for most businesses, and automated systems will become critical to many of them. Speaking at the Ethical Governance session of the Zebra Project, techUK deputy CEO Antony Walker noted: “Why is transparency so important? So that we are innovating in a way that creates broad trust, and that companies are seen as trustworthy by customers and regulators. Companies need to be able to explain why things are being done. We need to build a culture of thoughtful, reflective innovation where people are thinking about the precise objective we want to get to, and what we do not want to happen.”


Ensuring algorithms are not used as black box technologies will be critical. As Walker points out, trust is at the heart of these systems, yet unethical, or at best dubiously ethical, use of data is rife. The problem is exacerbated as these technologies become more complex and pervasive, making them ever harder to control. This is where regulators will likely have to step in to protect the liberties of citizens and businesses alike.

However, this may be easier said than done, and in some areas policing algorithms may be impossible. Dr Mike Lloyd, CTO of RedSeal, tells IT Pro: “Some algorithms can usefully explain how they came up with an answer, while others cannot. If we don't pay attention to this distinction, the future is going to be a lot harder to navigate.

“One inconvenient truth is some of the best artificial intelligence and machine learning algorithms are the ones that lack transparency – they do not offer any way for humans to get a handle on why they came up with the answers they did.”
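
Lloyd’s distinction is easy to demonstrate. Below is a minimal sketch, assuming scikit-learn and entirely synthetic data with hypothetical feature names: a logistic regression can report how each input pushed its decision, while a neural network trained on the same data simply returns an answer with no comparable rationale.

```python
# A minimal sketch of the transparency gap, using scikit-learn on synthetic
# data. The feature names are hypothetical and purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                   # three synthetic features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # outcome driven by features 0 and 1
features = ["income", "tenure", "age"]          # hypothetical names

# Transparent model: each coefficient shows how a feature pushes the decision.
transparent = LogisticRegression().fit(X, y)
for name, coef in zip(features, transparent.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# Opaque model: it returns an answer, but no comparable human-readable reason.
opaque = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X, y)
print("opaque prediction:", opaque.predict(X[:1])[0])
```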

In this scenario, it will be vital to get the initial parameters an algorithm uses right. Human intervention will be needed here, but it opens up the potential for intrinsic human bias and discrimination to be unwittingly built in by an algorithm’s creators – which leads back to one of the strongest arguments for transparent AI in the first place.

In June 2019, James Proudman, then the Bank of England's executive director for UK Deposit Takers Supervision, told a UK Financial Conduct Authority (FCA) conference on governance in banking: “You cannot tell a machine to ‘do the right thing’ without somehow first telling it what ‘right’ is, nor can a machine be a whistleblower of its own learning algorithm. In a world of machines, the burden of correct corporate and ethical behaviour is shifted further in the direction of the board. Also, it may become harder and take longer to identify the causes of problems and to attribute accountability to individuals in a workplace dominated by big data and AI.”

Controlling the machines

The UK government is already investigating how human bias can influence the datasets interrogated by algorithms, particularly in areas such as law enforcement. The Centre for Data Ethics and Innovation (CDEI) and the Cabinet Office’s Race Disparity Unit are assessing algorithmic discrimination. Durham Constabulary, meanwhile, began testing the Harm Assessment Risk Tool (HART) in 2017, a system that uses AI to help decide whether a suspect should be kept in custody. Given that black people are three times more likely to be arrested than their white counterparts, despite having the lowest conviction ratio, the risks of discrimination and the misallocation of risk are clear.
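
To see how that misallocation can arise, consider a deliberately synthetic simulation; every rate below is invented for illustration, and scikit-learn is assumed. If two groups offend at the same underlying rate but one is arrested three times as often, a model trained on arrest records rather than convictions learns that skew and reports it as “risk”.

```python
# Deliberately synthetic illustration: all rates below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, size=n)       # 0 = group A, 1 = group B
offending = rng.random(n) < 0.10         # identical true offending rate

# Biased proxy label: group B is arrested ~3x as often for the same behaviour.
arrest_prob = np.where(group == 1, 0.60, 0.20)
arrested = (offending & (rng.random(n) < arrest_prob)).astype(int)

# A model trained on arrests (not convictions) learns the skew as 'risk'.
model = LogisticRegression().fit(group.reshape(-1, 1), arrested)
scores = model.predict_proba(np.array([[0], [1]]))[:, 1]
print(f"predicted risk - group A: {scores[0]:.3f}, group B: {scores[1]:.3f}")
# Group B scores roughly three times higher despite identical behaviour.
```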

In its interim report, the CDEI concludes that algorithms and human decision-makers should always work in tandem when technology is used to make highly sensitive decisions, acknowledging that “it is critical that governance approaches cover this broader context and do not focus exclusively on the algorithmic tools themselves”.

Hannah Fry, associate professor in the Mathematics of Cities at University College London and author of Hello World, believes we need something akin to the US Food and Drug Administration (FDA) for algorithms.

Speaking to science website Nautilus, Fry said: “You can roll out an algorithm that genuinely makes massive differences to people's lives, both good and bad, without any checks and balances. To me, that seems completely bonkers. So, I think we need something like the FDA for algorithms. A regulatory body that can protect the intellectual property of algorithms, but at the same time ensure that the benefits to society outweigh the harms.”

Nadun Muthukumarana, lead partner for data analytics and cognitive AI within the transport and public sector at Deloitte, agrees, telling IT Pro: “Bias, fairness and ethics will depend on the guidance that is given to an algorithm that would create another. Moderation techniques will need to be built in for the removal of bias and the preservation of fairness and ethics within algorithms.

“Regulation, however, will still be the backstop that will ultimately prevent the application of algorithms, man or machine-made, to do harm. In this case, regulatory measures set a precedent for how these algorithms are created and applied.”
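
What such built-in moderation might look like is open to interpretation; one simple, hypothetical form is a pre-deployment fairness gate. The sketch below is an illustrative assumption rather than any standard API: it measures the demographic parity gap in a model’s outputs and blocks release if the gap exceeds a chosen threshold.

```python
# Illustrative assumption, not a standard API: a pre-deployment fairness gate.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-outcome rates between two groups."""
    return abs(predictions[group == 0].mean() - predictions[group == 1].mean())

def approve_for_release(predictions, group, threshold=0.05):
    """Block deployment if the parity gap exceeds the chosen threshold."""
    gap = demographic_parity_gap(predictions, group)
    print(f"parity gap: {gap:.3f} (threshold {threshold})")
    return gap <= threshold

# Hypothetical outputs from any upstream model, plus group membership.
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
grp = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print("release approved:", approve_for_release(preds, grp))  # False here
```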

For many, though, regulation risks placing a stranglehold on innovation, and it may be impossible to make some black box systems fully transparent. As AI in particular develops, businesses, consumers and governments will all have their part to play in ensuring these systems operate with the minimum of bias and discrimination.

David Howell

David Howell is a freelance writer, journalist, broadcaster and content creator helping enterprises communicate.

Focussing on business and technology, he has a particular interest in how enterprises are using technology to connect with their customers using AI, VR and mobile innovation.

His work over the past 30 years has appeared in the national press and a diverse range of business and technology publications. You can follow David on LinkedIn.