How does the Microsoft Office of Responsible AI ensure compliance?

The event at the Microsoft EU Innovation Centre in Brussels came in the wake of the European Commission’s whitepaper on AI regulation.

Speaking at the forum, Crampton opened her remarks on AI use within the EU by stating: “We welcome the Commission’s proposals.

“The scope of regulation should take into account the severity of potential harm. We also think a focus on higher-risk applications is important, an incremental approach that takes into account that there are hard questions out there.”

Determining high risk

A prominent talking point around the Commission’s AI whitepaper was what constitutes high risk with regard to the technology.

Regulation of AI within the EU is currently envisaged as splitting applications into two categories: those that pose a danger to personal interests or to the public at large, and all other cases.

On determining what qualifies for the first, higher-risk category, the head of Microsoft’s Office of Responsible AI commented: “We find that we do impact assessments in order to gauge high risk.

“We might make available facial recognition technology, and we need to think about the stakeholders and their benefits and harms.”
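To make that process concrete, the kind of impact assessment Crampton describes could be captured in a structure along the lines of the minimal Python sketch below. The StakeholderImpact type and the triage rule are hypothetical illustrations, not Microsoft’s actual methodology:

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderImpact:
    """One stakeholder group and the benefits/harms an AI use case poses to it."""
    group: str
    benefits: list[str] = field(default_factory=list)
    harms: list[str] = field(default_factory=list)

def risk_tier(impacts: list[StakeholderImpact]) -> str:
    """Crude triage: any identified harm escalates the use case for review.

    A real impact assessment would weigh severity and likelihood,
    not just the presence of a listed harm.
    """
    return "high-risk: escalate for review" if any(i.harms for i in impacts) else "standard"

# Example: facial recognition offered as a platform service
assessment = [
    StakeholderImpact("loan applicants",
                      benefits=["faster identity checks"],
                      harms=["misidentification could deny a loan"]),
    StakeholderImpact("service operator",
                      benefits=["reduced fraud"]),
]
print(risk_tier(assessment))  # -> high-risk: escalate for review
```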


She later went on to identify three categories of sensitive cases being looked into at Microsoft:

Denying consequential services: “Here, we’re concerned about AI systems that can be used in a way that could result in someone not getting a service.

“A practical example of that is, if AI was being used in the course of a loan application to identify the applicant, a misidentification could lead to the denial of that loan application. So that, for us, would fall into this first bucket of sensitive uses.” (A minimal code sketch of this scenario follows the list below.)

Risk of harm: “We’re thinking about AI applications that have a high risk of physical, psychological or material harm. So you can think about safety-critical uses of AI.”

Infringement on human rights: “We consider privacy to be a human right at Microsoft, but also the more classical human rights, or fundamental rights as the European Commission would talk about them; things like freedom of assembly and freedom of speech.

“So, if we have an AI application that may infringe upon those, for example, city-wide surveillance projects in a country that doesn’t have a strong track record on human rights, we would consider that sort of case.”
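The loan example above suggests one concrete safeguard: never let a weak automated match deny a service on its own. The minimal Python sketch below, using a hypothetical decide_loan_identity_check routine and an illustrative confidence threshold, routes low-confidence matches to a human reviewer rather than an automatic denial:

```python
def decide_loan_identity_check(match_score: float,
                               auto_approve_threshold: float = 0.98) -> str:
    """Route a face-verification result within a loan workflow.

    Hypothetical sketch: rather than letting a low match score silently
    deny the application, anything below the threshold is escalated to
    a human reviewer, so a misidentification alone cannot deny service.
    """
    if match_score >= auto_approve_threshold:
        return "identity confirmed automatically"
    return "escalate to human review"  # never auto-deny on a weak match

print(decide_loan_identity_check(0.99))  # identity confirmed automatically
print(decide_loan_identity_check(0.60))  # escalate to human review
```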

Ensuring transparency with accountability

In terms of clarifying the so-called ‘black box’ of AI, Crampton said that, in her experience at Microsoft, transparency improves when companies that use AI are held accountable and customers are told what the technology’s constraints are.


“I think there are interesting conversations about designing accountability, and what steps you can take to assume accountability,” she explained. “For example, we can design accountability into technology, including limitations, such as platform products.

“We need to tell customers what the limitations are, but we can design accountability; we could have products not be able to capture images unless the lighting is sufficient.

“Transparency hasn’t been addressed much in the development of products, but accountability can help empower people further down the chain.”
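Crampton’s lighting example translates naturally into code. The sketch below, which assumes an 8-bit greyscale frame and an illustrative brightness threshold (both invented for this example), shows how a product might refuse to capture an image when the scene is too dark:

```python
import numpy as np

MIN_MEAN_LUMINANCE = 60.0  # hypothetical threshold on a 0-255 scale

def can_capture(frame: np.ndarray) -> bool:
    """Refuse to capture a frame whose average brightness is too low.

    `frame` is an 8-bit greyscale image (an H x W array of 0-255 values).
    A shipping product would calibrate the threshold per sensor and
    likely check contrast and face detectability as well.
    """
    return float(frame.mean()) >= MIN_MEAN_LUMINANCE

# A dim frame is rejected; a well-lit one is allowed through.
dim = np.full((480, 640), 20, dtype=np.uint8)
bright = np.full((480, 640), 120, dtype=np.uint8)
print(can_capture(dim), can_capture(bright))  # False True
```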

Addressing bias

Bias has been a recurring issue with AI technology, a notable example being its use in talent searches by HR departments.

This problem commonly occurs during the training process, which is overseen by human employees, but Crampton spoke about ways in which companies can minimise this risk.


“The first thing that I’d recommend is that as much as possible, you try and have a diverse group of people actually building the technology,” she said, “because one way of accounting for the use of the technology in a range of different circumstances is to bring a range of perspectives to the actual work. So I’d say that that’s really important.

“Second, you want to carefully think about the context in which your technology is going to be deployed, because that then helps you try and avoid a situation where your training data is a mismatch for the real world conditions in which the technology is used.

“In trying to mitigate bias issues, you want to take a deliberate approach to the composition of your training data, and also the composition of your testing data, so that you can try and spot and address issues at that end.

“The other thing I’d say is that bias detection and mitigation tools are evolving, still nascent, but as they improve, they would also be a good resource to try and help with some issues.”
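Until such tooling matures, one simple first check along the lines Crampton describes is to compare a model’s performance across groups, so that a mismatch between training data and real-world conditions shows up as a performance gap. A minimal Python sketch, with groups and results invented purely for illustration:

```python
from collections import defaultdict

def accuracy_by_group(records: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-group accuracy from (group, prediction_was_correct) records.

    A large gap between groups is a cheap early signal that the training
    data composition did not match the conditions the model is used in.
    """
    outcomes: dict[str, list[bool]] = defaultdict(list)
    for group, correct in records:
        outcomes[group].append(correct)
    return {g: sum(v) / len(v) for g, v in outcomes.items()}

# Hypothetical evaluation results for a face-matching model
results = ([("group_a", True)] * 95 + [("group_a", False)] * 5
           + [("group_b", True)] * 70 + [("group_b", False)] * 30)
print(accuracy_by_group(results))  # {'group_a': 0.95, 'group_b': 0.7}
```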


Aaron Hurst

Aaron Hurst is Information Age's senior reporter, providing news and features around the hottest trends across the tech industry.