

Challenges of Responsible AI Development


Artificial Intelligence (AI) and Machine Learning (ML) form the building blocks of next-generation technology. Capabilities such as computer vision, natural language processing, and advanced analytics enable schools and businesses to create insightful, data-driven solutions and contribute to the advancement of the global economy. On top of that, AI is increasingly becoming a part of social initiatives focused on solving the world's most complex problems. As a result, schools, governments, and businesses are becoming more receptive to AI, and at this rate AI will soon become a central focus of development for many countries. Even so, we cannot disregard the new challenges it will create, such as cybersecurity risks, data privacy concerns, data misuse, and unintended consequences.

Modern customers prefer businesses that offer customized, convenient solutions. At the same time, they expect companies to be fair and transparent about how their personal information is used. And when things go wrong, they expect their governments to step in with laws and policies that regulate data protection and privacy. Tim Cook, CEO of Apple, once said, "people have entrusted us with their most personal information. We owe them nothing less than the best protections that we can possibly provide." Businesses are experimenting with different AI opportunities while trying their best to be fair to their customers. Before implementing AI and data in their solutions, companies should meet criteria that are ethically sound and set by an end-to-end governance authority. Finally, responsible AI and data policies should be formulated and enforced by governments to ensure ethical implementation across all initiatives in their respective regions.

Biggest Concerns In AI Development:

Lack of Transparency

Artificial Intelligence products are built on complex models whose inner workings are hard to explain to a lay audience. Moreover, the algorithms behind most AI-based products and applications are kept secret to guard against security breaches and similar threats. For these reasons, there is little transparency into the internal workings of AI products, making it difficult for customers to trust them.

Privacy 

The difficulty is that companies love data and like to keep it. Citizens' privacy is constantly put at risk when companies collect consumer data without prior permission, and AI makes such collection easy. Facial recognition algorithms, for example, are widely used around the world to support the functionality of various applications and products, and such products collect and sell huge amounts of customer data without consent.

Biased Systems

AI algorithms can produce biased results when they are written by developers who carry their own biases. Because there is no transparency into how the decision-making processes run in the background, users cannot verify the fairness of the results. For example, court systems may use AI algorithms to evaluate a defendant's risk, including the likelihood of committing another crime, and rely on that output to inform decisions on bail, parole, and sentencing. Court authorities and governments may not know how such an algorithm was built; the private businesses that develop these algorithms prefer to keep them black-boxed, which puts the judiciary at risk by denying it the oversight needed to ensure the AI isn't biased.
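To make this concern concrete, here is a minimal sketch of the kind of audit an oversight body could run: it compares a risk model's false positive rate across two demographic groups. The data and group labels are hypothetical, invented purely for illustration; this is not any real court system's model.

```python
import pandas as pd

# Hypothetical toy data: a risk model's prediction ("high risk" = 1)
# alongside whether each defendant actually reoffended later.
df = pd.DataFrame({
    "group":      ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted":  [1, 1, 0, 0, 1, 1, 1, 0],
    "reoffended": [0, 1, 0, 0, 1, 0, 0, 0],
})

# False positive rate per group: the share of people who did NOT
# reoffend but were still flagged as high risk.
non_reoffenders = df[df["reoffended"] == 0]
fpr = non_reoffenders.groupby("group")["predicted"].mean()
print(fpr)  # group A: 0.33, group B: 0.67
```

A pronounced gap like the one above (0.33 versus 0.67) is one common statistical signal that a model treats groups differently, and it is exactly the kind of check that black-boxed systems make impossible to perform from the outside.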

Lack of Governance & Accountability

When an AI system or product does something unethical, it’s challenging to assign blame or accountability. Earlier governance functions had to deal with static processes, but AI and data processes are iterative. Thus we need a governance process that can similarly adapt and change.

Responsible AI Development Toolkits:

Tech firms are addressing these AI and data challenges by creating responsible AI development toolkits that enable the creation of unbiased AI systems. These toolkits help companies develop AI applications that are transparent and explainable, and that build trust among customers, employees, business leaders, and other stakeholders.

IBM has released an open-source AI toolkit named AI Fairness 360 that identifies biases in datasets and models. Facebook and Google have released similar toolkits, Fairness Flow and the What-If Tool, respectively.
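As a rough illustration of how such a toolkit is used, here is a minimal sketch with IBM's open-source aif360 package. The toy hiring DataFrame and the choice of "sex" as the protected attribute are assumptions made for the example, not part of any cited study.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical toy data: 'sex' is the protected attribute (1 = privileged
# group), and 'hired' is the favorable outcome we want to audit for bias.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.7, 0.6, 0.4, 0.8, 0.5, 0.3, 0.2],
    "hired": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates between groups (ideal = 1.0).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: gap in favorable-outcome rates (ideal = 0.0).
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Here a disparate impact well below 1.0 (about 0.33) and a statistical parity difference of -0.5 would indicate that the unprivileged group receives the favorable outcome far less often, prompting a deeper review before deployment.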

Need for Open Discussion on Responsible Data:

There is a dire need for conferences and thought leadership sessions that drive the discussion on how data and AI can be leveraged with fairness and openness. One such example is the Responsible Data Summit, an online speaker series run by Dawn Song, a professor at the University of California, Berkeley and the founder and CEO of Oasis Labs. The event featured Turing Award winners, Fortune 500 industry leaders, and other privacy thought leaders and advocates.

Conclusion:

It is necessary to understand that AI relies heavily on massive amounts of data. To ensure appropriate use of that data, companies need to embrace techniques that help them achieve fairness, security, and explainability. Responsible implementation of AI and data must reflect the ethics and values of an organization, thereby building trust among its customers, employees, and other stakeholders.

There is no doubt that the benefits of advanced technologies like AI are vast, but in pursuing these new opportunities, we risk compromising the privacy and integrity of our society and its members. To avoid that, we must enact policies that require responsible application of AI technology; ultimately, this can achieve even greater success and make the world a better place to live in.
