Why Companies Need Their Own AI Code Of Conduct

Over the last year, I have immersed myself in Artificial Intelligence research, including reading multiple books on AI and taking an online class from Stanford on the fundamentals of Artificial Intelligence.

FYI, this class was taught by Andrew Ng, an Adjunct Professor at Stanford and a co-founder of Coursera, who also offers a related class on Coursera entitled "Deep Learning with Andrew Ng."

All of this study and research has given me a much better understanding of AI, what it can and can't do, and its potential impact on our world. Although I am not an engineer and come from the market research side of the tech industry, nearly 40 years of dealing with technology at all levels has grounded my work and research in a deep understanding of technology and its impact on our world.

AI has been around for decades but is more prevalent than ever in today's tech world. That is why I wanted and needed to delve deeper into AI at the design level, to be better informed about how it can and will impact our world.

One book in particular has been essential to my understanding of AI from a global and political perspective: Kai-Fu Lee's AI Superpowers: China, Silicon Valley, and the New World Order.

One other country that comes into the global AI picture is Russia, whose leader, Vladimir Putin, has said on record that "the nation that leads in AI will rule the world."

During the Stanford class, Professor Ng touched on one topic of great interest to me: ethics in AI. The more I have studied AI and machine learning, the more apparent it has become that AI can be used for good as well as for evil. I believe that developing guidelines or principles for the AI and ML they use will become one of the most important initiatives that companies of every kind have to put in place, and live by, in this next decade.

In a recent editorial for the Financial Times, Google and Alphabet CEO Sundar Pichai said he believes "AI must be regulated to prevent the potential consequences of tools including deep fakes and facial recognition." ($)

His suggestions include "international alignment between the UK and the EU, agreement on 'core values,' using open-source tools (such as those already being developed by Google) to test for adherence to written principles, and using existing regulation, including Europe's GDPR, to build out broader regulatory frameworks."

While Pichai is pushing for government regulation, he is not waiting around for any government to dictate what Google and Alphabet believe their position on AI and AI ethics should be.

As he states in the Financial Times editorial, "We need to be clear-eyed about what could go wrong."

Echoing Pichai's view, Microsoft President Brad Smith, speaking at Davos earlier this year, said that "the time to regulate AI is now."

I agree that AI is going to need some government regulation, and Pichai has suggested a starting point for the US as it goes down the path of regulating AI and ML.

I find Pichai's and Brad Smith's remarks critical. The CEO of any company needs to begin thinking about putting in place the AI code of ethics the company plans to follow in its use of AI and ML technology.

Google has put in place its own AI principles, guidelines, and stated objectives in its quest to create AI policies and an AI ethics stance centered on the protection of human rights.

Similarly, Microsoft has written commentary on its AI principles and objectives and has even included guidelines for responsible bots.

And Salesforce has created a very concise AI ethics objectives document, committing that everything it does in AI will be responsible, accountable, transparent, empowering, and inclusive.

The IEEE recently posted in Forbes its projected guideline recommendations for members on AI compliance, which include commentary and principles to follow on AI and IoT, AI and work, AI and healthcare, AI and ethics, and a broader view of AI strategies.

I recently asked some of the major companies in tech and telecom whether they have published their own AI principles and guidelines and was surprised that very few of them are even in the process of doing so. They admit that it is vital to have their own AI guidelines in place and are doing some work on them, but they revealed that they are nowhere close to having a comprehensive AI ethics strategy ready to publish.

Putting an AI principles and AI ethics code of conduct in place should become a priority for companies. AI is quickly becoming an integral part of any business. In the not-too-distant future, customers are going to demand to know how the companies they deal with handle AI-based personal data and what their AI ethics codes are. If companies are smart, they will craft their AI principles and ethics positions soon and be ready for the demands the market and their customers will place on them in the age of AI.
