Lexology GTDT Market Intelligence provides a unique perspective on evolving legal and regulatory landscapes. This interview is taken from the Artificial Intelligence volume, featuring discussion of topics including government national strategies on AI, ethics and human rights, AI-related data protection and privacy issues, the trade implications of AI and more, within key jurisdictions worldwide.

1 What is the current state of the law and regulation governing AI in your jurisdiction? How would you compare the level of regulation with that in other jurisdictions?

Currently, the United States does not have any comprehensive federal laws or regulations that specifically regulate AI. However, as in other jurisdictions, a range of existing US laws, regulations and agency guidance may apply (or may be extended to apply) to AI, including the following:

  • the United States Federal Trade Commission (FTC) has issued guidance on AI and algorithms that highlights existing US laws, regulations and guidance applicable to these technologies;
  • the Department of Defense (DOD) has adopted Ethical Principles for Artificial Intelligence;
  • the Department of Transportation (DOT) and the Food and Drug Administration (FDA) have initiatives aimed at addressing specific AI applications;
  • the National Institute of Standards and Technology (NIST) has launched efforts to develop AI standards;
  • the Department of Commerce and the Committee on Foreign Investment in the United States (CFIUS) have various requirements applicable to AI; and
  • various states and local governments have begun turning their attention to AI regulation, particularly of facial recognition technologies.

While various AI legislative proposals have been introduced in Congress, the United States has not embraced a horizontal, broad-based approach to AI regulation like that proposed by the European Commission. Instead, on 10 January 2020, the Trump administration issued draft guidance to federal agencies on regulatory and non-regulatory approaches to private sector use of AI (the Draft AI Regulatory Guidance). The draft guidance sets forth 10 principles for agency consideration, stating, among other things, that ‘agencies should consider new regulation only after they have reached the decision . . . that Federal regulation is necessary’. It also encourages federal agencies to consider non-regulatory alternatives.

2 Has the government released a national strategy on AI? Are there any national efforts to create data sharing arrangements?

On 11 February 2019, President Trump signed an executive order (EO) ‘Maintaining American Leadership in Artificial Intelligence’, which launched a coordinated federal government strategy for AI. The EO sets forth the following five pillars for AI:

  • empowering federal agencies to drive breakthroughs in AI research and development;
  • establishing technological standards to support reliable AI systems;
  • establishing governance frameworks to foster public confidence in AI;
  • training an AI-ready workforce; and
  • engaging with international partners.

Pursuant to the EO, the Trump administration released the Draft AI Regulatory Guidance, discussed in question 1, and NIST released a plan for developing AI standards.

The EO also directs agencies to explore opportunities for collaboration with the private sector, including by making government data sets and computing resources available. The Trump administration highlighted some of its data sharing efforts in its February 2020 American Artificial Intelligence Initiative: Year One Annual Report. Most notably, the report cites the National Science Foundation (NSF) and DOT collaboration on researching privacy techniques that would allow DOT’s data sets to be accessed remotely, as well as NSF and National Institutes of Health pilot programmes allowing enhanced access to their computing resources and data.

These efforts align with actions by Congress, which enacted the Foundations for Evidence-Based Policymaking Act, including the OPEN Government Data Act, in early 2019. The OPEN Government Data Act requires agencies to publish certain data online in machine-readable formats, and the United States Office of Management and Budget (OMB) has been charged with coordinating efforts to make US government data sets available. In late 2019, OMB released the finalised version of its 2020 Action Plan for the Federal Data Strategy, setting out the key actions that agencies must undertake over the next year to implement these mandates. These actions include publishing data governance materials online, identifying priority assets for open data plans, improving open data availability and quality throughout the year, and regularly updating comprehensive data and metadata inventories.

Following this trend, the DOT recently announced the National Highway Traffic Safety Administration’s Automated Vehicle Transparency and Engagement for Safe Testing (AV TEST) Initiative, a voluntary data-sharing initiative intended to improve safety, testing and transparency of automated vehicle technology.

3 What is the government policy and strategy for managing the ethical and human rights issues raised by the deployment of AI?

In May 2019, the United States adopted the Organisation for Economic Co-operation and Development (OECD) AI Principles, which were also embraced by the G20 and focus on:

  • using AI to stimulate inclusive growth, sustainable development and well-being;
  • human-centred values and fairness;
  • AI transparency and explainability;
  • making AI secure, robust and safe throughout its life cycle; and
  • accountability.

Additionally, the Trump administration has framed AI policy in the context of the high-level values of ‘freedom, guarantees of human rights, the rule of law, stability in our institutions, rights to privacy, respect for intellectual property and opportunities to all to pursue their dreams’. The administration has identified the safety, explainability and workforce impacts of AI as top ethical priorities, and it highlights programmes at the Defense Advanced Research Projects Agency, including the Explainable AI and AI Next programmes, as well as NSF’s Program on Fairness in Artificial Intelligence in collaboration with Amazon, as efforts to manage these issues.

The DOD has formally adopted its own ethical AI principles, leveraging the Defense Innovation Board’s 2019 report proposing high-level recommendations for the ethical use of AI by the DOD. Additionally, the National Security Commission on Artificial Intelligence released its own highly anticipated interim report this year, which, consistent with the DOD’s principles, emphasised the importance of reliability, auditability and fairness of AI systems used in the defence context.

4 What is the government policy and strategy for managing the national security and trade implications of AI? Are there any trade restrictions that may apply to AI-based products?

Trade controls are an important and evolving component of AI regulation in the United States and increasingly are being used to manage the cross-border flow of AI technologies. To pursue national security and foreign policy objectives, the United States employs a number of regulatory systems to govern international trade in hardware, software and technology. These regulations are becoming increasingly complex and difficult to navigate, as the United States and China heighten their competition in the technology sector.

The Department of Commerce’s Bureau of Industry and Security regulates certain defence and dual-use items and, in late 2018, published a representative list of 14 categories of emerging technologies, including AI and machine learning, over which it may exercise trade controls. The first such control was issued in early 2020, capturing certain geospatial imaging software using ‘deep convolutional neural networks’. Many more controls are expected on a rolling basis.

The Department of Commerce also is authorised to ban the supply of US items – or of foreign-made items that contain or were produced based on US-origin content or technology – to designated foreign end-users that pose risks to US interests. Huawei, the world’s largest telecommunications equipment manufacturer, has been designated under this authority, as have 28 other Chinese entities, including China’s leading AI companies.

Separately, inbound controls are increasingly focused on AI technologies. CFIUS, an interagency committee chaired by the US Department of the Treasury, administers rules governing foreign investments in US businesses whose activities implicate US national security interests. CFIUS’s authorities are particularly robust with respect to US businesses whose activities involve ‘critical technologies’, such as AI, or sensitive personal data. Additionally, by virtue of the May 2019 EO on Securing the Information and Communications Technology and Services Supply Chain, the Secretary of Commerce can prohibit or restrict certain transactions involving information and communications technology or services with foreign parties of concern.

5 How are AI-related data protection and privacy issues being addressed? Have these issues affected data sharing arrangements in any way?

There is no comprehensive federal privacy legislation in the United States, and US federal policy has not focused specifically on the data protection and privacy impacts of AI technologies to date. However, there is federal sector-specific privacy legislation regulating, for instance, health data and financial data. Additionally, the FTC has broad jurisdiction to take enforcement action against deceptive and unfair business practices, including with respect to privacy and data security. In the absence of comprehensive federal privacy legislation, various states have enacted privacy legislation, most notably the California Consumer Privacy Act, which broadly regulates privacy and data security practices for companies processing California residents’ information. More state privacy laws are likely to follow so long as no federal privacy legislation pre-empts them. The lack of federal legislation and the need to comply with a patchwork of state and local rules can make compliance more challenging.

In addition to broad privacy legislation, states also are considering technology-specific regulation, particularly in the area of facial recognition. Washington state recently enacted legislation that creates a legal framework by which agencies may use facial recognition technologies to the benefit of society – for example, by assisting agencies in locating missing or deceased persons – but prohibits uses that ‘threaten our democratic freedoms and put our civil liberties at risk’. Additionally, California enacted legislation that creates a three-year moratorium on law enforcement agencies’ use of any biometric surveillance system in connection with police-worn body cameras. Maryland’s state legislature also recently passed legislation that would prohibit the use of facial recognition technologies during job interviews without the applicant’s consent. In addition to these examples of enacted legislation, several states have proposed legislation, detailed in the response to question 10.

6 How are government authorities enforcing and monitoring compliance with AI legislation, regulations and practice guidance? Which entities are issuing and enforcing regulations, strategies and frameworks with respect to AI?

While there is no comprehensive US AI legislation, agencies are focusing on how existing laws, regulations and guidance might apply to AI, including in the enforcement context. For example, on the federal level, the FTC released a guidance document on 10 April 2020 (the FTC AI Guidance), which discusses existing FTC guidance that already applies to AI and algorithms and outlines five principles for AI and algorithm use. The FTC AI Guidance notes that certain AI applications must comply with the Fair Credit Reporting Act, the Equal Credit Opportunity Act and Title VII of the Civil Rights Act of 1964. It also cautions that the manner in which data is collected for AI use could give rise to liability. The FTC further has demonstrated its role in this area by hosting hearings and workshops, such as its January 2020 workshop on the benefits and potential misuses of voice-cloning technologies.

Other agencies are considering sector-specific regulation. For example, the FDA is exploring a proposed regulatory framework for AI that would allow modifications to approved algorithms. Currently, the FDA approves therapeutic AI technologies only as ‘locked’ algorithms that do not continually adapt, but the agency is developing a framework to allow modifications to algorithms based on real-world learning and adaptation, subject to an approved change control plan developed by the manufacturer.
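
To make the locked-versus-adaptive distinction concrete, the sketch below contrasts a ‘locked’ model, which refuses any post-approval modification, with an adaptive one whose updates are gated by a pre-specified performance bound. This is a hypothetical illustration only; the class names, threshold and validation logic are our own assumptions and not part of any FDA framework.

```python
# Hypothetical sketch: a 'locked' algorithm versus an adaptive algorithm
# whose post-approval changes are gated by a pre-approved change control
# plan. All names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from typing import List


@dataclass
class LockedModel:
    """A 'locked' algorithm: parameters are frozen at approval time."""
    weights: List[float]

    def predict(self, features: List[float]) -> float:
        return sum(w * x for w, x in zip(self.weights, features))

    def update(self, new_weights: List[float], validation_accuracy: float) -> None:
        raise RuntimeError('Locked algorithm: no post-approval modification.')


@dataclass
class AdaptiveModel(LockedModel):
    """An adaptive algorithm: may retrain on real-world data, but only
    within the bounds of a pre-specified change control plan."""
    min_validation_accuracy: float = 0.95  # illustrative pre-approved bound

    def update(self, new_weights: List[float], validation_accuracy: float) -> None:
        # The change control plan pre-specifies how each modification must
        # be validated before it is deployed.
        if validation_accuracy < self.min_validation_accuracy:
            raise ValueError('Change rejected: below pre-approved performance bound.')
        self.weights = new_weights
```

Under this sketch, the locked model simply rejects every update, while the adaptive model accepts only changes that clear the validation gate agreed in advance.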

7 Has your jurisdiction participated in any international frameworks for AI?

As noted above, the United States joined the AI Principles adopted by the OECD and embraced by the G20. On 15 June 2020, the United States announced its participation in the Global Partnership on AI (GPAI), an effort launched during 2020’s G7 ministerial meeting on science and technology that aims to enhance multi-stakeholder cooperation in the advancement of AI reflecting shared democratic values, with an initial focus on responding to covid-19. The GPAI will initially comprise four working groups focused on responsible AI, data governance, the future of work, and innovation and commercialisation.

8 What have been the most noteworthy AI-related developments over the past year in your jurisdiction?

The most noteworthy AI developments at the federal level include the Trump administration’s February 2019 EO and January 2020 draft agency guidance, the FTC AI Guidance and related actions, and trade control regulations.

It also is noteworthy that the US Patent and Trademark Office, the US Copyright Office and the World Intellectual Property Organization are examining issues pertaining to the protection of AI-related or AI-generated intellectual property.

In addition to these federal developments, states and localities have taken important steps of their own, including with respect to privacy and facial recognition technology, as discussed above, and the proposals described in question 10.

9 Which industry sectors have seen the most development in AI-based products and services in your jurisdiction?

As a result of the covid-19 pandemic, efforts within the healthcare industry to develop AI-based products and services have accelerated. On 16 March 2020, the Trump administration issued a call to action to the country’s artificial intelligence experts to develop new text and data mining techniques that can help the science community answer high-priority scientific questions related to covid-19. These efforts leverage the COVID-19 Open Research Dataset (CORD-19) made available by the Allen Institute for AI, the Chan Zuckerberg Initiative, Georgetown University’s Center for Security and Emerging Technology, Microsoft and the National Library of Medicine at the National Institutes of Health. During the covid-19 pandemic, a wide range of AI-enabled tools have been developed to help manage or combat the disease.

In addition to the covid-19 response, many other US industries are actively engaging in AI development, including healthcare, financial services, logistics and transportation. In healthcare, for example, digital therapeutics, such as clinical-grade sensors paired with AI-driven predictive analytics, are a major area of growth. In the financial sector, large banks report success in implementing AI to improve processes for anti-money laundering and know-your-customer regulatory checks. Additionally, paired with developments in mobile devices and biometrics, financial institutions reportedly are investing in more robust multifactor authentication measures using technologies such as facial recognition. AI also has tremendous potential to assist with supply chain and inventory management and other logistics.

10 Are there any pending or proposed legislative or regulatory initiatives in relation to AI?

While various federal legislative proposals have been introduced, it is unlikely that any will pass in the near term given the upcoming US federal elections and the disruptions caused by covid-19. Notably, one area of emerging consensus is support for private sector development of AI technologies through grant-making and increased availability of federal data sets, reflected in two recently introduced bipartisan, bicameral bills. The Endless Frontier Act (HR 6978 / S 3832) would, among other things, invest US$100 billion over five years in technology research, including AI and machine learning. The National AI Research Resource Task Force Act (HR 7096 / S 3890) would create a task force to propose a road map for developing and sustaining a national research cloud for AI, which would provide researchers with access to computational resources and large-scale data sets to foster the growth of AI.

A growing body of state and federal proposals addresses algorithmic accountability, facial recognition technology, and mitigation of unwanted bias and discrimination. Federal proposals directed at algorithmic accountability increasingly seek to require impact assessments when AI tools produce legal or similarly significant effects. Some recent congressional proposals, such as the Algorithmic Accountability Act (HR 2231 / S 1108) introduced by Democrats in both the House of Representatives and the Senate, also have included heightened testing requirements to monitor for bias (one such test is sketched below). Other federal bills, such as the Bot Disclosure and Accountability Act (S 2125), address interactions with AI systems, including interactions with ‘bots’ as well as ‘deepfake’ content. Some proposals also call for clear notice and the opportunity to opt out of algorithmic content curation. With regard to facial recognition technology, Congress has shown bipartisan interest in both government and commercial uses, as evidenced by the introduction of legislation regulating commercial uses and several hearings on potential threats posed by public and private use of these technologies.
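
As an illustration of the kind of bias monitoring these proposals contemplate, the sketch below applies a disparate-impact check, modelled on the ‘four-fifths rule’ used in US employment-discrimination analysis, to an algorithm’s outcomes. The data, group labels and threshold are hypothetical, and a statutory impact assessment would be considerably broader than this single metric.

```python
# Hypothetical sketch of a disparate-impact check on algorithmic decisions.
# The four-fifths (80 per cent) threshold follows US employment-discrimination
# practice; the data and group labels are purely illustrative.
from collections import defaultdict
from typing import Dict, List, Tuple


def selection_rates(decisions: List[Tuple[str, bool]]) -> Dict[str, float]:
    """Compute the favourable-outcome rate per group from (group, favourable) pairs."""
    totals: Dict[str, int] = defaultdict(int)
    favourable: Dict[str, int] = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {group: favourable[group] / totals[group] for group in totals}


def disparate_impact_flags(decisions: List[Tuple[str, bool]],
                           threshold: float = 0.8) -> Dict[str, bool]:
    """Flag each group whose selection rate falls below `threshold` times
    the highest group's rate (the four-fifths rule)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}


# Illustrative use: group B's rate (0.4) is half of group A's (0.8), so B is flagged.
sample = [('A', True)] * 8 + [('A', False)] * 2 + [('B', True)] * 4 + [('B', False)] * 6
print(disparate_impact_flags(sample))  # {'A': False, 'B': True}
```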

States are considering their own slate of related proposals. For example, the California State Assembly is considering the Automated Decision Systems Accountability Act of 2020, which would require monitoring and impact assessments for California businesses that provide ‘automated decision systems’, defined broadly as products or services using artificial intelligence or other computational techniques to make decisions. A Washington state bill (SB 5527) would direct the state’s chief privacy officer to adopt rules regarding the development, procurement and use of automated decision systems by public agencies. More broadly, facial recognition technology has attracted renewed attention from state lawmakers, with wholesale bans on state and local government agencies’ use of facial recognition gaining steam.

11 What best practices would you recommend to assess and manage risks arising in the deployment of AI?

Companies developing or deploying AI applications in the United States should be mindful that a number of existing laws, regulations and items of regulatory guidance may apply to their AI applications, including, but not limited to, those discussed above. Companies should seek to ensure compliance with these existing requirements and guidance, and should review decisions of any governmental authorities that may be relevant to their offering. Companies should also closely monitor state and federal legal developments and consider engaging with policymakers on AI legislative and regulatory developments to inform legal efforts in this area. To the extent that companies offer services outside the United States, they should expand these practices to other jurisdictions.

Although the legal landscape with respect to AI is still evolving, companies can take steps now to help manage potential risks that may arise when developing or deploying AI, as we discuss in our article ‘10 Steps To Creating Trustworthy AI Applications’ (www.covingtondigitalhealth.com/2020/05/7415/). These steps involve, among other things, adopting a governance framework to help build on and operationalise the applicable AI principles and to help ensure compliance with applicable laws and practices.

Ms Tiedrich extends sincere thanks to Doron Hindin, Christina Kuhn and Jonathan Wakely for their support in drafting this article.

The Inside Track

What skills and experiences have helped you to navigate AI issues as a lawyer?

At Covington, we take a holistic approach to AI that integrates our deep understanding of technology with our global, multi-disciplinary expertise. We have been working with clients on emerging technology matters for decades and have helped them navigate evolving legal landscapes, including at the dawn of cellular technology and the internet. We translate this experience into practical guidance that clients can apply in their transactions, public policy matters and business operations.

Which areas of AI development are you most excited about and which do you think will offer the greatest opportunities?

The development of AI technology is affecting virtually every industry and has tremendous potential to promote the public good, including to help achieve the UN Sustainable Development Goals by 2030. For example, in the healthcare sector, AI may continue to have an important role in helping to mitigate the effects of covid-19 and it has the potential to improve outcomes while reducing costs, including by aiding in diagnosis and policing drug theft and abuse. AI also has the potential to enable more efficient use of energy and other resources and to improve education, transportation, and the health and safety of workers. We are excited about the many great opportunities presented by AI.

What do you see as the greatest challenges facing both developers and society as a whole in relation to the deployment of AI?

AI has tremendous promise to advance economic and public good in many ways and it will be important to have policy frameworks that allow society to capitalise on these benefits and safeguard against potential harms. Also, as this publication explains, several jurisdictions are advancing different legal approaches with respect to AI. One of the great challenges is to develop harmonised policy approaches that achieve desired objectives. We have worked with stakeholders in the past to address these challenges with other technologies, such as the internet, and we are optimistic that workable approaches can be crafted for AI.