[Image: A holographic sphere hovering over a hand. The center of the sphere reads "compliance"; around its outer edges are the words "requirements," "regulations," "policies," "rules," and "standards."]

The Quest For Responsible AI (A Regulatory Sampler)

At this point, you’ve probably heard a lot of talk about artificial intelligence. Much has transpired since ChatGPT debuted over a year ago. Commensurate with this rapid pace of technology is an urgency to safeguard AI before things advance beyond human control. The novelty of AI coupled with such urgency invites conjecture. The global community is left to ponder what could go wrong if AI is left unchecked. That can certainly be a scary prospect. But to quote the English poet Thomas Hardy, “Fear is the mother of foresight.” 

Perhaps the need for sensible, practical regulation may focus the collective discourse just long enough to realize the opportunity that lies ahead—to see human efficiency, productivity, and creativity reach unimaginable heights. Foresight is certainly needed if we’re to harness AI’s full potential. Concern is understandable, but the questions derived from such concerns may prove more valuable. So rather than obsess over comparisons to The Matrix or other doomsday scenarios, let’s take a look at some proposed regulations for AI here in the United States. 

Common themes for AI regulation tend to center around discussions of transparency, control, ownership, privacy, and security. People want to be made aware of the fact that they’re engaging with AI. They want to ensure that AI remains within the realm of human control. They’re concerned about the privacy, security, and proprietary implications posed by systems that consume billions of data points for training purposes. The reach and scope of AI are seemingly limitless. When it comes to matters of civil and criminal action, AI stands to alter the landscape of copyright law, intellectual property, patent law, civil rights, national security, fraud, and privacy—just to name a few. Indeed, AI has taken center stage in our national, and international, dialogue. And the legal world is no exception. So it’s no wonder legislators throughout the United States, from state capitols to Washington, D.C., have shifted gears to discuss proposed regulations.

In late 2022, The White House published its Blueprint for an AI Bill of Rights to serve as a guide for responsible implementation. The blueprint centers on five principles: 1) safe and effective systems, 2) algorithmic discrimination protections, 3) data privacy, 4) notice and explanation, and 5) human alternatives, consideration, and fallback. This blueprint applies where AI may “meaningfully impact the American public’s rights, opportunities, or access to critical resources or services,” a focus often replicated in state-level proposals. President Biden later announced a series of voluntary actions adopted by seven leading tech companies to utilize AI in a safe, responsible, and transparent manner:

“First, the companies have an obligation to make sure their technology is safe before releasing it to the public. That means testing the capabilities of their systems, assessing their potential risks, and making the results of these assessments public. Second, companies must prioritize the security of their systems by safeguarding their models against cyber threats, managing the risk to our national security, and sharing the best practices and industry standards that are necessary. Third, the companies have a duty to earn the people’s trust and empower users to make informed decisions labeling content that has been altered or AI-generated, rooting out bias and discrimination, strengthening privacy protections, and shielding children from harm. And finally, companies have agreed to find ways for AI to help meet society’s greatest challenges, from cancer to climate change, and invest in education and new jobs, to help students and workers prosper from the opportunities, and there are enormous opportunities, of AI.”

[Image: Low-angle view of the east entrance to the United States Capitol building in Washington, D.C., with marble dome and stairs.]

The president has since issued an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. As with the AI Bill of Rights blueprint, President Biden’s executive order focuses on themes of transparency, responsibility, security, privacy, and civil rights. 

As for Congress, the Senate Judiciary Committee has been particularly active, hosting a series of hearings to consider AI’s impact in the legal realm. In May of 2023, the Judiciary Committee invited representatives from the tech industry to share their thoughts on regulation at a hearing for Oversight of A.I.: Rules for Artificial Intelligence. Sam Altman, CEO of OpenAI, the company behind ChatGPT, made headlines with his stated concern that the industry could “cause significant harm to the world.” However, beyond this statement was an appeal by Altman to work with Congress on AI legislation:

“[W]e think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models. For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities. There are several other areas I mention in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures and examining opportunities for global coordination.”

In June of 2023, the Judiciary Committee held another hearing to examine AI with respect to human rights, discussing the threat of bad actors and invasive practices that result in exploitation, discrimination, and misinformation both domestically and abroad. Additionally, the Judiciary Committee held separate hearings to consider the many ways AI affects patent law and copyright, subjects that impact innovation, credit, and compensation for creative works.

On the state level, legislative proposals seek to learn more about AI and the ways it might impact government services. These bills exist in various stages of the legislative process, so the fate of many of these proposals remains unclear. However, a look at the current landscape offers insight into how legislators are thinking about harnessing AI.

California has seen considerable activity on the subject of AI:

  • The state enacted SCR 17 in support of the principles raised in President Biden’s Blueprint for an AI Bill of Rights.
  • SB 721 calls for the creation of a workgroup on artificial intelligence.
  • ACR 96 supports the adoption of Asilomar AI Principles to implement AI.
  • AJR 6 would enact a six-month moratorium on AI more powerful than GPT-4.
  • SB 313 (returned to the Secretary of the Senate) would have required transparency in AI interactions with the public and provided the option for human-to-human interaction upon request. SB 313 would have also required non-discriminatory, privacy, and civil rights and liberties protections with respect to AI, as well as created an administrative Office of Artificial Intelligence.

[Image: A woman looks at a futuristic computer screen where the image is projected out toward the user.]

New York is another state to show considerable activity with respect to proposed AI regulation:

  • New York’s Assembly Bill A 4969 (approved by the State Assembly and Senate, but eventually vetoed by Governor Kathy Hochul) would have created a commission to study AI and compare regulatory policy to other states.
  • Assembly Bill A 8195 would amend state technology law and criminal procedure to license high-risk forms of AI while prohibiting others. 
  • Assembly Bill A 8105 calls for an oath of responsible use when dealing with certain generative and surveillance AI. 

Other states proposing administrative AI departments, workgroups, and/or studies include the following:

  • Connecticut recently passed legislation, Raised Bill No. 1103, to create its Office of Artificial Intelligence as well as a task force charged with studying AI and ultimately creating the state’s own AI Bill of Rights.
  • Illinois’ recently adopted HB 3563 calls for the creation of a Generative AI and Natural Language Processing Task Force.
  • Louisiana’s SCR 49 seeks to “study the impact of artificial intelligence in operations, procurement, and policy.” 
  • Maryland’s HB 1068 calls for creating a Commission on Responsible Artificial Intelligence in Maryland.
  • North Carolina SB 460 proposes to study the influence of automation in the workforce with particular focus on the risk of displacement that AI may pose to low-income and minority workers.
  • Rhode Island’s HB 6423 proposes to study AI and seek suggestions with respect to expansion, implementation and security matters related to its use.
  • Texas’ HB 2060, recently signed into law, creates an artificial intelligence advisory council.

Other proposals examine the use of AI in providing government services:

  • Massachusetts legislators introduced H 1974 to regulate the use of AI in mental health services. 
  • Texas has proposed its own legislation on AI in mental health services, HB 4695.
  • California proposal SB 398 (returned to the Secretary of the Senate) would have studied the risks and benefits of using AI to assist citizens in obtaining government services, including housing, unemployment, and disaster relief. 

Other interesting takes on regulating AI include:

  • Maryland’s HB 996, which addresses civil liability (proposing strict liability) and criminal liability for AI implementation that results in injury or death.
  • Illinois HB 3285 proposing a disclosure requirement (and, if aggrieved, a right of action) regarding AI content that uses an individual’s voice or likeness without that person’s consent.
  • New Jersey’s S 3926, a proposal to expand the reach of identity theft crime to include fraudulent impersonations and false depictions via artificial intelligence and deepfake technology.
  • Pennsylvania’s HB 49, a proposal to amend current law to create a registry for businesses that use AI in the Commonwealth.
  • North Dakota’s enactment of HB 1361 declares that the term “person” does not include artificial intelligence.

[Image: A woman holds a tablet with an image projected up out of the screen toward the viewer.]

The world of deposition reporting may seem a far cry from some of the issues mentioned in this post, but the principles are similar. Protecting record testimony is a serious task. You should require a service that appreciates the importance of AI, but also incorporates human guardrails to steer it in the right direction. Readback was built on over 60 years of court reporting experience and 67,000 depositions to provide a fresh, game-changing approach to deposition reporting. A human Guardian works alongside a team of human transcribers and patented state-of-the-art technology to provide an empowering deposition experience. Readback’s flagship level of service, Active Reporting, offers certified transcripts in one day, rough drafts in one hour, and access to near-time text in less than a minute. Visit our website to see how AI can assist your deposition experience and learn The Case For Virtual Proceedings.

*Disclaimer: Readback is neither a law firm nor a substitute for legal advice. This post should not be taken as legal opinion or advice.

  • Jamal Lacy serves as the law clerk to InfraWare, Inc., a tech-enabled parent company to Readback. In addition to content creation, Mr. Lacy provides legal research and analysis with particular focus on matters of contract, civil procedure, regulatory compliance, and legislative policy. Mr. Lacy received his Bachelor of Arts in Political Science with departmental honors from Trinity College in Hartford, Connecticut, and his Juris Doctor degree from Suffolk University Law School in Boston, Massachusetts.

