March 21, 2024
Pressley Calls for Congress to Address Risks of Artificial Intelligence on Marginalized Communities
“AI algorithms trained on skewed, inaccurate, or unrepresentative data magnify human biases, lead to discriminatory outcomes.”
“We have a responsibility to be innovative in our efforts in order to build reliable protections for everyone, especially those who have historically been left behind or targeted.”
WASHINGTON – Today in a House Oversight and Accountability Subcommittee hearing, Congresswoman Ayanna Pressley (MA-07) highlighted the risks of artificial intelligence (AI) and called on Congress to take action to protect vulnerable communities from AI.
In her line of questioning with Dr. Nicol Turner Lee, Director of the Center for Technology Innovation at the Brookings Institution, Rep. Pressley praised the Biden-Harris Administration’s executive order (EO) for taking steps to mitigate harm from AI and urged Congress to ensure biased algorithms do not exacerbate existing disparities.
Footage from her exchange with the witness can be found here, and a transcript is below.
Transcript: Rep. Ayanna Pressley Calls on Congress to Address Risks of Artificial Intelligence on Marginalized Communities
House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation
March 21, 2024
REP. PRESSLEY: Thank you, Madam Chair and thank you to our witnesses for being here today.
As a member of the Financial Services Committee’s Bipartisan Working Group on Artificial Intelligence, I have no doubt that while AI presents opportunities for progress, it also poses significant risks, from undermining our privacy to inciting political violence to spreading disinformation.
Congress has been slow to act, forcing the Biden-Harris Administration to take executive action to enforce standards and guardrails.
The AI EO does just that. And to suggest that the White House is overstepping, especially when just last week this subcommittee heard devastating testimony on AI’s infringement on the privacy and civil rights of women and girls—
So, that overreach characterization is absurd, in my opinion.
Dr. Turner Lee, in what ways can AI pose disproportionate threats to people from marginalized backgrounds?
DR. TURNER LEE: That is an area I spend a lot of time on, and when I think about the effects on marginalized populations, there are a couple of things.
One, the lack of transparency of AI systems, and particularly how they factor into predictive decision making or eligibility concerns, can foreclose on equal opportunities.
People don’t know what those factors are that are going into credit decisions, housing decisions, criminal justice decisions, and the like.
I’d also say that people of color are disproportionately impacted by deepfakes and misinformation.
The lack of transparency is an issue. Deepfakes affect anybody in any state, in any party, when you actually look at it, but the lack of transparency particularly affects communities of color, who have less agency.
And then finally, I would say criminal justice. I just spent a year and a half with the National Academies on facial recognition use in law enforcement, and in that application of AI we also see a lot of vulnerabilities as well.
REP. PRESSLEY: Thank you. Yes, AI algorithms trained on skewed, inaccurate, or unrepresentative data magnify human biases [and] lead to discriminatory outcomes.
The previous administration, for example, had an abysmal record of using technology to incarcerate and to persecute communities of color.
The Trump Administration used AI to identify lawful protesters during the George Floyd protests, to employ racist algorithms within Immigration and Customs Enforcement, to profile Muslims entering the country, and to haphazardly arrest Chinese Americans during its China Initiative.
Meanwhile, President Biden’s Executive Order takes unprecedented action to allow innovation while protecting people’s privacy and civil rights.
Dr. Turner Lee, are the steps outlined in the Biden-Harris Administration’s EO sufficient to address biases in AI that can lead to discriminatory outcomes?
DR. TURNER LEE: I wholeheartedly agree.
I applaud this administration for including words like equity and parity as part of the EO in very outright ways, so that we address this issue head-on.
I also think, to your point, and to the earlier conversation from my colleagues around the government use of AI, the EO draws a very clear distinction between government surveillance used with malicious intent, versus resiliency, which is federal agencies simply having clearer pathways for their use of AI generally, whether in benefits decisions, criminal justice decisions and actions, and so forth.
REP. PRESSLEY: Thank you, and Dr. Turner Lee, what elements of the EO can Congress strengthen to ensure that advances in AI technology are not used to further involve people with the criminal legal system?
DR. TURNER LEE: I think that Congress can take some steps, and there has been some bipartisan support around the use of facial recognition technology, and how we not necessarily ban it, but put in place some guardrails that make sense for various communities.
I think Congress can also act on data privacy legislation. That legislation will allow some sense of guidance on what data can be collected, and in the area of biometric collection that can also safeguard communities of color.
I think conversations on election and AI infrastructure and architecture should be of concern, and they have been, on a bipartisan level.
I think all of us are concerned about the integrity of our elections given the use of artificial intelligence and generative AI. So I think there’s a host of them.
I’m happy to share more of those with you, Congresswoman, going forward.
REP. PRESSLEY: Thank you.
And, you know, certainly we have a responsibility to be innovative in our efforts in order to build reliable protections for everyone, especially those who have historically been left behind or targeted.
So I invite all my colleagues to link arms and minds, if you will, in carrying out that work, whether it’s the use of facial recognition technology to criminalize people of color, deepfake pornography to degrade women, or biased algorithms to keep vulnerable community members from accessing critical resources.
Existing equity concerns are at risk of being worsened for people in my district, the Massachusetts 7th, and across our country.
Thank you and I yield back.
###