Opinion: Don’t hate AI. Enlist AI to support hate victims.

The most recent wave of hate against Haitian Americans feels depressingly familiar: rumors and lies, amplified online and by prominent leaders, invoke racist tropes and incite violence. Already, bomb threats in Springfield, Ohio, have forced schools and city buildings to close. An entire community fears that its most vulnerable members ‒ immigrants and children ‒ will fall victim.
As researchers who study the effects of anti-Asian hate, we’ve seen this all before. The challenge remains the same: how to support community members facing hate and discrimination. 
When anti-AAPI hate incidents, fueled by xenophobic rhetoric, surged during the COVID-19 pandemic, the sudden urge to aid Asian Americans and Pacific Islanders ‒ too often dismissed as monolithic or as model minorities ‒ was a welcome change. Community-based organizations provided legal services, case management, mental health services and community gatherings to the elderly, recent immigrants and residents with limited English living in enclaves such as Chinatown.
But Asian Americans and Pacific Islanders still need more support.
Anti-Asian hate didn’t start with the pandemic, and it certainly didn’t end there. A new report by Stop AAPI Hate and the National Opinion Research Center at the University of Chicago found that last year, 49% of Asian Americans and Pacific Islanders were victims of a hate act in the United States.
Our own research uncovered a troubling paradox within this trend. Surveying 835 Asian Americans in Los Angeles and New York City about their experiences with hate and community services, we found ‒ to our surprise ‒ that U.S.-born or early-immigrant Asian Americans who were financially better off were more likely to report hate incidents than older, first-generation immigrants.
Yet even though they reported hate more often, they found it harder to get help. Some of these younger, U.S.-born Asian Americans told us they didn’t seek help because they had more pressing concerns or doubted that reporting an incident would change anything.
These findings point to a larger disconnect.
Community-based organizations, while providing crucial services, tend to zero in on specific groups ‒ the elderly or non-English-speaking immigrants who live nearby ‒ which means they can inadvertently overlook other AAPIs who blend in but are equally in need.
As a result, despite reporting discrimination at a higher rate than older, first-generation immigrants, these younger, more assimilated AAPIs weren’t able to get services from community-based organizations and didn’t feel they had anywhere else to turn.
That makes it imperative to help community organizations reach and support more Asian Americans and Pacific Islanders. And that’s where increasingly available technologies, such as artificial intelligence, could help.
Busy AAPI parents and professionals, like many of us, might be most easily reached on their smartphones, where AI-powered mental health apps already provide help when and where people need it.
For AAPIs not served by community-based organizations, such AI tools could be a game changer, but only if one-size-fits-all apps evolve into culturally attuned resources designed to support the well-being of specific communities.
Imagine a victim-service chatbot, armed with a community-based organization’s deep knowledge of local information and opaque regulations, that can talk fluently with young AAPI professionals and their elderly relatives alike. Following a hate incident, the victim could describe what happened, and the chatbot could suggest how to report it, confirm its entry into hate incident databases, describe the help available, contact a caseworker and perhaps even assess whether the incident could be prosecuted as a hate crime.
There are reasons to be skeptical of enlisting AI to fight hate. After all, hate proliferates on AI-powered social media, chatbots and image generators produce racial stereotypes, and many Americans are wary of AI in daily life.
However, a chatbot dedicated to assisting victims of hate might help address such concerns. Mental health chatbots have their problems, but they also have advantages, such as providing answers in the language people prefer.
To be sure, much work lies ahead to develop such a chatbot. For one, chatbots would need to be culturally tailored, as many existing ones do not adequately capture cultural nuance and thus risk worsening bias.
For instance, Black AI founders have launched their own chatbots to address what they see as shortcomings in how well ChatGPT and other AI tools understand Black history and culture.
Likewise, cultural identities influence how AAPIs experience stress and distress, access services and seek help. Chatbots would need to recognize lesser-known challenges ‒ like the fact that income inequality is greater among Asian Americans than among any other U.S. racial group, and that it is highest among Chinese Americans within that group.
Tech developers and founders could seize this opportunity to build artificial intelligence tools that resonate with and support the diverse needs of Asian Americans, Haitian Americans or others affected by hate. AI companies could bolster language and cultural capabilities. With adequate resources, community-based organizations could adapt their services to be inclusive of all generations and provide the evidence-based guidance needed to develop such chatbots.
Much of the recent excitement around AI hinges on its potential to upend society ‒ for both good and ill. But meaningful change often starts within a community. California, as one example, has initiated an array of efforts to combat hate against Americans who are Black, transgender or Muslim, among others.
Supporting Asian Americans and others who experience hate could be just one way to harness AI technology for good. Done right, it could transform the healing capacity of communities still grappling with widespread hate, ensuring that no one is left unseen or left behind.
Douglas Yeung is a senior behavioral scientist at RAND and a professor of policy analysis at the Pardee RAND Graduate School. Lu Dong is a behavioral scientist at RAND and a professor of policy analysis at the Pardee RAND Graduate School.
