Investigators work the scene after a mass shooting at a supermarket in Buffalo, N.Y., May 16, 2022. A white 18-year-old entered the supermarket with the goal of killing as many Black patrons as possible and gunned down 10. That shooter claims to have been introduced to neo-Nazi websites and a livestream of the 2019 Christchurch, New Zealand, mosque shootings on the anonymous, online messaging board 4Chan. (AP Photo/Matt Rourke, File)

The social media posts are of a distinct type. They hint darkly that the CIA or the FBI is behind mass shootings. They traffic in racist, sexist and homophobic tropes. They revel in the prospect of a “white boy summer.”

White nationalists and supremacists, on accounts often run by young men, are building thriving, macho communities across social media platforms like Instagram, Telegram and TikTok, evading detection with coded hashtags and innuendo.

Their snarky memes and trendy videos are riling up thousands of followers on divisive issues including abortion, guns, immigration and LGBTQ rights. The Department of Homeland Security warned Tuesday that such skewed framing of the subjects could drive extremists to violently attack public places across the U.S. in the coming months.

Such threats and racist ideology have become so commonplace on social media that it’s nearly impossible for law enforcement to separate internet ramblings from dangerous, potentially violent people, Michael German, who infiltrated white supremacist groups as an FBI agent, told the Senate Judiciary Committee on Tuesday.

“It seems intuitive that effective social media monitoring might provide clues to help law enforcement prevent attacks,” German said. “After all, the white supremacist attackers in Buffalo, Pittsburgh and El Paso all gained access to materials online and expressed their hateful, violent intentions on social media.”

But, he continued, “so many false alarms drown out threats.”

DHS and the FBI are also working with state and local agencies to raise awareness about the increased threat around the U.S. in the coming months.

The heightened concern comes just weeks after a white 18-year-old entered a supermarket in Buffalo, New York, with the goal of killing as many Black patrons as possible. He gunned down 10.

That shooter claims to have been introduced to neo-Nazi websites and a livestream of the 2019 Christchurch, New Zealand, mosque shootings on the anonymous, online messaging board 4Chan. In 2018, the white man who gunned down 11 at a Pittsburgh synagogue shared his antisemitic rants on Gab, a site that attracts extremists. The following year, a 21-year-old white man who killed 23 people at a Walmart in the largely Hispanic city of El Paso, Texas, shared his anti-immigrant hate on the messaging board 8Chan.

References to hate-filled ideologies are more elusive across mainstream platforms like Twitter, Instagram, TikTok and Telegram. To avoid detection by artificial intelligence-powered moderation, users don’t use obvious terms like “white genocide” or “white power” in conversation.

They signal their beliefs in other ways: a Christian cross emoji in their profile or words like “anglo” or “pilled,” a term embraced by far-right chatrooms, in usernames. Most recently, some of these accounts have borrowed the pop song “White Boy Summer” to cheer on the leaked Supreme Court draft opinion on Roe v. Wade, according to an analysis by Zignal Labs, a social media intelligence firm.

Facebook and Instagram owner Meta banned praise and support for white nationalist and separatist movements on its platforms in 2019, but the shift toward subtlety makes the posts difficult to moderate. Meta says it has more than 350 experts, with backgrounds ranging from national security to radicalization research, dedicated to ridding its platforms of such hateful speech.

“We know these groups are determined to find new ways to try to evade our policies, and that’s why we invest in people and technology and work with outside experts to constantly update and improve our enforcement efforts,” David Tessler, the head of dangerous organizations and individuals policy for Meta, said in a statement.

A closer look reveals hundreds of posts steeped in sexist, antisemitic, racist and homophobic content.

In one Instagram post identified by The Associated Press, an account called White Primacy appeared to post a photo of a billboard that describes a common way Jewish people were exterminated during the Holocaust.

“We’re just 75 years since the gas chambers. So no, a billboard calling out bigotry against Jews isn’t an overreaction,” the pictured billboard said.

The caption of the post, however, denied gas chambers were used at all. The post’s comments were even worse: “If what they said really happened, we’d be in such a better place,” one user commented. “We’re going to finish what they started someday,” another wrote.

The account, which had more than 4,000 followers, was removed Tuesday, immediately after the AP asked Meta about it. Meta has banned posts that deny the Holocaust on its platform since 2020.

U.S. extremists are mimicking the social media strategy used by the Islamic State group, which turned to subtle language and images across Telegram, Facebook and YouTube a decade ago to evade the industry-wide crackdown on the terrorist group’s online presence, said Mia Bloom, a communications professor at Georgia State University.

“They’re trying to recruit,” said Bloom, who has researched social media use by both Islamic State terrorists and far-right extremists. “We’re starting to see some of the same patterns with ISIS and the far-right. The coded speech, the ways to evade AI. The groups were appealing to a younger and younger crowd.”

For example, on Instagram, one of the most popular apps for teens and young adults, white supremacists amplify each other’s content daily and point their followers to new accounts.

In recent weeks, a cluster of those accounts has turned its sights on Pride Month, with some calling for gay marriage to be “re-criminalized” and others using the #Pride or rainbow flag emoji to post homophobic memes.

Law enforcement agencies are already monitoring an active threat from a young Arizona man who says on his Telegram accounts that he is “leading the war” against retail giant Target for its Pride Month merchandise and children’s clothing line and has promised to “hunt LGBT supporters” at the stores. In videos posted to his Telegram and YouTube accounts, sometimes filmed at Target stores, he encourages others to go to the stores as well.

Target said in a statement that it is working with the local and national law enforcement agencies that are investigating the videos.

As society becomes more accepting of LGBTQ rights, the issue may be especially triggering for young men who have held traditional beliefs around relationships and marriage, Bloom said.

“That might explain the vulnerability to radical belief systems: A lot of the beliefs that they grew up with, that they held rather firmly, are being shaken,” she said. “That’s where it becomes an opportunity for these groups: They’re lashing out and they’re picking on things that are very different.”

___

Associated Press writer Ben Fox in Washington contributed to this report.