Newsweek: Elon Musk Is Amplifying Bigotry. He Must Be Stopped | Opinion

By Debbie Wasserman Schultz & Anthony Housefather
Congresswoman; Canadian Parliament Member

Antisemitism exploded across social media after Oct. 7. Our feeds were flooded with glorifications of terrorists, demands for global intifada, and violent threats targeting Jewish leaders. Yet online Jewish hate had surged to record highs even before then, helping fuel similarly historic spikes in antisemitic incidents and hate crimes last year, according to the ADL and FBI.

So why do social media platforms seem hardwired for antisemitic hatred?


We know the answer: Social media algorithms serve us the content most likely to absorb our time and provoke a reaction, with controversial posts eliciting frenzied responses. This business model built on discord prioritizes inflammatory content that triggers engagement and views, which in turn generates revenue.

Reconciling these conflicting objectives will not be easy. But one platform's complete abdication of responsibility shows the dramatic impact of leaving these algorithms to their own devices. Twitter, now known as X, was beset by these contradictions before Elon Musk purchased the company last year. Since then, Musk has systematically degraded or eliminated every policy, practice, and person responsible for governing hateful content.

The results are predictable: Since Oct. 7, antisemitism has surged by over 900 percent on X, a far greater rate than on other mainstream sites and even fringe spaces like 4chan and Gab.

Over the past year, as co-chairs of the Interparliamentary Task Force to Combat Online Antisemitism, we met with social media and technology companies—including X—to share these concerns about the rapid development of AI and its role in accelerating the spread of hate. If the AI revolution compares to automobiles overtaking horse-drawn wagons, let's be clear: Artificial intelligence is still in its Model-T era.

As elected representatives, we are obligated to safeguard emerging technologies from abuse and prevent unforeseen harm. That means requiring safety features to protect users, insulate bystanders, and reduce damage to the broader environment. It also means applying strict scrutiny to those behind the wheel. Like most tools, AI is not inherently good or evil. It depends on how it is developed, deployed, and by whom.


Mr. Musk has made it clear that he has no interest in minimizing digital harm. But the contradiction between his lofty objectives and their predictable outcomes is at the root of X's self-destruction. Musk's goal of enhancing Twitter's profitability has been undercut by his embrace of extremists. Advertisers are closing their checkbooks. Musk's grand vision of democratization quickly collapsed into pay-for-play, blue-check patronage that grants false legitimacy and incentivizes the spread of disinformation.

And his free speech absolutism is betrayed by his intolerance for criticism and fondness for defamation suits. By dismantling one of the world's most frequented public squares, he's done more damage to political discourse than some of history's most infamous censors.

Mr. Musk's amplification of antisemitic tropes and attacks on Jewish organizations would be immaterial if he were merely another anonymous user. But his position as the internet's self-appointed authority on acceptable speech, his sway over the future of AI, and his unrivaled reach—Musk has ten times more followers than there are Jews on Earth—make his choices all the more significant.

While Mr. Musk apologized for amplifying an antisemitic tweet and took a much-publicized trip to Israel, he also repudiated advertisers concerned about extremism on X in the most vulgar way possible.

In responsible hands, with responsible regulations, AI can be indispensable in countering hatred online rather than intensifying it. Our Interparliamentary Task Force will continue to convene global lawmakers and digital platforms in search of solutions to mitigate the hazards presented by AI and its algorithms. This dialogue gives us hope that social media and AI leaders can use this remarkable technology to build a better internet.

Mr. Musk's amplification of the antisemitic fringe is slowly strangling his company. It's not too late for him or his enablers to acknowledge this failure and deliver reforms to stop antisemitism on X. But he's running out of time to do so.

U.S. Rep. Debbie Wasserman Schultz (D-FL) represents South Florida in Congress, and Member of Parliament Anthony Housefather represents a Montreal-area district in the Canadian House of Commons. They co-chair the Interparliamentary Task Force to Combat Online Antisemitism, a multi-partisan group of parliamentarians from around the globe.

The views expressed in this article are the writers' own.
