ForkLog

X Imposes Penalties for Unlabeled AI-Generated War Videos

The social network X will temporarily revoke monetization privileges for users who post AI-generated videos of armed conflicts without proper labeling. This was announced by Nikita Bier, the director of the platform’s algorithmic feed product.

Offenders will be excluded from the revenue-sharing program for 90 days. A repeat offense will result in a permanent ban from the monetization system. However, the platform does not plan to delete the posts or accounts themselves.

Bier emphasized that during wars, access to reliable information is critically important, and modern neural networks make it easy to mislead audiences.

The social network will identify fake content in two ways, relying on both automated detection and community reports.

The rule update comes amid a surge of generated videos about recent geopolitical conflicts.

The issue of fake content existed on the platform even before the widespread adoption of neural networks—users regularly passed off footage from the video game Arma 3 as real combat. However, accessible AI generators have significantly simplified the creation and dissemination of disinformation.

Meanwhile, X’s payout system financially rewards content creators for high engagement. This incentivizes premium account holders to post viral deepfakes without regard for facts, aiming for quick earnings.

X’s decision effectively shifts the responsibility for detecting fakes onto the user community. The penalty is limited to a temporary loss of income rather than the removal of disinformation from the platform.

Back in February, it was revealed that the US Army used Claude from Anthropic in an operation to capture Venezuelan President Nicolás Maduro.
