Friday, April 10, 2026

How AI Is Being Used to Spread Misinformation—and Counter It—During the L.A. Protests

As protests and social unrest continue in Los Angeles, a new development has emerged: AI deepfaked videos and chatbots. Both technologies are shaping the narrative and fact-checking false claims on X. While some view this as a worrying turn, these tools also have real potential to improve how we consume and share information.

Deepfaked videos have been circulating widely on social media. They use machine learning algorithms to superimpose one person's face onto another's body, making it appear that the person is saying or doing something they never did. The technology has been used to spread misinformation and propaganda, especially during political upheaval. Amid the current protests, however, deepfaked videos have also been used to raise awareness and highlight important issues.

One earlier example is the viral video of George Floyd, the African American man killed by police officers in Minneapolis in 2020. A deepfaked video attributed to artist Bill Posters superimposed Floyd's face onto the body of a man being lynched. The video drew attention to the ongoing issue of police brutality and sparked conversations about the use of AI in activism. Other deepfaked videos have been used to show the impact of climate change, raise awareness of the refugee crisis, and highlight other pressing social issues.

Meanwhile, AI chatbots are being used to fact-check false claims and news articles about the protests. These bots scan posts for misleading or inaccurate information and point users to verified, reliable sources. That matters in a digital environment where false information spreads quickly and can have serious consequences. With the help of AI chatbots, users can find accurate, trustworthy information, make informed decisions, and form their own opinions.
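The article does not describe how these fact-checking bots work internally, but the core matching step, comparing a user's claim against a database of already-verified statements, can be sketched in a few lines. Everything below is hypothetical and purely illustrative: the claim database, the verdicts, and the similarity threshold are invented for the example, not taken from any real system.

```python
import difflib

# Toy database of already-verified claims. A real fact-checking bot
# would query news archives and fact-checking databases instead;
# these entries are invented for illustration only.
VERIFIED = {
    "the national guard was deployed downtown": "confirmed",
    "city hall was set on fire": "false",
}

def fact_check(claim: str, threshold: float = 0.6) -> str:
    """Return the verdict for the closest known claim, or 'unverified'."""
    claim = claim.lower().strip()
    best_match, best_score = None, 0.0
    for known in VERIFIED:
        # Fuzzy string similarity; production systems would use
        # semantic embeddings rather than character matching.
        score = difflib.SequenceMatcher(None, claim, known).ratio()
        if score > best_score:
            best_match, best_score = known, score
    if best_match and best_score >= threshold:
        return VERIFIED[best_match]
    return "unverified"
```

A query like `fact_check("The National Guard was deployed downtown")` would return the stored verdict, while an unrecognized claim falls through to `"unverified"` rather than guessing, which is the behavior a trustworthy bot needs.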

AI chatbots are also being used to counter hate speech and racism on social media. They can identify and flag offensive language and educate users about the impact of their words, promoting a more civil and respectful online discourse and helping to create a safer, more inclusive digital space.
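The flagging step described above can be sketched minimally, assuming a simple blocklist approach. Real moderation systems rely on trained classifiers and human review rather than fixed word lists, and the placeholder terms below are stand-ins, not actual slurs:

```python
import re

# Placeholder blocklist for illustration; not real terms.
FLAGGED_TERMS = {"slur1", "slur2"}

def flag_post(text: str) -> bool:
    """Return True if the post contains a flagged term (whole-word match)."""
    # Tokenize into lowercase words so "Slur1!" still matches "slur1"
    # but substrings inside longer words do not trigger false flags.
    words = set(re.findall(r"[\w']+", text.lower()))
    return not words.isdisjoint(FLAGGED_TERMS)
```

Whole-word matching avoids the classic blocklist failure of flagging innocent words that merely contain an offensive substring, though even this is far cruder than the context-aware models actual platforms deploy.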

There are legitimate concerns about the misuse of these technologies, but their positive impact deserves recognition. Deepfaked videos and chatbots are empowering individuals to spread awareness of important issues, and they are promoting the critical thinking and fact-checking that the information age demands.

These developments also highlight the need for ethical guidelines and regulation. As with any technology, there is a risk of malicious use, so policies are needed to ensure AI is deployed responsibly, especially in sensitive and contentious situations like the ongoing protests.

In conclusion, the use of AI deepfaked videos and chatbots during the protests may have initially sparked concern, but their potential for positive impact cannot be ignored. These tools let individuals share important messages while encouraging critical thinking and fact-checking. Used responsibly and ethically, they can help bring about positive change in our society.