In November 2025, the technology landscape is experiencing a dramatic transformation.
Google NotebookLM introduced a new “Deep Research” mode that enables users to choose between quick or in-depth research styles, while OpenAI launched a limited regional pilot of ChatGPT group chats, enabling multi-user collaboration inside a single AI conversation.
Yet as these innovations reshape how we access information, a parallel crisis threatens the very foundation of reliable tech journalism: the erosion of professional fact-checking.
The Fact-Checking Retreat
The year 2025 marks a turning point for information verification. In January 2025, Meta announced it would end its third-party fact-checking program in the United States, a decision that sent shockwaves through the global fact-checking community.
According to the Duke Reporters’ Lab, there are now 443 active fact-checking projects, down about 2 percent from 451 active at the end of 2024.
This represents a stark reversal from just three years earlier: the number of active fact-checking projects worldwide peaked at 457 in 2022, according to data collected by the Duke Reporters’ Lab.
The boom years appear to be over. The number of projects grew from 110 in 2014 to 453 by the end of 2022, an increase of more than 300 percent, before plateauing.
The AI Misinformation Challenge
The rise of generative AI has fundamentally altered the misinformation landscape, giving individuals powerful tools to create convincing falsehoods that spread rapidly and undermine traditional fact-checking and information validation methods.
The scale of the challenge is staggering. AI systems can generate convincing fake text, images, audio, and video (often referred to as ‘deepfakes’), making it increasingly difficult to distinguish authentic content from synthetic creations.
The AI systems meant to help us find information have become part of the problem. “These tools hallucinate, a pleasant way of saying they make things up,” warned Angie Holan at the GlobalFact conference.
Real-world examples illustrate the problem. In 2024, users asked Google’s AI search feature how to keep cheese from sliding off pizza. The suggested solution? Nontoxic glue. The example is easy to laugh off, but it shows how confidently these systems can present fabricated advice as fact.
The Community Notes Experiment
As professional fact-checking retreats, platforms are turning to crowdsourced alternatives. Meta CEO Mark Zuckerberg highlighted that content moderation at Meta would shift to a community labeling model similar to X, formerly Twitter. However, research suggests this approach has significant limitations.
A large-scale study found little evidence that the introduction of Community Notes significantly reduced engagement with misleading tweets on X. Rather, it appears that such crowd-based efforts might be too slow to effectively reduce engagement with misinformation in the early and most viral stage of its spread.
Effectiveness varies by implementation, however. A team of researchers from the University of Illinois Urbana-Champaign and the University of Rochester found that X’s Community Notes program can reduce the spread of misinformation and even prompt authors to retract posts.
Yet experts agree that no single approach works in isolation. Content moderation is a complex problem with no silver bullet that works in all situations; it can only be addressed with a variety of tools, including human fact-checkers, crowdsourcing, and algorithmic filtering.
Current State of Tech Innovation
Despite these challenges, technology continues to advance at breakneck speed: 70% of executives and 85% of investors (across venture capital, private equity, and commercial banking) rank AI agents among the three most impactful technologies for 2025. The industry is witnessing major developments across multiple fronts:
- Compute and Infrastructure: IBM announced “Loon,” an experimental quantum chip that advances its approach to error correction by blending quantum hardware with classical signal-processing concepts. Meanwhile, Alphabet Inc. plans to invest approximately €5 billion (US$5.8 billion) in Germany, including a new data center near Frankfurt.
- Cybersecurity: The US CISA, the FBI, and global partners have issued updated threat intelligence on the Akira ransomware group, outlining new tactics, techniques, and procedures (TTPs) targeting small businesses and critical infrastructure alike. A companion analysis shows Akira has hit more than 250 organizations since March 2023, extracting over $42 million in ransom payments.
- Market Dynamics: Fresh data shows global equity fund inflows falling to a four-week low, driven largely by concerns that tech and AI valuations may be overheating.
Emerging Technologies for 2025
Looking beyond AI, the Top 10 Emerging Technologies of 2025 report, developed in collaboration with Frontiers, highlights ten innovations with the potential to reshape industries and societies, spanning from structural battery composites and engineered living therapeutics to osmotic power and AI-generated-content watermarking.
How to Find Accurate Tech News
In this challenging environment, consumers of tech news must become more sophisticated. Think twice before you share that social media post; don’t hit reshare until you ask yourself, ‘Am I reasonably sure that this is accurate … does this seem plausible?’
Practical strategies include:
- Verify Sources: Check the credibility of any source material cited in a social media post, and look for established tech journalism outlets with strong editorial standards.
- Use Fact-Checking Tools: Google’s “Fact Check Explorer” is a searchable database of fact-checking articles from around the world.
- Check Images: If a post includes an image you suspect is fake, reverse image search engines such as Google and TinEye can help you find the original and see where and when it first appeared online.
- Distinguish News from Opinion: Cable TV commentators, podcasters, and columnists have blurred the line between news and opinion. When a post cites a news source, consider whether it is sharing fact-based reporting or someone’s opinion of the news.
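The fact-checking tools mentioned above can also be queried programmatically. As a minimal sketch, the snippet below builds a request against Google’s public Fact Check Tools API (the `claims:search` endpoint, which backs Fact Check Explorer); the `YOUR_API_KEY` value is a placeholder, since a real key from Google Cloud is required to fetch results.

```python
import json
import urllib.parse
import urllib.request

# Public endpoint behind Fact Check Explorer's claim search.
API_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_claim_search_url(query: str, api_key: str, language: str = "en") -> str:
    """Build a claims:search request URL for a given claim text."""
    params = urllib.parse.urlencode({
        "query": query,
        "languageCode": language,
        "key": api_key,  # placeholder -- supply a real Google Cloud API key
    })
    return f"{API_ENDPOINT}?{params}"

def search_claims(query: str, api_key: str) -> list:
    """Fetch fact-check articles matching the claim (requires network and a key)."""
    with urllib.request.urlopen(build_claim_search_url(query, api_key)) as resp:
        data = json.load(resp)
    # Each entry pairs the claim text with publisher reviews and ratings.
    return data.get("claims", [])

url = build_claim_search_url("glue on pizza", "YOUR_API_KEY")
print(url)
```

A newsroom or researcher could wrap `search_claims` in a small script to check trending posts against existing fact-checks before resharing them.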
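Reverse image search engines like those above typically rely on perceptual hashing: two visually similar images produce nearly identical fingerprints even after re-encoding or brightness tweaks. The sketch below shows a minimal average-hash (“aHash”) on a synthetic 8×8 grayscale grid; real services decode and resize actual image files, which is omitted here for simplicity.

```python
# Minimal average-hash sketch: 1 bit per pixel, set if the pixel is
# brighter than the image's mean. Similar images -> similar bit patterns.

def average_hash(pixels: list) -> int:
    """Hash an 8x8 grayscale grid (values 0-255) into a 64-bit integer."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Count differing bits; a small distance suggests near-duplicate images."""
    return bin(a ^ b).count("1")

# A synthetic gradient "image" and a slightly brightened copy hash
# almost identically, so they would match in a similarity lookup.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
tweaked = [[min(255, v + 10) for v in row] for row in original]
assert hamming_distance(average_hash(original), average_hash(tweaked)) <= 4
```

Production systems use more robust variants (DCT-based pHash, for example) and index the hashes for fast nearest-neighbor lookup, but the core idea is the same.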
The Role of Technology Companies
The responsibility for combating misinformation doesn’t rest solely with consumers. Technology companies need quality, fact-checked information to train their models, which gives reason for hope: journalism and the journalistic process remain essential, and without them the technology cannot sustain itself.
Several organizations are developing AI-powered tools to fight AI-generated misinformation. Full Fact’s technology uses generative AI to monitor and detect misinformation at internet scale, allowing small groups of people to find, check and challenge the most harmful claims.
News agencies such as Agence France-Presse (AFP) have developed AI-supported verification tools, including Vera.ai and WeVerify. Detectors such as True Media, Hive, AI or Not, Hugging Face, and Neuraforge use machine learning and forensic analysis to flag AI-generated deceptive content.
The Global Dimension
Fact-checkers warn that once Meta detaches itself from third-party fact-checkers in the United States, where the bulk of Facebook’s fact-checking budget is spent, the biggest source of revenue for those organizations disappears, and the quality of fact-checking will fall everywhere.
This creates a particularly acute problem for developing regions where misinformation can have severe real-world consequences.
Video-based platforms such as TikTok and YouTube pose particular challenges: video fosters parasocial relationships, demands little literacy from viewers, and packs more information into shorter messages.
Additionally, platforms such as WhatsApp function primarily as peer-to-peer messaging networks, where misinformation spreads more like rumor, often flying under the media radar and reaching communities with less internet access through trusted peer channels.
The Path Forward
The current moment represents both crisis and opportunity. While the overall scope of verified AI-generated misinformation remains limited, the findings underscore its growing potential to disrupt the information ecosystem.
Despite these challenges, fact-checking organizations remain committed to their mission. Full Fact, for example, wholeheartedly rejects claims of bias, noting that Meta has provided no proof for such claims and that there is no reason to overturn a system of independent fact-checking that puts reliable, evidence-based verdicts at users’ fingertips.
The solution requires collaboration across multiple stakeholders. Disinformation is borderless, spreading across global platforms, yet efforts to corral it still run through regulators in individual countries.
We can derive hope from the fact that we’ve dealt with similar global issues before, such as financial corruption, counterterrorism, and pandemics, and have found ways to devise systems that work across much of the world.
Conclusion
As we navigate late 2025, the tech industry faces a paradox: the same AI technologies driving innovation are simultaneously undermining our ability to distinguish truth from fiction.
Professional fact-checking is under pressure, community-based alternatives show mixed results, and AI-generated misinformation is becoming increasingly sophisticated.
For consumers of tech news, the message is clear: approach all information with healthy skepticism, verify sources, use available fact-checking tools, and recognize that no single platform or approach can guarantee accuracy.
The search for reliable tech news requires active participation, critical thinking, and a commitment to information literacy.
The future of accurate tech journalism depends not just on better tools or more funding, but on a shared recognition that in an age of artificial intelligence, human judgment and professional journalistic standards remain irreplaceable.
As we continue to develop technologies that can generate convincing falsehoods at scale, our investment in systems that verify truth must scale proportionally—or we risk losing our ability to distinguish innovation from illusion.