For the humanitarian sector, the 2004 Indian Ocean tsunami marked the first widespread digital coverage of a disaster, and the wars in Afghanistan and Iraq in the early 2000s became the first armed conflicts dissected in real time by thousands of online commentators. This signalled the beginning of a decentralized digital information era, shaped less by traditional media and more by fast, participatory online spaces. Although early content often failed to reflect local realities, especially in contexts where local languages were absent from platforms, the evolving blogger community actively reported, verified online content and called out images that exaggerated harm2 and fabricated reporting3, analysing and criticising what they considered falsehoods, and sometimes getting it wrong4.
These online blogger communities emerged as a powerful force within the information ecosystem, driven by a mix of technological innovation and social shifts that enabled individuals to share their perspectives. This coincided with growing dissatisfaction with mainstream media, as bloggers offered more personal, unfiltered and immediate accounts of events. These blogs created spaces for alternative narratives, fostered networks of discussion and formed informal communities. Ultimately, these early blogger networks laid the foundation for participatory media and the decentralized communication models that followed.
For operational and communication professionals in the humanitarian sector, this period felt complex and fast-paced, yet the changes it heralded were unimaginable. Although humanitarian crises – especially wars – have always been accompanied by information intended to deceive, manipulate, damage reputations, disrupt activities and advance political or ideological agendas5, what has changed is the nature, speed and scale of such information. Today, a wide range of actors – professional or amateur, anonymous or overt – create and spread harmful information through the internet, mobile phones and social media platforms, as well as offline. Fact, truth and information sharing have become far less important for far too many – particularly in the online space. Opinion, emotion and perception are now dominant, motivated by financial, political or ideological reasons and intended to polarize and sow distrust.
Instead of facilitating access to trustworthy, life-saving information about disaster preparedness and response, the information space plays on people’s fears and uncertainties. Information about crises has become increasingly manipulated, politicized and fragmented, including denial of the severity or cause of a disaster, and sometimes even its very existence, as seen during the response to Hurricane Helene in the United States.
COVID-19 – an unparalleled health emergency in living memory – combined a pandemic and an infodemic6 affecting people across the globe. While the health emergency has been largely contained, the infodemic has left residual, and in some cases exacerbated, distrust in authority, vaccine hesitancy or refusal, and an industry of influencers and conspiracy theorists disseminating narratives about the dangers of everything from medical treatments and health care professionals to cell phone towers7.
The immediacy of this is harmful – even deadly – and correcting it is practically impossible. Often leveraging highly polarized social divisions, sensitivities or prejudices, harmful information is now leading people to turn against those in authority, refuse humanitarian aid, and dehumanize and mobilize against those seen as “other” – as witnessed during the Valencia flood response, where migrants were targeted, and the riots in the United Kingdom. In addition, civilians may inadvertently be engaging in the information dynamics of armed conflict, potentially putting themselves at risk.
This is often occurring at the community level, though in some contexts it is being fuelled by external actors, at a time when trust in traditional institutions is declining across much of the world and non-governmental organizations (NGOs) are increasingly framed as partisan or politically aligned. (See Chapter 2 on trust.)
Defining harmful information
Today’s information ecosystem is highly complex, encompassing many forms of harmful information, including misinformation, disinformation, mal-information, hate speech, propaganda, foreign information manipulation and interference (FIMI). (See Annex 1 for definitions.)
These types of information often coexist in the same context and reinforce one another, with intent being the key distinguishing factor. This Report uses the term harmful information to focus on its impact and the responses it demands, rather than on specific classifications, which are often politically charged and context-dependent. While there is no agreed-upon definition, for the purposes of this Report, harmful information is considered to be information that has the potential to cause, contribute to or result in harm to an individual or entity. The ICRC similarly defines it as information that can potentially cause or contribute to harm, whether physical, psychological, economic or social.
In this framing, legitimate criticism, even if uncomfortable or challenging, is not considered harmful. However, it is important that organizations acknowledge and respond to such criticism, as it can still be used against them.
Understanding harm is essential to assess its impact on individuals, communities, organizations and society, and to develop appropriate strategies for response.
The evolving information ecosystem
Harmful information is often framed as an online phenomenon, but humanitarian crises have long been affected by rumours, myths and propaganda spread through word of mouth, pamphlets, radio and television broadcasts, community meetings and official channels. It is equally important to recognize these offline dynamics, especially as harmful information moves fluidly between online and offline spaces. Ultimately, the impacts are most often felt offline, by people.
The online information environment, spanning digital and cyber domains, has become increasingly complex and contested. States approach it from distinct angles. Some focus on infrastructure – the cables: the physical architecture that enables data flow, including data centres, cloud servers and undersea cables. Others concentrate on the content: the narratives, data and discourse. These perspectives on cables and content are not mutually exclusive, but they shape how control is asserted over the information space, with implications for sovereignty, surveillance and resilience. Control over content may involve shaping, restricting or amplifying information flows for political, ideological or security purposes – often through laws, platform regulation, disinformation campaigns, censorship, manipulation of narratives or sophisticated influence operations. The boundaries between cybersecurity, information integrity and geopolitical competition are increasingly blurred. The choices made regarding cables and content have real consequences for access to information, freedom of expression and communities’ ability to engage safely and meaningfully in the digital space.
Threat actors’ tactics, techniques and procedures (TTPs) are no longer limited to technical systems; they now leverage harmful information, driven by financial, political or ideological motives. The threat landscape spans both the digital and physical domains, blurring the lines between technical breaches and cognitive manipulation. Psychological factors are central to the effectiveness of harmful information, which exploits emotion, identity, existing grievances and social divisions. This makes it harder to counter with facts alone.
The other factors shaping the information ecosystem are discussed in chapters 2 and 3.
Language, Media, and Trust in Japan’s Humanitarian Space
Japan’s unique linguistic and media environment provides a partial buffer against the global spread of harmful information. As Japanese is spoken almost exclusively within the country, the language itself acts as a kind of firewall, limiting the impact of harmful information campaigns circulating in other languages such as English, Arabic or Russian. Additionally, Japan’s strong domestic media landscape – including public broadcasters – helps shape a more contained information environment. This reduces dependency on foreign media sources and helps limit exposure to externally driven harmful narratives.
Social media usage patterns also differ in Japan. Platforms like LINE and X (formerly Twitter) are more commonly used than global platforms like WhatsApp or Facebook. This creates a different digital ecosystem, where harmful information tends to be more localized and less influenced by international trends.
The Japanese Red Cross Society (JRCS) has consistently maintained a strong public communication strategy that contributes to trust and transparency. The JRCS offers information to donors, volunteers and members of the public primarily through its official website, social media accounts (X, Instagram, Facebook, YouTube), a monthly newspaper and other regular publications. Whenever a disaster occurs in Japan or abroad, it promptly communicates JRCS disaster relief operations to donors and the general public. In all donor communications, it directs people to clear explanatory resources such as its donation guide. As a result, instances of harmful information targeting JRCS have been extremely rare in Japan. Trust is cultivated not only through emergency communication, but through consistent, transparent dialogue over time.
Who are the threat actors?
A diverse range of threat actors is involved in creating and amplifying harmful information: from lone wolves to paid contractors, from coordinated troll networks to outsourced call centres, from propagandists to inauthentic accounts, and from national to transnational entities. All require differentiated responses. Whether driven by profit, ideology, coercion or ego, these actors operate across a vast and growing influence economy. Some act defensively, others offensively, amplifying noise or falsehoods with strategic intent. The introduction of artificial intelligence (AI) has further lowered the barrier to entry, enabling more sophisticated, large-scale manipulation and deepening the asymmetry.
Threat Actors Involved in Harmful Information
What most clearly differentiates these actors is their intent, whether to deceive, disrupt, distract, or dominate, which shapes both the form their actions take and the appropriate countermeasures. This is not a level playing field. Those spreading harmful content often act faster, louder, and with fewer constraints than those trying to uphold truth and trust.
Information integrity in crisis situations
The United Nations (UN) frames “strengthening information integrity as one of the most urgent challenges of our time” and a fundamental component of human rights, peace and sustainable development. Information integrity refers to the accuracy, consistency and reliability of information. The UN launched the Global Principles for Information Integrity8 to address the growing threats posed by misinformation, disinformation, hate speech and the misuse of digital technologies, including artificial intelligence. These Principles are:
Societal Trust and Resilience;
Healthy Incentives;
Public Empowerment;
Independent, Free and Pluralistic Media;
Transparency and Research.
The UN also developed a voluntary Code of Conduct for Information Integrity on Digital Platforms9. Its Guidance for Strengthening information integrity during a crisis through communications10, published in December 2024, recognizes the importance of increasing resilience to information threats, enhancing detection and analysis capabilities, strengthening mitigation and response measures and providing a common approach to information integrity across crisis contexts.
From broadcast to two‑way engagement
The promise of new tools for community engagement should have marked a broader shift toward stronger, more inclusive communication. Digital platforms enabled a move from one-way, broadcast messaging – press releases, websites, official statements – to two-way engagement. But in practice, this much-valued shift has become increasingly complex. Anyone can now respond, challenge, reinterpret or amplify a message instantly and publicly.
While two-way engagement invites dialogue, it also opens the door to criticism, harmful information and loss of narrative control, including to a global public and malicious threat actors far removed from the realities on the ground. Social media rarely provides space for meaningful exchange, especially when responses spiral far from the original message. For humanitarian organizations committed to the principles of neutrality and impartiality, this type of unpredictable and emotionally charged engagement can be difficult to monitor and manage, and may even have negative operational impacts. Failing to understand and navigate this information environment has real consequences for humanitarian actors’ ability to operate.
As of June 2025, platforms like Google (26 years old), Facebook (21), YouTube (20), Twitter/X (19), VK (18), WhatsApp (16), Instagram (14), Signal (12), Telegram (11) and TikTok (8 globally, 9 in China) reach billions of people worldwide. These platforms have reshaped how people access and share information and how they connect and engage. They are also powerful vectors for harmful content at scale. Content is amplified by algorithms that prioritize what drives engagement – often the most shocking or polarizing material, because it generates more clicks and, ultimately, profit. Algorithms reinforce bias, shape fragmented, bespoke realities and create echo chambers that distort perception and deepen division. This is less a failure of these platforms than a feature of the system.
A UNHCR survey highlighted that 20 per cent of its staff had experienced harmful information.
The deliberate spread of harmful information erodes trust, casting doubt on humanitarian intentions, principled action and legitimacy. In some cases, it has led to physical violence: refugees and migrants threatened in Aceh, attacks on staff during the Ebola response in Guinea, and hostility during the flood response in Valencia. In armed conflicts, it has contributed to breakdowns in acceptance and safety leading to the withdrawal of organizations such as the ICRC, and to the withdrawal of UN peacekeeping operations in Mali.
Social media users increasingly expect rapid, authentic responses. This creates pressure on organizations to engage in real time, which is often in tension with centralized approval processes, the need to coordinate with teams on the ground and the imperative for accuracy. For example, during safeguarding or integrity incidents, the expectation of swift and transparent communication can challenge the requirement to verify facts, uphold duty of care and ensure due process. Delays or overly cautious messaging can also be perceived as prioritizing organizational reputation and funding relationships over the protection of individuals.
In an environment where harmful information travels faster than facts and institutional credibility is under constant scrutiny, attempts to engage can easily be misread as reactive, defensive, disingenuous or, worse, lacking empathy. Building genuine two-way engagement in such contexts requires more than responsiveness; it demands intention, transparency and a sustained presence. If actors are leveraging an organization’s communications, e.g., comment sections on posts, to disrupt or divert attention, then two-way engagement is unlikely to change anything. Organizations may thus refrain from responding, but the messaging remains and can impact credibility.
Organizations also struggle with when to communicate. Delaying responses or choosing silence creates a vacuum, one that actors, whether ill-intentioned or not, will inevitably fill. This was exemplified during the political unrest in Bangladesh in July-August 2024, when the Bangladesh Red Crescent Society (BDRCS) quietly activated its emergency protocols: hundreds of volunteers were deployed to provide first aid to injured civilians at key protest sites, hospitals received logistical and blood bank support, and emergency relief (particularly water and food) was distributed in locations affected by road blockades. In cities like Dhaka, Khulna and Rajshahi, over 2,000 trained volunteers were deployed to assist where the security situation permitted access. This work was conducted with intentional discretion given the highly charged environment, and BDRCS leadership made the operational decision to keep a low public profile in order to reduce the risk of politicization, protect volunteer safety and avoid being drawn into partisan narratives. “While understandable from a risk management perspective, this silence created a void that was quickly filled with speculation, criticism, and political spin”, recognized the IFRC’s Head of Delegation, Alberto Bocanegra.
Who is most vulnerable to harmful information – and why?
Despite the unprecedented volumes of information available today, many people still live in information vacuums where vital information is inaccessible or simply unavailable. Vulnerability to harmful information affects individuals, communities, institutions and entire societies. However, certain groups tend to be disproportionately affected due to structural, contextual or situational factors. Political and security analysts Singer and Brooking frame this propagation as follows: “Like any viral infection, information offensives work by targeting the most vulnerable members of a population – in this case, the least informed.”12
In humanitarian contexts, this could include13:
Crisis-affected populations facing armed conflict, disaster, displacement, migration, or health emergencies often experience information scarcity or manipulation, disrupted communication, and high anxiety, making them more vulnerable to harmful information.
Marginalised or socially excluded groups already facing discrimination due to race, ethnicity, gender, religion, disability, or legal status are more often targeted by harmful information and may have limited access to reliable information.
Youth are more exposed to online harmful information as they may spend more time online and be influenced by peers, trends and online communities, while elderly populations may face challenges with digital literacy or navigating new information environments.
Humanitarian staff, volunteers and organizations are targeted by harmful information that undermines trust, access, and security.
In polarized environments, journalists, human rights defenders and civil society leaders are often targeted by harmful information aimed at discrediting, silencing, or endangering them.
Those with limited access to information and less developed critical thinking skills.
The United Nations Special Rapporteur for Freedom of Expression stated that state-sponsored disinformation “has a potent impact on human rights, the rule of law, democratic processes, national sovereignty and geopolitical stability because of the resources and reach of States and because of their ability to simultaneously suppress independent and critical voices in the country so that there can be no challenge to the official narratives.”14
Understanding the forms of harm and associated risks in humanitarian contexts – how to identify, measure, and mitigate them – is now essential to protecting and assisting people and ensuring effective response.
What is the impact and harm?
Harmful information impacts the lives, safety and dignity of people in humanitarian crises and escalates violence in armed conflicts and disasters. It distorts realities on the ground, misleading people about the availability of aid and about life-saving decisions, such as whether to stay or flee, or whether to accept or reject medical treatment.
UNHCR has identified a range of offline harms which include xenophobia, racism, persecution, violence, killings, forced displacement, trafficking, exploitation, barriers to accessing rights and services, damaged reputation, erosion of trust and legitimacy, diminished ability to protect and support refugees, threat to the physical security of humanitarian workers, and decreased donor support15.
The World Health Organization (WHO) has focused on …
“Harmful information is reported to induce psychological and social harm in both communities affected by armed conflict and among people serving those communities.”16 This is because it prevents people from seeking or accessing humanitarian programmes and undermines the ability of organizations to deliver and implement effective interventions.17
As referenced by the International Committee of the Red Cross, the sharing of harmful information sometimes violates international law. International Humanitarian Law (IHL) “imposes important limits on publishing or sharing certain forms”18 of harmful information. These include encouraging violations of IHL, threatening violence to spread terror, unduly interfering with humanitarian or medical work, and publishing photos of prisoners of war. Under International Human Rights Law (IHRL), advocating hatred that constitutes incitement to hostility, discrimination or violence, and inciting genocide, may also be violations of IHRL or other rules of international law.19
International Disaster Response Law, while comprehensive in outlining legal frameworks for cross-border disaster response, does not specifically address harmful information. It focuses on facilitating and regulating international assistance: coordinating entry, logistics and legal responsibilities before, during and after disasters.
To respond effectively, it is essential to understand how harmful information impacts humanitarian action. A clear typology of harm helps build the evidence base and supports efforts to identify, measure and act to mitigate these effects. Each of the following harms can undermine humanitarian response and must be better understood, monitored, and addressed.
Typology of harm
Those with disproportionate access to, or control over, media and platforms can use harmful information to discredit civil society and humanitarian organizations, associating them with bad actors or blaming them for crises to justify repressive policies. This fuels discrimination, human rights abuses and social tensions.20
“Cognitive and contextual factors have a particularly strong influence over how people exchange and consume information. Cognitive overload, for example, can cause individuals to reject true information because of negative emotions, such as stress, anxiety, confusion, fear, or fatigue, that are felt when they are subjected to high volumes of information.”21
Artificial Intelligence and harmful information
People’s opinions are increasingly shaped by systems they do not fully understand and cannot meaningfully challenge. Many assume that what they are seeing on their feed is what others are seeing, but this is not the case. Artificial intelligence (AI)-driven recommendation algorithms decide what content surfaces, often reinforcing echo chambers and emotional reactions. The result is a powerful, subtle erosion of individual agency in how information is encountered, understood and acted upon.
The harmful information landscape has evolved with the release of a range of AI models. The first scientific report on the subject – the International AI Safety Report (hereafter, the Bengio Report), published in January 2025 – states that, if it was already difficult to discern what was true and false, AI has now fundamentally shifted the balance of power in favour of those who control information: how it is produced, who is empowered or disempowered by it, and how it is manipulated. The Report highlights that AI can generate persuasive, human-like content rapidly and at scale, despite lacking deep conceptual understanding. Such content is often indistinguishable from human-produced material, and people tend to overestimate their ability to detect it, increasing their vulnerability to manipulation. Emotionally charged or personalised content, especially when combined with social media data, can strongly shape perceptions.22
The Bengio Report draws attention to the broader impact of AI-driven disinformation still being debated: some studies find limited spread and effect, others warn that influence may be concentrated, harder to trace, or have unintended consequences. What is clear is that the information environment is becoming more complex, and more contested. While there is no scientific consensus on the full societal impact of false information, its viral potential is documented. Detection and mitigation techniques such as watermarking or content filters offer some promise but remain limited. Efforts to curb manipulation must also navigate tensions with protecting free expression. Some evidence points to a significant increase in the prevalence of AI-generated deepfake content. But overall, scientific evidence remains limited. Anecdotal reports of harm from AI-generated fake content are growing, but reliable statistics on the scale and impact are still lacking, making it difficult to assess the full extent of the problem.23
Type of harm | Examples of harm
Physical | Physical injury, loss of life, or incitement to violence, panic or unsafe behaviours, treatment avoidance
Psychological | Emotional or mental trauma, fear, anxiety, disorientation, discrimination, shame, distrust, manipulation, bullying, stalking, harassment
Social | Disruption to social cohesion and trust within communities, erosion of relationships, stigma, social fragmentation, divisions
Societal | Shrinking space for humanitarian action, undermining trust in institutions, reputational damage, operational disruption, impaired service delivery, silencing of sectors of society, erosion of rights, restricted access to information, exclusion, limits to freedom of expression; use of harmful information to justify legal persecution, criminalization of speech, or abuse of judicial systems
Informational | Distortion, suppression, or manipulation of information; loss of access to accurate, timely, or trustworthy information; saturation with falsehoods (information overload); erosion of the shared understanding needed for decision-making
Deprivational/Financial/Economic | Livelihood disruption, economic loss, loss of access to essential resources or services, theft, looting, fraud and scams, diversion of resources, inability to procure necessities, restrictions on funding, extortion, loss of reputation, loss of donor support
Digital/Technological | Attacks on digital identity, doxxing, algorithmic amplification of harmful content, deepfakes, bot-driven abuse, platform manipulation
Longitudinal/Intergenerational (cross-cutting dimension) | Lasting effects on children exposed to harmful narratives, breakdown of intergenerational trust, perpetuation of stereotypes or trauma, loss of hope
AI-generated fake content | Deepfake
Audio, text or visual content, produced by generative AI, that depicts people or events in a way that differs from reality in a malicious or deceptive way, e.g., showing people doing things they did not do, saying things they did not say, changing the location of real events, or depicting events that did not happen. | A subset of AI-generated fake content, limited to audio and visual manipulations (e.g., video face swaps, fake voice recordings), that misrepresents real people as doing or saying something that they did not actually do or say.
A range of mitigation techniques exist, but all have limitations. As AI-generated content becomes increasingly difficult to distinguish from human-created content, detection remains a persistent challenge. Media authentication methods, such as digital watermarks, offer some protection but can be easily bypassed, especially in high-risk or adversarial environments.
There is no consensus on whether more realistic fake content leads to more effective manipulation, or whether the main barrier is distribution. Some experts argue that the real challenge is spreading fake content at scale, not creating it. Research also suggests that ‘cheapfakes’ (less sophisticated manipulations of audiovisual content) can be as harmful as deepfakes, reinforcing the idea that reach may matter more than quality. While social media platforms employ moderation, labelling and source credibility checks to limit the spread of such content, these measures raise concerns about free speech. At the same time, research shows that algorithms often prioritise engagement and virality over accuracy or authenticity, potentially aiding the rapid spread of AI-generated content.
The UN Governing AI for Humanity Report emphasizes that it is “more useful to look at risks from the perspective of vulnerable communities and the commons”. It includes a snapshot of expert perceptions from the AI Risk Global Pulse Check, a poll capturing AI-related trends and risks as identified by 348 AI experts. However, the Report stresses that risk management must go beyond merely listing or prioritizing risks. It advocates for framing risks based on vulnerabilities – shifting the focus from what the risk is (e.g., “risk to safety”) to who is at risk, where these risks occur, and who should be accountable. This framing draws attention to the vulnerability of individuals, political systems, society, the economy and the environment. In the context of safety, the Report underscores the importance
Frameworks for analysis of harmful information
Analysing harmful information and the risks it poses is essential for understanding the tactics, techniques and procedures used by those who develop and disseminate harmful information. This analysis must take into account contextual, historical, political, social, cultural and economic factors. Several analytical frameworks exist, including the 5 W’s (commonly used by journalists), ABCDE, DISARM and MITRE ATT&CK, each requiring varying levels of training and analytical capacity.
The ABCDE Framework | |
Actors | Who is spreading harmful information? What kind of threat actors are involved? (See above for the range of actors) |
Behaviours | What techniques, tactics and procedures do they use? |
Content (narratives) | What is being spread? What narratives are being used? |
Degree | What is the scale and spread of harmful information? What audiences are being reached? Which platforms or media are being used? |
Effect | What is their influence? What is the impact or potential harm on people, communities and society? What are the risks? |
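For teams that log incidents in a digital tool, the ABCDE framework can be captured as a simple structured record so that analyses are consistent and comparable over time. The following is a minimal illustrative sketch in Python; the class and field names are assumptions for illustration, not a standard schema used by any organization:

```python
from dataclasses import dataclass, field

@dataclass
class ABCDERecord:
    """One harmful-information incident, structured along the ABCDE framework.

    Field names are illustrative assumptions, not an agreed sector schema.
    """
    actors: str       # A: who is spreading the information, what kind of threat actors
    behaviours: str   # B: tactics, techniques and procedures used
    content: str      # C: what is being spread, which narratives
    degree: str       # D: scale, spread, audiences and platforms reached
    effect: str       # E: impact or potential harm on people, communities, operations
    sources: list = field(default_factory=list)  # where the incident was observed

# Hypothetical example record, for illustration only
record = ABCDERecord(
    actors="Anonymous accounts amplified by a coordinated network",
    behaviours="Recycled images with false captions, cross-posted at volume",
    content="Claims that aid distributions favour one community over another",
    degree="National reach on two major platforms within 48 hours",
    effect="Staff harassment and reduced community acceptance",
    sources=["platform monitoring", "community feedback line"],
)
```

Structuring records this way makes it straightforward to aggregate incidents by actor type or platform, supporting the monitoring and community-based analysis discussed in this section.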
Effective analysis also depends on monitoring both the operational and information environments to detect harmful content, while incorporating a community-based approach to understand its dynamics and impact.
Across the humanitarian sector, organizations are employing a range of tools – commercially or internally developed – that combine automated and manual methods to scrape public data and assess how harmful information is affecting their operations or the broader context. However, there is general acknowledgement that optimal solutions have yet to be found, not due to a lack of tools but primarily due to resource constraints. Additionally, concerns persist around the legal and ethical implications of data scraping, which can be perceived as intrusive or akin to surveillance.