The Internet as "Provocateur": How Platforms Deepen Division and Confrontation in Social Relations
The report examines the role of the internet as a kind of "provocateur" in social relationships, showing how platform algorithms have unintentionally intensified divisions and confrontations among people. It first points out that the core goal of social media and news recommendation systems is to maximize user engagement, and with it commercial value. By analyzing user behavior—such as likes, comments, time spent on content, and sharing habits—platforms continuously push content that aligns with existing user preferences. While this "tailor-made" recommendation increases user stickiness, it also limits exposure to diverse perspectives, creating what is known as an "information cocoon" or "filter bubble." Over time, users may begin to assume their own viewpoints are mainstream, or even the only correct ones, reducing their capacity for understanding and tolerance of opposing views. This gradually deepens misunderstandings and heightens antagonism.
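The report describes this mechanism only in prose. As a rough, self-contained illustration of how ranking purely by predicted engagement keeps reinforcing a user's existing stance, consider the following toy sketch; the `Item` model, stance scores, and scoring function are all hypothetical, not any platform's actual code:

```python
# Hypothetical sketch of an engagement-driven ranker. No real platform's
# code is described in the report; every name and number here is invented.
from dataclasses import dataclass

@dataclass
class Item:
    topic: str
    stance: float  # -1.0 .. 1.0, e.g. position on a contested issue

def predicted_engagement(user_stance: float, item: Item) -> float:
    """Toy model: predicted engagement is highest when an item's stance
    matches the user's inferred stance (in a real system, likes, comments,
    dwell time, and shares would feed that inference)."""
    return 1.0 - abs(user_stance - item.stance) / 2.0

def rank_feed(user_stance: float, candidates: list[Item]) -> list[Item]:
    # Sorting purely by predicted engagement keeps serving the user more
    # of what they already agree with -- the "filter bubble" effect.
    return sorted(candidates,
                  key=lambda it: predicted_engagement(user_stance, it),
                  reverse=True)

items = [Item("policy", -0.8), Item("policy", 0.1), Item("policy", 0.9)]
print([it.stance for it in rank_feed(0.8, items)])  # [0.9, 0.1, -0.8]
```

In this toy model, the dissenting item (stance -0.8) always lands at the bottom of the feed for a user at 0.8, which is the narrowing effect the paragraph above describes.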
The report further explains that this phenomenon worsens due to human psychological tendencies. People are naturally more responsive to content that evokes conflict, fear, anger, novelty, or sensationalism—a tendency known as “negativity bias.” Algorithms pick up on this preference and prioritize content capable of triggering strong emotional reactions, especially extreme statements, social conflicts, clickbait, or misinformation. By contrast, calm, rational, and nuanced information often receives little visibility and is drowned out. This imbalance results in disproportionate exposure for extreme or inflammatory content, which magnifies social divisions and fosters the illusion that the world is filled with conflict. As a consequence, trust and understanding among different groups decline, while anger and fear become the emotions most easily amplified and spread online.
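As a hedged illustration of that imbalance, the toy scoring function below weights emotional arousal more heavily than informational quality; the weights and field names are invented for this sketch, not taken from the report:

```python
# Hypothetical illustration of a negativity-bias signal skewing exposure;
# the "arousal" and "quality" fields and the 2.0 weight are invented.
posts = [
    {"title": "Outrage over new policy!", "arousal": 0.9, "quality": 0.3},
    {"title": "A nuanced policy explainer", "arousal": 0.2, "quality": 0.9},
]

def click_probability(post, bias_weight=2.0):
    # Emotional arousal contributes far more to the predicted click
    # than informational quality does -- the imbalance described above.
    return bias_weight * post["arousal"] + post["quality"]

for p in sorted(posts, key=click_probability, reverse=True):
    print(f'{click_probability(p):.1f}  {p["title"]}')
# 2.1  Outrage over new policy!
# 1.3  A nuanced policy explainer
```

Even though the explainer scores three times higher on quality, the outrage post wins the ranking, which is how calm, nuanced material gets drowned out.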
The article also examines the commercial logic behind this dynamic. Internet platforms rely heavily on advertising revenue, which is determined by user attention (time spent) and engagement (interactions). In this sense, users’ attention is treated as a commodity. To compete for this scarce resource, platforms and content creators—including traditional media and influencers—are incentivized to produce content that is most likely to capture attention. In practice, this often means conflict-driven, emotionally charged, or sensational material. Thus, in the pursuit of profit, platforms and creators effectively exploit and even amplify human vulnerabilities—such as anxiety, anger, and prejudice—thereby aggravating social rifts.
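The linear link between attention sold and revenue earned can be made concrete with a back-of-envelope sketch; the CPM figure is an arbitrary placeholder, not a number from the article:

```python
# Back-of-envelope sketch of the ad-revenue incentive; the $5 CPM is
# illustrative only.
def ad_revenue(impressions: int, cpm_usd: float = 5.0) -> float:
    """Revenue scales linearly with impressions, i.e. with attention sold."""
    return impressions / 1000 * cpm_usd

# Sensational content that doubles time-on-site roughly doubles the
# impressions served, and hence the revenue -- the incentive at issue.
print(ad_revenue(1_000_000))  # 5000.0
print(ad_revenue(2_000_000))  # 10000.0
```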
The report stresses that this “provocative effect” of the internet was not part of its original intent. Rather, it is the result of the interaction and amplification of three forces: algorithmic design focused on attention economics, human psychological traits like negativity bias and group polarization, and pre-existing social divisions. Together, they accelerate the emergence of social tensions and make conflicts more likely to explode, complicating dialogue and mutual understanding. The outcome is a digital environment that feels increasingly polarized and fragmented.
As for solutions, the report suggests multiple approaches. Platforms should shoulder greater responsibility by improving algorithm transparency, balancing recommendation systems, suppressing harmful or false information, and creating mechanisms that encourage rational dialogue. Users, for their part, need to build media literacy: learning to think critically, actively stepping out of their information cocoons, cross-checking information sources, and maintaining a rational, inclusive mindset. At the societal level, education plays a crucial role, particularly civic education, which should promote respect, rationality, and a culture of dialogue that embraces differences. Finally, regulation is deemed essential: governments should establish fair laws and frameworks to oversee platform behavior, protect user rights, and safeguard a healthier digital ecosystem.
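One concrete form the "balanced recommendation" idea could take is a re-ranking step that blends predicted engagement with a novelty or diversity bonus, so that items outside a user's usual diet keep some chance of being shown. The sketch below is a minimal illustration under that assumption; the blend weight is arbitrary, not a value from the report:

```python
# Hedged sketch of a diversity-aware re-ranking step; the 0.3 blend
# weight is an arbitrary illustration.
def reranked_score(engagement: float, novelty: float, w: float = 0.3) -> float:
    """Blend predicted engagement with a novelty bonus so that items
    far from the user's usual consumption are not filtered out entirely."""
    return (1 - w) * engagement + w * novelty

# A familiar, agreeable item vs. an unfamiliar, challenging one:
print(reranked_score(engagement=0.9, novelty=0.1))  # 0.66
print(reranked_score(engagement=0.5, novelty=0.9))  # 0.62
```

Under pure engagement ranking the gap would be 0.9 versus 0.5; the blended score narrows it to 0.66 versus 0.62, keeping the unfamiliar item competitive in the feed.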
In sum, this news piece highlights how, driven by attention economics and algorithms, the internet has shifted from being an intermediary of information to a powerful amplifier of social divisions. It emphasizes that a comprehensive solution requires combined efforts at the levels of platform accountability, user literacy, education, and government regulation, in order to reduce the internet’s divisive effects and rebuild trust and dialogue in the digital age.