Misinformation in media

Author – Mujeem Khan
1.Roles for Health Care Professionals in Addressing Patient-Held Misinformation Beyond Fact Correction
SOUTHWELL, B. G. et al. (2020) conclude that although most patients trust their health care professionals, patients can find a wide range of inaccurate medical information online with minimal effort. Many resources provide accurate information (e.g., government health agencies, professional organizations, and patient advocacy groups), but mitigating the effects of misinformation requires providers to empower patients with accurate sources of information that meet patients’ own needs for self-education. Patient educational materials should therefore include information about trusted resources.
2.Concrete Recommendations for Cutting Through Misinformation During the COVID-19 Pandemic
DONOVAN, J. (2020) concludes that at the Harvard Kennedy School’s Shorenstein Center, the Technology and Social Change Research Project studies how misinformation spreads and what its impact is on politics and society (bit.ly/2YcTX09). Unlike political disinformation, or fake news, health misinformation can quickly lead to changes in behavior, which is why health communicators can’t wait for tech companies to solve the problem. Search engines and social media platforms are struggling to control the groundswell of new attention to COVID-19 and are having difficulty matching the right information to the right person at the right time. For example, searching Google, Facebook, Twitter, or YouTube for the phrase “Where can I get tested for coronavirus?” will return different information—or worse, fake news, a predatory scam, or malware.
3.Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence
HAJLI, N. et al. (2022) conclude that artificial intelligence (AI) is creating a revolution in business and society at large, as well as challenges for organizations. AI-powered social bots can sense, think and act on social media platforms in ways similar to humans. The challenge is that social bots can perform many harmful actions, such as providing wrong information to people, escalating arguments, perpetrating scams and exploiting the stock market. As such, an understanding of different kinds of social bots and their authors’ intentions is vital from the management perspective. Drawing from actor-network theory (ANT), this study investigates human and non-human actors’ roles in social media, particularly Twitter. We use text mining and machine learning techniques; after applying different pre-processing steps, we apply a bag-of-words model to a dataset of 30,000 English-language tweets. The present research is among the few studies to use a theory-based focus to look, through experimental research, at the role of social bots and the spread of disinformation in social media. Firms can use our tool for the early detection of harmful social bots before they can spread misinformation on social media about their organizations.
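The bag-of-words classification pipeline described above can be sketched in miniature. The snippet below is an illustrative toy, not the study’s actual model: it tokenizes tweets, builds per-class word counts, and trains a small naive Bayes classifier to separate hypothetical “bot” from “human” tweets. All example tweets and labels are invented for the demonstration.

```python
from collections import Counter
import math

def tokenize(text):
    """Lowercase and strip common punctuation; a real pipeline would do more."""
    return [w.strip(".,!?#@").lower() for w in text.split() if w.strip(".,!?#@")]

class NaiveBayesBOW:
    """Minimal bag-of-words naive Bayes classifier (illustrative only)."""

    def fit(self, texts, labels):
        self.classes = sorted(set(labels))
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for text, label in zip(texts, labels):
            tokens = tokenize(text)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, text):
        total_docs = sum(self.class_counts.values())
        scores = {}
        for c in self.classes:
            total_words = sum(self.word_counts[c].values())
            score = math.log(self.class_counts[c] / total_docs)  # class prior
            for tok in tokenize(text):
                # Laplace smoothing so unseen words don't zero out the score
                score += math.log(
                    (self.word_counts[c][tok] + 1) / (total_words + len(self.vocab))
                )
            scores[c] = score
        return max(scores, key=scores.get)

# Invented toy data: two "bot-like" and two "human-like" tweets.
texts = [
    "Click here to win a free prize",
    "Win free money click this link",
    "Had lunch with friends today",
    "Great game last night with friends",
]
labels = ["bot", "bot", "human", "human"]
model = NaiveBayesBOW().fit(texts, labels)
```

A real detector would of course use far richer features (posting frequency, network structure, account metadata) than word counts alone, but the sketch shows how a labeled corpus of tweets can drive early detection.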
4.The Effectiveness of Social Norms in Fighting Fake News on Social Media
GIMPEL, H. et al. (2021) conclude that fake news poses a substantial threat to society, with serious negative consequences. Therefore, we investigate how people can be encouraged to report fake news and support social media platform providers in their actions against misinformation. Based on social psychology, we hypothesize that social norms encourage social media users to report fake news. In two experiments, we present participants a news feed which contains multiple real and fake news stories while at the same time exposing them to injunctive and descriptive social norm messages. Injunctive social norms describe what behavior most people approve or disapprove of. Descriptive social norms refer to what other people do in certain situations. The results reveal, among other things, that highlighting the socially desired behavior of reporting fake news using an injunctive social norm leads to higher reporting rates for fake news. In contrast, descriptive social norms do not have such an effect. Additionally, we observe that the combined application of injunctive and descriptive social norms results in the most substantial reporting behavior improvement. Thus, social norms are a promising socio-technical remedy against fake news.
5.Misinformation About Commercial Tobacco Products on Social Media—Implications and Research Opportunities for Reducing Tobacco-Related Health Disparities
TAN, A. S. L.; BIGMAN, C. A. (2020) conclude that misinformation about commercial tobacco products is not new. For decades, major tobacco companies deliberately deceived the public through marketing practices (e.g., brand names or labels such as “natural” and “organic”) and public relations campaigns. The tobacco industry’s deception of the public provides an important historical context for examining current forms of tobacco product misinformation on social media.
6.A Prologue to the Special Issue: Health Misinformation on Social Media
SYLVIA CHOU, W.-Y.; GAYSYNSKY, A. (2020) conclude that this special issue came about in recognition of several key trends that have emerged over the past decade: (1) Americans are increasingly getting their news and health information from social media; (2) the public’s trust in traditional sources of information (e.g., mass media, government agencies, the medical system) is at historic lows; and (3) online discourse, from politics to health, has become increasingly divisive and partisan. These factors provide a fertile environment where health misinformation can take root and spread, and the potential real-world consequences of endorsing such misinformation are alarming.
7.Where We Go From Here: Health Misinformation on Social Media
SYLVIA CHOU, W.-Y. et al. (2020) conclude that falsehoods have been shown to spread faster and farther than accurate information, and research suggests that misinformation can have negative effects in the real world, such as amplifying controversy about vaccines and propagating unproven cancer treatments. Health misinformation on social media, therefore, urgently requires greater action from those working in public health research and practice. We define “health misinformation” as any health-related claim of fact that is false based on current scientific consensus. Many other types of information pose a challenge for health communication, including contradictory or conflicting findings, changing evidence, and information that involves a high degree of uncertainty; however, these issues are outside the scope of this editorial, which focuses on information that is patently false.
8.Marketplaces of Misinformation: A Study of How Vaccine Misinformation Is Legitimized on Social Media
DI DOMENICO, G. et al. (2022) conclude that combating harmful misinformation about pharmaceuticals on social media is a growing challenge. The complexity of health information, the role of expert intermediaries in disseminating information, and the information dynamics of social media create an environment where harmful misinformation spreads rapidly. However, little is known about the origin of this misinformation. This article explores the processes through which health misinformation from online marketplaces is legitimized and spread. Specifically, across one content analysis and two experimental studies, the authors investigate the role of highly legitimized influencer content in spreading vaccine misinformation. By analyzing a data set of social media posts and the websites where this content originates, the authors identify the legitimation processes that spread and normalize discussions about vaccine hesitancy (Study 1). Study 2 shows that expert cues increase the perceived legitimacy of misinformation, particularly for individuals who generally have positive attitudes toward vaccines. Study 3 demonstrates the role of expert legitimacy in driving consumers’ sharing behavior on social media. This research addresses a gap in the understanding of how pharmaceutical misinformation originates and becomes legitimized. Given the importance of the effective communication of vaccine information, the authors present key challenges for policy makers.
9.Fake News on Social Media: People Believe What They Want to Believe When It Makes No Sense at All
MORAVEC, P. L. et al. (2019) conclude that fake news (i.e., misinformation) on social media has sharply increased in the past few years. We conducted a behavioral experiment with EEG data from 83 social media users to understand whether they could detect fake news on social media, and whether the presence of a fake news flag affected their cognition and judgment. We found that the presence of a fake news flag triggered increased cognitive activity and users spent more time considering the headline. However, the flag had no effect on judgments about truth; flagging headlines as false did not influence users’ beliefs. A post hoc analysis shows that confirmation bias is pervasive, with users more likely to believe news headlines that align with their political opinions. Headlines that challenge their opinions receive little cognitive attention (i.e., they are ignored) and users are less likely to believe them.
10.Monitoring Misinformation on Twitter During Crisis Events: A Machine Learning Approach
HUNT, K. et al. (2022) conclude that social media has been increasingly utilized to spread breaking news and risk communications during disasters of all magnitudes. Unfortunately, due to the unmoderated nature of social media platforms such as Twitter, rumors and misinformation are able to propagate widely. Given this, a surfeit of research has studied false rumor diffusion on Twitter, especially during natural disasters. Within this domain, studies have also focused on the misinformation control efforts of government organizations and other major agencies. However, a prodigious gap remains in research on monitoring misinformation on social media platforms in times of disasters and other crisis events. Such studies would offer organizations and agencies new tools and methods to monitor misinformation on platforms such as Twitter and to make informed decisions on whether or not to use their resources to debunk it. In this work, we fill the research gap by developing a machine learning framework to predict the veracity of tweets spread during crisis events. The tweets are tracked and classified by the veracity of their content as true, false, or neutral.
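A very simple form of the monitoring idea, comparing incoming tweets against a list of already-debunked claims, can be sketched as follows. This is an illustrative heuristic of my own (word-level Jaccard overlap), not the paper’s machine learning framework; the example tweets, claims, and threshold value are all invented.

```python
def jaccard(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two short texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

def flag_tweets(tweets, debunked_claims, threshold=0.4):
    """Flag tweets whose wording overlaps a known-debunked claim.

    Returns (tweet, matched_claim) pairs queued for human review; a real
    monitoring system would use a trained veracity classifier rather than
    this similarity heuristic.
    """
    flagged = []
    for tweet in tweets:
        for claim in debunked_claims:
            if jaccard(tweet, claim) >= threshold:
                flagged.append((tweet, claim))
                break  # one match is enough to queue the tweet
    return flagged

# Invented example: one debunked rumor, one matching and one unrelated tweet.
claims = ["the dam has burst evacuate downtown now"]
tweets = [
    "dam has burst evacuate downtown",
    "beautiful sunset over the bay tonight",
]
result = flag_tweets(tweets, claims)
```

The point of the sketch is the workflow, not the scoring function: debunked claims accumulate during a crisis, incoming tweets are scored against them, and matches are routed to responders who decide whether to spend resources on a correction.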
Conclusion
Misinformation in the media is a pervasive issue with significant consequences for society. It erodes trust in institutions, distorts public perceptions, and undermines informed decision-making. The proliferation of misinformation is fueled by various factors, including the rapid spread of information online, the lack of accountability in some media outlets, and the deliberate dissemination of falsehoods for ideological or financial gain. Addressing misinformation requires a multifaceted approach involving media literacy education, fact-checking initiatives, responsible journalism practices, and increased transparency from media organizations. By promoting critical thinking skills and fostering a culture of skepticism, individuals can better discern credible information from misinformation, ultimately safeguarding the integrity of public discourse and democratic processes.
References
DI DOMENICO, G.; NUNAN, D.; PITARDI, V. Marketplaces of Misinformation: A Study of How Vaccine Misinformation Is Legitimized on Social Media. Journal of Public Policy & Marketing, [s. l.], v. 41, n. 4, p. 319–335, 2022. DOI 10.1177/07439156221103860. Available at: https://research.ebsco.com/linkprocessor/plink?id=f31641f5-e79b-36fa-a14a-eeb15ab1a577. Accessed: 26 Feb. 2024.
DONOVAN, J. Concrete Recommendations for Cutting Through Misinformation During the COVID-19 Pandemic. American Journal of Public Health, [s. l.], v. 110, p. S286–S287, 2020. DOI 10.2105/AJPH.2020.305922. Available at: https://research.ebsco.com/linkprocessor/plink?id=93e278aa-3c75-3acc-a488-05e085cc8f48. Accessed: 26 Feb. 2024.
GIMPEL, H. et al. The Effectiveness of Social Norms in Fighting Fake News on Social Media. Journal of Management Information Systems, [s. l.], v. 38, n. 1, p. 196–221, 2021. DOI 10.1080/07421222.2021.1870389. Available at: https://research.ebsco.com/linkprocessor/plink?id=c58ad880-3b5c-315f-9939-1e9619b15b28. Accessed: 26 Feb. 2024.
HAJLI, N. et al. Social Bots and the Spread of Disinformation in Social Media: The Challenges of Artificial Intelligence. British Journal of Management, [s. l.], v. 33, n. 3, p. 1238–1253, 2022. DOI 10.1111/1467-8551.12554. Available at: https://research.ebsco.com/linkprocessor/plink?id=76eb3cb5-a6c2-395d-9a79-59a99478c9e1. Accessed: 26 Feb. 2024.
HUNT, K.; AGARWAL, P.; ZHUANG, J. Monitoring Misinformation on Twitter During Crisis Events: A Machine Learning Approach. Risk Analysis: An International Journal, [s. l.], v. 42, n. 8, p. 1728–1748, 2022. DOI 10.1111/risa.13634. Available at: https://research.ebsco.com/linkprocessor/plink?id=e495cef6-0f28-3c51-9bbf-f6801a6a338c. Accessed: 26 Feb. 2024.
MORAVEC, P. L.; MINAS, R. K.; DENNIS, A. R. Fake News on Social Media: People Believe What They Want to Believe When It Makes No Sense at All. MIS Quarterly, [s. l.], v. 43, n. 4, p. 1343–1360, 2019. DOI 10.25300/MISQ/2019/15505. Available at: https://research.ebsco.com/linkprocessor/plink?id=73b930b8-9b11-39d9-bbb0-a259e316110d. Accessed: 26 Feb. 2024.
SOUTHWELL, B. G.; WOOD, J. L.; NAVAR, A. M. Roles for Health Care Professionals in Addressing Patient-Held Misinformation Beyond Fact Correction. American Journal of Public Health, [s. l.], v. 110, p. S288–S289, 2020. DOI 10.2105/AJPH.2020.305729. Available at: https://research.ebsco.com/linkprocessor/plink?id=28c34359-2bc9-3bcf-b6d0-7ea25dc93d62. Accessed: 26 Feb. 2024.
SYLVIA CHOU, W.-Y.; GAYSYNSKY, A. A Prologue to the Special Issue: Health Misinformation on Social Media. American Journal of Public Health, [s. l.], v. 110, p. S270–S272, 2020. DOI 10.2105/AJPH.2020.305943. Available at: https://research.ebsco.com/linkprocessor/plink?id=a5203620-cba5-3d34-a7b3-813db83e5fba. Accessed: 26 Feb. 2024.
SYLVIA CHOU, W.-Y.; GAYSYNSKY, A.; CAPPELLA, J. N. Where We Go From Here: Health Misinformation on Social Media. American Journal of Public Health, [s. l.], v. 110, p. S273–S275, 2020. DOI 10.2105/AJPH.2020.305905. Available at: https://research.ebsco.com/linkprocessor/plink?id=9bd0247b-cd39-328d-af2d-fe5dca1fe41d. Accessed: 26 Feb. 2024.
TAN, A. S. L.; BIGMAN, C. A. Misinformation About Commercial Tobacco Products on Social Media—Implications and Research Opportunities for Reducing Tobacco-Related Health Disparities. American Journal of Public Health, [s. l.], v. 110, p. S281–S283, 2020. DOI 10.2105/AJPH.2020.305910. Available at: https://research.ebsco.com/linkprocessor/plink?id=0e929e5d-24b8-3451-ae61-6cd87ba614f5. Accessed: 26 Feb. 2024.
