Elmira Telegram

Everything posted by Elmira Telegram

  1. Disinformation campaigns use emotional and rhetorical tricks to try to get you to share propaganda and falsehoods. hobo_018/E+ via Getty Images

by H. Colleen Sinclair, Louisiana State University

Information warfare abounds, and everyone online has been drafted whether they know it or not. Disinformation is deliberately generated misleading content disseminated for selfish or malicious purposes. Unlike misinformation, which may be shared unwittingly or with good intentions, disinformation aims to foment distrust, destabilize institutions, discredit good intentions, defame opponents and delegitimize sources of knowledge such as science and journalism.

Many governments engage in disinformation campaigns. For instance, the Russian government has used images of celebrities to attract attention to anti-Ukraine propaganda. Meta, parent company of Facebook and Instagram, warned on Nov. 30, 2023, that China has stepped up its disinformation operations.

Disinformation is nothing new, and information warfare has been practiced by many countries, including the U.S. But the internet gives disinformation campaigns unprecedented reach. Foreign governments, internet trolls, domestic and international extremists, opportunistic profiteers and even paid disinformation agencies exploit the internet to spread questionable content. Periods of civil unrest, natural disasters, health crises and wars trigger anxiety and the hunt for information, which disinformation agents take advantage of.

Meta has uncovered and blocked sophisticated Chinese disinformation campaigns. It’s certainly worth watching for the warning signs of misinformation and dangerous speech, but there are additional tactics disinformation agents employ.

It’s just a joke

Hahaganda is a tactic in which disinformation agents use memes, political comedy from state-run outlets, or speeches to make light of serious matters, attack others, minimize violence or dehumanize, and deflect blame. This approach provides an easy defense: If challenged, the disinformation agents can say, “Can’t you take a joke?” often followed by accusations of being too politically correct.

Shhh … tell everyone

Rumor-milling is a tactic in which disinformation agents claim to have exclusive access to secrets they allege are being purposefully concealed. They indicate that you will “only hear this here” and imply that others are unwilling to share the alleged truth – for example, “The media won’t report this,” “The government doesn’t want you to know” or “I shouldn’t be telling you this … .” But they do not insist that the information be kept secret; instead, they encourage you to share it – for example, “Make this go viral” or “Most people won’t have the courage to share this.” It’s important to question how an author or speaker could have come by such “secret” information and what their motive is for prompting you to share it.

People are saying

Often disinformation has no real evidence, so disinformation agents will instead find or make up people to support their assertions. This impersonation can take multiple forms. Disinformation agents will use anecdotes as evidence, especially sympathetic stories from vulnerable groups such as women or children. Similarly, they may disseminate “concerned citizens’” perspectives.
These layperson experts present their social identity as providing the authority to speak on a matter: “As a mother …,” “As a veteran …,” “As a police officer … .” Convert communicators, or people who allegedly change from the “wrong” position to the “right” one, can be especially persuasive, such as the woman who got an abortion but regretted it. These people often don’t actually exist, or may be coerced or paid.

If ordinary people don’t suffice, fake experts may be used. Some are fabricated, and you can watch out for “inauthentic user” behavior, for example, by checking X – formerly Twitter – accounts using the Botometer. But fake experts come in different varieties.

A faux expert is someone used for their title but who doesn’t have actual relevant expertise.

A pseudoexpert is someone who claims relevant expertise but has no actual training.

A junk expert is a sellout. They may have had expertise once but now say whatever is profitable. You can often find that these people have supported other dubious claims – for example, that smoking doesn’t cause cancer – or work for institutes that regularly produce questionable “scholarship.”

An echo expert is when disinformation sources cite each other to provide credence for their claims. China and Russia routinely cite one another’s newspapers.

A stolen expert is someone who exists but wasn’t actually contacted and whose research is misinterpreted. Likewise, disinformation agents also steal credibility from known news sources, such as by typosquatting, the practice of setting up a domain name that closely resembles a legitimate organization’s.

You can check whether accounts, anecdotal or scientific, have been verified by other reliable sources. Google the name. Check expertise status, source validity and interpretation of research. Remember, one story or interpretation is not necessarily representative.

It’s all a conspiracy

Conspiratorial narratives involve some malevolent force – for example, “the deep state” – engaged in covert actions with the aim of harming society. That certain conspiracies such as MK-Ultra and Watergate have been confirmed is often offered as evidence for the validity of new unfounded conspiracies. Nonetheless, disinformation agents find that constructing a conspiracy is an effective means to remind people of past reasons to distrust governments, scientists or other trustworthy sources.

But extraordinary claims require extraordinary evidence. Remember, the conspiracies that were ultimately unveiled had evidence – often from sources like investigative journalists, scientists and government investigations. Be particularly wary of conspiracies that try to delegitimize knowledge-producing institutions like universities, research labs, government agencies and news outlets by claiming that they are in on a cover-up.

Basic tips for resisting disinformation and misinformation include thinking twice before sharing social media posts that trigger emotional responses like anger and fear, and checking the sources of posts that make unusual or extraordinary claims.

Good vs. evil

Disinformation often serves the dual purpose of making the originator look good and their opponents look bad. Disinformation agents take this further by painting issues as a battle between good and evil, using accusations of evilness to legitimize violence. Russia is particularly fond of accusing others of being secret Nazis, pedophiles or Satanists. Meanwhile, it often depicts its own soldiers as helping children and the elderly.
Be especially wary of accusations of atrocities like genocide, especially under the attention-grabbing “breaking news” headline. Accusations abound. Verify the facts and how the information was obtained.

Are you with us or against us?

A false dichotomy narrative sets up the reader to believe that they have one of two mutually exclusive options: a good or a bad one, a right or a wrong one, a red pill or a blue pill. You can accept their version of reality or be an idiot or “sheeple.” There are always more options than those being presented, and issues are rarely so black and white. This is just one of the tactics in brigading, where disinformation agents seek to silence dissenting viewpoints by casting them as the wrong choice.

Turning the tables

Whataboutism is a classic disinformation technique Russia uses to deflect attention from its own wrongdoings by alleging the wrongdoings of others. These allegations about the actions of others may be true or false but are nonetheless irrelevant to the matter at hand. The potential past wrongs of one group do not mean you should ignore the current wrongs of another.

Disinformation agents also often cast their group as the wronged party. They only engage in disinformation because their “enemy” engages in disinformation against them; they only attack to defend; and their reaction was appropriate, while that of others was an overreaction. This type of competitive victimhood is particularly pervasive when groups have been embedded in a long-lasting conflict.

In all of these cases, the disinformation agent is aware that they are deflecting, misleading, trolling or outright fabricating. If you don’t believe them, they at least want to make you question what, if anything, you can believe.

Before you hand over your money, you probably look into the things you buy rather than taking the advertising at face value. The same should go for the information you buy into.

H. Colleen Sinclair is Associate Research Professor of Social Psychology at Louisiana State University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
  2. From the album: Horseheads

    This pic is from the Horseheads Historical Society FB page. They write:
  3. Read the rest here. Do you agree with any of these? What are you hoping takes a hike in 2024?
  4. Shane MacGowan, lead singer of The Pogues, has died at age 65. Read more here.
  5. Henry Kissinger has died at age 100. Source
  6. by Walker Larson

Lightyear? A flop. Strange World? A flop. The Little Mermaid? A flop. Ant-Man and the Wasp: Quantumania? A flop. The Marvels? A flop. Snow White? Postponed, probably for fear of a flop.

All told, Disney has lost nearly $1 billion at the box office due to films like these bombing, according to box office analyst Valient Renegade. What went wrong with these films? The common thread linking them all is a determined and ever-more-obvious woke agenda that is poisoning their storylines. Lightyear – an animated children’s film, for goodness’ sake – features a homosexual kissing scene. Strange World includes a teen gay romance. The Little Mermaid stars an African American Ariel, seemingly just in order to check a diversity box. The Marvels contains only incompetent men so that masculine women can show off. According to lead actress Rachel Zegler, Disney’s new live-action Snow White will have a “modern edge,” and the titular character “is not gonna be saved by the prince.”

But Disney’s choice to double down on the political fads of the day, turning art and entertainment into propaganda, seems to be hurting its bottom line. The public seems to be rebelling against its scheduled indoctrination sessions at the theater. Will that be enough to cause a course correction, especially when Disney sees the success of recent films that specifically avoid including LGBTQ+, feminist, and race-related agendas? It’s hard to say. If anything could bring studio executives to their senses, it would be the sensation of sinking lower in their plush leather chairs as their wallets slowly collapse.

The Hollywood Reporter just ran an article titled “Marvel Studios Taking Stock of Strategy Amid ‘The Marvels’ Meltdown,” which suggests that Disney (the owner of Marvel) may consider a new direction, although the article hints that the Marvel Cinematic Universe’s problem is cranking out too many spin-off TV shows, not its political agenda. The article relates that Marvel and Disney are scaling back the number of superhero movies in 2024 from three to one. Marvel seems poised to reduce its output and focus on quality over quantity. But will that “quality” include a return to good, old-fashioned storytelling? Or will it mean just glitzier, better-written versions of the propaganda they’ve been churning out in recent years?

One problem with designing stories to fit a preconceived political mold is that, almost by definition, it works against the factors that make a story appealing. Stories touch us when they tap into the universal human experience, communicating something fundamental about what it means to be alive – the joys and sorrows, tragedies and triumphs common to us all. Stories are inspiring when they show us heroism, self-sacrifice and love in forms that all of us, regardless of race or sexual orientation, can relate to. The best ones open our eyes to the mystery and wonder of the universe and the fragile beauty of human life. They take us out of ourselves and our limited “identity” – in the sense that progressives use the term.

Wokeism, on the other hand, is predicated on the assumption that no human experience is truly universal. Rather, one’s experience of life is fundamentally different if one is black or homosexual or female or part of any other subgroup one cares to mention.
For this reason, the argument goes, we need greater “representation” of these types of people on screen, because a black or female or homosexual audience member can’t relate to all these “straight white males.” Because political correctness focuses obsessively on identity and on separating people into categories, it misses (or intentionally obscures) what is common in human nature. Of course it’s going to be less appealing to people generally when it sets out to appeal only to subcategories of the population. Wokeism divides, whereas truly great stories unite.

Part of the irony here is how shallow our fashionable vision of diversity really is, for it can only understand identities based on mere externals or accidental qualities, such as skin color or gender, as though those things were the most important, most fundamental aspects of a person. In reality, there are many other and much more profound forms of identity. For instance, though I am one of those dreaded “straight white males,” I can relate much better to a black woman who shares my religious views than I can to another white male who has different beliefs. What one believes and values is a much deeper, more important form of identity.

That being said, I feel no need to play the identity game anyway, even at this deeper level I am pointing to. That’s because I believe in the stability and universality of human nature, which ought to be at the heart of storytelling. Stories ought to explore timeless and universal truths by accessing those parts of human life that don’t change with time or place: love, courage, heroism, family, death, birth, the search for meaning, and so forth. I reject the principle that every perspective is altogether historically and culturally situated, and therefore necessarily inaccessible to people of other times or cultures. That is an extremely narrow-minded view of humanity, destined only to further polarize people.

In the end, our most recent iteration of the “progressive” agenda is incompatible with true art. I am therefore not surprised by Disney’s travails. Either the art will die, or the agenda will die. Eventually, either Disney will fail, or it will start telling stories about the good, the true, and the beautiful, a triad once known as the transcendentals because they transcend this sublunary realm of change and all of its petty politics that pass away as swiftly as the autumn leaves. Propaganda is temporary. Great art is timeless.

This content is republished and licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
  7. The following was written by Seth Adams, owner of the Werdenberg building on the corner of Water and Main, in response to Legislator Morse's column (reposted here with permission):
  8. In this week's Guest View at ElmiraTelegram.com, County Legislator Lawana Morse offers her thoughts on the Arena and on what changed her mind, leading her to vote in support of funding it out of ARP funds. Read it here.
  9. Citizens have sometimes been surprised to find public officials blocking people from viewing their social media feeds. alashi/DigitalVision Vectors via Getty Images

by Lynn Greenky, Syracuse University

The First Amendment does not protect messages posted on social media platforms. The companies that own the platforms can – and do – remove, promote or limit the distribution of any posts according to corporate policies. But all that might soon change.

The Supreme Court has agreed to hear five cases during this current term, which ends in June 2024, that collectively give the court the opportunity to reexamine the nature of content moderation – the rules governing discussions on social media platforms such as Facebook and X, formerly known as Twitter – and the constitutional limitations on the government to affect speech on the platforms.

Content moderation, whether done manually by company employees or automatically by a platform’s software and algorithms, affects what viewers can see on a digital media page. Messages that are promoted garner greater viewership and greater interaction; those that are deprioritized or removed will obviously receive less attention. Content moderation policies reflect decisions by digital platforms about the relative value of posted messages.

As an attorney, professor and author of a book about the boundaries of the First Amendment, I believe that the constitutional challenges presented by these cases will give the court the occasion to advise government, corporations and users of interactive technologies what their rights and responsibilities are as communications technologies continue to evolve.

Public forums

In late October 2023, the Supreme Court heard oral arguments in two related cases in which both sets of plaintiffs argued that elected officials who use their social media accounts either exclusively or partially to promote their politics and policies cannot constitutionally block constituents from posting comments on the officials’ pages.

In one of those cases, O’Connor-Radcliff v. Garnier, two school board members from the Poway Unified School District in California blocked a set of parents – who frequently posted repetitive and critical comments on the board members’ Facebook and Twitter accounts – from viewing the board members’ accounts.

In the other case heard in October, Lindke v. Freed, the city manager of Port Huron, Michigan, apparently angered by critical comments about a posted picture, blocked a constituent from viewing or posting on the manager’s Facebook page.

Courts have long held that public spaces, like parks and sidewalks, are public forums, which must remain open to free and robust conversation and debate, subject only to neutral rules unrelated to the content of the speech expressed. The silenced constituents in the current cases insisted that in a world where much public discussion is conducted on interactive social media, digital spaces used by government representatives for communicating with their constituents are also public forums and should be subject to the same First Amendment rules as their physical counterparts.

If the Supreme Court rules that public forums can be both physical and virtual, government officials will not be able to arbitrarily block users from viewing and responding to their content or remove constituent comments with which they disagree.
On the other hand, if the Supreme Court rejects the plaintiffs’ argument, the only recourse for frustrated constituents will be to create competing social media spaces where they can criticize and argue at will.

Content moderation as editorial choices

Two other cases – NetChoice LLC v. Paxton and Moody v. NetChoice LLC – also relate to the question of how the government should regulate online discussions. Florida and Texas have both passed laws that modify the internal policies and algorithms of large social media platforms by regulating how the platforms can promote, demote or remove posts.

NetChoice, a tech industry trade group representing a wide range of social media platforms and online businesses, including Meta, Amazon, Airbnb and TikTok, contends that the platforms are not public forums. The group says that the Florida and Texas legislation unconstitutionally restricts the social media companies’ First Amendment right to make their own editorial choices about what appears on their sites.

In addition, NetChoice alleges that by limiting Facebook’s or X’s ability to rank, repress or even remove speech – whether manually or with algorithms – the Texas and Florida laws amount to government requirements that the platforms host speech they don’t want to, which is also unconstitutional. NetChoice is asking the Supreme Court to rule the laws unconstitutional so that the platforms remain free to make their own independent choices regarding when, how and whether posts will remain available for view and comment.

In 2021, U.S. Surgeon General Vivek Murthy declared misinformation on social media, especially about COVID-19 and vaccines, to be a public health threat. Chip Somodevilla/Getty Images

Censorship

In an effort to reduce harmful speech that proliferates across the internet – speech that supports criminal and terrorist activity as well as misinformation and disinformation – the federal government has engaged in wide-ranging discussions with internet companies about their content moderation policies. To that end, the Biden administration has regularly advised – some say strong-armed – social media platforms to deprioritize or remove posts the government had flagged as misleading, false or harmful. Some of the posts related to misinformation about COVID-19 vaccines or promoted human trafficking. On several occasions, officials would suggest that platform companies ban a user who posted the material from making further posts. Sometimes, the corporate representatives themselves would ask the government what to do with a particular post.

While the public might be generally aware that content moderation policies exist, people are not always aware of how those policies affect the information to which they are exposed. Specifically, audiences have no way to measure how content moderation policies affect the marketplace of ideas or influence debate and discussion about public issues.

In Missouri v. Biden, the plaintiffs argue that government efforts to persuade social media platforms to publish or remove posts were so relentless and invasive that the moderation policies no longer reflected the companies’ own editorial choices. Rather, they argue, the policies were in reality government directives that effectively silenced – and unconstitutionally censored – speakers with whom the government disagreed.

The court’s decision in this case could have wide-ranging effects on the manner and methods of government efforts to influence the information that guides the public’s debates and decisions.
Lynn Greenky is Professor Emeritus of Communication and Rhetorical Studies at Syracuse University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.