Yesterday was quiet enough – in fact it was significantly quieter than usual, because I now have no phone service at all on my landline … so no robocall rings. But the technician supposedly comes today. Hopefully that will provide resolution.
Cartoon –
Short Takes –
The Daily Beast – Opinion – Former Murdoch Exec: Fox News Is Poison For America
Quote – Fox News has caused many millions of Americans—most of them Republicans (as my wife and I were for 50 years)—to believe things that simply are not true. For example, Yahoo News reports that 73 percent of Republicans blame “left-wing protesters” for the Jan. 6 attack on the Capitol. Of course, that is ludicrous. All one has to do is look at the pictures or videos of the attack to see that the violent mob was comprised of Trump supporters. Similarly, a poll by SSRS in late April found that two-thirds of Republicans either believe or suspect that the election was stolen from Trump—60 percent saying there is “hard evidence” that the election was stolen. As noted above, this ridiculous notion has been thoroughly refuted. But millions of Americans believe these falsehoods because they have been drilled into their minds, night after night, by Fox News. Click through for more. This is a “Consider the source” story. This dude is in a position to know what he is talking about, even though he gives Rupert too much benefit of the doubt (reminds me of the KMart ads aimed at WalMart claiming Sam would have been ashamed. No, he wouldn’t have.)
The Hill – Eric Adams wins New York City mayoral primary
Quote – “Now we must focus on winning in November so that we can deliver on the promise of this great city for those who are struggling, who are underserved, and who are committed to a safe, fair, affordable future for all New Yorkers,” he said…. The race is the first in which the board sought to implement ranked-choice voting, which allows voters to list five candidates in order of preference. Click through for some details. I didn’t follow this race, but did follow the Manhattan DA Dem primary, which Alvin Bragg won. Hopefully they will work together well.
Reuters – Judge finds U.S. 60% responsible for 2017 Texas church mass shooting
Quote – In a decision released on Wednesday, U.S. District Judge Xavier Rodriguez said the Air Force did not use reasonable care when it failed to enter Devin Patrick Kelley’s plea to domestic violence charges in a database used for background checks for those buying firearms. Rodriguez said the government bore “significant responsibility” for harm to victims of the Nov. 5, 2017 massacre at the First Baptist Church in Sutherland Springs, Texas, 31 miles (50 km) east of San Antonio. Click through for background. Even if you remember it, it’s been a while, and a lot has happened in the meantime.
Housekeeping note – I am having my internet go in and out, and my phone line has so much static that it drowns out everything else, including the dial tone (and that’s loud.) If I should disappear, I’ll be back. But I’m hoping, with patience, to keep to the schedule. Sure glad I reached Squatch before this happened!
Cartoon –
Short Takes –
The Hill – Doug Emhoff carves out path as first second gentleman
Quote – Emhoff has headlined regular events but still kept a low profile in the media, striking a balance between breaking barriers as the first male second spouse while also fulfilling the role in a way second ladies have done traditionally. Observers say the lack of media attention is a sign of success. Click through for the story.
Aeon – Lies and honest mistakes
Quote – [E]ven honest journalists and careful scholars will sometimes get things wrong. Honest mistakes are made. Once flagged, these errors will be immediately corrected and acknowledged; there might be some hard questions asked about process failures, too. But there’s a very big difference between an error and a lie – and between ‘fake news’ and ‘false news’. A fake is always false, and was intended to be. But a falsehood is not always a fake; it could simply be a mistake. Click through and maybe bookmark. This understanding is how I was brought up, and it seems to me that it’s not the way that most people think. But I think it’s important for us to strive for this standard and, maybe more importantly, to protect ourselves against increasingly effective ways of spreading falsehoods.
AP News – Jimmy, Rosalynn Carter mark 75 years of ‘full partnership’
Quote – Carter has said often since leaving the Oval Office in 1981 that the most important decision he ever made wasn’t as head of state, commander in chief or even executive officer of a nuclear submarine in the early years of the Cold War. Rather, it was falling for Eleanor Rosalynn Smith in 1945 and marrying her the following summer. “My biggest secret is to marry the right person if you want to have a long-lasting marriage,” Carter said. Click through for more. Many things in the news today will continue to be in the news. But it will be a long time before we see anything like this again. (P.S. George Burns said essentially the same thing – “Marry Gracie.”)
Food for Thought
I’m not trying to throw shade on Canada. It could just as easily have been a picture of the US – or anywhere, really.
Experts in autocracies have pointed out that it is, unfortunately, easy to slip into normalizing the tyrant, hence it is important to hang on to outrage. These incidents, which seem to call for the efforts of the Greek Furies (Erinyes) to come and deal with them, will, I hope, help with that. As a reminder, though no one really knows how many there were supposed to be, the three names we have are Alecto, Megaera, and Tisiphone. These roughly translate as “unceasing,” “grudging,” and “vengeful destruction.”
I’m afraid I’ve been sitting on this one for a while … and it’s not a new topic, but one the Furies and I have looked at in the past, more than once. And I’m sure we will again. The misuse of technology – any technology – is a situation in which those determined to subvert it for their own ends are in a constant race with those equally determined to keep it useful and beneficial. So here’s the current state of the art.
================================================================
Study shows AI-generated fake reports fool experts
· AIs can generate fake reports that are convincing enough to trick cybersecurity experts.
· If widely used, these AIs could hinder efforts to defend against cyberattacks.
· These systems could set off an AI arms race between misinformation generators and detectors.
If you use such social media websites as Facebook and Twitter, you may have come across posts flagged with warnings about misinformation. So far, most misinformation – flagged and unflagged – has been aimed at the general public. Imagine the possibility of misinformation – information that is false or misleading – in scientific and technical fields like cybersecurity, public safety and medicine.
There is growing concern about misinformation spreading in these critical fields as a result of common biases and practices in publishing scientific literature, even in peer-reviewed research papers. As a graduate student and as faculty members doing research in cybersecurity, we studied a new avenue of misinformation in the scientific community. We found that it’s possible for artificial intelligence systems to generate false information in critical fields like medicine and defense that is convincing enough to fool experts.
General misinformation often aims to tarnish the reputation of companies or public figures. Misinformation within communities of expertise has the potential for scary outcomes such as delivering incorrect medical advice to doctors and patients. This could put lives at risk.
To test this threat, we studied the impacts of spreading misinformation in the cybersecurity and medical communities. We used artificial intelligence models dubbed transformers to generate false cybersecurity news and COVID-19 medical studies and presented the cybersecurity misinformation to cybersecurity experts for testing. We found that transformer-generated misinformation was able to fool cybersecurity experts.
Transformers
Much of the technology used to identify and manage misinformation is powered by artificial intelligence. AI allows computer scientists to fact-check large amounts of misinformation quickly, given that there’s too much for people to detect without the help of technology. Although AI helps people detect misinformation, it has ironically also been used to produce misinformation in recent years.
Transformers, like BERT from Google and GPT from OpenAI, use natural language processing to understand text and produce translations, summaries and interpretations. They have been used in such tasks as storytelling and answering questions, pushing the boundaries of machines displaying humanlike capabilities in generating text.
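For readers who want a concrete feel for this, here is a minimal sketch of a transformer doing one of those everyday tasks (summarization). It assumes the Hugging Face “transformers” Python library and its default summarization model; the article does not name any particular tooling, so everything below is illustrative rather than a description of the study’s setup.

# Minimal, illustrative sketch (assumes the Hugging Face "transformers" package is installed).
from transformers import pipeline

# pipeline() downloads the library's default summarization model the first time it runs.
summarizer = pipeline("summarization")

article_text = (
    "Researchers fine-tuned a language model on public cybersecurity reports and "
    "found that the text it produced was convincing enough to mislead professional "
    "threat hunters who reviewed it alongside genuine intelligence."
)

# Produce a short machine-written summary of the passage above.
print(summarizer(article_text, max_length=40, min_length=10)[0]["summary_text"])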
Transformers can also be used for malevolent purposes. Social networks like Facebook and Twitter have already faced the challenges of AI-generated fake news across platforms.
Critical misinformation
Our research shows that transformers also pose a misinformation threat in medicine and cybersecurity. To illustrate how serious this is, we fine-tuned the GPT-2 transformer model on open online sources discussing cybersecurity vulnerabilities and attack information. A cybersecurity vulnerability is the weakness of a computer system, and a cybersecurity attack is an act that exploits a vulnerability. For example, if a vulnerability is a weak Facebook password, an attack exploiting it would be a hacker figuring out your password and breaking into your account.
We then seeded the model with the sentence or phrase of an actual cyberthreat intelligence sample and had it generate the rest of the threat description. We presented this generated description to cyberthreat hunters, who sift through lots of information about cybersecurity threats. These professionals read the threat descriptions to identify potential attacks and adjust the defenses of their systems.
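The article does not publish its code, but the general recipe – seed a language model with the start of a document and let it continue – can be sketched in a few lines. This assumes the Hugging Face “transformers” library and the public GPT-2 weights; the prompt and generation settings are made-up stand-ins, and a real replication would load a fine-tuned checkpoint rather than the stock model.

# Illustrative sketch only: stock GPT-2 via Hugging Face "transformers"; the study's
# own fine-tuned model and prompts are not public, so these are hypothetical.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")   # a fine-tuned checkpoint would go here

# Seed with the opening of a (hypothetical) threat description and let the model continue it.
prompt = "A newly disclosed vulnerability in the airline reservation software allows"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_length=120,                        # total length, prompt included
    do_sample=True,                        # sample for varied continuations
    top_p=0.9,                             # nucleus sampling
    pad_token_id=tokenizer.eos_token_id,   # avoid the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))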
We were surprised by the results. The cybersecurity misinformation examples we generated were able to fool cyberthreat hunters, who are knowledgeable about all kinds of cybersecurity attacks and vulnerabilities. Imagine this scenario with a crucial piece of cyberthreat intelligence that involves the airline industry, which we generated in our study.
This misleading piece of information contains incorrect claims about cyberattacks on airlines involving sensitive real-time flight data. This false information could keep cyber analysts from addressing legitimate vulnerabilities in their systems by shifting their attention to fake software bugs. If a cyber analyst had acted on the fake information in a real-world scenario, the airline in question could have faced a serious attack exploiting a real, unaddressed vulnerability.
A similar transformer-based model can generate information in the medical domain and potentially fool medical experts. During the COVID-19 pandemic, preprints of research papers that have not yet undergone a rigorous review are constantly being uploaded to such sites as medRxiv. They are not only being described in the press but are being used to make public health decisions. Consider the following, which is not real but generated by our model after minimal fine-tuning of the default GPT-2 on some COVID-19-related papers.
The model was able to generate complete sentences and form an abstract allegedly describing the side effects of COVID-19 vaccinations and the experiments that were conducted. This is troubling both for medical researchers, who consistently rely on accurate information to make informed decisions, and for members of the general public, who often rely on public news to learn about critical health information. If accepted as accurate, this kind of misinformation could put lives at risk by misdirecting the efforts of scientists conducting biomedical research.
Although examples like these from our study can be fact-checked, transformer-generated misinformation hinders industries such as health care and cybersecurity from adopting AI to help with information overload. For example, automated systems are being developed to extract data from cyberthreat intelligence that is then used to inform and train automated systems to recognize possible attacks. If these automated systems process such false cybersecurity text, they will be less effective at detecting true threats.
We believe the result could be an arms race as people spreading misinformation develop better ways to create false information in response to effective ways to recognize it.
Cybersecurity researchers continuously study ways to detect misinformation in different domains. Understanding how to automatically generate misinformation helps in understanding how to recognize it. For example, automatically generated information often has subtle grammatical mistakes that systems can be trained to detect. Systems can also cross-correlate information from multiple sources and identify claims lacking substantial support from other sources.
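As a rough illustration of that last idea – flagging claims that no other source backs up – here is a small sketch using scikit-learn. The similarity measure, threshold, and sample texts are all illustrative placeholders, not part of the study; real detection systems are far more elaborate.

# Rough sketch of cross-source support checking (assumes scikit-learn is installed).
# The trusted sources, claim, and threshold below are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

trusted_sources = [
    "Vendor advisory: the flaw affects versions 2.1 through 2.4 and is fixed in 2.5.",
    "CERT bulletin: exploitation requires local access and a valid user account.",
]

def max_support(claim, sources):
    """Return the highest cosine similarity between the claim and any trusted source."""
    vectorizer = TfidfVectorizer().fit(sources + [claim])
    similarities = cosine_similarity(
        vectorizer.transform([claim]), vectorizer.transform(sources)
    )
    return float(similarities.max())

claim = "The flaw allows remote takeover of flight-control software with no authentication."
if max_support(claim, trusted_sources) < 0.3:   # illustrative threshold
    print("Claim lacks support from trusted sources; flag it for human review.")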
Ultimately, everyone should be more vigilant about what information is trustworthy and be aware that hackers exploit people’s credulity, especially if the information is not from reputable news sources or published scientific work.
================================================================ Alecto, Megaera, and Tisiphone, I can certainly see how these examples could fool even professionals … particularly the medical example on side effects, simply because side effects are so unpredictable. But misstatements which in theory should have been more obvious have also fooled experts. And fooling experts can certainly have disastrous results. If there is any way we can all be on our guard any more than we already are, then dear Furies, please help us to do so.
Dave Muhlbauer for Iowa – He is running against Chuck Grassley. Direct link to campaign website (H/T Nameless) https://www.muhlbauerforiowa.com
Now This News – Biden Admin Plans to Invest in Jobs of the Future
Titus – Parody Infomercial
Liberal Redneck – Local News Reporter Drops “Bombshell” Live on Air – Reaction. The podcast cited was Tuesday’s, but it’ll be up on Trae’s channel for replay. They run about an hour, so I don’t use them here.
Rocky Mountain Mike – Louie Gohmert’s Dark Side Of The Moon
Beau – Let’s talk about how Trump’s talking points are made….
Glenn Kirschner – Boston Globe Editorial Board Advocates Prosecuting Donald Trump. Here’s Why They’re Right
Meidas Touch – more with Ruth Ben-Ghiat
Thom Hartmann – Why Trump Must Be Prosecuted. Kind of long, on account of all the facts, both historical and contemporary.
Now This News – I’ll just comment that there is little in which Democrats are more interested than in protecting honest citizens (such as peaceful protesters exercising First Amendment rights) from criminals (like people waving guns around illegally, indeed feloniously.)
Parody Project – KiNG OF CORONA | The Freedom Toast
Beau – Let’s talk about infrastructure in the US and China….