…oh, for the love of…ugh…no…not really about the love, this lot
Writing in today’s Observer, the Conservative peer and former Cabinet Office minister Francis Maude, who is expected to report shortly to Rishi Sunak, says that in order for ministers to get the best advice possible, we need “to be more robust and less mealy mouthed about ‘politicisation’”.
Maude’s ideas, which also include external auditing of advice given by civil servants to reward those who perform best, will cause deep alarm across Whitehall following the resignation of former deputy prime minister Dominic Raab on Friday after accusations were upheld that he had bullied officials who he believed had underperformed.
Raab was forced to quit the Cabinet after an official inquiry found that he had engaged in “abuse or misuse of power” by undermining and humiliating staff – and was also “intimidating and insulting” – during his time at the Ministry of Justice.
The Raab case has highlighted tensions between the need for Conservative ministers to drive policies forward to deliver on their political objectives, and the independence of the civil servants who serve them.
…I’d ask why it is that these assholes can never admit they’re the problem & their solutions aren’t solutions so much as ways to continue to fine tune the particular ways they most want to make shit worse while having it be harder for anyone to so much as draw attention to the shortcomings they want to double down on…but…what would be the point when they do such a good job of constantly providing illustrations?
…seriously…not enough for the guy who got complaints of a consistent flavor from literally every department that he’s ever been given charge of to claim “activist civil servants” were clubbing together to try to cancel him for crimes against woke do-you-have-any-more-of-these-dog-whistles-I-think-it-needs-more-dog-whistle in his could-you-be-more-churlish but-I-don’t-wanna-resign valediction…nope…they’ve been in power for upwards of a decade but since the problem cannot be that actually brexit was as dumb an act of self-sabotage as they bloody well knew it to be…the only reason they’ve managed to improve exactly nothing for anyone not rich enough to have been just fine if they hadn’t…must be that the civil service isn’t sufficiently attuned to the party line…fucking outstanding bit of reverse-engineered faux-justification there from the party of blind arrogance
“It is perfectly possible to preserve impartiality and, indeed, improve continuity while allowing ministers more say in appointments,” he writes. “I will address this in the accountability and governance review I am undertaking for the government. Without material adjustment, there will be more cases like Raab’s when frustrations boil over.”
…the problem isn’t the shitty quality of the people in charge, you see…it’s that you can’t get the levels of unearned deference required out of the people who have to convert all this frothing nonsense into a semi-functional bureaucracy if they have shit like job security & aren’t a subservient cog in a party apparatus…I mean…what else could the problem be?
“In France, permanent civil servants often have overt political affiliations and it causes few problems. In Australia, permanent civil servants in ministers’ private offices are released from the normal obligations of political impartiality and can take part in party political activity. We don’t need to go that far, but the key, as always, is transparency and pragmatism.”
…france…uh, huh…right…the place where protesters have been following long-standing traditions & setting fire to the banlieues of paris & building breezeblock walls across public roads down south because someone managed to push through a bunch of legislation that lacked a popular mandate by a significant margin…that france? …or australia?
https://www.theguardian.com/commentisfree/2013/jun/27/vicious-fights-australian-politics
…ah…what’s the point?
Many in Whitehall now fear that in the run-up to a general election, expected next year, the Conservatives will turn their fire increasingly on the civil service and blame it for the government’s shortcomings and the failings of Brexit.
[…]
After announcing he would quit on Friday, Raab lashed out at what he called “activist civil servants” who were able to “block reforms or changes through a rather passive-aggressive approach” when dealing with ministers.

But Lord McDonald, who gave evidence to Adam Tolley KC’s bullying investigation, told BBC Radio 4’s Today programme: “I disagree strongly with Mr Raab. I think all the civil servants I saw working for Dominic Raab worked very hard for him in the way they are required to do.
“There is no civil service activism, there is no civil service passive aggression, there is no separate civil service agenda. I saw no evidence of a small group of activists trying to undermine a minister. The issue is a minister’s behaviour.”
https://www.theguardian.com/politics/2023/apr/22/tories-consider-controversial-plan-to-politicise-civil-service-after-raab-scandal
…frankly I strongly suspect that if the civil service really was wall-to-wall humphrey appleby types giving it the full yes, minister
…the UK probably wouldn’t be in the mess they’ve gotten it into…but you can’t tell these new statesmen anything…bunch of right b’stards, the lot of ’em
…what can I say…some people…just don’t fucking get it
…the fucking ticks weren’t what gave accounts with cachet the thing these fuckwits so desperately covet & so painfully misunderstand despite all their flailing about flagrantly caping for the world’s richest baby…who’s still all crabby because it turns out owning twitter hasn’t made him any better at twitter…let alone likable…but…seriously…outside of actively turning the platform into a haven for bad actors & removing yet more of the fragile guardrails it at least attempted to provide
…it literally only serves to make everything worse in a vain effort to appease…well…the vain-&-entirely-not-glorious…but…you’d have to pull your head out of your ass to notice a thing like that
The blue tick is not just an honorific, however. Subscribers to the new service will get boosted rankings in conversations and search, while their replies will also receive greater prominence. Tweets that they interact with will also benefit.
[…]
Some safeguards against imposter accounts – which bedevilled a previous Twitter Blue push – have been introduced, such as blocking new accounts from signing up to the service for 30 days. The Twitter Blue website page adds that the platform is “working on an updated process for new Twitter accounts in order to help minimise impersonation risks and may impose and change waiting periods for new accounts without notice”.
…but…he learns from his mistakes, right?
…enough already with the fucking around
…can we get to the finding out part?
…I know…I’m sucking all the fun out of this
…sincerely…this is petty vandalism as much as it is a wholesale misapprehension of what added value to what in the old model, imperfect as it may have been
…how’s that working out for you there you colossally conceited cromulently concatenated cretin?
…I know…that’s a lot of embedded tweets & an uncharacteristic lack of block-quoting…but the epic nature of this level of dumb committing so completely to doubling down without a flicker of a hint of a suggestion that he might figure out that his whole ass is showing…again…well…it’s…special…bless his little cotton socks
Days before his SpaceX Starship blew up after takeoff and he removed blue check marks from Twitter users’ profiles, he appeared in a prerecorded interview on Fox News’s “Tucker Carlson Tonight.” As if he didn’t have his hands full enough with SpaceX, Twitter and Tesla — which disappointed investors with its results this week, causing the stock to fall around 10 percent Thursday — Musk talked about his plans for “TruthGPT,” an AI product he plans to deploy to compete with Microsoft and Google.
https://www.washingtonpost.com/technology/2023/04/21/elon-musk-ai-startup/
[…]
Normally brash and unguarded with his promises, Musk appeared to hedge his bets on his AI pursuits in the interview with Fox News, explaining why he was entering a field in which Google and Microsoft already have a head start.
[…]
As doubts grow about whether Musk can effectively run his current companies, there’s growing apprehension about joining his teams — which are known for burnout, a constant grind and leaving workers on the edge, according to the people familiar with the matter.
[…]
Musk has been critical of OpenAI and the direction it has taken under CEO Sam Altman, decrying the limitations it has placed on the AI, which Musk sees as an infringement on truth. Musk also signed a letter last month calling for a pause on AI development, alongside other business leaders and academics.
[…]
“I’m going to start something which I call TruthGPT,” Musk said during the interview, “or a maximum truth-seeking AI that tries to understand the nature of the universe.”
…you know how sometimes there’s just so much wrong with the shit somebody talks…layers of the stuff…wrong in their assumptions…in their assertions…until they’re just all ass & no umption?
For now, AI advances are limited to automation. When ChatGPT was asked recently about how it might change how people deal with government, it responded that “the next generation of AI, which includes ChatGPT, has the potential to revolutionize the way governments interact with their citizens.”
…ok…so it isn’t unique to elon
But information flow and automated operations are only one aspect of governance that can be updated. AI, defined as technology that can think humanly, act humanly, think rationally, or act rationally, is also close to being used to simplify the political and bureaucratic business of policymaking.
…whose ass did you pull that specious definition out of anyway…defined that way there isn’t any fucking AI to be writing about for fuck’s sake
“The foundations of policymaking – specifically, the ability to sense patterns of need, develop evidence-based programs, forecast outcomes and analyze effectiveness – fall squarely in AI’s sweet spot,” the management consulting firm BCG said in a paper published in 2021. “The use of it to help shape policy is just beginning.”
[…]
According to Darrell West, senior fellow at the Center for Technology Innovation at the Brookings Institution and co-author of Turning Point: Policymaking in the Era of Artificial Intelligence, government-focused AI could be substantial and transformational.

“There are many ways AI can make government more efficient,” West says. “We’re seeing advances on a monthly basis and need to make sure they conform to basic human values. Right now there’s no regulation and hasn’t been for 30 years.”
But that immediately carries questions about bias. A recent Brookings study, “Comparing Google Bard with OpenAI’s ChatGPT on political bias, facts, and morality” […is an interesting read…though at no point does it involve a technology that can think…much less rationally…let alone humanly]
[…]
How that affects systems of governance has yet to be fully explored, but there are cautions. “Algorithms are only as good as the data on which they are based, and the problem with current AI is that it was trained on data that was incomplete or unrepresentative and the risk of bias or unfairness is quite substantial,” says West.

The fairness and equity of algorithms are only as good as the data and programming that underlie them. “For the last few decades we’ve allowed the tech companies to decide, so we need better guardrails and to make sure the algorithms respect human values,” West says. “We need more oversight.”
Michael Ahn, a professor in the department of public policy and public affairs at University of Massachusetts, says AI has the potential to customize government services to citizens based on their data. But while governments could work with companies like OpenAI’s ChatGPT, Google’s Bard or Meta’s LLaMa – the systems would have to be closed off in a silo.
“If they can keep a barrier so the information is not leaked, then it could be a big step forward. The downside is, can you really keep the data secure from the outside? If it leaks once, it’s leaked, so there are pretty huge potential risks there.”
[…]
And much of what is imagined around AI straddles the realms of science fiction and politics. Professor West said he doesn’t need to read sci-fi – he feels as if he’s already living it. Arthur C Clarke’s HAL 9000 from 1968 remains our template for a malevolent AI computer. But AI’s impact on government, as a recent Center for Public Impact paper put it, is Destination Unknown.
[…]
Last year, tech worker Keir Newton published a novel, 2032: The Year A.I. Runs For President, that imagines a supercomputer named Algo, programmed by a Musk-like tech baron under the utilitarian ethos “the most good for the most people” and running for the White House under the campaign slogan, “Not of one. Not for one. But of all and for all.”

Newton says that while his novel could be read as dystopian, he’s more optimistic than negative about AI as it moves from automation to cognition. He says that when he wrote the novel in the fractious lead-up to the 2020 election it was reasonable to wish for rational leadership.
“I don’t think anyone expected AI to be at this point this quickly, but most of AI policymaking is around data analytics. The difference comes when we think AI is making decisions based on its own thinking instead of being prescribed a formula or set of rules.
https://www.theguardian.com/technology/2023/apr/22/artificial-intelligence-ai-us-government
…but…it’s all so new & completely unpredictable…right?
Any time you log on to Twitter and look at a popular post, you’re likely to find bot accounts liking or commenting on it. Click through and you can see they’ve tweeted many times, often in a short time span. Sometimes their posts are selling junk or spreading digital viruses. Other accounts, especially the bots that post garbled vitriol in response to particular news articles or official statements, are entirely political.
It’s easy to assume this entire phenomenon is powered by advanced computer science. Indeed, I’ve talked to many people who think algorithms driven by machine learning or artificial intelligence are giving political bots the ability to learn from their surroundings and interact with people in a sophisticated way.
During events in which researchers now believe political bots and disinformation played a key role—the Brexit referendum, the Trump-Clinton contest in 2016, the Crimea crisis—there is a widespread belief that smart AI tools allowed computers to pose as humans and help manipulate the public conversation.
Pundits and journalists have fueled this: There have been extremely provocative stories about the rise of a “weaponized AI propaganda machine”, and stories claiming that “artificial intelligence conquered democracy.” Even my own research into how social media is used to mold public opinion, hack truth, and silence protest—what is known as “computational propaganda”—has been quoted in articles that suggest our robot overlords are already here.
…except…this is a post from 2020
The reality is, though, that complex mechanisms like artificial intelligence played little role in computational propaganda campaigns to date. All the evidence I’ve seen on Cambridge Analytica suggests the firm never launched the “psychographic” marketing tools it claimed to possess during the 2016 US election—though it said it could target individuals with specific messages based on personality profiles derived from its controversial Facebook database.
When I was at the Oxford Internet Institute, meanwhile, we looked into how and whether Twitter bots were used during the Brexit debate. We found that while many were used to spread messages about the Leave campaign, the vast majority of the automated accounts were very simple. They were made to alter online conversation with bots that had been built simply to boost likes and follows, to spread links, to game trends, or to troll opposition. It was gamed by small groups of human users who understood the magic of memes and virality, of seeding conspiracies online and watching them grow. Conversations were blocked by basic bot-generated spam and noise, purposefully attached to particular hashtags in order to demobilize online conversations. Links to news articles that showed a politician in a particular light were hyped by fake or proxy accounts made to post and repost the same junk over and over and over. These campaigns were wielded quite bluntly: these bots were not designed to be functionally conversational. They did not harness AI.
There are, however, signals that AI-enabled computational propaganda and disinformation are beginning to be used. Hackers and other groups have already begun testing the effectiveness of more dangerous AI bots over social media. A 2017 piece from Gizmodo reported that two data scientists taught an artificial intelligence to design its own phishing campaign: “In tests, the artificial hacker was substantially better than its human competitors, composing and distributing more phishing tweets than humans, and with a substantially better conversion rate.”
Problematic content is not spread only by machine-learning-enabled political bots. Nor are problematic uses or designs of technology being generated only by social-media firms. Researchers have pointed out that machine learning can be tainted by poison attacks—malicious actors influencing “training data” in order to change the results of a given algorithm—before the machine is even made public.
…but sure…black box LLMs trained on stormfront & 4chan is totally the sort of thing to hand over the interaction between a state & its citizenry to…so glad to live in this enlightened age of meritocracy
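…for what it’s worth, that “poison attack” idea is easy to see in miniature. A minimal toy sketch (my own, not anything from the quoted piece — the data, numbers, and function names here are all made up for illustration): salt a 1-nearest-neighbour classifier’s training set with deliberately mislabelled points and its predictions go wrong, without a single line of the model’s code changing.

```python
# Hypothetical toy example of training-data poisoning: the "model" is an
# unchanged 1-nearest-neighbour classifier; only the data is attacked.
import random

def predict_1nn(train, x):
    """Return the label of the training point nearest to x."""
    return min(train, key=lambda point: abs(point[0] - x))[1]

def accuracy(train, tests):
    """Fraction of (feature, label) test pairs classified correctly."""
    hits = sum(predict_1nn(train, x) == y for x, y in tests)
    return hits / len(tests)

random.seed(42)
# Two well-separated classes: label 0 clusters near 0.0, label 1 near 5.0.
clean_train = [(random.gauss(0, 0.5), 0) for _ in range(50)] + \
              [(random.gauss(5, 0.5), 1) for _ in range(50)]
tests = [(random.gauss(0, 0.5), 0) for _ in range(20)] + \
        [(random.gauss(5, 0.5), 1) for _ in range(20)]

# The attacker injects tightly clustered points in class 1's territory,
# deliberately mislabelled as class 0 — before the model is "made public".
poison = [(random.gauss(5, 0.1), 0) for _ in range(100)]
poisoned_train = clean_train + poison

print(accuracy(clean_train, tests))     # clean training data
print(accuracy(poisoned_train, tests))  # same model, poisoned data
```

…the clean model separates the two clusters easily; the poisoned one starts calling class-1 points class 0, which is the whole point of the attack: corrupt the inputs upstream and the “algorithm” faithfully learns the wrong thing…
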
There is a big debate in the academic community, however, as to whether passively identifying potentially false information for social-media users is actually effective. Some researchers suggest that fact-checking efforts both online and offline do not work very effectively in their current form. In early 2019, the fact-checking website Snopes, which had partnered with Facebook in such corrective efforts, broke off the relationship. In an interview with the Poynter Institute, Snopes’s vice president of operations Vinny Green said, “It doesn’t seem like we’re striving to make third-party fact checking more practical for publishers—it seems like we’re striving to make it easier for Facebook.”
[…]
Facebook, Google, and others like them employ people to find and take down content that contains violence or information from terrorist groups. They are much less zealous, however, in their efforts to get rid of disinformation. The plethora of different contexts in which false information flows online—everywhere from an election in India to a major sporting event in South Africa—makes it tricky for AI to operate on its own, absent human knowledge. But in the coming months and years it will take hordes of people across the world to effectively vet the massive amounts of content in the countless circumstances that will arise.

There simply is no easy fix to the problem of computational propaganda on social media. It is the companies’ responsibility, though, to find a way to fix it. So far Facebook seems far more focused on public relations than on regulating the flow of computational propaganda or graphic content. According to The Verge, the company spends more time celebrating its efforts to get rid of particular pieces of vitriol or violence than on systematically overhauling its moderation processes.
[…]
It’s unsurprising that a technologist like Zuckerberg would propose a technological fix, but AI is not perfect on its own. The myopic focus of tech leaders on computer-based solutions reflects the naïveté and arrogance that caused Facebook and others to leave users vulnerable in the first place.

There are not yet armies of smart AI bots working to manipulate public opinion during contested elections. Will there be in the future? Perhaps. But it’s important to note that even armies of smart political bots will not function on their own: They will still require human oversight to manipulate and deceive. We are not facing an online version of The Terminator here. Luminaries from the fields of computer science and AI including Turing Award winner Ed Feigenbaum and Geoff Hinton, the “godfather of deep learning,” have argued strongly against fears that “the singularity”—the unstoppable age of smart machines—is coming anytime soon. In a survey of American Association of Artificial Intelligence fellows, over 90% said that super-intelligence is “beyond the foreseeable horizon.” Most of these experts also agreed that when and if super-smart computers do arrive, they will not be a threat to humanity.
…& sure…a lot’s happened in the last three years in terms of the potential of LLM bots…for those with deep pockets…but…they’re still all in the if-I-only-had-a-brain class of scarecrows
Grady Booch, a leading expert on AI systems, is also skeptical about the rise of super-smart rogue machines, but for a different reason. In a TED talk in 2016, he said that “to worry now about the rise of a superintelligence is in many ways a dangerous distraction because the rise of computing itself brings to us a number of human and societal issues to which we must now attend.”
https://www.technologyreview.com/2020/01/08/130983/were-fighting-fake-news-ai-bots-by-using-more-ai-thats-a-mistake/
[…]
Yes, ever-evolving technology can automate the spread of disinformation and trolling. It can let perpetrators operate anonymously and without fear of discovery. But this suite of tools as a mode of political communication is ultimately focused on achieving the human aim of control. Propaganda is a human invention, and it’s as old as society. As an expert on robotics once told me, we should not fear machines that are smart like humans, so much as humans who are not smart about how they build machines.
…but then…you’d have to…I dunno…have paid attention to the people who went through a bunch of trial & error…some of it at great expense in both time & money…if…say…you wanted to avoid making a glaring & costly error due entirely to your misplaced belief in the inherent genius of not-fucking-bothering to let the facts get in the way of your self-aggrandizing little feat of patting yourself on the back for some shit you clearly didn’t do
…it’s almost like these unimaginably rich assholes just assume it doesn’t matter how big a hole they dig…money & fanbois will keep them from ever having to acknowledge it…or something
As the most powerful rocket ever built blasted from its launchpad in Boca Chica, Texas, on Thursday, the liftoff rocked the earth and kicked up a billowing cloud of dust and debris, shaking homes and raining down brown grime for miles.
[…]
Virtually everywhere in the city “ended up with a covering of a rather thick, granular, sand grain that just landed on everything,” Valerie Bates, a Port Isabel spokeswoman, said in an interview.
[…]
Closer to the launch site, large pieces of debris were recorded flying through the air and smashing into an unoccupied car.

Louis Balderas, the founder of LabPadre, which films SpaceX’s launches, said that while it was common to see some debris, smoke and dust, the impact of Thursday’s liftoff was unlike anything he had ever seen.
“There were bowling ball-sized pieces of concrete that came flying out of the launchpad area,” Mr. Balderas said. The blast, he added, had created a crater that he estimated was around 25 feet deep.
[…]
Eric Roesch, an expert in environmental compliance and risk assessment who has been tracking SpaceX’s rocket launches, said in an interview that he and others had long warned of the environmental risks to the surrounding region. But without a chemical analysis of the dust and debris, he added, it was difficult to say whether or not they were harmful to human health.

But, Mr. Roesch said, “the presence of that dust kind of indicates to me that the impact modeling was inadequate, because this was not really disclosed as a possible impact.”
In June, an environmental assessment by the Federal Aviation Administration concluded that SpaceX’s plans for orbital launches would have “no significant impact” on the region along the Gulf Coast.
[…]
The F.A.A. said in a statement on Friday that SpaceX’s “anomaly response plan” was activated.

In the outlined plan, SpaceX is responsible for evaluating the situation and notifying the proper agencies.
If an event causes debris in the area where the rocket took off, the plan says that the company would need to obtain an emergency special use permit from the U.S. Fish and Wildlife Service and that access in the area could possibly be restricted.
https://www.nytimes.com/2023/04/21/us/spacex-rocket-dust-texas.html
…seriously…you couldn’t make it up…well…okay…you could…but you’re not rich enough for that not to be painfully embarrassing…trust me on this…musk is one of the richest people there’s ever been…& it’s clearly embarrassing enough to have pained him…not to have made him stop & think or anything…that’d be…I dunno…fucking crazy or some shit?
probably the least problematic problem of ai…..but did you catch the fake drake song?
https://futurism.com/the-byte/drakes-label-still-furious-about-ai
hadnt really thought about it…but record companies have a real bitch of a problem on their hands…
theres going to be so many fakes….
…yup…saw a thing a little while back about scammers deep-fake-ing voices from outgoing answerphone messages…so the call-from-someone-you-know-on-a-new-number thing is another powder keg in a similar vein, too
…shame we don’t all have record exec levels of legal support, really?
RIP covers this pretty well, but for the non-engineering-minded among us, there’s a useful TL;DR at the end of this article. Basically, Elmo fucked up hard by overruling his engineers and essentially blew apart the launchpad. In addition there were systemic failures in software and design that further contributed to the damage the rocket sustained when the launchpad disintegrated and threw debris around like cannonballs.
A Starship Post-mortem: Why the giant rocket failed and why it’s Elon Musk’s fault
The interesting thing to me was the post-launch publicity, where Elmo stans screeched repeatedly about how “successful” the launch was and essentially claimed that Phony Stark planned the whole thing and achieved his every objective. That is manifestly untrue. It was an epic clusterfuck, and the government should immediately terminate all contracts with Space X.
…phony stark
Sadly, I am forced to admit that it didn’t originate with me. I saw it elsewhere.
The same Valley and finance bro mentality that led to the giant flood of deposits to SVB is all that’s propping up Musk now.
It’s what made all of this possible in the first place, because he never would have had the capital to buy Twitter if big lenders like Morgan Stanley hadn’t taken on billions in unsecured debt because they just trusted Musk, because, well, he was Musk.
The only difference with SVB is the backroom chat that launched the bank panic hasn’t completely run from Musk yet.
I have to wonder if we’ll see a chorus wanting a bailout of Musk shareholders at some point soon though. Because you know, innovation, trailblazers, free enterprise.
More and more people are coming to the belated realization that Elmo just isn’t very smart. I mean, I used to think he was marginal, but as time goes on and he makes more and more blunders, I’m thinking he’s just not that bright. The Elmo fanboys credit him with massive intellect but nobody that’s living in their mom’s basement is funding his ventures. And while investors can be slow on the uptake, declining returns catch their attention. At a certain point they see through his disinformation campaigns. Elmo’s lost billions.
It’s weird how Elmo parallels Trump in a lot of ways. Both of them have a fanatic fanbase that claims they are geniuses playing 12D chess, and in reality they both just got lucky and then subsequently fucked everything up.
…not forgetting the saudis & such on the financing
…or the tesla &/or twitter shareholders on the suing front
I’m not sure the Saudis were even pretending there was an investment the way Morgan Stanley was.
…I think they’re a lot happier with their RoI, myself?
I used to think Elon Musk watched The Fifth Element and decided that Gary Oldman’s villain was going to be his role model for life. Except he’s clearly not smart enough for that role model.
There’s a strong element of Dr. Evil finding out Number Two has built an incredibly profitable legitimate business, and decides to wreck it all so he can hold the government hostage for one million dollars.
Also holy fuck I hope people are getting samples of that dust and shit from the launch because now I’m worried about all those people nearby and silicosis or asbestos exposure.
dya reckon insurance will cover that cars windows?
i swear i saw this video pop up as a tweet over on oppo….but cannot for the life of me find that comment again….so meh
Thank you for the catnip: “colossally conceited cromulently concatenated cretin?”
Sorry to see the asshole that owns Twitter being an asshole.
ha!….biddy at the gas station just hit me with one of my favourite dutch sayings when i had to settle for something other than my regular brand of smokes…
als het niet kan zoals het moet…. moet het maar zoals het kan
which translates to if it cant be done as it should….it should be done as it can
fantastic way of saying make do with what you got
…they need to change that saying from “there’s always someone tougher/meaner/better” to “there’s always someone dumber”…I don’t know how many sub-basements you need to drag the bar this low…but…damn these motherfuckers be dumb
…also…I get that there’s a pun about “free” speech…& it’s fun to dunk on people who paid up for their elon-endorsed checkmark…which for legal purposes he would have been better calling validation than verification, I guess…but…I’m starting to think a lot of people might genuinely not understand free speech is about what you get to say a lot more than whether anyone gets paid to air it
…unless I’ve caught the dumb version of a contact high, I guess…in which case maybe these are all geniuses pretending to be dumber than the proverbial box of rocks?
[P.S. …the preview undersells the screenshot of the thread]