…thought provoking [DOT 16/3/23]

minus the thought...

…so…I dunno…there may have been an overabundance of rabbit holes in my vicinity yesterday…so as far as today’s news goes…I might be behind the curve…certainly I could stand to know more about this one

…sure…I’ve seen arguably too many movies

…but…well…the account is an abc news guy…& if it’s accurate

…a fire…in the apartment…that began while the FBI were in attendance…quite the coincidence, no?

Credit Suisse has announced that it will take a CHF50bn ($53.7bn) loan from the Swiss central bank, in an action it says will “pre-emptively strengthen its liquidity” as it moves to stem a crisis of confidence a day after its share price plummeted.

This additional liquidity would support the bank in taking the “necessary steps to create a simpler and more focused bank built around client needs”, its statement said. The bank said it was also making buyback offers on about $3bn worth of debt.
[…]
Asian stocks slid on Thursday and investors turned to the safety of gold, bonds and dollars as fears of a broader crisis intensified despite the intervention, leaving markets on edge ahead of a European Central Bank (ECB) meeting later on Thursday.

Expectations of a 50 basis-point rate rise in Europe have evaporated as markets radically rethink the global interest rate outlook. Money market pricing implies less than a 20% chance of such a rise from the ECB, down from 90% a day earlier.

The move to shore up Credit Suisse’s finances came a few hours after the central bank and the Swiss financial markets regulator issued a joint statement pledging emergency funding if needed. They insisted there was no “direct risk” of contagion from turmoil in the US banking system after the sudden collapse last week of the US lender Silicon Valley Bank.

“Credit Suisse meets the capital and liquidity requirements imposed on systemically important banks,” the Swiss National Bank said. Credit Suisse is one of 30 banks globally deemed too big to fail, forcing it to set aside more cash to weather a crisis.

Credit Suisse saw its shares fall by as much as 30% on Wednesday, prompted by comments from Credit Suisse’s largest shareholder, Saudi National Bank (SNB), which said it was unable to stump up more cash because of regulatory restrictions limiting its holding to below 10%.

https://www.theguardian.com/business/2023/mar/16/credit-suisse-takes-50bn-loan-from-swiss-central-bank-after-share-price-plunge

…I mean…there’s coincidences & coincidences…& sometimes trying to prove that one of these things is not like the other is a full-time gig…I think I saw a figure quoted of that skimming $75 billion or so off the value of the markets in europe…though they’re bobbing back today…but…different sort of fire…metaphorical house…same kind of rabbit hole

Credit Suisse’s ability to shoot itself in the foot is legendary but you would have thought its shareholders would have learned not to make matters worse. But no, the chairman of Saudi National Bank, which bought a 9.9% stake in the Swiss bank only last year, picked a terrible moment to say his firm would “absolutely not” be investing more.

…as I understand it they cited a pre-existing regulatory niggle about taking positions worth >10%…so…if the bank hadn’t accounted for that to the point that they were still looking at them as a first port of call for further liquidity…I have questions?

To be fair, Ammar al-Khudairy gave an explanation (going over 10% would mean extra regulatory rules) and also said he didn’t think Credit Suisse needed extra capital because its financial ratios are “fine”. Too late: the market heard the “absolutely not” comment and wondered where beleaguered Credit Suisse would turn if, in fact, more capital is required.

…likewise on the poor choice of words…which is either a failure to read the room…or…you know…the other thing

Remember, it was only on Tuesday that the bank had to confess to “material weaknesses” in its internal controls after a prod from regulators in the US. Last year’s loss of 7.3bn Swiss francs (£6.6bn) was a record and deposit outflows have continued. A three-year turnaround plan under chief executive Ulrich Körner – the latest of many attempts to draw a line under years of scandal (Greensill, Archegos, “tuna bonds” for Mozambique) and risk-management failures – is in its infancy.

Cue a plunge in the share price, as severe as 30% at one point on Wednesday, to an all-time low, a level that is either ridiculously cheap or a prelude to full-blown crisis. The former pride of Swiss banking, an institution founded in 1856, was valued at a mere 7bn Swiss francs at its lowest point. By way of irrelevant comparison, the national chocolate champion, Nestlé, is worth almost 300bn Swiss francs.

For “don’t panic” optimists, this is just a case of jittery investors unfairly playing games of whack-a-mole after the collapse of Silicon Valley Bank in the US last week. There are no direct links between the two institutions but the market is hard-wired to hunt for the next victim. It is easy to hit Credit Suisse, a bank that everybody already regarded as the weakling among big financial institutions in Europe.

https://www.theguardian.com/business/nils-pratley-on-finance/2023/mar/15/credit-suisse-has-shot-itself-in-the-foot-and-wounded-the-global-banking-system

…for those who maybe lit upon the whole thing while festooned in tinfoil millinery…rabbit holes ahoy…I’d probably still be down one if I wasn’t so easily distracted

The AI behind popular chatbot ChatGPT has been updated to a new version known as GPT-4 – and many people have already been unknowingly exposed to the newest AI’s supposedly improved capabilities for weeks prior to the announcement.

OpenAI, the company that developed GPT-4, says it “spent 6 months making GPT-4 safer and more aligned” so that the AI is less likely to produce “disallowed content” in response to human users’ queries. GPT-4 delivers “human-level performance” and outperforms its predecessor GPT-3.5 on many simulated exams for university admissions and professional fields such as law and medicine, according to an OpenAI blog post and technical report. For example, GPT-4 passed a simulated bar exam for lawyers with a score among the top 10 per cent of test takers, whereas GPT-3.5 previously scored in the bottom 10 per cent.

The technical report about GPT-4 shared minimal details about the AI’s architecture, hardware, computing power requirements for training and the data used to train it. OpenAI described this lack of disclosure as being due to “the competitive landscape and safety implications of large-scale models like GPT-4.”

But the missing information also makes it difficult to independently check claims about GPT-4’s performance accuracy and safety. “If anything, OpenAI has provided even less detail with which to evaluate its new system,” says Sarah Myers West at the AI Now Institute, a research centre in New York City. “But what we’ve seen so far in the roll-out of generative AI is that these systems are prone to error […] so we should be wary of any claims that these issues are resolved.”

OpenAI itself reiterated the increasingly familiar warning about even the most advanced language AIs by cautioning that GPT-4 can still “hallucinate” facts and make reasoning errors – meaning that “great care should be taken when using language model outputs” and especially in “high-stakes contexts.”
[…]
The rollout of the updated AI service comes as Silicon Valley tech giants such as Microsoft and Google – along with Chinese companies such as Baidu and Tencent – are competing in an AI arms race to develop and deploy AI chatbots capable of understanding language and performing many different tasks based on written prompts.

https://www.newscientist.com/article/2364375-gpt-4-openai-says-its-ai-has-human-level-performance-on-tests/

…I may actually be forced to post this mid-rabbit hole…so…if the tunes are late…assume one of them is the blind boys of alabama singing the theme from the wire…anyway…openAI can’t wait to tell you about their latest shiny virtual object

We’ve created GPT-4, the latest milestone in OpenAI’s effort in scaling up deep learning. GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks. For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%. We’ve spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.

Over the past two years, we rebuilt our entire deep learning stack and, together with Azure, co-designed a supercomputer from the ground up for our workload. A year ago, we trained GPT-3.5 as a first “test run” of the system. We found and fixed some bugs and improved our theoretical foundations. As a result, our GPT-4 training run was (for us at least!) unprecedentedly stable, becoming our first large model whose training performance we were able to accurately predict ahead of time. As we continue to focus on reliable scaling, we aim to hone our methodology to help us predict and prepare for future capabilities increasingly far in advance—something we view as critical for safety.

We are releasing GPT-4’s text input capability via ChatGPT and the API (with a waitlist). To prepare the image input capability for wider availability, we’re collaborating closely with a single partner to start. We’re also open-sourcing OpenAI Evals, our framework for automated evaluation of AI model performance, to allow anyone to report shortcomings in our models to help guide further improvements.

…open source…that always sounds good…but…nah…I’m probably just being paranoid…where was I?

In a casual conversation, the distinction between GPT-3.5 and GPT-4 can be subtle. The difference comes out when the complexity of the task reaches a sufficient threshold—GPT-4 is more reliable, creative, and able to handle much more nuanced instructions than GPT-3.5.

https://openai.com/research/gpt-4

…that…in at least some ways…might be something of an understatement…despite the part where it arguably overstates a thing or two

…mind you…can’t wait might have been…a mite previous on my part…remember that “many people have already been unknowingly exposed to the newest AI’s supposedly improved capabilities for weeks” part?

“We are happy to confirm that the new Bing is running on GPT-4, customized for search,” according to a blog post from Yusuf Mehdi, Microsoft’s head of consumer marketing. “If you’ve used the new Bing in preview at any time in the last six weeks, you’ve already had an early look at the power of OpenAI’s latest model.”

…the rules of this black box shell game sure are pretty fluid

It’s been a big day for AI news. In addition to this Bing confirmation and the official announcement of GPT-4, Google announced a swath of AI features coming to Gmail, Docs, and more, as well as opening up access to its own AI language model, PaLM. But we’re still waiting on the wider availability of Google’s own AI-powered chatbot, Bard.

https://www.theverge.com/2023/3/14/23639928/microsoft-bing-chatbot-ai-gpt-4-llm

…it’s enough to make you dizzy…well…make me dizzy, any road…I mean…they ain’t kidding about the “pretty cool shit”

OpenAI has introduced the world to its latest powerful AI model, GPT-4, and refreshingly the first thing they partnered up on with its new capabilities is helping people with visual impairments. Be My Eyes, which lets blind and low vision folks ask sighted people to describe what their phone sees, is getting a “virtual volunteer” that offers AI-powered help at any time.

We’ve written about Be My Eyes plenty of times since it was started in 2015, and of course the rise of computer vision and other tools has figured prominently in its story of helping the visually impaired more easily navigate everyday life. But the app itself can only do so much, and a core feature was always being able to get a helping hand from a volunteer, who could look through your phone’s camera view and give detailed descriptions or instructions.

The new version of the app is the first to integrate GPT-4’s multimodal capability, which is to say its ability to not just chat intelligibly, but to inspect and understand images it’s given:
[…]
But the video accompanying the description is more illuminating. In it, Be My Eyes user Lucy shows the app helping her with a bunch of things live. If you’re not familiar with the rapid-fire patois of a screen reader you may miss some of the dialogue, but she has it describe the look of a dress, identify a plant, read a map, translate a label, direct her to a certain treadmill at the gym and tell her which buttons to push at a vending machine. (You can watch the video below.)

Be My Eyes Virtual Volunteer […link might not work…I couldn’t get it to embed, anyway…but it played for me?]

It’s a very concise demonstration of how unfriendly much of our urban and commercial infrastructure is for people with vision issues. And it also shows how useful GPT-4’s multimodal chat can be in the right circumstances.

https://techcrunch.com/2023/03/14/gpt-4s-first-app-is-a-virtual-volunteer-for-the-visually-impaired/

…it’s a nifty trick…though the paranoid part of me isn’t overly enamored by the idea of what capability that sort of thing might add to the kind of data-hoovering shenanigans that various tech behemoths &/or state-level actors have been shown to have a fondness for…swings & roundabouts & all that sort of thing…speaking of prices to pay

GPT-4 is available today to OpenAI’s paying users via ChatGPT Plus (with a usage cap), and developers can sign up on a waitlist to access the API.

Pricing is $0.03 per 1,000 “prompt” tokens (about 750 words) and $0.06 per 1,000 “completion” tokens (again, about 750 words). Tokens represent raw text; for example, the word “fantastic” would be split into the tokens “fan,” “tas” and “tic.” Prompt tokens are the parts of words fed into GPT-4 while completion tokens are the content generated by GPT-4.
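As a rough sketch of what that pricing works out to (the per-token rates and the roughly-750-words-per-1,000-tokens approximation are from the quoted article; the function names here are just illustrative, and a real estimate would need the model’s actual tokenizer):

```python
# Back-of-the-envelope GPT-4 API cost estimate, using the rates quoted
# above: $0.03 per 1,000 prompt tokens, $0.06 per 1,000 completion tokens.
# Token counts are taken as inputs; in practice you'd count them with
# the model's own tokenizer rather than a word-count approximation.

PROMPT_RATE = 0.03 / 1000      # dollars per prompt token
COMPLETION_RATE = 0.06 / 1000  # dollars per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated dollar cost of one API call."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

def words_to_tokens(words: int) -> int:
    """Approximate a token count from a word count (~750 words per 1,000 tokens)."""
    return round(words * 1000 / 750)

# e.g. a 750-word prompt that yields a 1,500-word answer:
cost = estimate_cost(words_to_tokens(750), words_to_tokens(1500))
```

So a 750-word prompt that comes back with a 1,500-word answer would run somewhere around 15 cents, give or take however the real tokenizer actually splits the words.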

GPT-4 has been hiding in plain sight, as it turns out. Microsoft confirmed today that Bing Chat, its chatbot tech co-developed with OpenAI, is running on GPT-4.

Other early adopters include Stripe, which is using GPT-4 to scan business websites and deliver a summary to customer support staff. Duolingo built GPT-4 into a new language learning subscription tier. Morgan Stanley is creating a GPT-4-powered system that’ll retrieve info from company documents and serve it up to financial analysts. And Khan Academy is leveraging GPT-4 to build some sort of automated tutor.

https://techcrunch.com/2023/03/14/openai-releases-gpt-4-ai-that-it-claims-is-state-of-the-art/

…call me a cynic…but…easy as it sometimes is to make fun of “early adopters”…there’s a flip-side to that stuff…& the google kind of free-to-use isn’t without its costs & profit margins…as a for instance…so…what do you call a multiplicity of rabbit holes?

Yesterday, OpenAI announced GPT-4, its long-awaited next-generation AI language model. The system’s capabilities are still being assessed, but as researchers and experts pore over its accompanying materials, many have expressed disappointment at one particular feature: that despite the name of its parent company, GPT-4 is not an open AI model.

OpenAI has shared plenty of benchmark and test results for GPT-4, as well as some intriguing demos, but has offered essentially no information on the data used to train the system, its energy costs, or the specific hardware or methods used to create it.

Many in the AI community have criticized this decision, noting that it undermines the company’s founding ethos as a research org and makes it harder for others to replicate its work. Perhaps more significantly, some say it also makes it difficult to develop safeguards against the sort of threats posed by AI systems like GPT-4, with these complaints coming at a time of increasing tension and rapid progress in the AI world.

“I think we can call it shut on ‘Open’ AI: the 98 page paper introducing GPT-4 proudly declares that they’re disclosing nothing about the contents of their training set,” tweeted Ben Schmidt, VP of information design at Nomic AI, in a thread on the topic.

https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview

…is that like a tech-enhanced variation on the theme of the better mousetrap?

Google and Microsoft are both close to announcing revamps of their search engines to include direct answers supplied by artificial intelligence. Meanwhile, several search start-ups have already embedded AI in their services, giving the first glimpse of how the technology behind ChatGPT might transform one of the biggest online markets.

The sudden spurt of experimentation is long overdue, said Greg Sterling, an analyst who has followed the search market since 1999. For younger users in particular, Google’s search results pages seem cluttered and strewn with advertising, he said. “People are ready for something that is simpler, seemingly more credible and doesn’t have tons of ads stuffed in it.”

On their own, systems such as ChatGPT, based on so-called large language models that can “understand” complex queries and generate text responses, do not represent a direct alternative to search. The information used to train ChatGPT is at least a year old and the answers it gives are limited to information already in its “memory”, rather than more targeted material pulled from the web in response to specific queries.

That has led to a race to develop a new hybrid of AI and traditional search. Known as retrieval augmented generation, the technique involves first applying search tools to identify the pages with the most relevant material, then using natural language processing to “read” them. The results are injected into a large language model such as OpenAI’s GPT-3, which then spits out a more precise answer.
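That retrieval-augmented-generation pipeline can be sketched in miniature. Everything below is illustrative: the retriever is naive word overlap and the model call is a stub standing in for something like GPT-3, but the shape matches the technique the article describes, i.e. find the relevant text first, then hand it to the language model alongside the question.

```python
# A toy retrieval-augmented generation (RAG) pipeline: retrieve the most
# relevant documents for a query, then pass them to a language model as
# context. A real system would use a proper search index and an actual
# LLM API call instead of these stand-ins.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. GPT-3/GPT-4 via an API)."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

def answer(query: str, documents: list[str]) -> str:
    """Assemble retrieved context plus the question into one prompt."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return call_llm(prompt)
```

The key design point is the prompt-assembly step: the model answers from freshly retrieved text rather than relying solely on whatever was baked in at training time.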

Google, Microsoft and some start-ups plan to embed AI into their services in ways that may transform the market [FT]

…sometimes the other name for adopting early…is getting there first

The price of the most important raw material feeding the latest artificial intelligence boom is collapsing fast. That should push the technology into the mainstream much more quickly. But it also threatens the finances of some of the start-ups hoping to cash in on this boom, and could leave power in the hands of a small group.

The raw material in question is the processing power of the large language models, or LLMs, that underpin services such as ChatGPT and the new chat-style responses Microsoft recently showed off for its Bing search engine.

The high computing costs from running these models have threatened to be a serious drag on their use. Only weeks ago, using the new language AI cost search engine You.com 50 per cent more than carrying out a traditional internet search, according to chief executive Richard Socher. But by late last month, thanks to competition between LLM companies OpenAI, Anthropic and Cohere, that cost gap had fallen to only about 5 per cent.

Days later, OpenAI released a new service to let developers tap directly into ChatGPT, and slashed its prices for using the technology by 90 per cent.

This is great for customers but potentially ruinous for OpenAI’s rivals. A number, including Anthropic and Inflection, have raised or are in the process of trying to raise cash to support their own LLM ambitions.

Seldom has a technology moved straight from research into mainstream use so rapidly, prompting a race to “industrialise” processes that were developed for use in lab settings. Most of the gains in performance — and reduction in costs — are coming from improvements in the underlying computing platform on which the LLMs run, as well as from honing the way the models are trained and operate.

To a certain extent, plunging hardware costs benefit all contenders. That includes access to the latest chips specifically designed to handle the demands of the new AI models such as Nvidia’s H100 graphics processing units or GPUs. Microsoft, which runs OpenAI’s models on its Azure cloud platform, is offering the same facilities — and cost benefits — to other LLM companies.

Yet large models are as much art as science. OpenAI said “a series of system-wide optimisations” in the way ChatGPT processes its responses to queries had brought costs down 90 per cent since December, enabling that dramatic price reduction for users.

Training an LLM costs tens of millions of dollars, and techniques for handling the task are changing fast. At least in the short term, that puts a premium on the relatively small number of people with experience of developing and training the models.

By the time the best techniques are widely understood and adopted, early contenders could have achieved a first-mover advantage. Scott Guthrie, head of Microsoft’s cloud and AI group, points to new services such as GitHub Copilot, which the company launched last summer to suggest coding ideas to software developers. Such services improve quickly once they are in widespread use. Speaking at a Morgan Stanley investor conference this week, he said the “signal” that comes from users of services such as this quickly becomes an important point of differentiation.

Falling costs of AI may leave its power in hands of a small group [FT]

…still…I’m sure I’m just paranoid from being stuck down all these rabbit holes…it’s probably fine

[…] The company has released a long paper of examples of harms that GPT-3 could cause that GPT-4 has defences against. It even gave an early version of the system to third party researchers at the Alignment Research Center, who tried to see whether they could get GPT-4 to play the part of an evil AI from the movies.

It failed at most of those tasks: it was unable to describe how it would replicate itself, acquire more computing resources, or carry out a phishing attack. But the researchers did manage to simulate it using Taskrabbit to persuade a human worker to pass an “are you human” test, with the AI system even working out that it should lie to the worker and say it was a blind person who can’t see the images. (It is unclear whether the experiment involved a real Taskrabbit worker).

But some worry that the better you teach an AI system the rules, the better you teach that same system how to break them. Dubbed the “Waluigi effect”, it seems to be the outcome of the fact that while understanding the full details of what constitutes ethical action is hard and complex, the answer to “should I be ethical?” is a much simpler yes or no question. Trick the system into deciding not to be ethical and it will merrily do anything asked of it.

https://www.theguardian.com/technology/2023/mar/15/what-is-gpt-4-and-how-does-it-differ-from-chatgpt

…heaven help us all if someone asks it to provide the NYT with an automated bret stephens op-ed generator…though…the idea of him getting his walking papers because his job got taken by a thing that doesn’t have a mind…I’d admit to maybe finding that part funny to contemplate

In the short term, some experts believe AI will enhance jobs rather than take them, although even now there are obvious impacts: an app called Otter has made transcription a difficult profession to sustain; Google Translate makes basic translation available to all. According to a study published this week, AI could slash the amount of time people spend on household chores and caring, with robots able to perform about 39% of domestic tasks within a decade.

For now the impact will be incremental, although it is clear white collar jobs will be affected in the future. Allen & Overy, a leading UK law firm, is looking at integrating tools built on GPT into its operations, while publishers including BuzzFeed and the Daily Mirror owner Reach are looking to use the technology, too.

“AI is certainly going to take some jobs, in just the same way that automation took jobs in factories in the late 1970s,” says Michael Wooldridge, a professor of computer science at the University of Oxford. “But for most people, I think AI is just going to be another tool that they use in their working lives, in the same way they use web browsers, word processors and email. In many cases they won’t even realise they are using AI – it will be there in the background, working behind the scenes.”

https://www.theguardian.com/technology/2023/feb/24/ai-artificial-intelligence-chatbots-to-deepfakes

…but…I worry about what might get lost in the mix…to take a trivial example…I’m pretty certain somewhere among the reams of stuff I was reading about this stuff yesterday was something that pointed out…ironically as it seemed to me at the time…that one of the “standardized” tests (which are their own grab-bag of inherited biases & flawed approximations for quantifying things that arguably don’t lend themselves to that approach) GPT-4 struggles to achieve “human-level performance” with are essays about english lit…which made me look at that thing about the death of the english major from the other day (…week? …who can even tell any more? …what even is time?) in a different light…not that I’d say bret’s facility with language makes me think he could outcompete that particular algorithmic nemesis…oh…hang about…it was forbes, I think?

In fact, it scores in the top ranks for at least 34 different tests of ability in fields as diverse as macroeconomics, writing, math, and — yes — vinology.

“GPT-4 exhibits human-level performance on the majority of these professional and academic exams,” says OpenAI.
[…]
“Companies that are slow to adopt AI will be left behind – large and small,” says IDC analyst Mike Glennon. “AI is best used in these companies to augment human abilities, automate repetitive tasks, provide personalized recommendations, and make data-driven decisions with speed and accuracy.”

https://www.forbes.com/sites/johnkoetsier/2023/03/14/gpt-4-beats-90-of-lawyers-trying-to-pass-the-bar/

…never mind…as you were…it wasn’t that one…maybe if I had a nifty AI assistant I could have found it & this wouldn’t be the best part of an hour late getting posted…horseshoes & hand grenades, I guess…anyway…while I scrabble to find some tunes & somehow simultaneously not be this late with the rest of what I’m due to get to today…here’s the long version of the video that 9sec definition in a tweet myo linked in yesterday’s comments came from…it’s more like 15mins…but I found the dude pretty endearing…particularly the part where he looked at the camera…kinda shrugged…& brightly announced “I don’t have answers to these questions”

…good job I didn’t get into hypothetical parallels along the lines of prigozhin is to putin as desantis is to trump…we’d be here all day…anyway…where was I? …oh, yeah…blind boys


28 Comments

  1. The use of AI in Be My Eyes and similar apps is very cool but I don’t have a lot of faith in humankind. I doubt that the positives will even outweigh the harm that will be done. But as the saying goes you can’t unring a bell. I’m glad some people will benefit from it until we find a way to make the world a bigger mess than it already is.

  2. Completely unrelated to anything here. I found the small pack of Tide pods I bought to take on a vacation last year so I wanted to use those up. As someone who has used unscented laundry detergent for most of my adult life, Jesus fucking Christ do these reek of whatever scent is in there. It’s not a bad scent, it’s just that I can smell it down the hall from where I’m air-drying things.

      • …I’m pretty sure you could argue that the post has a strong whiff of something about it…so could be?

        …I vaguely recall a chemistry teacher once being disparaging about the aromas found in things like lynx deodorant as being “cheap esters” or something…might have spelled that wrong…the connection where I am is dodgy & wouldn’t let me google to check so I’m going by what I think I remember someone saying altogether more years ago than I care to admit

        …either way…fair enough?

        • Yeah I don’t have any reason to suspect anything besides cheap chemicals that probably are bad for us in things like this.

          I always use unscented laundry detergent myself because I figured my shampoo/conditioner has a scent, sometimes I wear perfume, my lotion is sometimes scented, etc etc. I don’t see all those aromas mixing well.

          Added benefit is that when I have houseguests, any time someone has allergy issues with scent products I don’t have to worry about the towels. 

    • Mrs Butcher has such an aversion to scents that I’ve been using unscented detergents for over 20 years (and no dryer sheets in all that time either). Once in a while I come across someone who uses a heavily scented detergent and it makes me want to choke.

  3. These people have NOTHING but trying to blame the left…

    https://talkingpointsmemo.com/edblog/a-quick-look-at-the-lying-trumpist-liars-behind-that-database-on-corprate-giving-to-blm

    The opposite of Woke is MAGA?

    This amazing woman is the hero we need!

    https://apnews.com/article/filibuster-transgender-gender-affirming-therapy-bill-nebraska-cavanaugh-b9018fd1bf72112ca984ff58679eda6d

    • That TPM article is wild.

      The Claremont Institute is claiming that Bank of America gave $8 billion to the Black Lives Matter movement.

      They’re claiming that banks granting mortgages in low and moderate income neighborhoods is funding BLM, and they’re counting things like donations to HBCUs and the Smithsonian’s African American History museum.

      • …meanwhile per that not-helping-with-poverty article from the other day you’ve got similar interests diverting funds to things like anti-abortion “support”

        …it’s false equivalencies all the way down, I guess?

  4. Our key server provider is an SVB client. They are okay, but sent an emergency email yesterday asking that all payments to their SVB account be suspended, informing that new banking info will be sent via docusign and separate email, and warning that scams and phishing are already rampant, so we should confirm the docusign banking information with them before taking any action.

  5. These are a couple of interesting polls which seem to show some softening of support for Fox.

    https://www.prri.org/spotlight/why-we-divide-republicans-by-media-trust-the-oan-newsmax-effect/

    https://variety.com/2023/tv/news/fox-news-dominion-lawsuit-viewers-less-trust-1235554399/

    I have the usual qualms about not going too far with these polls — sample size, methodology, blah blah.

    But taken together with other polls like the recent one showing a strong majority of Americans have a positive sense of “woke” and DeSantis’s soft support, I’m wondering what is going on.

    Part of it is I think Fox has just lost perspective. Murdoch has gotten old and his hosts and producers have favored their own pet causes and people over the long-term strength of Fox. They are like the joke about the dog that actually caught up to a car and had no idea what to do next.

    Part of it may also be a general vibe shift in the country. I would not be surprised if Americans overall are gradually seeing the right as the extremists that they are, and also incapable of doing anything.

    A decent chunk is probably due to ongoing softening of interest in news in general.

    It’s obviously not necessarily true this translates into anything for the good guys. Someone worse may capitalize, and Fox has benefited in large part because their competitors have left them a giant void to exploit. The mainstream press to a large extent still acts like it’s 1978, and there’s no sign they will change.

    But this may be a hint that things are going to go in a different direction than people expect.

    • …I think the dog that caught the car goes for a few strands of the MAGA-brand tangled weave…they’re going full steam ahead with the anti-abortion stuff…& that leaves a lot more folks cold than signing up by every halfway-reasonable-looking assessment I’ve run across since the supreme court blew that up…including whatever bullshit the guy in texas is trying to talk himself into

      https://www.reuters.com/legal/texas-judge-consider-banning-abortion-pill-us-2023-03-15/

      …& for all that it sounds like it belongs in the sure-jan-dot-gif column…between the dominion/smartmatic stuff & the state of the orange state’s man-getting-measured-for-an-orange-jumpsuit of the year for the next several years’ legal dance card…some of these assholes could yet…in the parlance of the wire…fall behind this shit

      …I mean…they’re going to bring war crime charges in the ICC over ukraine before that happens…& that says a bunch…but…it still seems more likely than not to me…possibly this side of november ’24

      …so…I go back & forth on the optimistic/pessimistic interpretation…I think you’re right that both fox news & the florida retiree who needs to stay retired have been bleeding off their share of that captured market…& if he’s still in it then I like to think that’s going to split votes & generally gum up the primary works…& hopefully they wind up with a ticket that won’t get through a general election even with all the voter suppression, gerrymandering, electoral college “adjustments” & the rest of it…which is hard to quantify in terms of what the eye of that needle looks like & their chances of somehow wedging that camel through the thing

      …but I can’t help a bit of trepidation at the prospect that in the event he’s too legally encumbered to lay claim to the ticket…& desantis turns out to have overplayed his disney-slaying anti-woke hand…or not…whether all the thank-god-for-the-quiet-part assholes who were hoping they could vote for trump-but-not-actually-trump will vote with a sigh of relief for whatever ticket they run with…while the MAGA-martyr’s acolytes will come out in whatever droves they can muster to vote for the proxy as though their candidate of choice being unavailable is tantamount to someone actually taking their guns away

      …either way I’d feel a sight more relaxed about the prospect if it looked less like the dems’ race to lose & more like the GOP’s dumpster fire to self-immolate…because while I support transitioning away from fossil fuels & generally burning less of them if at all possible…I would pitch in gas money for that sort of a worthy cause?

      • My big takeaway is that the next couple of years are a lot more fluid than the pundits think. They’re so used to pretending the entire game is played on a soccer field with 22 players, two coaches and a few subs — never mind what’s happening among owners and agents and corporate sponsors. They can’t imagine what happens if players start picking up the ball, going up into the stands, dividing into four teams, going out of the stadium altogether, or just quitting playing for a while.

        There really aren’t rules in the way the pundits think there are.

        • …I tend to think of it more like the way there are rules to how the LLM things work…& in some ways it’s possible to sound like you can describe how this or that part of them works or drill down into what changes in the code reflect what changes in the behavior or outputs…but it’s also still basically false for even those people to say they “know” how it works

          …& in the background there are vertiginous sums of money being ushered hither & yon in surprisingly opaque ways given how obvious something that size would seem like it ought to be…& those mostly seem to prioritize what very much seem to be perverse incentives in the sense of not in fact going about things in a fashion that would serve some pretty obvious public interests…foreign & domestic

          …if you play six degrees of separation with, say…elon & his eye-watering bid to turn twitter into a troll farm…I think you only need about two of those degrees to skip to the people who took a swipe at credit suisse in one direction…& people who are carrying the same water for the kremlin as…say…carlson/bannon/gaetz/greene/boebert/&-the-list-goes-on in another…& that’s without bringing china into it, even

          …one way or another I’m starting to think of them as the “burn it all” contingent…those being the interests that seem to bring them into alignment that way…just keeping on staving off anything that might actually start to divert the fluid mechanics of the cashflow in the de-facto-capitalist global marketplace away from the well-worn waterways of a fossil-fuel based industrial model…it’s got to end someday…& that day is looking closer all the time…so they’re doing everything they can to haul in all the cash they can before the fat lady sings

          …it’s obviously way the hell more complicated than that…but I bet joe manchin figures he could do business with a man like de santis…& he’s peddling a whole ukraine-is-a-woke-welfare-queen snake oil routine…while dotardus maximus just the other day claimed he could “so easily” prevent WWIII…by doubling down on the what-vlad-wants-vlad-gets-tell-me-more-about-trump-moscow bullshit that got us here…so…that’s fast becoming another foreign-policy-is-all-bullshit-I-don’t-care-about shibboleth over that side of the aisle

          …& if things on “the world stage” keep on getting more & more unfriendly then eventually there’s a non-zero chance trade & international relations join the no-longer-recognise-the-rules club…& things like that willow drilling operation wind up getting lauded as prescient instead of completely at odds with what ought to have been an obvious priority for essentially all of us everywhere

          …that tory budget I was threatening to moan about the other day does a respectable job of lining up in the same row looking very much like a similar sort of a fowl, too…your very top end tax brackets (particularly the ones close enough to retirement to be in with a shot of having scored a “final-salary” private pension…as opposed to the state/statutory one you get for paying taxes (garnished directly off your wages for most folks)…that may or may not survive paying out to the generation that includes the too-good-to-last kind of pension-havers)…they make out pretty great…could be an extra quarter-mill in those pension pots that would have been the taxman’s

          …meanwhile…if you earn between the lowest wage you still have to pay income tax on…which is a shade over a grand a month (AKA arguably not enough to live in london) & £50k a year…that taxman will have an extra £500 off you…& between £50k & about a hundred or so that’d be an extra grand of the hard-earned that isn’t yours
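          …for anyone who wants that fiscal-drag arithmetic spelled out…here’s a rough python sketch…every number in it is an illustrative ballpark stand-in for the frozen personal allowance & basic-rate limit…not an official HMRC rate card…& it ignores national insurance, the over-£100k allowance taper & the rest of it

```python
# A rough sketch of "fiscal drag": freezing tax thresholds while wages and
# prices rise is a tax increase in all but name.  All figures below are
# illustrative assumptions (ballpark UK numbers), not official HMRC rates.

def income_tax(gross, allowance=12_570, basic_limit=50_270,
               basic_rate=0.20, higher_rate=0.40):
    """Very simplified income tax: 0% below the allowance, 20% up to the
    basic-rate limit, 40% above it.  Ignores national insurance, the
    allowance taper, and everything else."""
    tax = 0.0
    if gross > allowance:
        tax += (min(gross, basic_limit) - allowance) * basic_rate
    if gross > basic_limit:
        tax += (gross - basic_limit) * higher_rate
    return tax

def extra_from_freeze(gross, uprating=0.10):
    """Extra tax paid because the thresholds stayed frozen instead of
    rising with an assumed 10% round of inflation."""
    frozen = income_tax(gross)
    uprated = income_tax(gross,
                         allowance=12_570 * (1 + uprating),
                         basic_limit=50_270 * (1 + uprating))
    return frozen - uprated

for salary in (30_000, 60_000):
    print(f"£{salary:,}: ~£{extra_from_freeze(salary):,.0f} extra under the freeze")
```

          …the exact figures matter less than the shape of the thing: freeze the thresholds, let inflation run, & the bill quietly goes up for everyone in the middle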

          …& is there drilling going on for fossil fuels to add to reserves the world already can’t afford to burn there, too…you betcha

          …they say war is diplomacy by other means…but some days I wonder if we sleep on the part where it’s just as true if you flip the terms…or swap out diplomacy for “business”…there’s a reason a lot of would-be gordon gekkos read sun tzu & machiavelli…or that hagakure thing that sounds real cool in ghost dog…but less so if by “getting wet” they actually mean charging headlong into a storm of their own making

          …&…I am once more reminded how viscerally offensive it is that musk so erroneously cites the works of iain m banks as though he’d be a hero & not a villain in that context

          …I think it might be time for a drink
