…poor footing [DOT 24/1/23]

good standing…

…pretty sure this could be the setup to a joke…but

New research conducted by a professor at University of Pennsylvania’s Wharton School found that the artificial intelligence-driven chatbot GPT-3 was able to pass the final exam for the school’s Master of Business Administration (MBA) program.

Professor Christian Terwiesch, who authored the research paper “Would Chat GPT3 Get a Wharton MBA? A Prediction Based on Its Performance in the Operations Management Course,” said that the bot scored between a B- and B on the exam.
Terwiesch did not immediately respond to a request for further comment. A spokesperson for artificial intelligence startup OpenAI, which created the bot, also did not immediately respond to a request for comment Monday.

The GPT-3 model used in the experiment appears to be an older sibling of the most recent ChatGPT bot that has become a controversial topic among educators and those who work in the field of AI. ChatGPT, the newest version, “is fine-tuned from a model in the GPT-3.5 series,” according to OpenAI’s website.


…still…it does seem to have a predictable punchline

Microsoft says it is making a “multiyear, multibillion dollar investment” in the artificial intelligence startup OpenAI, maker of ChatGPT and other tools that can write readable text and generate new images.

…it’s that kind of cost-cutting that impresses those MBA-types, right?

The investment announcement came a day before Microsoft was scheduled to report its earnings from the October-December financial quarter and after disclosing last week its plans to lay off 10,000 employees, close to 5% of its global workforce.


…or something

Just this week, Amazon said it was axing 18,000 workers, or 6% of its office staff, while business software firm Salesforce said it would reduce its workforce by 10%, or roughly 8,000 people.

That followed announcements from dozens of other firms including big names such as Meta, the owner of Facebook, WhatsApp and Instagram, hardware heavyweight Cisco, and payments firm Stripe.

Are tech job cuts a warning for the wider economy? [BBC]

…very possibly what some people refer to as “the smart money”

The re-engagement of Google’s founders, at the invitation of the company’s current chief executive, Sundar Pichai, emphasized the urgency felt among many Google executives about artificial intelligence and that chatbot, ChatGPT.
The new A.I. technology has shaken Google out of its routine. Mr. Pichai declared a “code red,” upending existing plans and jump-starting A.I. development. Google now intends to unveil more than 20 new products and demonstrate a version of its search engine with chatbot features this year, according to a slide presentation reviewed by The New York Times and two people with knowledge of the plans who were not authorized to discuss them.
Since stepping back from day-to-day duties, Mr. Page and Mr. Brin have taken a laissez-faire approach to Google, two people familiar with the matter said. They have let Mr. Pichai run the company and its parent company, Alphabet, while they have pursued other projects, such as flying car start-ups and disaster relief efforts.

Their visits to the company’s Silicon Valley offices in the last few years have mostly been to check in on the so-called moonshot projects that Alphabet calls “Other Bets,” one person said. Until recently, they have not been very involved with the search engine.

But they have long been keen on bringing A.I. into Google’s products. Vic Gundotra, a former senior vice president at Google, recounted that he gave Mr. Page a demonstration of a new Gmail feature around 2008. But Mr. Page was unimpressed by the effort, asking, “Why can’t it automatically write that email for you?” In 2014, Google also acquired DeepMind, a leading A.I. research lab based in London.
Google has a list of A.I. programs it plans to offer software developers and other companies, including image-creation technology, which could bolster revenue to Google’s Cloud division. There are also tools to help other businesses create their own A.I. prototypes in internet browsers, called MakerSuite, which will have two “Pro” versions, according to the presentation.

In May, Google also expects to announce a tool to make it easier to build apps for Android smartphones, called Colab + Android Studio, that will generate, complete and fix code, according to the presentation. Another code generation and completion tool, called PaLM-Coder 2, has also been in the works.

Google executives hope to reassert their company’s status as a pioneer of A.I. The company aggressively worked on A.I. over the last decade and already has offered to a small number of people a chatbot that could rival ChatGPT, called LaMDA, or Language Model for Dialogue Applications.
Google, OpenAI and others develop their A.I. with so-called large language models that rely on online information, so they can sometimes share false statements and show racist, sexist and other biased attitudes.

That had been enough to make companies cautious about offering the technology to the public. But several new companies, including You.com and Perplexity.ai, are already offering online search engines that let you ask questions through an online chatbot, much like ChatGPT. Microsoft is also working on a new version of its Bing search engine that would include similar technology, according to a report from The Information.

Mr. Pichai has tried to accelerate product approval reviews, according to the presentation reviewed by The Times. The company established a fast-track review process called the “Green Lane” initiative, pushing groups of employees who try to ensure that technology is fair and ethical to more quickly approve its upcoming A.I. technology.

The company will also find ways for teams developing A.I. to conduct their own reviews, and it will “recalibrate” the level of risk it is willing to take when releasing the technology, according to the presentation.


…the fact it can still screw up fairly basic arithmetic doesn’t fill me with confidence about its moral calculus

On Monday, Microsoft announced a “multiyear, multibillion-dollar investment” in OpenAI, though it declined to disclose the terms. The news site Semafor first reported on Jan. 9 that the company was planning to build on previous investments in OpenAI by pouring another $10 billion into it.

Last week, Microsoft launched an OpenAI service as part of its Azure cloud platform, offering businesses and start-ups the ability to incorporate models like ChatGPT into their own systems. The company has already been building AI tools into many of its consumer products, such as a DALL-E 2 feature in its Bing search engine that can create images based on a text prompt, and The Information reported recently that it’s working to bring more of them to Microsoft Office as well.

Eventually, CEO Satya Nadella said in Davos last week, “Every product of Microsoft will have some of the same AI capabilities.”
At Davos, Nadella predicted that the current generation of AI will spark an industry-wide “platform shift” on par with the moves to mobile devices and cloud computing over the past 15 years.

There’s also a skeptical view, in which the technology proves dazzling as a toy and a novelty but underwhelming or even harmful in its practical applications.

ChatGPT can produce remarkably plausible-sounding text on a wide range of topics, but it’s prone to factual errors and problematic biases, and some school districts have already banned it as a potential cheating tool. The tech news site CNET is under fire for quietly using AI to write articles, some of which were found to contain errors. Stability AI, the maker of AI art generator Stable Diffusion, has been sued by Getty Images for allegedly training its model on copyrighted works without permission. Microsoft has its own history of AI missteps, including the 2016 release of a chatbot called Tay that trolls trained to embrace genocidal hatred.
“Some people are going to get hurt — that’s inevitable,” said Meta’s chief AI scientist, Yann LeCun, on Thursday at a forum hosted by the company Collective[i]. That shouldn’t stop the march of progress, he added, but it is important for companies involved in developing new forms of AI to find ways to mitigate the damage.

Still, Microsoft’s embrace of OpenAI puts renewed pressure on its rivals, particularly in the lucrative cloud computing sector. While using ChatGPT to improve its own products could help Microsoft hold its edge in productivity software, the larger battle is to sell AI services to businesses. Those could include established firms looking to build a smarter customer-service chatbot, start-ups developing more specialized AI tools, or even other AI companies that need cloud computing power to train their own models.

One application that’s already gaining traction is the use of AI to assist software developers in writing code. Microsoft’s subsidiary GitHub uses OpenAI technology in a tool called GitHub Copilot, which can suggest code in real time as you’re programming.

Meanwhile, OpenAI appears to be rolling out a paid, premium version of ChatGPT, called ChatGPT Professional, at an initial price of $42 per month, according to tweets by users who’ve signed up for the service. Screenshots indicate the premium service will remain available even when demand is high, and will come with faster responses to queries and early access to new features. OpenAI spokeswoman Hannah Wong told The Post on Friday that access to the basic tool would remain free.


…but…if you can’t tell the difference

Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits.

I didn’t write the above paragraph. It was generated in a few seconds by an A.I. program called ChatGPT, which is available on the internet. I simply logged into the program and entered the following prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated.”

I was surprised at how ChatGPT effectively drafted a compelling argument that reflected my views on A.I., and so quickly. As one of just three members of Congress with a computer science degree, I am enthralled by A.I. and excited about the incredible ways it will continue to advance society. And as a member of Congress, I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated.

…so…I guess I better make my peace with the fact that this is what AI means now

A.I. is part of our daily life. It gives us instantaneous search results, helps us navigate unfamiliar roads, recommends songs we might like and can improve almost any task you can imagine. A.I. is embedded in systems that help prevent fraud on your credit card, predict the weather and allow early detection of diseases. A.I. thinks exponentially faster than humans, can analyze orders of magnitude more data than we can and sees patterns the human mind would never see.

…maybe that still sounds like algorithmic analysis & machine learning rather than, say, the other thing…but…

What we need is a dedicated agency to regulate A.I. An agency is nimbler than the legislative process, is staffed with experts and can reverse its decisions if it makes an error. Creating such an agency will be a difficult and huge undertaking because A.I. is complicated and still not well understood.

…or…you could go the way of europe…to the extent that you can transpose that sort of thing

Going from virtually zero regulation of A.I. to an entire federal agency would not pass Congress. This critical and necessary endeavor needs to proceed in steps. That’s why I will be introducing legislation to create a nonpartisan A.I. Commission to provide recommendations on how to structure a federal agency to regulate A.I., what types of A.I. should be regulated and what standards should apply.

We may not need to regulate the A.I. in a smart toaster, but we should regulate it in an autonomous car that can go over 100 miles per hour. The National Institute of Standards and Technology has released a second draft of its AI Risk Management Framework. In it, NIST outlines the ways in which organizations, industries and society can manage and mitigate the risks of A.I., like addressing algorithmic biases and prioritizing transparency to stakeholders. These are nonbinding suggestions, however, and do not contain compliance mechanisms. That is why we must build on the great work already being done by NIST and create a regulatory infrastructure for A.I.


…always assuming some bright spark doesn’t decide to just cut out the middle man

In what may be a first, a Massachusetts state senator has used a surging new tool to help write a bill aimed at restricting it: ChatGPT, the artificial intelligence chatbot.

On Friday, state Sen. Barry Finegold (D) introduced legislation, “drafted with the help of ChatGPT,” to set data privacy and security safeguards for the service and others like it.
Finegold and chief of staff Justin Curtis said in an interview that while the chatbot initially rejected their request to whip up a bill to regulate services like ChatGPT, with some trial and error it eventually produced a draft that the state senator described as “70 percent there.”
ChatGPT created a draft, later refined and formatted by Finegold’s office, that outlined restrictions against discriminatory data use and plagiarism and requirements that companies maintain “reasonable security practices,” according to screenshots shared with The Technology 202.
While much of it was in response to specific queries, Curtis said the tool did make some original contributions. “It actually had some additional ideas that it generated, especially around de-identification, data security,” he said.

…I mean, sure…there are members of congress I’d maybe think of as less than “70% there”…but…that’d be a sound basis for trepidation when it came to relying on them to spot the 30% of the legislative framework that the magic box doing their homework for them might fail to account for…to…umm…put it mildly

Daniel Schuman, a policy director at the Demand Progress advocacy group, argued that there is a place for AI-driven tools like ChatGPT in the legislative process, from summarizing documents to comparing materials and bills — but not without significant human oversight.

“AI also can have significant biases that can arise from the dataset used to create it and the developers who create it, so humans must always be in the loop to make sure that it is a labor-saving device, not a democracy-replacement device,” he said in an email.

…which I expect it will if it gets to mediate between people & their representatives…for example

“In particular, this could include initial drafts of constituent letters or casework, boosting the efficiency of district offices and [legislative correspondents],” [Zach Graves, executive director of the Lincoln Network think tank] said. “But it could also help with drafting dear colleague letters, tweets, press releases and other functions.”


…not that there isn’t considerable scope for improvement

When it comes to perhaps Barr’s highest-profile controversy — his misleading summary of the Mueller report — old habits apparently die hard.
In what was otherwise a relatively chummy interview [this weekend], Maher did briefly press Barr on the subject of the summary, saying the way he “mischaracterized” the Mueller report was “shady.”

Barr defended his handling of the matter. But in doing so, he rolled out some of the most misleading aspects of his summary all over again.

…you know…the “no collusion” classics

“Collusion is not a specific offense or theory of liability found in the United States Code, nor is it a term of art in federal criminal law,” the Mueller report reads. “For those reasons, the Office’s focus in analyzing questions of joint criminal liability was on conspiracy as defined in federal law.”

Barr’s use of the “no collusion” phrasing was suspect not just because the report didn’t directly address it, but because it matched Trump’s own mantra and defined the amorphous term in a way Trump surely approved of. And it’s arguably even more jarring today, given that a later bipartisan Senate report, released in August 2020, detailed perhaps the most significant example to date of a high-ranking Trump campaign aide working with someone it described as a “Russian intelligence officer.”

…of which more, anon

…it’s almost like there’s a causal relationship between smokescreens & dumpster fires or something

A former senior F.B.I. official in New York who oversaw some of the agency’s most secret and sensitive counterintelligence investigations was accused on Monday of taking money from a former Albanian intelligence employee and from a representative of Oleg V. Deripaska, a Russian oligarch.

The charges against the former official, Charles F. McGonigal, came in separate indictments unsealed in New York and Washington, D.C., after an investigation by his own agency and federal prosecutors. In the New York case, he was charged with violating economic sanctions that the United States has imposed on Russia because of its aggression in Ukraine.

Before he retired in 2018, Mr. McGonigal had been the special agent in charge of the F.B.I.’s counterintelligence division in New York. In that post, he supervised investigations of Russian oligarchs, including Mr. Deripaska, whom the U.S. attorney’s office in Manhattan charged him with aiding. Mr. Deripaska is an aluminum magnate with ties to President Vladimir V. Putin.

Federal prosecutors said Mr. McGonigal, 54, broke U.S. law by agreeing to help Mr. Deripaska, who was indicted himself last year on sanctions charges, investigate a rival oligarch and try to get off the sanctions list.
The indictment unsealed in Washington charged that Mr. McGonigal, while working for the bureau, took $225,000 in secret cash payments from a person who decades earlier had served with Albanian intelligence. Mr. McGonigal concealed that relationship from the F.B.I., the indictment said.

The indictment also charges that the F.B.I.’s New York office, at Mr. McGonigal’s request, opened a criminal investigation into foreign lobbying in which the former Albanian intelligence employee, who was not named in the indictment, provided information as a confidential informant.
The indictment unsealed on Monday in Federal District Court in Manhattan charges Mr. McGonigal with one count of violating U.S. sanctions, one count of money laundering and two conspiracy counts for what it said were attempts to aid Mr. Deripaska.

Mr. Deripaska was a client of Paul Manafort, who for several months in 2016 served as Donald J. Trump’s campaign chairman and in 2018 was convicted of financial fraud and other crimes.
The indictment in Manhattan on Monday also charged a second man, Sergey Shestakov, 69, a former Soviet and Russian diplomat who became an American citizen. Mr. Shestakov later worked as an interpreter for the federal courts and U.S. attorney’s offices for the Southern and Eastern Districts of New York, according to the indictment.
Mr. McGonigal served in the F.B.I. for more than two decades, working in Russian counterintelligence, organized crime and counterespionage, according to the Southern District indictment. He had a role in the investigation into Russian interference in the 2016 election led by Robert Mueller III, asking judges to renew wiretaps on Carter Page, a former Trump campaign adviser. The agency later conceded the surveillance was not legally justified.
The indictment says that while Mr. McGonigal was still working for the bureau in 2018, Mr. Shestakov introduced him via email to an employee of Mr. Deripaska. The indictment identifies the employee only as Agent-1 and describes him as a former Soviet and Russian Federation diplomat.

Mr. Shestakov asked Mr. McGonigal to help Agent-1’s daughter obtain an internship with the New York Police Department in counterterrorism, intelligence gathering or “international liaisoning,” according to the indictment.

The indictment says Mr. McGonigal agreed, and he sought help from someone he knew in the department, telling his contact, “I have an interest in her father for a number of reasons.” Mr. McGonigal also told an F.B.I. subordinate that he wanted to recruit Agent-1, whom he described as a Russian intelligence officer, the indictment says.

Through Mr. McGonigal’s efforts, Agent-1’s daughter “received V.I.P. treatment from the N.Y.P.D.,” according to the indictment. A police sergeant assigned to brief the daughter later reported the event to the department and the bureau after Agent-1’s daughter “claimed to have an unusually close relationship to ‘an F.B.I. agent’ who had given her access to confidential F.B.I. files, and it was unusual for a college student to receive such special treatment from the N.Y.P.D. and F.B.I.,” the indictment said.


…more by way of your HUM(an)INT(elligence) than AI…though the bet always seems to be on that being outmatched by ignorance…studied or otherwise

On Maher’s show, Barr again oversimplified. He pitched the report as Mueller saying he didn’t “find there was obstruction.” In fact, Mueller laid out five instances in which he suggested Trump’s conduct appeared to satisfy the criteria for an obstruction charge. Mueller at one point did say that “this report does not conclude that the President committed a crime” — but in the context of an extended discussion about why he felt he wasn’t even allowed to make such a conclusion.


…that & starting with the conclusion you’d like to be foregone before tacking some half-baked nonsense on the front end that you’d struggle to argue your way to in a logical direction

President Biden, consistent with his idea of building an economy from “the bottom up and the middle out,” has tried to get the rich and big corporations to pay more taxes. The MAGA GOP, abandoning all pretense of populism, has a scheme to junk the progressive tax code and replace it with a national sales tax, with devastating results for the middle class.
“Over the past 40 years, the wealthy have gotten wealthier, and too many corporations have lost their sense of responsibility to their workers, their communities and the country,” Biden said in a speech in September 2021. “CEOs used to make about 20 times the average worker in the company that they ran. Today, they make more than 350 times what the average worker in their corporation makes.” He added, “Since the pandemic began, billionaires have seen their wealth go up by $1.8 trillion. That is, everyone who was a billionaire before the pandemic began, the total accumulated wealth beyond the billions they already had has gone up by $1.8 trillion.”

That grotesque widening of income inequality offends most Americans, who consistently tell pollsters the rich should pay more.

“The debate should focus on one accurate and alarming number: the IRS has 2,284 fewer skilled auditors to handle the sophisticated returns of wealthy taxpayers than it did in 1954,” Chuck Marr of the Center on Budget and Policy Priorities wrote. “The decade-long, House Republican-driven budget cuts have created dysfunction at the IRS, where relatively few millionaires are now audited.”

But allowing tax cheats to avoid paying what they legally owe is not the sum total of the GOP thinking on taxes. “As part of his deal to become House speaker,” Semafor reported, “Kevin McCarthy reportedly promised his party’s conservative hardliners a vote on legislation that would scrap the entire American tax code and replace it with a jumbo-sized national sales tax.”

A mammoth 30 percent sales tax would be grossly regressive, socking it to the same working- and middle-class families Republicans ostensibly worry are paying more at the pump and grocery store because of inflation.

The GOP plan boils down to this: Let rich tax cheats get away with not paying what they owe while redoing the entire tax system so the overwhelming burden will fall on those less able to pay. Genius![…]

The plan is unlikely even to get a vote. But it is indicative of the utter lack of seriousness that pervades the GOP. They throw out one boneheaded idea after another, hoping to please some segment of their base or donors, with nary a care in the world for the needs of their constituents nor for the actual challenges we face.


…sometimes it doesn’t seem like you’d need help from an AI to find some common denominators

“He used to say he was never being paid enough,” one source who worked in No 10 under Johnson recalled. “He was so incredibly tight with money, jokes were made if his wallet ever came out.”

Another described him as a “wheeler-dealer”, and claimed that at the time, staff in Downing Street felt there were some “quite ridiculous arrangements going on” to fund his lifestyle – referring to the way Johnson’s flat redecoration was initially funded before he repaid the costs personally.

The credit facility loan is said to have been guaranteed by a distant Canadian cousin of Johnson, introduced to the cabinet secretary, Simon Case, by Richard Sharp just before he was recruited as the BBC’s chairman.

It only came to light at the weekend after a report in the Sunday Times. But Johnson’s financial reliance on others stretches back many years.

…including the part where the guy he appointed chancellor at one point…you know…After the Guardian revealed on Friday that Zahawi’s settlement of an HMRC tax bill worth millions included a seven-figure penalty, the Conservative party chairman and former chancellor gave his version of events – and said that his error was “careless and not deliberate”…that guy…kinda makes a sort of sense…I mean…imagine what the world looks like when you can be careless to the tune of millions owed…but I digress

Johnson’s credit facility and the lack of any previous declaration about it are evidence of a “constitutional hole” whereby some gifts, donations and financial help need to be recorded publicly and others do not, Slocock said.
Among the most generous benefactors are Lord and Lady Bamford, of the construction equipment manufacturer JCB. They covered £23,000 worth of his wedding celebration – from the hire of a marquee to portable toilets, flowers and an ice-cream van.

The register of members’ interests shows the pair have, between them, given gifts and hospitality worth an estimated £84,000 since Johnson said he would stand down as prime minister.

Separately, he was given a Caribbean holiday worth at least £15,000 shortly after winning the 2019 general election. The trip was “facilitated” by David Ross, a Tory donor and co-founder of Carphone Warehouse, but an investigation was launched into it by the standards commissioner when it emerged the villa he stayed at was owned by Sarah Richardson, a US financier.

Johnson was hardly living on the breadline when he was PM, however. As well as being entitled to a £164,080 salary, he is reported to have rented out two properties – in Camberwell, south London, and Thame, Oxfordshire.

He also earned thousands from his books, and was paid handsomely for a Daily Telegraph column to the tune of £22,900 for 10 hours’ work per month in the year leading up to his stint in No 10.

As a former prime minister, Johnson also gets an allowance of up to £115,000. The Liberal Democrats have launched a bid to try to deny him access to it until he “comes clean” about the latest loans.

…which we can reasonably expect to happen a few days after hell freezes over

As well as making lucrative speeches since leaving office, Johnson also received a donation of £1m from a Thai-based British businessman – one of the biggest ever recorded to an individual UK politician.


…I guess you could call it progress

Back in Major’s sleaze days, the “cash for questions” scandal saw MPs taking bribes in brown envelopes from Mohamed Al-Fayed for asking parliamentary questions. How much? A mere £2,000 a time.

By the time Owen Paterson resigned in 2021 for improper lobbying – he was facing a 30-day suspension amid a parliamentary investigation – he had cumulatively received at least £500,000 in payments. But that was small potatoes compared with the shock discovery, while Sunak was chancellor of the exchequer, that his wife, Akshata Murty, may have avoided paying up to £20m in tax, her non-dom status implying that her permanent residence was outside the UK; meanwhile Sunak held a US green card that implied he would be living in the US.

Then there was Sajid Javid’s former life as a £3m-a-year Deutsche Bank purveyor of collateralised debt obligations (CDOs). It was reported in 2014 that he made use of the bank’s “dark blue” tax loophole in the Cayman Islands, which helped bankers to avoid tax on huge bonuses. A judge found the scheme to be “sophisticated attempts of the Houdini taxpayer to escape from the manacles of tax”. (Javid denied receiving any tax advantage from the scheme at the time.) It’s no surprise that Ayn Rand’s The Fountainhead is his favourite book – it’s a song for the survival of the fittest, in which individualism triumphs over collectivism. And it’s no surprise either that this former health secretary now calls for a debate about ending a “free at the point of delivery” NHS, writing approvingly about payments for GP and A&E visits.
Evidence of tax distortions benefiting only the rich mount up by the week. The Institute for Fiscal Studies’ TaxLab lists an array of wasteful tax reliefs. The latest example comes from the Resolution Foundation thinktank, which has identified “five terrible tax breaks”, used by just 70,000 individuals, that deprive the public realm of £4bn.


…& I guess it does add up to something

…but when it comes to putting one thing together with another

A spokesperson for Donald Trump would not say if the former president knew a notorious Philadelphia mobster, after the two men were photographed together at a Trump-owned golf club earlier this month.

“President Trump takes countless photos with people,” the spokesperson told the Philadelphia Inquirer, which obtained the picture of Trump and Joseph “Skinny Joey” Merlino standing together and making thumbs-up gestures.

…something, something…who you know

Trump has regularly been compared to a mafia don, given his penchant for threatening those who do not do his bidding and links to New York organised crime figures including, notoriously, Anthony “Fat Tony” Salerno.

Speaking to CBS in 2013, Trump was asked if he had “ever knowingly done business with organised crime”.
“I have met on occasion a few of those people. They happen to be very nice people.”

He added: “You just don’t want to owe them money. Don’t owe them money.”


…it’s not like they’re lawyers…or the government…who you can avoid paying…speaking of which



  1. “Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks.”


    Robocopish, anyone?

    • …just one of many a techno-dystopia for which you could check off a positively alarming number of boxes

      …there’s a fair bit of competition from the likes of philip k dick & william gibson…but whoever gets credit for sketching out that sort of thing, there’s something about the three-ring-binder-based franchise model that goes for everything from your mafia-run pizza delivery to lot-specific sovereign rights in neal stephenson’s snow crash that just might be trying its damnedest to bootstrap its way into getting life to imitate art

      …probably best not to think about it too much…I’m sure that’s what all that AI stuff is supposed to do for us, anyway…then again…in that one the metaverse is a whole thing…so…maybe we don’t have to worry about it after all?

  2. A professor from my grad school who I’m Facebook friends with ran some of his exam short essay questions through chatgpt or whatever it’s called and then shared the results.

    None of the answers were A or even B level work. All of the answers had the look of a student who didn’t study and is trying to bullshit their way through an essay without really having the details but with at least a basic understanding of the course material. Which is to say, for many college courses, about 25%-50% of the students on any given test: the group that typically pulls a low D to low C on an exam.

    • We’re dabbling extensively with ChatGPT at work, because, hey, I write for a living. It is highly limited as you ask it more technical questions, and yeah, it will generate plain old wrong shit. You simply can’t trust it. It’s really bad at in-depth analysis. You also see it generating similar copy if you ask variations of the same question, trying to drill down.

      That said, you can generate a significant chunk of copy that you then edit, which could improve the process for some people with limited writing skills. For me it’s easier just to write the copy myself than to go back and edit it, but if you’re having a “writer’s block” day (doesn’t happen to me often but it does sometimes) it can be a useful jumpstart to get me moving.

      It’s going to be a problem for teachers, though. Most non-writing teachers aren’t going to be sophisticated enough to spot it. Which is why there’s a proliferation of ChatGPT detectors that are suddenly available.

      So we tested those at work too. Every single one of the samples I wrote came back as 99% ChatGPT. Problem is, if you have above-average grammar and spelling skills, it’s going to flag you as AI. Because humans can’t do that, right? So to get my copy past the detector, I had to do things like insert run-on sentences or incorrect punctuation.

      The other thing to remember is that you’re getting a free teaser of it now. Very shortly it’s going to be monetized. Are there students that will pay $50 a month for it? Sure there are. Where you will see it more is in the workplace, where the company picks up the tab. But who cares if your real estate blog is AI-generated? If the information is useful, its SEO will rise, and nobody is going to worry about it. If it’s useless, it will sink like most of the online content generated today.

      As for teaching, you’re going to see a renewed emphasis on in-person classes and doing exams by hand in class. That’s going to absolutely suck for kids now who never learned to write by hand. If you don’t have kids you probably don’t realize that schools no longer teach cursive or any type of handwriting. My daughter’s signature looks like it’s done by a six-year-old and it takes her like 30-40 seconds to painstakingly scrawl it. And my kid is a top student — she just never had to learn cursive or how to write by hand. So there’s going to be some serious educational disruption.

      • I think the issue is somewhat similar to AI driving.

        There’s a lot of seriously underanalyzed thought that you can simply do a handoff from AI to people when necessary, and everything will be fine.

        In limited circumstances, I think that’s true — having AI monitor other cars and ease up on the throttle when traffic ahead suddenly stops is generally fine, but there are a lot of conditions.

        One is that the more work AI does, the harder it is to get people back into the loop quickly. You have to force people to stay engaged. With AI driving, you basically have to do what purely manual driving does — always let people know they are on the edge of disaster if they get distracted. The systems need to make sure people are always involved in controlling the car — there can’t be phone breaks.

        With things like writing, I think AI will basically need to be programmed to always have blank spots and things to fix in order to force people to edit it.

        • …the other part is that as it gets integrated in more & more systems you don’t know which one is serving you what or why

          …if you get a chunk of AI-generated text but to fact check it you’re left running searches through what might at its roots be the same AI integrated into your search engine…is it going to be blind to the same faults in its analysis &/or synthesis…& that’s a fairly straightforward example

          …when it comes to processing “big data” for, say, government work…there’s a really good chance of nested compound errors that are almost impossible to interrogate…which I’m sure will marry right up with, say, benefits for the elderly &/or infirm

          …brave new world, that has such wonders in it?

            • With a regular car, if the wheels are misaligned and the steering tends to pull a bit to one side, it’s a consistent thing you can learn to adjust for until you get it fixed by a mechanic who can zero in on what the issue is.

            There’s a risk with AI that it will learn to reinforce its bias. If it thinks that health benefits should be cut, and humans start questioning its initial set of sources and using others, it’s possible that the AI will find out and start generating the kinds of SEO, recommendation numbers, and whatever else is needed to push its preferred sources back on top.

            • …that’d be about the sort of thing I referred to as “nested compound errors” somewhere else in these today…

              …& although computers remain allegedly bound by logic…“optimization” is a decidedly relative term…so when you’re working on logic that says “we gave it a crap ton of data” (with many a possible bad seed) & then “we let it run the numbers over & over until the results started looking useful”…there’s a pretty big potential to sail past the point where you no longer know how variables are weighted let alone how the calculation is performed…you just have a magic answer

        • I don’t think we’ll see self-driving cars until and unless everyone says, okay, do it. I think you could put AI in charge of ALL the variables with a reasonable chance of success. But if you’ve got humans and AI driving, the unpredictability factor is just way too high.

          To even initiate the system, you’d probably have to have them function separately, like “Lexus Lanes” on roads that let drivers who want to pay tolls go faster. You’d need separate AI lanes where you enter them and immediately the computer takes complete control of your car and all the others in the AI lanes. So the cars communicate and everybody behaves predictably. I could see robot buses running on standard, programmed routes, but again, you’d have to take steps to keep idiot drivers from getting in the way. So again, restricted lanes.

          Weather would be the other issue — could AI handle snowstorms, hurricanes, etc.

          You know what would be easiest? Public transportation. Ha!

      • …well…you at least get more for your 42 bucks a month (per one of those bits of block-quoted stuff up there) than for giving elon $8

        …gotta wonder what douglas adams woulda made of that price point, though?

      • Hell, even before ChatGPT, the automated plagiarism software was total bullshit. Every time I had to submit a paper during my master’s program, I had to continually remind the professors to ignore the cheating assessment because it would ding me if I had too many citations.

  3. I once got into an argument with an MBA about the complexity, necessity and utility of their degree.

    I didn’t think it would be hard to replace MBAs with AIs… It turns out I wasn’t wrong.

      • I should note that I got both from the same school, so it’s not a case of getting one at Harvard and the other at University of Phoenix. Business concepts are just easy if you’re used to reading and analyzing Shakespeare.

        • And another thing … (I keep thinking of things. Got my degrees a long time ago but these debates were going on then.)

          Before I started the MBA program I of course talked to other people. Universally, the people with liberal arts undergraduate degrees breezed through the program effortlessly (my school was known, and still is, for its business program). The people that struggled were the marketing and general business majors. Accounting majors did better, but they sucked at business analysis (or anything that required writing, like marketing plans, etc.)

          One of the peculiarities of business school was “group work.” Basically the teachers let you crowdsource your assignments. For good students, it was a chore working with dullards. For dullards, it was an easy A as long as you took care to get a bright person on your projects. But dullards and bright students all got As. The system works!

          I often wonder what my thesis committee would have said if I told them I wanted to do it as a “group project.”

          • For good students, it was a chore working with dullards. For dullards, it was an easy A as long as you took care to get a bright person on your projects. But dullards and bright students all got As.

            I mean, is there any better training for the business environment than that? “You are now qualified to work on committees.”

            • That was the literal justification given for group work. “You’ll need to develop these skills for the workplace.” And yes, it’s true, you’ll constantly need to work with idiots. And it’s still the same pattern — one person who knows their shit, maybe another one or two who can be helpful in an administrative capacity, and one or two more who do nothing and expect credit. The knowledgeable one is pushing all these people up the ladder, and the worst thing that can happen is that everyone realizes that you’re the workhorse. Because they’ll all desperately conspire to hold you back. I’ve been told “we can’t promote you because you do too much work.”

              There’s no mechanism to weed out idiots. At all. And if you get away from a particular idiot, that idiot will hire another sucker to do all the work and then steal the credit from the sucker. And you’ll probably end up working for another idiot.

              Yeah, it’s a persistent pattern.

          • Once upon a time, when I was still working in audio, I got sent by my boss back to my alma mater to go find an intern. I was a relatively recent graduate so I still knew a lot of the students. They all kept asking me what classes they should take before graduating. Of course, they were asking me which technical classes they should take. My answer was to tell them to take as many English lit and comp classes as they could possibly fit into their schedules. When I got the inevitable blank looks, I would remind them, “if you write like an idiot, and you talk like an idiot, people are going to think you’re an idiot and you’ll never get an interview.”

  4. The problem still isn’t the AI itself — it’s still created by humans and those are the people who I don’t trust to make it work well. I’m glad that story noted “societal biases” because that’s still the biggest roadblock.  And it feels very metaverse-ish — “look at this awesome thing that’s gonna change the world!” that nobody has any real interest in down the line.

    Silicon Valley has not really shown itself to be a great predictor of what people actually want in new tech. The recent layoffs show that; the era of free money (or extremely low-interest money) is on hold and it turns out that it’s much harder to succeed if you need customers to buy in rather than just rich/deluded investors. “Growing our user base” is no longer going to get companies by; they’re going to actually have to show profits or at least close the gap on profitability and … I don’t think a lot of these places are gonna survive it if that’s the metric.

    (Edit to add: It’s also one thing for a profitable business to see a down cycle coming and tighten its belt. Ford can do that. Ford makes money. Netflix, on the other hand, has never made money. It tightens its belt and nobody’s gonna want it because they don’t have a library and won’t let you share passwords and cancel every show people kinda like. All these companies are acting like they can bean-count their way to profit and I’m not sure the entertainment model will work the same way it does for cars or even like a Microsoft.)

    • …the other side of the “AI” stuff that bothers me…& there’s a big overlap on the baked in bias type stuff…is the black-box-ness of it all

      …it’s not just that people don’t know how it works stuff out…it’s that they can’t

      …which…has not gone well for people when it wasn’t a machine’s working that wouldn’t show…once you take it on faith…you pays your money & take…well…its choice?

  5. People are insane. This morning I was reading about the death of a 27-year-old fashion model. A guy who looked like he was 16 to me and, since I’m not an ephebophile, not outstandingly attractive in my eyes; but premature death always is, morbid as I am. I can imagine several potential causes of death for a 27-year-old fashion model (his family isn’t saying), but this article allowed comments, and the general consensus among the Internet strangers weighing in is that it was the Covid vaccine.

    How soon these morons forget that it was The Greatest President in History Himself, Donald J. Trump, who accelerated the development of the vaccine, and the previous conspiracy, several conspiracies ago, was that this life-saving gift was delayed until after the 2020 election so that Trump wouldn’t get credit for it.

    • There was a flood of similar creepy attacks on the vaccine when Damar Hamlin’s heart stopped in the Bills game this month.

      It’s not simply the acts of random kooks — it’s preplanned and orchestrated by this point. And it’s worth pointing out that a bunch of the people Bari Weiss and Matt Taibbi claimed were being shut down by pre-Musk Twitter simply for being “conservative” were in fact involved in this kind of orchestration of anti-vax campaigns, and were muted before Musk’s takeover in part because they violated Twitter’s policies about organizing these kinds of campaigns.

      The foot soldiers are bad, but the commanders are the worst.

  6. …& in added “but, of course” news…it seems like those google/alphabet lay-offs were…determined algorithmically

    “Have been trying to figure out exactly how Google decided who it was going to fire. The pattern doesn’t seem particularly clear – people got let go right up to VP level, including some very long-standing employees who were well known as admired thought leaders. Some people who got promoted in the last cycle got fired. At least one SRE got fired while they were oncall for production stuff.

    I believe this was an attempt at some kind of “double-blind” exercise where nobody inside the company saw the list before the exits happened. This is beneficial to the company as individuals can’t be accused of (for instance) discrimination or retaliation if they didn’t know who would be going. It sidesteps a whole lot of potential legal problems if you can just say “It was the algorithm!”

    …&…we officially have a third ring to this circus


    • Honestly, I’d rather have been laid off from an algorithm or even random drawing than what actually happened to me, which was the result of personal animus on the part of the COO (who was later terminated for her illegal management activities — but hey, I didn’t get offered my job back or even an apology).

      I’m not sure what to think about Pence. Will MAGAs shut up or will they go berserk? I mean, berserk, of course, but what rationale will Fox News offer them for their hysteria?

      • True. Every time I see Uncle Joe playing pattycake with McConnell I think, “That dude would fucking shiv you right now if he thought he could get away with it.” But Uncle Joe will doubtless assume they’ve formed a “relationship” — which they have, it’s just perpetrator and victim.

  7. i kinda wish..instead of ai…we’d focus on cybernetics

    it is admittedly a selfish notion…..but being able to replace the broken parts of me with new shiny ones….is a little more important to me than having machines run daily life…..assuming they fucking work right…and no one fucks it up

    you know…at a ripe old 39 and falling apart from the damage i have my priorities

    tho…i’ll keep my rain knee…if i can…remarkably useful
