…if talk is cheap [DOT 1/6/23]

you'd think the cost would be lower...

…hmmm

Hundreds of artificial intelligence scientists and tech executives signed a one-sentence letter that succinctly warns AI poses an existential threat to humanity, the latest example of a growing chorus of alarms raised by the very people creating the technology.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” according to the statement released Tuesday by the nonprofit Center for AI Safety.
[…]
Altman and others have been at the forefront of the field, pushing new “generative” AI to the masses, such as image generators and chatbots that can have humanlike conversations, summarize text and write computer code. OpenAI’s ChatGPT bot was the first to launch to the public in November, kicking off an arms race that led Microsoft and Google to launch their own versions earlier this year.

…ok…let’s say they’re right…the thing they might build someday…that they look to have lurched closer to figuring out…that thing could pose an existential threat…if we give it the ability

Skeptics also point out that companies that sell AI tools can benefit from the widespread idea that they are more powerful than they actually are — and they can front-run potential regulation on shorter-term risks if they hype up those that are longer term.
[…]
“We need widespread acknowledgment of the stakes before we can have useful policy discussions,” [Dan] Hendrycks [a computer scientist who leads the Center for AI Safety] wrote in an email. “For risks of this magnitude, the takeaway isn’t that this technology is overhyped, but that this issue is currently underemphasized relative to the actual level of threat.”

…but…why-not-both.gif?

Industry leaders are also stepping up their engagement with Washington power brokers. Earlier this month, Altman met with President Biden to discuss AI regulation. He later testified on Capitol Hill, warning lawmakers that AI could cause significant harm to the world. Altman drew attention to specific “risky” applications including using it to spread disinformation and potentially aid in more targeted drone strikes.

“These technologies are no longer fantasies of science fiction. From the displacement of millions of workers to the spread of misinformation, AI poses widespread threats and risks to our society,” Sen. Richard Blumenthal (D-Conn.) said Tuesday. He is pushing for AI regulation from Congress.
[…]
Altman suggested in a recent blog post that there likely will be a need for an international organization that can inspect systems, test their compliance with safety standards, and place restrictions on their use ― similar to how the International Atomic Energy Agency governs nuclear technology.

…if I’m honest, though…I think if you were actually that kind of concerned about the potential threat of a “true” AGI…an actual general intelligence…a mind that isn’t human…you wouldn’t be so busily muddying that definition by applying it to predictive text on steroids the way we all seem to be with these “generative pre-trained transformer” bots

Others have implied that the comparison to nuclear technology may be alarmist. Former White House tech adviser Tim Wu said likening the threat posed by AI to nuclear fallout misses the mark and clouds the debate around reining in the tools by shifting the focus away from the harms it may already be causing.

https://www.washingtonpost.com/business/2023/05/30/ai-poses-risk-extinction-industry-leaders-warn/

…& besides…the threats the current crop pose were threats already…& ones we haven’t done as much about as you’d think given how much we’ve found out about the ways they’ve been quietly blowing up in our face…though…the people calling a lot of the shots have certainly settled on some eye-wateringly-profitable answers that can do things like make a $25million fine a rounding error in the end of year accounts

Regulators said Wednesday that Amazon failed to delete children’s recordings and location information, in some cases retaining, until mid-2019, transcripts that parents had specifically directed Alexa to erase.

More than 800,000 children under the age of 13 have their own Alexa profiles, according to the lawsuit filed by the Justice Department on behalf of the Federal Trade Commission. The voice assistant is especially popular with young children who can’t read but can access information and entertainment by talking to the device.

The U.S. government alleges that, by recording children and using transcripts of those recordings to improve its product even after deletion requests, Amazon violated the Children’s Online Privacy Protection Act of 1998, a law that has recently been enforced against other popular tech companies including Fortnite-maker Epic Games and YouTube.

The commission is also fining the company over Ring, Amazon’s home surveillance company best known for its doorbell camera. Regulators say the company illegally allowed employees and contractors to view private videos of customers’ homes and are fining the company an additional $5.8 million.

…$5.8million…it’d change your life…but will it moderate the behavior of a company whose owner built the world’s most transparent edifice of ballistic overcompensation?

“At Amazon, we take our responsibilities to our customers and their families very seriously,” said Amazon spokeswoman Parmita Choudhury in a statement. “Our devices and services are built to protect customers’ privacy, and to provide customers with control over their experience. While we disagree with the FTC’s claims regarding both Alexa and Ring, and deny violating the law, these settlements put these matters behind us.”

…seriously? …our shit is “built to protect customers’ privacy”…in the same breath as “we disagree” that it’s just been demonstrated to the court’s satisfaction that what that shit did was diametrically opposed to the claim you just made? …I’ll take “what is not an apology?” for however many millions upon millions of dollars…or I would…chance’d be a fine thing

Choudhury said Amazon agreed to “remove child profiles that have been inactive for more than 18 months” as part of the settlement, and worked with the FTC to expand a compliant program called Amazon Kids. Regarding Ring, she said Amazon addressed privacy problems “before the FTC began its inquiry.”

…sure-jan.gif

Federal regulators said the case against Amazon is intended to send a signal to all tech companies that are racing to use mass data to refine AI models, as the recent release of ChatGPT sparks an AI arms race in Silicon Valley.

“Machine learning is no excuse to break the law,” said commissioner Alvaro M. Bedoya, in a statement joined by FTC Chair Lina Khan (D) and commissioner Rebecca Kelly Slaughter. “Claims from businesses that data must be indefinitely retained to improve algorithms do not override legal bans on indefinite retention of data.”
[…]
The FTC for years has also been investigating Amazon for potential violations of U.S. antitrust laws, in a wide-ranging case that opened under the Trump administration in 2019. That potential case is being closely watched as a bellwether of Khan’s ability to rein in the power and influence of the tech industry. The prominent tech critic rose in notoriety for an academic paper she wrote in the Yale Law Journal called “Amazon’s Antitrust Paradox,” which criticized the company’s dominance.
[…]
In the absence of action in Washington, state legislatures have recently become more active in passing laws intended to keep children safe online.

California last year passed the Age Appropriate Design Code, which would require companies to consider children’s safety in the design of their products. NetChoice, an industry group that counts Amazon as a member, has sued to block the law from taking effect. A patchwork of state laws governing kids’ time online is emerging, as Utah and Arkansas adopt laws that require social media sites to verify the ages of users.

https://www.washingtonpost.com/technology/2023/05/31/amazon-alexa-ring-ftc-lawsuit-settlement/

…& when they’re not helping themselves…they’re just not helping

Meta is threatening to block users in California from sharing news articles on its social media networks to protest a state legislative proposal that would force tech companies to pay publishers for their content.

The social media giant said Wednesday that if the California Journalism Preservation Act passes, the company would “be forced” to pull news from Facebook and Instagram in the state rather than agree to pay news outlets the journalism usage fee that the bill would require.

Meta’s stand mirrors its responses to a wave of regulatory proposals around the world that aim to bolster the struggling news industry by requiring social media platforms to negotiate deals with news outlets for content shared on their platforms. Over the years, traditional news publishers have lost key revenue sources while tech companies such as Facebook and Google became the predominant beneficiaries of the digital advertising market.
[…]
In recent years, Meta has threatened to pull news from its platforms in protest of similar proposals in Australia and Canada. The law eventually passed in Australia and has been credited with directing an estimated $130 million annually to news outlets from Meta and Google. The Canadian proposal is still under consideration.

Lawmakers in Washington, D.C., dropped a measure last year that would have created a temporary carve-out in antitrust law to allow publishers to band together to negotiate with the tech giants over the distribution of their content, after Meta said it would “consider removing news from our platform” if it passed.

…maybe it makes sense that something that on balance you’d think would have aggregated beyond the point of being this easy to brush off…can keep getting brushed off if you can keep it contained in sufficiently separated pieces…seems to work?

“This threat from Meta is a scare tactic that they’ve tried to deploy, unsuccessfully, in every country that’s attempted this,” Wicks said in a statement. “It’s egregious that one of the wealthiest companies in the world would rather silence journalists than face regulation.”

[…I’ll come back to this one]

…it’s…almost like we’re…I dunno

Beijing blamed U.S. sanctions on its defense minister for its rejection of an invitation to meet with Defense Secretary Lloyd Austin in Singapore. [NBC]

Elon Musk meets Chinese foreign minister, who calls for ‘mutual respect’ in U.S. relations [NBC]

…going about this all wrong?

The scale of plastic pollution is growing, relentlessly. The world is producing twice as much plastic waste as two decades ago, reaching 353 million tonnes in 2019, according to OECD figures.  

The vast majority goes into landfills, gets incinerated or is “mismanaged”, meaning left as litter or not correctly disposed of. Just 9 percent of plastic waste is recycled. 

Ramping up plastic recycling might seem like a logical way to transform waste into a resource. But recent studies suggest that recycling plastic poses its own environmental and health risks, including the high levels of microplastics and harmful toxins produced by the recycling process that can be dangerous for people, animals and the environment.  
[…]
Microplastic release is not the only flaw in the system. Recycling plastics means working with unregulated toxic chemicals. 

Plastics are made with as many as 13,000 chemicals, according to a UN report this month, and 3,200 of those have “hazardous properties” that could affect human health and the environment. Many more have never been assessed and may also be toxic, according to a report from Greenpeace released last week. 

In addition, “only a very, very small portion of those chemicals are regulated globally”, said Therese Karlsson, science and technical adviser at the International Pollutants Elimination Network (IPEN). “Since there’s no transparency [in the market], there’s no way for people to know which plastics contain toxic chemicals and which don’t.” 

The risk these chemicals pose increases among recycled plastics, as products with unknown compositions are heated and mixed together. 
[…]
The share of plastic waste that is recycled globally is expected to rise to 17 percent by 2060, according to figures from the OECD. But recycling more will not address a major issue: after being recycled once or twice, most plastics come to a dead end.  

“There’s a myth with plastic recycling that if the quality is good enough the plastics can be recycled back into plastic bottles,” says Natalie Fée, the founder of City to Sea, a UK-based environmental charity.  

“But as it goes through the system, it becomes lower- and lower-grade plastic. It’s down-cycled into things like drain pipes or sometimes fleece clothing. But those items can’t be recycled afterwards.” 

It is therefore difficult to make the case that recycled plastic is a sustainable material, said Graham Forbes, Global Plastics Campaign leader at Greenpeace USA, in a statement this week. 

“Plastics have no place in a circular economy. It’s clear that the only real solution to ending plastic pollution is to massively reduce plastic production.”  

And it is impossible for increased recycling to keep pace with the amount of plastic waste being produced – which is expected to almost triple by 2060.  

“There’s no way that we can recycle our way out of this,” added Karlsson. “Not as it works today. Because today, plastic recycling is not working.” 

https://www.france24.com/en/environment/20230530-tackling-plastic-pollution-we-can-t-recycle-our-way-out-of-this

…so

That generative A.I. has largely supplanted crypto in the eyes of founders and venture capitalists alike is not exactly surprising. When OpenAI released ChatGPT late last year, it sparked a new craze at a time when the collapsing crypto and tech markets had left many investors and would-be entrepreneurs adrift, unsure of where to put their capital and time. Suddenly users everywhere were realizing that A.I. could now respond to verbal queries with a startling degree of humanlike fluency. “Large language models have been around for a long time, but their uses were limited,” says Robert Nishihara, a co-founder of Anyscale, a start-up for machine-learning infrastructure. “But there’s a threshold where they become dramatically more useful, and I think now it’s crossed that.”

One appeal of generative A.I. is that it offers something for every would-be entrepreneur. For the technically minded, there is research to be done. For the business types, it’s easy to create applications on top of the OpenAI platforms. For the philosophically inclined, A.I. offers interesting avenues through which to explore what it means to be conscious and human. And unlike crypto, especially now, A.I. is a more credible field to be in for mainstream techies. Its products have already achieved significant traction among consumers — ChatGPT is believed to be the fastest app ever to hit 100 million users — and some of the figures at its forefront are familiar faces, now in their second acts, like Sam Altman, formerly the president of the start-up accelerator Y Combinator, and Greg Brockman, formerly the chief technology officer at Stripe, the payments-processing company. In short, you can’t help thinking that, as one friend recently proclaimed to me, “Everyone in S.F. is either starting or running an A.I. company or starting or running an A.I. fund.”
[…]
A few themes characterize the sorts of projects the HF0 fellows have been working on. On the one hand, there are applications to automate tedious business tasks like copywriting or spreadsheet wrangling. A company called Fileread falls into this category. Its law-firm customers upload all the documents relevant to a particular case into an online portal; Fileread indexes those documents into a special database that enables users to search the documents not only for exact terms like “truck” or “James,” but also for broader questions like “who made the transaction?” or “what are the relevant cases?” Under the hood, Fileread first fetches the most relevant documents from its database, then adds those documents to a user’s question and sends the whole, long query to the OpenAI application programming interface, or A.P.I. Fileread then spits out an answer, powered by the same large language models behind ChatGPT.
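…the pipeline described there is what the trade calls retrieval-augmented generation: score documents against the question, prepend the best matches to the prompt, then send the whole thing to the model’s API. A minimal sketch, with a crude word-overlap scorer standing in for Fileread’s real index (which the article doesn’t detail), and the actual API call omitted since it needs credentials:

```python
import re

def tokens(text):
    """Lowercased alphanumeric words, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def score(question, document):
    """Crude relevance: fraction of question words present in the document."""
    q = tokens(question)
    return len(q & tokens(document)) / max(len(q), 1)

def retrieve(question, documents, k=2):
    """Return the k documents most relevant to the question."""
    return sorted(documents, key=lambda d: score(question, d), reverse=True)[:k]

def build_prompt(question, documents, k=2):
    """Assemble the long query: retrieved context first, then the question."""
    context = "\n\n".join(retrieve(question, documents, k))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Toy "case file" standing in for uploaded legal documents:
docs = [
    "Invoice 118 records a transaction made by James Ortega on 3 March.",
    "The truck was leased from a depot in Fresno.",
    "Quarterly filings list no related-party transactions.",
]

prompt = build_prompt("who made the transaction?", docs, k=1)
# `prompt` is what would be sent to a hosted model's API; the call itself
# is left out here.
print(prompt)
```

…the answer “powered by the same large language models behind ChatGPT” then just means the model completes that prompt with the retrieved evidence in front of it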

Without A.I., identifying and crafting a legal narrative by piecing together textual evidence from thousands of sources is a painstakingly manual process. Most of Fileread’s customers specialize in business litigation, including antitrust and liability cases. Sometimes they are paid on contingency, which means when they succeed, they typically get a percentage of the award or settlement, but when they lose, they get nothing. Firms need the A.I. to efficiently search for evidence in the documents that might, for example, either establish or refute liability. “They don’t have the manpower or the budget to do unlimited document review,” says Chan Koh, a Fileread founder and an HF0 fellow. “They want to spend the minimal amount of effort in order to win the case.”

Other HF0 fellows have been creating applications that lean into A.I.’s seemingly human affect in order to tackle some psychological need. For instance, Brian Basham, who has worked in Google’s Brain division and since 2018 has been a life coach in California, is working on Thyself, a subscription service for “guided emotional inquiry” that currently uses A.I. and human coaches but will eventually transition fully to A.I. I met him and his employee Maverick Kuhn over dinner one night at the Archbishop’s Mansion. After Kuhn waxed rhapsodic about a four-week-long retreat he attended last summer, called Sleepawake, I asked him whether the experience would have been as great if the facilitators had said and done all the same things but been A.I.’s. “Probably not,” he conceded. “That would very much be a disembodied head.”

A.I. and emotional regulation might seem like an odd juxtaposition, but it makes sense that emotional labor — at the end of the day, just another form of labor — could be one of the first job categories to be transformed by automation. And yet, setting aside its effectiveness, there’s something odd about using A.I. to manage our human brains when it’s not clear that the A.I. brain is at all similar to ours. “We’re obviously trying to anthropomorphize A.I., make it in our image,” said Matthew Rastovac, the founder of Respell, a tool that lets you create A.I. apps without doing any coding. “Because we don’t really know how else to build and understand a new kind of intelligence. But I think it’s much more likely that it’s going to be like a reptile, in that it has its instincts, but we can’t understand what’s going on inside its brain and listen to its actual thoughts.” We were sitting on the roof of Atmosphere, a hacker house in Nob Hill that he helped found; all around us, San Francisco was enchanting in the afternoon light. Earlier, he paraphrased for me some lines that he liked from Season 2 of “Westworld” that spoke to how early we still are, and how blinkered, when it comes to understanding this technology: “Sanity is a very narrow sliver of the possibilities of mind. Because we have culturally accepted norms, we have a certain way of acting and thinking and speaking, and if you deviate from that a little too much, then you’re, at best, weird, and at worst, clinically insane.”
[…]
A.G.I. stands for artificial general intelligence, a phrase that has come to represent a potential dream goal for A.I.: a machine intelligence with the flexibility to handle any intellectual task that humans can. A.G.I. House, it turns out, is a $68 million mansion in the small town of Hillsborough, 25 minutes from downtown Palo Alto. The mansion has a long thoroughfare of ferns out front, a pool and a barbecue pit in the back. Rocky Yu, previously the chief executive of an augmented-reality start-up, runs A.G.I. House, overseeing both its 10 residents and a raft of community events. He is warm and smiley and exceedingly well connected in the local A.I. community.

The crowd at that night’s GPT-4 hackathon was so large as to render the Wi-Fi basically nonfunctional. Every room overflowed with hackers crowding around whiteboards. In the kitchen, Chinese takeout was laid out on a table. A smattering of investors were present to check out the demos, which started at 8 p.m. with short speeches from the organizers. The speeches were all variations on a theme: We are living in a momentous time. Maybe in a few decades from now, we’ll look back at all these seminal A.I. achievements and see that they all came from this house in Hillsborough.

As at HF0, the demos here alternated between business uses and personal applications — a chatbot that impersonates business gurus like Mark Cuban, the owner of the Dallas Mavericks basketball team and a judge on “Shark Tank,” the business-reality TV show, and that allows you to ask for business advice; or an A.I. sommelier that will take your dinner menu and suggest an appropriate wine pairing. Six months ago, any one of these projects might have seemed remarkable, but the arrival of ChatGPT has remade expectations. “The one pattern I’m starting to see is that ChatGPT is the killer app,” the technologist Diego Basch has written on Twitter. “None of the tools built on top of the A.P.I. have been as useful to me.” Indeed, if you are building something on top of OpenAI’s A.P.I., it does seem as though your app’s marginal value has to be extremely high if it is to avoid being bulldozed by either OpenAI itself or one of the big tech companies like Google and Microsoft (or even later-stage start-ups that are rapidly rolling out A.I.-enabled features in their products).

As two analysts at N.E.A., an investment firm, put it in a recent report, generative A.I. may not be as disruptive to established businesses, and beneficial to start-ups, as previous big shifts in tech platforms. “Unlike with the prior shifts, incumbents do not need to re-architect their entire products to adopt this new platform shift,” the analysts wrote. “In addition, this shift favors companies with bigger, proprietary data sets which can give an edge to more established companies.”

www.nytimes.com/2023/05/31/magazine/ai-start-up-accelerator-san-francisco.html

…anyway

…where was I with that thing about costs & benefits of tech & regulation?

“The bigger picture here is that a ban on publisher content would be a lose-lose situation for Facebook and for publishers,” said Jasmine Enberg, an analyst who covers social media for analytics firm Insider Intelligence. “Despite what Meta says, news does generate a ton of engagement for Facebook in particular, which brings in ad dollars.”

…surely…unless I’m just under-caffeinated this morning…in which case go on ahead & tell me what I’m missing…the problem isn’t inherent in the tech itself any more than it is in the platforms for social media…it’s what people do with that stuff that makes it dangerous…not least the people making the most money…& outside of your big tech behemoths…& your video gaming industry…the biggest slice of internet-based revenue goes to straight-up crime…fraud alone reports some serious ballooning of misappropriated funds since the advent of the internet & mobile phones…much of which is accentuated by leveraging a sense of urgency

If California enacts the law and Meta follows through on its threat, it would mark the first time the company has blocked news content in the U.S.

https://www.washingtonpost.com/technology/2023/05/31/meta-california-news-bill/

…& let’s not over-correct here, either…because there’s a sound argument to be made that a lot of “news” content they didn’t block has a history of being exactly the sort of problematic that people are trying to sound desperately serious about when it’s pumped out by an artificial imagination

Recently, researchers asked two versions of OpenAI’s ChatGPT artificial intelligence chatbot where Massachusetts Institute of Technology professor Tomás Lozano-Pérez was born.

One bot said Spain and the other said Cuba. Once the researchers had the bots debate their answers, the one that said Spain quickly apologized and agreed with the one with the correct answer, Cuba.

…I made a kinda-joke the other day about how socrates would maybe have been a fan of the dialectic aspects of how these things operate…but…I mean…depending on how you look at it

The finding, in a paper released by a team of MIT researchers last week, is the latest potential breakthrough in helping chatbots to arrive at the correct answer. The researchers proposed using different chatbots to produce multiple answers to the same question and then letting them debate each other until one answer won out. The researchers found using this “society of minds” method made them more factual.
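…mechanically, the “society of minds” loop is simple enough to sketch: several model instances answer, see each other’s answers, and revise until they agree or a round limit is hit. The agents below are stand-in functions, not real chatbots, and the revision behavior is my assumption for illustration:

```python
from collections import Counter

def debate(agents, question, rounds=3):
    """Run answer/revise rounds; return the majority answer at the end."""
    answers = [agent(question, []) for agent in agents]
    for _ in range(rounds):
        if len(set(answers)) == 1:  # consensus reached, stop early
            break
        # each agent revises after seeing its peers' previous answers
        answers = [agent(question, answers) for agent in agents]
    return Counter(answers).most_common(1)[0][0]

# Stand-in agents: one starts out wrong but defers to the majority view,
# mimicking the apologetic bot in the anecdote above.
def confident_agent(question, peers):
    return "Cuba"

def deferential_agent(question, peers):
    if peers:
        return Counter(peers).most_common(1)[0][0]
    return "Spain"

print(debate([confident_agent, confident_agent, deferential_agent],
             "Where was Tomás Lozano-Pérez born?"))  # → Cuba
```

…the bet is that independent samples rarely hallucinate the *same* wrong answer, so agreement is evidence of fact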

…that’s about what that is…only…we don’t actually have a lot of first hand stuff to go on about what socrates would say about anything…we have to make do with what plato said he might have said…& that might not invalidate the model of a thought process on display…when it comes to some of the specifics…validity is in the eye of the beholder…& the artificial ones seem to blink a lot

Figuring out how to prevent or fix what the field is calling “hallucinations” has become an obsession among many tech workers, researchers and AI skeptics alike. The issue is mentioned in dozens of academic papers posted to the online database Arxiv and Big Tech CEOs like Google’s Sundar Pichai have addressed it repeatedly. As the tech gets pushed out to millions of people and integrated into critical fields including medicine and law, understanding hallucinations and finding ways to mitigate them has become even more crucial.

Most researchers agree the problem is inherent to the “large language models” that power the bots because of the way they’re designed. They predict what the most apt thing to say is based on the huge amounts of data they’ve digested from the internet, but don’t have a way to understand what is factual or not.
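…the “predict the most apt thing to say” mechanics can be shown in miniature with a toy bigram model: count which word follows which in the training text, then always emit the most frequent continuation. Nothing in the machinery represents whether the output is true — only whether it is likely. (A real LLM is vastly more sophisticated, but the objective is the same shape.)

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count, for each word, how often each other word follows it."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1
    return follows

def generate(follows, start, length=5):
    """Greedy generation: always pick the statistically likeliest next word."""
    out = [start]
    for _ in range(length):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(nxt.most_common(1)[0][0])
    return " ".join(out)

# Invented training text in which the false claim is simply more frequent:
corpus = ("the professor was born in spain . "
          "the professor was born in spain . "
          "the professor was born in cuba .")
model = train(corpus)
print(generate(model, "born", length=2))  # → born in spain
```

…fluent, statistically well-supported, and wrong — which is roughly the hallucination problem in three lines of output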

…might matter less if more people were better at that part…but I digress

Already, when Microsoft launched its Bing chatbot, it quickly started making false accusations against some of its users, like telling a German college student that he was a threat to its safety. The bot adopted an alter-ego and started calling itself “Sydney.” It was essentially riffing off the student’s questions, drawing on all the science fiction it had digested from the internet about out-of-control robots.

Microsoft eventually had to limit the number of back-and-forths a bot could engage in with a human to prevent it from happening more.

In Australia, a government official threatened to sue OpenAI after ChatGPT said he had been convicted of bribery, when in reality he was a whistleblower in a bribery case. And last week a lawyer admitted to using ChatGPT to generate a legal brief after he was caught because the cases cited so confidently by the bot simply did not exist, according to the New York Times.

Even Google and Microsoft, which have pinned their futures on AI and are in a race to integrate the tech into their wide range of products, have missed hallucinations their bots made during key announcements and demos.

None of that is stopping the companies from rushing headlong into the space. Billions of dollars in investment are going into developing smarter and faster chatbots and companies are beginning to pitch them as replacements or aids for human workers. Earlier this month OpenAI CEO Sam Altman testified before Congress, saying AI could “cause significant harm to the world” by spreading disinformation and emotionally manipulating humans. Some companies are already saying they want to replace workers with AI, and the tech also presents serious cybersecurity challenges.
[…]
Hallucinations have also been documented in AI-powered transcription services, adding words to recordings that weren’t spoken in real life. Microsoft and Google using the bots to answer search queries directly instead of sending traffic to blogs and news stories could erode the business model of online publishers and content creators who work to produce trustworthy information for the internet.
[…]
Depending on how you look at hallucinations, they are both a feature and a bug of large language models. Hallucinations are part of what allows the bots to be creative and generate never-before-seen stories. At the same time they reveal the stark limitations of the tech, undercutting the argument that chatbots are intelligent in a way similar to humans by suggesting that they do not have an internalized understanding of the world around them.

…so-crates to the rescue?

Manakul and a group of other Cambridge researchers released a paper in March suggesting a system they called “SelfCheckGPT” that would ask the same bot a question multiple times, then tell it to compare the different answers. If the answers were consistent, it was likely the facts were correct, but if they were different, they could be flagged as probably containing made-up information.
[…]
“It doesn’t have the concept of whether it should be more creative or if it should be less creative,” Manakul said. Using their method, the researchers showed that they could eliminate factually incorrect answers and even rank answers based on how factual they were.
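…the core of that self-checking idea fits in a few lines: sample several answers to the same question and flag the response as suspect when the samples disagree. A sketch under stated assumptions — `sample_answer` is a stand-in for repeated chatbot calls, and exact-string agreement replaces the paper’s more nuanced sentence-level consistency scoring:

```python
import itertools
from collections import Counter

def consistency(answers):
    """Fraction of samples that agree with the most common answer."""
    top_count = Counter(answers).most_common(1)[0][1]
    return top_count / len(answers)

def self_check(sample_answer, question, n=5, threshold=0.8):
    """Sample n answers; flag as likely made up if agreement is low."""
    answers = [sample_answer(question) for _ in range(n)]
    score = consistency(answers)
    return {"answers": answers, "consistency": score,
            "flagged": score < threshold}

# Stand-in sampler that contradicts itself, as a hallucinating bot might:
_cycle = itertools.cycle(["Spain", "Cuba", "Spain", "France", "Spain"])
result = self_check(lambda q: next(_cycle), "Where was he born?")
print(result["flagged"])  # inconsistent samples → True
```

…same wager as the debate method: the model is unlikely to invent the identical falsehood five times running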

It’s likely a whole new method of AI learning that hasn’t been invented yet will be necessary, Manakul said. Only by building systems on top of the language model can the problem really be mitigated.
[…]
By limiting its search-bot to corroborating existing search results, the company has been able to cut down on hallucinations and inaccuracies, said Google spokeswoman Jennifer Rodstrom. A spokesperson for OpenAI pointed to a paper the company had produced where it showed how its latest model, GPT-4, produced fewer hallucinations than previous versions.

Companies are also spending time and money improving their models by testing them with real people. A technique called reinforcement learning with human feedback, where human testers manually improve a bot’s answers and then feed them back into the system to improve it, is widely credited with making ChatGPT so much better than chatbots that came before it. A popular approach is to connect chatbots up to databases of factual or more trustworthy information, such as Wikipedia, Google search or bespoke collections of academic articles or business documents.

…any which way you look at it, though…there’s a saying or two about apples & trees that keep coming to mind

“We’ll improve on it but we’ll never get rid of it,” Geoffrey Hinton, whose decades of research helped lay the foundation for the current crop of AI chatbots, said of the hallucinations problem. He worked at Google until recently, when he quit to speak more publicly about his concerns that the technology may get out of human control. “We’ll always be like that and they’ll always be like that.”

https://www.washingtonpost.com/technology/2023/05/30/ai-chatbots-chatgpt-bard-trustworthy/

…exploiting loopholes in an attempt to gain advantage while others have to cope with more of the downside than you do…is arguably a pretty strong indicator of approaching something like a human, after all

For years, ships wanting to hide their whereabouts have resorted to turning off the transponders all large vessels use to signal their location. But the tankers tracked by The Times go beyond this, using cutting-edge spoofing technology to make it appear they’re in one location when they’re really somewhere else.

During at least 13 voyages, the three tankers pretended to be sailing west of Japan. In reality, they were at terminals in Russia and shipping oil to China.

The vessels are part of a so-called dark fleet, a loose term used to describe a hodgepodge array of ships that obscure their locations or identities to avoid oversight from governments and business partners. They have typically been involved in moving oil from Venezuela or Iran — two countries that have also been hit by international sanctions. The latest surge of dark fleet ships began after Russia invaded Ukraine and the West tried to limit Moscow’s oil revenue with sanctions.

“The type of spoofing we are seeing is uncommon and sophisticated,” said David Tannenbaum, a former sanctions compliance officer at the U.S. Treasury, referring to the tankers identified by The Times. “It definitely looks like evasion on all parts.”

To date, it’s been rare to prove the true location of a ship pretending to be somewhere else. But a Times analysis of publicly available shipping data, satellite imagery and social media footage helped clearly establish that the tankers were not where they claimed to be.

The ships most likely sell their Russian oil to China above a price limit set by the sanctions. Since neither country recognizes the sanctions, the tankers themselves are not in violation by spoofing or carrying the oil.

But the tankers still have motive to spoof: to maintain their insurance coverage, without which they cannot operate in most major ports. The only insurers financially able to cover tankers are mostly based in the West and bound by the sanctions. If a client ship were to carry Russian oil that’s sold above the price limit, the Western insurer would be in violation of the sanctions and must drop its coverage.

“It’s significant when you look at dollar terms,” said Samir Madani, co-founder of TankerTrackers.com, which monitors global shipping, who first alerted The Times to several of the suspicious ships. “It’s around $1 billion worth of oil that is going under the radar while using Western insurance, and they’re using spoofing in order to preserve their Western insurance.”

In addition to the three tankers transporting oil, Times reporters tracked another three vessels spoofing while off the coast of Russia, though it’s unclear what cargo they carried.

All six tankers are insured by a U.S.-based company, the American Club. The Times provided the company with the names of the tankers, as well as details about the voyages on which they spoofed.
[…]
The U.S. has […] created so-called safe harbor provisions to protect insurers from liability if they inadvertently cover ships violating sanctions. As of May 30, a regularly updated list of American Club’s clients posted on its website showed the company is most likely still insuring the six tankers.
[…]
To carry out their deception, the tankers can use military-grade equipment, or software, that is now commercially available. This technology makes it possible to manipulate a vessel’s reported location, which is broadcast by an automatic identification system, or AIS. The signals communicate a ship’s identification, location and route over a radio frequency picked up by other vessels, ground stations and satellites.
[…]
The U.S. Treasury’s Office of Foreign Assets Control has repeatedly warned American companies to watch AIS signals for evidence of deceptive behavior. In 2020, O.F.A.C. specifically advised insurers to research a vessel’s AIS history before providing coverage to avoid violating sanctions on various countries.

An even starker warning came in April, with an alert that spoofing around Kozmino, in particular, was most likely related to Russian sanctions evasion. It advised American companies, including insurers, to use “maritime intelligence services” to detect suspicious activity.
[…]
Experts say the vessels exhibit characteristics that should raise questions. Most are owned by a shell company established less than three years ago — some only after Russia invaded Ukraine in February 2022. These companies are Chinese-run, registered in Hong Kong and own just a single aging ship, which was recently purchased.

“While none of these factors are inherently problematic on their own — and are quite commonplace — taken altogether, they paint a picture of a group of vessels and companies that warrants further investigation,” said Min Chao Choy, an analyst with C4ADS, a Washington-based nonprofit analyzing global security threats. She added that when factoring in that the tankers are also spoofing, they “fit a pattern commonly seen in maritime sanctions evasion activity.”
[…]
The spoofing tankers using American insurance show that the practice is not limited to Russian oil alone. The Times found that five of the tankers pretended to be elsewhere while visiting ports in Iran or Venezuela — or receiving oil from those countries through a ship-to-ship transfer at sea. At least two ships, the Cathay Phoenix and Eternal Peace, carried crude oil, a potential breach of sanctions.

And the Ginza, too, faked its whereabouts last fall, pretending to be off the coast of Oman. The Times found its real location after discovering a crew member’s Instagram video: The tanker was near an Iranian port. Satellite imagery also showed it docked at a berth for loading petrochemical products. The owner’s spokesperson said the company was unaware of this behavior, too.
[…]
Earlier this year, the American Club removed at least 15 vessels owned by an India-based company from its website, according to a report by Lloyd’s List. The company, Gatik Ship Management, owns a fleet of 50 newly acquired tankers dedicated to the Russian oil trade, the report said. The American Club declined to explain its reasoning for the decision to The Times.

https://www.nytimes.com/interactive/2023/05/30/world/asia/russia-oil-ships-sanctions.html

…it’s enough to make you question some things it’s generally not helpful to start doubting if you want to get anything done with your day…like if the only real difference between bad places in the world & better places is how well the better ones hide the bad shit…or some shit like that…but…perspective can be misleading…& just because this kind of shit is exactly as fucked up as it seems

Special counsel Jack Smith has obtained a 2021 recording in which Donald Trump appears to brag about having a classified document related to Iran, suggesting the former president understood both the legal and security concerns around his possession of such restricted information, multiple people familiar with the matter said Wednesday.

The recording was made at a meeting at Trump’s golf course in Bedminster, N.J., said the people, who like others interviewed for this article spoke on the condition of anonymity to discuss an ongoing criminal investigation. The audio features Trump describing a multi-page document that he claims is about possibly attacking Iran, expressing a desire to share that information with others but also making some kind of acknowledgment that he shouldn’t do so, the people said.
[…]
Trump’s lawyers have suggested that the former president either did not know he possessed classified documents after leaving the White House or could have declassified such material while in office.
[…]
The Washington Post reported last year that among the sensitive documents recovered by the FBI was a document describing Iran’s missile program. It’s unclear if that document is the same one described in the audio recording. The Post has also reported that investigators suspect Trump’s motive for keeping classified material after leaving the White House may have been mostly ego, and that he insisted the documents were his property, not the U.S. government’s.

For the Justice Department, evidence that Trump knew he had classified material, and understood the restrictions on sharing it, would be an important part of any charging decision.
[…]
The strange legal and national security saga of how hundreds of classified documents followed Trump to Florida after he left the White House began in 2020, when the National Archives and Records Administration began seeking the return of what it suspected were presidential records — historical documents that are government property.
[…]
In November, after Trump launched another bid for the White House, Attorney General Merrick Garland appointed Smith to lead the documents investigation, along with a more sprawling investigation into efforts to block the 2020 election results and events surrounding the Jan. 6, 2021, riot at the U.S. Capitol.

Trump’s attorneys have taken steps in recent weeks in the documents case — including outlining his potential defense to members of Congress and seeking a meeting with the attorney general — that suggest they believe a charging decision is getting closer.

https://www.washingtonpost.com/national-security/2023/05/31/trump-recording-classified-iran/

…& the term egregious might as well have been invented for these kinds of shenanigans

Former President Donald J. Trump is asking the judge overseeing his criminal case in Manhattan to step aside, citing ties between the judge’s family and Democratic causes, Mr. Trump’s lawyers said in a statement Wednesday.

The motion for recusal, which has not yet been filed publicly, represents the latest effort by Mr. Trump’s lawyers to move his case away from the judge, Juan M. Merchan of State Supreme Court in Manhattan.

…how many different ways do we need to make it clear that judge shopping is not okay…on principle…not…because it needs to be explained why stacking the bench is just as bad as cherry-picking jurisdictions in order to avail yourself of that shit the way that loon in the fourth circuit or the loose cannon in florida are trying to make a name for themselves for offering…I dunno…but apparently more than we have?

The Trump legal team also recently sought to shift the case, brought by the Manhattan district attorney, to federal court. On Tuesday, the district attorney, Alvin L. Bragg, filed court papers opposing that effort, and he is expected to oppose the effort to get Justice Merchan to recuse himself.

Mr. Bragg’s case centers on a hush-money payment to a porn star in the last days of the 2016 presidential campaign. The $130,000 payment, made by Mr. Trump’s former fixer, bought the silence of the porn star, who was otherwise poised to tell her story of a sexual encounter with Mr. Trump.

…leaving aside the part where he argues that he shouldn’t have to pay legal fees to fend off those charges because apparently it qualifies in his mind as presidential business rather than a personal matter…it’s not like the guy doesn’t have the other kind of record for this stuff

[…] their motion to recuse faces something of an uphill climb: The decision rests in the hands of Justice Merchan, who also presided over the unrelated tax fraud trial last year of Mr. Trump’s company. The company’s lawyers sought Justice Merchan’s recusal in that case as well, but he declined to step aside.

The company was convicted in December, and Justice Merchan ordered the maximum punishment, a fine of $1.6 million.

In the statement on Wednesday, Mr. Trump’s lawyers cited Justice Merchan’s actions in that case, in which they said he encouraged Mr. Trump’s former chief financial officer, Allen H. Weisselberg, to cooperate against the former president and his company.
[…]
Justice Merchan’s daughter, they noted, is a partner and the chief operating officer of Authentic Campaigns, a Democratic consulting firm that did work for President Biden’s 2020 campaign. The firm, they said, “stands to benefit financially from decisions Judge Merchan may make in this case.”

Under New York State rules on judicial conduct, a judge should disqualify himself or herself from a case if a relative within the sixth degree had “an interest that would be substantially affected by the proceeding.” Ms. Merchan’s work on Democratic campaigns does not give her enough of an interest to qualify, according to experts.

But in their statement, Mr. Trump’s lawyers also seized on modest personal donations that Justice Merchan had made to Democratic campaigns. During the 2020 presidential election, Justice Merchan donated $15 through the Democratic fundraising platform ActBlue earmarked for Mr. Trump’s opponent, Joseph R. Biden Jr., as well as $10 each to two other Democratic groups, including one called “Stop Republicans.”

Justice Merchan has been under the protection of armed court officers at least since a grand jury voted to indict Mr. Trump on March 30, according to a person familiar with the arrangements.

https://www.nytimes.com/2023/05/31/nyregion/trump-trial-judge-juan-merchan.html

…but…for context…mexico isn’t the worst place in the world…& yet

The $55m contracts weren’t the only questionable deals that Montaño had uncovered while trawling through the government’s information portal. In early 2021, Montaño had noticed other contracts worth millions of dollars with companies and individuals across Mexico – many for vaguely defined products and services available locally such as cleaning, office furniture, construction and computer software.

On paper, the companies and contracts looked legitimate but there were multiple “red flags”, according to Muna Dora Buchahin Abulhosn, a forensic accountant who has led investigations into state-run embezzlement schemes.

A cursory search on Google Maps found companies awarded lucrative contracts were often located in residential streets, abandoned lots and shopping malls. Some addresses were linked to several companies – or didn’t exist; other companies had no functioning website despite multimillion dollar contracts.

Montaño’s reporting was potentially embarrassing for the PRI, which is desperate to hold on to the state in elections on 4 June. But investigating corruption can be deadly in Mexico, particularly for local reporters.

Last year 15 journalists were killed in Mexico, making it the most dangerous country for the media apart from Ukraine. The violence – and the impunity that fuels it – has a chilling effect, with reporters routinely silenced by threats, bribes and blacklists blocking access to jobs and information.

“The contracts were signed with companies far away to make it almost impossible for local journalists to physically verify. The government has so much control but I kept asking questions and downloading documents,” said Montaño. “That’s why I think I was kidnapped.”
[…]
By the time she was finished it was just after 7.30pm. A huge rainstorm had broken and Montaño was soaking wet when a white car that looked like a shared taxi signalled for her to get in.

Almost immediately, a skinny man in the passenger seat pulled out a revolver. “Don’t scream and you won’t die,” he said. In the back, a second man covered her eyes with her Covid mask and pulled up her jersey to expose her stomach and chest.

The driver added: “You’re the journalist, aren’t you?”

Fearing for her life, Montaño denied she was a reporter, but the kidnappers knew where she lived and even where she’d left her car.

“Is your son home?” the driver asked as they pulled up at her gated housing complex.

The two assailants ransacked Montaño’s tiny home, before leaving her blindfolded on a dusty lot a few miles away at about 11pm. She had no phone and no money, but following a distant light, she found her way to a shopping mall and called her family.

She reported the kidnapping to the authorities immediately. It was only later that she realised the assailants had taken her laptops, phone, voice recorder, camera, notebooks and documents – but not the TV or other valuables.

“They stole my whole investigation. The message was clear, but I survived – and this information is too important to keep to myself. Before the people go to vote, they need to know.”

Over the past six months, the Guardian and Organized Crime and Corruption Reporting Project (OCCRP) have worked with Montaño as part of an initiative by the Paris-based non-profit Forbidden Stories to continue the work of threatened and murdered journalists.

https://www.theguardian.com/world/2023/may/31/mexico-corruption-journalist-investigation-kidnapped

…seriously…don’t know how many of you ever read that narcoland book but anabel hernández wound up in some similar crosshairs…& the level of corruption detailed by these ladies (& others in other places) is off the charts…but…well…slippery slope & all…anyway…if we gotta pick a side

There are very real — and substantial — policy differences separating the Democratic and Republican Parties. At the same time, what scholars variously describe as misperception and even delusion is driving up the intensity of contemporary partisan hostility.
[…]
At an extreme level, James L. Martherus, Andres G. Martinez, Paul K. Piff and Alexander G. Theodoridis wrote in the July 2019 article “Party Animals? Extreme Partisan Polarization and Dehumanization,” “a substantial proportion of partisans are willing to directly say that they view members of the opposing party as less evolved than supporters of their own party.”

In two surveys, the authors found that the mean score on what they called a “blatant difference measure” between Republicans and Democrats ranged from 31 to 36 points. The surveys asked respondents to rate members of each party on a 100-point “ascent of man” scale. Both Democrats and Republicans placed members of the opposition more than 30 points lower on the scale than members of their own party.

“As a point of comparison,” they wrote, “these gaps are more than twice the dehumanization differences found by Kteily et al. (2015) for Muslims, 14 points, and nearly four times the gap for Mexican immigrants, 7.9 points, when comparing these groups with evaluations of ‘average Americans.’”

A separate paper published last year, “Christian Nationalism and Political Violence: Victimhood, Racial Identity, Conspiracy and Support for the Capitol Attacks,” by Miles T. Armaly, David T. Buckley and Adam M. Enders, showed that support for political violence correlated with a combination of white identity, belief in extreme religions and conspiracy thinking.

“Perceived victimhood, reinforcing racial and religious identities and support for conspiratorial information,” they wrote, “are positively related to each other and support for the Capitol riot.”
[…]
In other words, misperceptions and delusions interact dangerously with core political and moral disagreements.

In March 2021, Michael Dimock, the president of the Pew Research Center, published “America Is Exceptional in Its Political Divide,” in which he explored some of this country’s vulnerabilities to extreme, emotionally driven polarization:

America’s relatively rigid, two-party electoral system stands apart by collapsing a wide range of legitimate social and political debates into a singular battle line that can make our differences appear even larger than they may actually be. And when the balance of support for these political parties is close enough for either to gain near-term electoral advantage — as it has in the U.S. for more than a quarter century — the competition becomes cutthroat, and politics begins to feel zero-sum, where one side’s gain is inherently the other’s loss.
[…]
various types of identities have become ‘stacked’ on top of people’s partisan identities. Race, religion and ideology now align with partisan identity in ways that they often didn’t in eras when the two parties were relatively heterogenous coalitions.

[…]
In separate analyses, Pew has demonstrated the scope of mutual misperception by Democrats and Republicans. In an August 2022 study, “As Partisan Hostility Grows, Signs of Frustration With the Two-Party System,” Pew found that majorities of both parties viewed the opposition as immoral, dishonest, closed-minded and unintelligent — judgments that grew even more adverse, by 13 to 28 points, from 2016 to 2022. In a June-July 2022 survey, Pew found that 78 percent of Republicans believed Democratic policies were “harmful to the country” and 68 percent of Democrats held a comparable view of Republican policies.

I asked Robb Willer, a sociologist at Stanford, about these developments, and he emailed back, “Americans misperceive the extent of policy disagreement, antidemocratic attitudes, support for political violence, and dehumanization of rival partisans — again with the strongest results for perceptions of the views of rival partisans.”

Importantly, Willer continued, “misperceptions of political division are more than mere vapor. There is good reason to think that these misperceptions — or at least Democrats’ and Republicans’ misperceptions of their rivals — really matter.”

As the old sociological adage goes, situations believed to be real can become real in their consequences. It is likely that Democrats’ and Republicans’ inaccurate, overly negative stereotypes of one another are to some extent self-fulfilling, leading partisans to adopt more divisive, conflictual views than they would if they saw each other more accurately.

https://www.nytimes.com/2023/05/31/opinion/politics-partisanship-delusion.html

…I’d go on…but…I’m outta time…so someone else is gonna have to figure out whether this is an upswing or just the spin component of a roundabout

…still & all…a net positive might incorporate a big ol’ chunk of the not-positive…but I guess I’d take it over an unmitigated downside?


17 Comments

  1. …just clocked that I left a couple of links out of that mix…that I didn’t mean to…not the usual litany of ones I never thought I could fit in in the first place

    State elections officials have removed years-old guidance against moving state political money to federal super PACs, clearing the way for a fund previously run by DeSantis to do just that. [NBC]

    DeSantis signs Florida bill limiting the liabilities of private spaceflight companies [NBC]

    …meant to throw it in around where china was happy to talk to elon but not so much the US government…the chinese…the meatball from florida…friendly to the guy burning twitter to the ground in the name of false equivalence & bread & circuses…will wonders never cease…also meant to find a spot for this one

    …fun times

  2. Once the system told the bots to debate the answers, the one that said Spain quickly apologized and agreed with the one with the correct answer, Cuba.

Post-Franco Spain has nothing to apologize for. The AI bots should know that. The Spanish are a warm and gracious people, although I was pickpocketed twice in Spain, but it was only €10 each time. Don’t carry a lot of cash with you in Spain, and leave your passport somewhere safe. Make a photocopy of the first couple of pages with your details. Store clerks will sometimes want some kind of ID if you’re paying with a foreign credit card and this will suffice.

    Plus, Spain has a lovely royal family, descended from the House of Bourbon no less (so, much more time-tested than the Saxe Coburg-Gothas/Windsors, who now reign in the UK), beautiful if occasionally overcrowded beaches (speaking of the British and the Germans), excellent food and wine at remarkably affordable prices, even though they got roped into the euro, and lots to see and do. I give the country 5 stars.

    Oddly enough, I’ve never been pickpocketed in Italy, which you sometimes hear is a hotbed for it. I know someone who took a crowded bus in Rome and, she reported, someone somehow sliced her purse from underneath and made off with her wallet. I asked for more details. Of course I did. First of all, it was August, so rookie mistake. Rome is very unpleasant in August, it can get very hot and dry, and the tourist:citizen ratio goes way up because the Romans flee and go on vacation. Then, she was on a public bus, I forget which route, but it’s one of the famous ones that happens to link something like half a dozen of Rome’s major sights, and if tourists know that it exists they will take it. Of course the more larcenous Romans are drawn to this bus route like moths to a flame, in search of easy prey. I’ve never been to Rome in August but what most visitors don’t realize is how small it is, the central core, anyway. So there’s no reason to take a bus or their metro if you’re reasonably mobile. The last time we were there I made BH go with me on a self-designed (by me) tour of the mura, the walls, at least the parts that still exist, and that wasn’t too taxing. And those mura encircle the heart of the city.

    Well, that was…really, kind of à propos of nothing…

    • Don’t worry, Matthew, I’ll get the thread back on track.

      We’ve tinkered with AI extensively. My boss used the Google AI to generate a description of our company, and while it correctly described the company and listed an award we’d won, it also listed two other awards we didn’t win.

      It was kind of disturbing because both awards were plausible. We just didn’t win them. I told my boss it was like “two lies and a truth.” My concern is eventually you’ll have lazy journalists who will just take the AI copy and run it. If they check the first award, which is legit, will they bother to check the other two, which aren’t? I’m pretty sure I know the answer.

      Then what happens when we get called out for falsifying awards we didn’t win and didn’t falsify? Yeah, you could prove all this stuff but how many customers could you lose if someone decides to run a story about us making up accomplishments? And would anyone see/read a retraction?

      This actually disturbed me more than conjecture about Skynet sending Terminators to kill us all.

      • Chris Rufo is relentless. He’s made a career out of this for at least half a decade and is tireless. He should team up with the “Libs of TikTok” lady. I don’t remember getting a particularly woke education. Of course it was simpler times. On the one hand: Slavery was wrong and shameful (although thankfully not widely practiced in the North.) On the other hand: Lucid writing and mathematics is not inherently racist. I will say that in high school, 9–12, at my level, we did one Shakespeare play a year, among lots of other stuff, so I thought that was a good thing. Plus we dipped into the sonnets one year, but I forget which year that was. I think I read recently that one of my favorite novels, Vonnegut’s Slaughterhouse 5, landed on one of those idiotic banned books lists.

        During my third year of (high school) German the teacher and I got through Günter Grass’s The Tin Drum (Die Blechtrommel) but it was sort of the cheaters’ edition: it was a bilingual edition. I was the only one in my entire high school who actually took a third year of German, so I attended class with the first- and second-year students and sort of served as a TA. When I wasn’t doing that I was crawling through Die Blechtrommel. And what the hell was the point of that anecdote? Oh yes, I believe The Tin Drum is also on a banned books list. Idiots.

        I wouldn’t screen an X-rated movie for second-graders, I understand inappropriate content, but I think if a teacher (and in my case, also my mother) can assess the maturity and the reading level of a student, hell, anything goes!

  3. I still believe that the AI bubble is no different from the crypto bubble, except on two points: first, a truly destructive AI is years if not decades away (right now it’s half-good at trawling the internet), so it benefits people in the industry A LOT to make it sound vaguely threatening; and second, the problem with AI is the same problem with crypto and social media and other startups, which is that they ultimately need to make profits and they’ll do whatever it takes, no matter how detrimental to society, to make that happen. That’s the threat!

  4. Y’all I lost my shit today on a coworker and asked if he had magically changed my job title without telling me to “his secretary.”

    I guess we’ll see if my attitude comes back to bite me in the ass here.

    • I don’t recommend it but I’ve damn sure done it. I lost my shit in a department meeting with an awful supervisor who was changing my work behind my back to make me look bad. She was afraid they would give me her job (and she sucked, so … plausible). I just started screaming at her one day in a meeting in front of my VP. Got called in to my VP’s office for an ass-chewing. But they didn’t want to take it to HR because HR had a huge file on my boss (she had run off 14 others before me).

      I eventually transferred to another department but I really regret not standing up and saying “I’m going to HR.” I didn’t realize how much leverage I had. Mostly HR will fuck you, but in retrospect I seriously regret not trying. Ended up getting fired eventually anyway, so I’d at least have gone down fighting.

    • When I was freelancing about a decade ago I worked very closely with a woman, to the point where I was put on the payroll as something like, “Part-time employee/Hours flexible” or something like that. That was excellent, because they paid my SS and unemployment taxes and took out taxes with every paycheck (which are onerous in New York.) Her husband swung by the office one day and she introduced him to me. He said, “Oh, so you’re X’s work-husband! I’m Y.” In front of the entire department. I had to ask what that meant.

    • I bet it felt good to call him on it . . . document his egregious actions, no matter how petty, for future kindling when you light the bonfire.
