AI research papers are getting better, and it’s a big problem for scientists
Tech/AI

by admin May 15, 2026

Journal editors and peer reviewers are being flooded with AI-generated papers that are almost impossible to detect.

May 15, 2026, 11:00 AM UTC

Last summer, Peter Degen’s postdoctoral supervisor came to him with an unusual problem: One of the supervisor’s papers was being cited too much. Citations are the currency of academia, but there was something odd about these. Published in 2017, the paper had assessed the accuracy of a particular type of statistical analysis on epidemiological data and had received a respectable few dozen citations in other research papers over the years. Now it was being referenced every few days, hundreds of times in all, placing it among the most cited papers of the supervisor’s career. Another professor might be thrilled. Degen’s adviser asked him to investigate.

Degen, a postdoctoral researcher at the University of Zurich Center for Reproducible Science and Research Synthesis, found that the citing papers all followed a similar pattern. Like the original, they were analyzing the Global Burden of Disease study, a publicly available dataset compiled by the Institute for Health Metrics and Evaluation at the University of Washington. But they were using the dataset to churn out a seemingly endless supply of predictions: about the future likelihood of stroke among adults over 20 years old, of testicular cancer among young adults, of falls among elderly people in China, of colorectal cancer among people who eat minimal whole grains, of disease X among population Y, and so on.

Searching on GitHub for code that would be used to do this sort of analysis, Degen followed some links and wound up on the Chinese social media site Bilibili, where he discovered a Guangzhou-based company touting tutorials on how to produce publishable research in under two hours using its software tools and AI writing assistance. These studies were not very good. Researchers who analyzed a subset of studies about headaches found they were rife with errors and misrepresentations. But they were also not as flagrantly wrong as AI-generated papers of the recent past, making them more difficult to filter out.

“It’s a huge burden on the peer-review system, which is already at the limit,” Degen said. “There’s just too many papers being published and there’s not enough peer reviewers, and if the LLMs make it so much easier to mass produce papers, then this will reach a breaking point.”

Optimists about generative AI have high hopes for its ability to produce future scientific breakthroughs — accelerating discovery, eliminating most types of cancer — but the technology is currently undermining one of the pillars of scientific research, inundating editors and reviewers with an endless stream of papers. Paradoxically, the better the technology gets at producing competent papers, the worse the crisis becomes.

For the past decade, academic publishing has been contending with so-called “paper mills,” black-market companies that mass-produce papers and sell authorship slots to academics, doctors, or others who hope to gain a competitive edge by having published research on their resumes. It has been a game of cat and mouse, with publishers — often pressed by so-called science sleuths, researchers who specialize in ferreting out fraudulent research — closing one vulnerability only to have the mills find a new one. Generative AI was a boon to the mills, helping them to skirt plagiarism detectors by creating wholly new images and text. Still, the technology’s telltale hallucinations meant that publishers could at least theoretically screen out much of their work. In practice, papers still got through, only to get retracted when sleuths encountered a diagram of a rat with inexplicably gargantuan genitals labeled “testtomcels” or prose sprinkled with “as an AI assistant”s that someone forgot to delete.

But now AI has improved to the point where it can produce convincing papers almost wholesale, allowing desperate academics in need of a publication to mill papers of their own. The result is a deluge of scientific slop that threatens to swamp publishing, peer review, grant making, and the research system as it exists today.

Matt Spick, a lecturer in health and biomedical data analytics at the University of Surrey and an associate editor at Scientific Reports, first noticed the phenomenon when he received three strikingly similar papers analyzing the US National Health and Nutrition Examination Survey (NHANES), another public dataset. He checked Google Scholar and realized that it wasn’t a coincidence: There had been a sudden explosion in papers citing NHANES that all followed a similar formula, each purporting to discover an association between, for example, eating walnuts and cognitive function or drinking skim milk and depression.

“If you’ve got enough computing power, you go through and you measure every single pairwise association, and eventually you find some that haven’t been written on before and you just publish: There is a correlation between this and that,” Spick said. These correlations are often misleading simplifications of phenomena with multiple causes or random statistical flukes. “One was that how many years you spend in education will cause postoperative hernia complications. That is just a random correlation. What am I supposed to do with that? Leave school early so that I won’t get a postoperative hernia complication later?”
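The pairwise sweep Spick describes is easy to reproduce on synthetic data. In this sketch, the sample size, variable count, and significance threshold are illustrative assumptions, not values from any real study; the variables are pure noise, so every "discovery" is a false positive:

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 columns of pure noise standing in for unrelated survey variables
# (walnut intake, cognition scores, ...); by construction, no genuine
# association exists between any pair.
n, k = 200, 50
data = rng.normal(size=(n, k))

corr = np.corrcoef(data, rowvar=False)   # k x k correlation matrix
r_crit = 1.96 / np.sqrt(n)               # |r| above this is roughly "p < 0.05"

# Count upper-triangle pairs that clear the threshold purely by chance.
pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
spurious = sum(abs(corr[i, j]) > r_crit for i, j in pairs)

print(f"{spurious} 'significant' correlations out of {len(pairs)} pairs tested")
```

With 1,225 pairs and a 5 percent false-positive rate, roughly 60 pairs will look "significant" despite being random noise, each one a candidate for a formulaic paper.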

Over the years, sleuths have developed a variety of methods for detecting inauthentic papers. Some search for “tortured phrases,” instances where someone was trying to skirt plagiarism detectors by feeding an existing paper through a synonym generator, which often has the effect of turning technical terms like “reinforcement learning” into nonsense like “reinforcement getting to know,” to cite one recent example. Other sleuths track duplicated images, perform network analysis of authors, or check citations for hallucinated publications, a classic sign of LLM use. Spick searches for masses of papers following the same template as they analyze public datasets.
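A simplistic version of that tortured-phrase screening can be sketched as a fingerprint lookup. The phrase list below is illustrative, drawn from commonly cited examples; the databases sleuths actually maintain contain thousands of entries:

```python
# Map of known "tortured phrases" to the technical terms they likely
# replaced. This list is illustrative, not a real sleuthing database.
TORTURED = {
    "reinforcement getting to know": "reinforcement learning",
    "counterfeit consciousness": "artificial intelligence",
    "irregular woodland": "random forest",
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, likely original term) pairs found in text."""
    lowered = text.lower()
    return [(bad, good) for bad, good in TORTURED.items() if bad in lowered]

abstract = "We train an irregular woodland with reinforcement getting to know."
for bad, good in flag_tortured_phrases(abstract):
    print(f"flagged: {bad!r} (likely {good!r})")
```

Plain substring matching like this misses inflected or re-synonymized variants, which is partly why the detection game keeps escalating.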

These papers may not necessarily be wrong, though they are often misleading. Nor are they strictly speaking fraudulent. They’re just useless, and suddenly very easy to make. Last year, several journals began restricting submissions of papers analyzing public datasets, citing a flood of redundant research.

Spick fears these measures may be fighting the last battle. In recent months, AI companies have released a range of “agentic” science assistants capable of analyzing data, generating hypotheses, and writing research papers with a high degree of autonomy. While a possible step toward the goal of AI-accelerated science, these systems also come with novel risks. When Carnegie Mellon researchers tested several agentic tools, they found that they sometimes invented data or used misleading techniques, but that these errors were only apparent upon close analysis of the full workflow; the final papers looked polished.

Announcing an AI paper writing assistant earlier this year, OpenAI’s then-vice president for science, Kevin Weil, predicted, “I think 2026 will be for AI and science what 2025 was for AI and software engineering.” Spick and some colleagues, curious what it could do, gave the tool, called Prism, some data from an already published paper documenting ripening times of eggplants and peppers. Prism analyzed the data, proposed a new statistical method that could be applied to it, and wrote an entire paper complete with charts and correct citations.

“We were all looking at each other like, ‘What the [expletive], this is actually a decent piece of work!’” Spick recalled. Unlike the generated papers he’d encountered previously, this one didn’t follow a template, nor was it using a single well-known database. It took 25 minutes and 50 seconds to produce.

“I’m genuinely not sure at what point we will suddenly realize that more are getting through than we realize because we can’t easily tell the difference anymore,” Spick said.

This raises some philosophical questions, Spick said, like: Does it matter who or what writes the paper if the information is accurate? And should science be in the business of publishing every possible fact?

“Part of science is supposed to be the filter. We’re supposed to publish the stuff that we think is interesting, not publish literally everything that we can possibly find,” Spick said. “Because if we do that, science is just spamming the world with all the data, irrespective of whether it constitutes actual new knowledge or not, and in any kind of medium-term time frame, it’s almost impossible to work out what’s meaningful and what isn’t.”

This is the immediate practical challenge posed by AI agents. They threaten to overwhelm the human systems that create and organize knowledge. Research funders are contending with onslaughts of proposals perfectly tailored to their particular grant, unable to parse which projects represent the next step in years of work and which were generated in minutes. Conference organizers, journal editors, and peer reviewers are all struggling to sort through a flood of material that all seems good enough at first glance to warrant a close read. There is an enormous and growing asymmetry between the time it takes to produce new work and the time it takes a subject-matter expert to vet it.

For Marit Moe-Pryce, the managing editor of the international relations journal Security Dialogue, submissions are up 100 percent from a year earlier. Just as problematic: All the submissions have become pretty good. Gone are the blatant hallucinations and leftover prompts; everything has suddenly become coherent, well structured, and stylistically similar, making it difficult to say whether a manuscript is wholly generated, the work of an experienced academic, or that of a young scholar using AI as an editor.

“The main problem that we see currently from the desk is that the fraudulent side and the academic side are conflating, which ends up with a big gray mass of articles that we as editors need to sit and try to figure out, ‘What is this? Is this something that we need to engage with? Is it not?’” Moe-Pryce said.

One paper made it past at least 10 editors and two rounds of peer review before she noticed a fake citation — a very plausible one, involving several former editors of the journal on a topic they could have written about but never did. She then found several more. She doesn’t know at what stage of revision the hallucinations were introduced, but the close call underscored the level of care required to ensure nothing false gets published. Now that models increasingly cite real papers, she has to read for whether the works cited are the ones an expert would actually use, AI not yet having mastered the difference between canonical literature and more peripheral work.

“It’s incredibly detailed, and this is a normal part of the editorial work. The difference is that now you have to do that for all the rubbish that comes through the door,” Moe-Pryce said. “That’s why our workload becomes so unmanageable.”

Academic papers go through a multi-stage review process before publication. First, manuscripts are triaged for obvious problems, then sent to a journal’s editor, who decides whether the paper might be worth publishing. The editor then sends it to an associate editor with experience in the field, who again vets it before recruiting two or three subject-matter specialists — the “peers” in peer review — to read the paper and write responses. The editors and reviewers typically work for free, volunteering their time on top of their primary academic jobs.

The review system was already struggling under increasing volumes of submissions, and now AI is increasing those volumes while also making the bad ones more difficult to filter out. Moe-Pryce now spends more time sorting papers before deciding what to send out for review, and prospective reviewers, swamped themselves, are less and less likely to respond. Where she previously could send four queries out and get three replies, it now takes her a dozen tries to get two people. Increasingly, she reaches out to 20 reviewers and hears nothing.

“It’s fatigue. Academic journals have mushroomed, and then you have AI helping everyone, fraudulent or not, generate more, faster, so you have a massive increase in volume,” she said. “AI currently holds the potential to bring down the publishing system as we know it.”

The journal Accountability in Research has seen a 60 percent surge in submissions this year, according to David Resnik, an associate editor at the journal. Ironically, he has been besieged by likely AI-generated papers about fraudulent academic papers that have mined public data compiled by the organization Retraction Watch.

He, too, is struggling to find reviewers. At times, he’s had to send out 20 requests just to get two responses — and he’s suspected that some of the responses he’s received are AI-generated themselves. He has reason to be suspicious. A survey conducted by the publishing company Frontiers last year found that more than half of researchers have used AI assistance in their peer review.

“I’m very worried about this straining, breaking the back of the peer-review system,” said Resnik.

AI agents arrive at a time when the quality filters of academia are already struggling to cope with a superabundance of papers. The number of scientific papers published has grown exponentially in recent years, according to an analysis of data published in Quantitative Science Studies, while the number of PhDs who might review them has not. Unfortunately, the authors attribute this explosion in productivity not to rapid progress in science but to the fact that commercial and professional incentives align to publish the maximum quantity of papers.

Many journals have shifted to an “open access” model, earning revenue by charging authors processing fees to publish their papers rather than by selling subscriptions. In earnings calls, publishing companies tout recent submission increases of 20 percent or more as a positive growth story. Universities and funding agencies, meanwhile, look at researchers’ publication metrics when deciding whom to fund or promote, which means researchers are under pressure to “publish or perish.” Nor is it only traditional academics who face this pressure. Overseas medical students can improve their chances at a US residency program by having a few peer-reviewed papers on their resumes. In China, medical doctors have strong incentives to publish despite having neither the time nor the resources to conduct research, making quick paper generation an attractive option.

If you introduce an infinite paper-writing machine to a system that defines productivity by the number of papers written, people will use it to write a lot of papers. A study published in Nature this year found that scientists who adopted AI published three times more papers and received nearly five times more citations than those who didn’t. They also became research project leaders 1.37 years earlier than those who did not use AI. While individually beneficial, the embrace of AI to mass-produce papers may be detrimental to science as a collective endeavor, beyond exhausting journal editors and peer reviewers. The same study found a collective narrowing of focus as these newly productive scientists gravitated toward well-studied fields with abundant existing data for AI to synthesize.

There are no easy solutions to this problem. In 2022, the scientific organization STM launched an initiative called Integrity Hub to contend with paper mills. Since then, it has been engaged in an “arms race” with AI, according to Joris van Rossum, the project’s program director — assembling automated tools that check for plagiarism, then tortured phrases, then fake citations — but the group must now consider more sweeping remedies.

“We anticipate a future where it’s going to be more realistic to enable submitters to demonstrate authenticity rather than trying to detect fabrication,” he said. That is, once fraudulent manuscripts are impossible to detect, publishers will have to find a way for researchers to prove their work is real — perhaps by working with instrument manufacturers to develop ways of watermarking their images, he said, or having researchers submit more of the data behind their work so it can be analyzed for suspicious signals.

This would entail changing the way research is done on a massive scale, and while it might stem outright fraud, it would do little to reduce the volume problem. Using AI to assist with peer review, as some have proposed — and some reviewers are already doing, permitted or not — raises a nest of other possible risks. Studies have found that models often continue to cite retracted studies as valid and write superficially good reviews while overlooking methodological problems. AI reviewers also appear to prefer AI-generated writing.

“It’s not really a tractable problem,” said Reese Richardson, a postdoctoral fellow at Northwestern University who studies mass-produced papers. “I think that the only way out of this situation is to actually change the way that the scientific enterprise awards prestige and awards resources. As long as we have this hyper-competitive, hyper-unequal rat race where people’s productivity and their worth as scientists is being measured by how many publications they put out and how many times they get cited, it’s just going to incentivize this behavior.”

Vincent Larivière, the editor-in-chief of Quantitative Science Studies, had a similar diagnosis. His journal has seen a 40 percent increase in submissions this year.

“We need a reform of what matters in science,” Larivière said. The conflation of scientific productivity with publication counts has had a distorting effect on science, causing research to gravitate toward small, tractable problems that are guaranteed to result in something publishable. AI could do great things, he said — help cure cancer, develop fusion energy — but right now it is being used to generate papers to “pad CVs.”

“Of course we need more science,” he said, “but do we need more papers?”

Joshua Dzieza
Trump-Xi summit: the 3 big takeaways from historic meeting in Beijing
Economy

by admin May 15, 2026

The national flags of the United States and China hang in front of the portrait of late communist leader Mao Zedong at Tiananmen Gate in Beijing on May 15, 2026.
Brendan Smialowski | Afp | Getty Images

BEIJING — U.S. President Donald Trump’s closely watched visit to China this week has gone a long way toward strengthening a fragile trade truce with Beijing and stabilizing the bilateral relationship.

While the visit was delayed by more than a month due to the Iran war, Trump’s two-day summit with Chinese President Xi Jinping wrapped up Friday with plans for another meeting this fall.

Here’s what’s changed since the leaders met:

U.S.-China geopolitical alignment

Xi’s warning to Trump that mishandling Taiwan would put the U.S.-China relationship into “great jeopardy,” according to official English-language state media, dominated headlines at the start of talks.

Oil prices also rose after Trump told Fox News in a pre-recorded interview that China has agreed to buy U.S. oil and would help with Iran negotiations. He did not reveal when purchases would begin or at what volume.

China has yet to confirm plans to buy U.S. oil, while Washington has yet to say anything on Taiwan.

“I do think each side has delivered. There was no substantive discussion on Taiwan, though, which is not surprising,” said Yue Su, principal economist, China, at the Economist Intelligence Unit. “More discussion on Iran highlighted that they do have common ground. The fact that both sides want to describe the meeting as a win shows goodwill, at least.”

“There are limits to what China can realistically do, as the Iranian regime is operating in survival mode and will prioritize its own interests and agenda above all else,” she said.

Trade truce holds

The U.S. and Chinese sides have not yet released details on specific agreements. But Trump’s invitation to Xi to visit the U.S. on Sept. 24 means the two leaders can talk in person again before the expiration of the one-year trade truce set in October 2025.

The agreement lowered tariffs and rolled back rare earths restrictions after an escalation in tensions between the two countries earlier in 2025.

Xi said the U.S. and China agreed to constructive “strategic stability” as a framework for the next three years, according to state media. 

“Strategically, Beijing appears to be trying to turn Trump’s transactional willingness to stabilize ties into a longer-term operating framework for U.S.-China relations,” said Jack Lee, analyst at China Macro Group, noting the framework could become a baseline on dealing with Beijing for the next U.S. president.

Wins for business

Trump told Fox News that China will order 200 Boeing jets, which he said was more than the 150 units the company had expected. But that was less than half the 500 planes that many initially expected.

Nvidia also reportedly got the green light from the U.S. to sell its H200 chips to major Chinese companies, sending tech stocks higher.

Both Boeing CEO Kelly Ortberg and Nvidia CEO Jensen Huang accompanied Trump to Beijing. The executives and more than a dozen U.S. business leaders — including Apple CEO Tim Cook and Tesla’s Elon Musk — participated in a meeting Thursday with Chinese Premier Li Qiang.

Opening remarks and readouts offered no details beyond China’s pledge to open up its market further to foreign business, which has occurred gradually over recent decades.

The U.S. business delegation was far smaller than the more than 30 leaders that joined Trump on his trip to Saudi Arabia last year.

“I don’t think the purpose was to have every CEO sign a deal,” said Gary Dvorchak, Blueshirt Group managing director. “I think the purpose was just to kind of flex America’s muscles and just show from an economic standpoint what a powerhouse we are.”

“It also shows a high level of unity amongst the American government and private sectors,” he said.

The world is on track to miss its health targets
Tech/AI

by admin May 15, 2026

Every year the World Health Organization publishes a global health statistics report. It features the numbers behind world health trends and, importantly, assesses whether we’re on track to reach the ambitious goals set in 2015. It’s a bit like a report card for global health.

The 2026 report was published on Wednesday. And the results aren’t looking brilliant. While we are seeing some improvements, they are uneven, and they’re far too slow.

The targets themselves are part of the United Nations’ Sustainable Development Goals, a sprawling and ambitious plan focused on improving life around the world. The 17 goals were set to tackle poverty and climate change and to boost education, gender equality, health, and well-being, among many other quality of life issues. Those targets were meant to be met by 2030.

Perhaps they were a little too ambitious. Here are the numbers and statistics that stood out to me on this year’s world health report card.

1.3 million new cases of HIV in 2024

Before the SDGs, there were the Millennium Development Goals. One MDG target was to halt and reverse the spread of HIV—and that target was exceeded by 2015. Back then, we were considered on track to “end the AIDS epidemic by 2030.”

How depressing, then, to see that in 2024 there were an estimated 1.3 million new cases of HIV. That’s 40% lower than the figure from 2010. But it’s still 1.3 million additional people with HIV. The SDG target is to reduce HIV incidence by 90% by 2030—we’re not likely to meet it.

10.7 million new cases of TB

The picture is even bleaker for tuberculosis, which ranks 10th on the WHO’s list of top global causes of death. The goal was to reduce cases by 80% between 2015 and 2030. So far, cases have fallen by a measly 12%. And when you break the change down by region, the Americas actually saw an increase of 13%.

An 8.5% rise in malaria cases

And then there’s malaria, the mosquito-borne parasitic disease. The European region has been free of malaria since 2015, but the disease is a significant concern in many countries in the Global South, particularly in Africa. The goal was to lower rates by 90% between 2015 and 2030. Instead, in 2024 there were an estimated 282 million cases of malaria globally, representing an 8.5% increase in incidence rates.

Antimalarial drug resistance is a major challenge here—forms of the malaria parasite that are resistant to drugs have been confirmed or suspected in eight countries in Africa, according to a separate WHO report. Mosquitoes that are resistant to commonly used insecticides are present in nine African countries. And climate change, which can alter mosquito habitats, may be making things worse.

42.8 million children are wasting

We’re not meeting child health targets, either. Take malnutrition, for example. As of 2024, the global prevalence of wasting in children was 6.6%—that’s a staggering 42.8 million children who are literally wasting away because of a lack of adequate food. On the other end of the spectrum, 5.5% of children are now considered overweight. Both figures were meant to be below 5% by 2030, which now seems unlikely.

Vaccination rates are dropping in the Americas

Progress in improving childhood vaccination coverage has stalled. Globally, an estimated 76% of children are getting their second dose of a measles vaccine—a figure far below the approximately 95% needed to prevent outbreaks. The Americas region currently has lower rates of vaccine coverage for three of the four “core” vaccines than it did in 2015.

22.1 million pandemic-related deaths

And of course the pandemic affected progress toward health goals in more direct ways: 7 million people died of covid-19. The WHO report estimates that, for each of these, there were roughly two additional “excess” deaths related to the pandemic, due to disruptions in health care, for example. That puts the total figure at 22.1 million pandemic-related deaths.
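The multiplier implied by those two figures can be checked in a couple of lines (a sketch; the ratio is derived from the numbers above, not quoted from the report):

```python
reported_covid_deaths = 7.0e6    # deaths directly attributed to covid-19
total_pandemic_deaths = 22.1e6   # WHO estimate of all pandemic-related deaths

# Excess deaths implied per reported covid-19 death
excess_per_reported = (total_pandemic_deaths - reported_covid_deaths) / reported_covid_deaths
print(f"{excess_per_reported:.2f} excess deaths per reported death")
```

The result is about 2.16, consistent with the report's estimate of roughly two excess deaths for each reported one.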

A woman dies every two minutes from “maternal causes”

Maternal mortality rates fell by about 40% between 2020 and 2023. But today’s rate equates to 712 maternal deaths every single day. That’s one every two minutes. The WHO report notes that we’d have to reduce the mortality rate by almost 15% per year in order to meet the 2030 target. This seems incredibly unlikely, particularly given the recent decimation of US funding for global aid programs, which is expected to result in thousands of additional maternal deaths.
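The arithmetic behind those claims is straightforward to verify (a sketch; the four-year horizon to 2030 is an assumption based on this article's 2026 dateline):

```python
deaths_per_day = 712
minutes_between = 24 * 60 / deaths_per_day
print(f"one maternal death every {minutes_between:.1f} minutes")

# A 15% annual reduction compounds multiplicatively: after y years,
# (1 - 0.15) ** y of today's rate remains.
remaining_by_2030 = (1 - 0.15) ** 4
print(f"{remaining_by_2030:.0%} of today's rate left by 2030")
```

Even sustaining that steep 15 percent annual decline for four straight years would only cut the rate roughly in half, which is why the report treats the 2030 target as out of reach.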

Progress has also slowed in reducing the risk of death from noninfectious diseases like cancer, diabetes and cardiovascular disease. “Overall, neither the world nor any WHO region is currently on track to meet the 2030 SDG target,” the report states.

2.1 billion people struggle to afford health care

Despite plans to make health care more affordable, a significant chunk of the population is being pushed into poverty by health-care costs. In 2022, 2.1 billion people faced financial hardship due to health spending—and 1.6 billion of them were living in or had been pushed into poverty.

Across the board, there have been some important improvements in global health. But the achievements have not gone far enough. “The good news is that there is progress,” says Danaei. “But as always, the glass is half empty.”

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter.

CIA chief visits Cuba as energy crisis worsens
Global

by admin May 14, 2026

“During the meeting, Director Ratcliffe and Cuban officials discussed intelligence cooperation, economic stability, and security issues, all against the backdrop that Cuba can no longer be a safe haven for adversaries in the Western Hemisphere,” the official added.

Men use "vocal fry" more than women, counter to stereotype
Tech/AI

by admin May 14, 2026

Vocal fry, aka “creaky voice,” is a distinctive drop in pitch, usually at the end of sentences, associated with the speech patterns of young women in particular. Britney Spears is the go-to example of the trend, having famously used it in her 1998 smash hit “…Baby One More Time,” and she’s far from the only one.

But what if that popular gender-based stereotype is wrong? Jeanne Brown, a graduate student at McGill University, has found that vocal fry is actually more common in men than in women, detailing her experimental findings in a talk at this week’s meeting of the Acoustical Society of America in Philadelphia. Per Brown, we simply perceive it as more prominent in young women.

Vocal fry is the lowest of the human vocal registers, the others being the modal and falsetto registers, as well as the whistle register. It’s caused when the vocal cords slacken, leading to irregular vibration and an audible cracking or rattling sound as air is released in spurts. Vocal fry is characterized by very low fundamental frequencies of around 70 Hz. (The lowest end of the range of human hearing is 20 Hz.)

Ten years ago, I reported on an experiment by John Nix, a voice professor at the University of Texas at San Antonio, who concluded that singers like Spears, Katy Perry, and Lady Gaga use vocal fry in pop music because it enhances expressiveness. “Unamplified styles, such as classical music, tend to disguise effort and express emotion in more subtle ways,” Nix told me at the time. “Amplified styles, such as popular music, tend to display effort as something genuine, intimate, raw, exciting, and emotional. Fry may be one way to communicate such effort, or honest, raw emotions.” Nor is vocal fry exclusively used by female singers: Justin Bieber, Tim Storms (who holds the world record for lowest note produced by a human), and gospel bassists like Mike Holcomb have also used it.

May 14, 2026 0 comments
The Team Behind Netflix's 'Chef's Table' Is Launching Its First Food Festival This Summer
Lifestyle


by admin May 14, 2026
written by admin

Chef groupies, gird your loins: This August, 70 internationally acclaimed chefs and food world luminaries are headed to Utah.

The Chef’s Table Festival, from the team behind Netflix’s longest-running documentary series of the same name, will launch 100 events in 30 participating restaurants across Park City over the course of four days (August 13 to 16). If you’re a fan of the show, you’ll be pleased to know the event promises immersive experiences, demos, and excursions alongside many chefs who appeared throughout the series. The event is in partnership with American Express and Resy, naturally.

Taking inspiration from the prestigious Ein Prosit food and wine festival in Udine, Italy, Chef’s Table invites festival guests to live “a day in the life” of the participating chefs and engage the local community, according to Justin Connor, Chef’s Table Projects president. “We loved that there were no big tents, no long lines for small bites and plastic utensils,” he says, comparing Ein Prosit’s focus on tasting menus, wine and food master classes, and intimate experiences to large-scale American food festivals.

The Chef’s Table Festival has already tapped Argentine chef and author Francis Mallmann; eighth-generation Italian butcher Dario Cecchini; Peruvian restaurateur Virgilio Martínez; Álvaro Clavijo of Bogotá’s El Chato, No. 1 in Latin America; James Beard semifinalist Fariyal Abdullahi of NYC’s Hav & Mar; Gilberto Cetina of Michelin-starred seafood destination Holbox; renowned Chilean pastry chef Camila Fiol; and Serigne Mbaye, the young chef bridging Senegalese cuisine and New Orleans creole comfort at the 2024 James Beard Best New Restaurant: Dakar. Other participating chefs include the legendary Nancy Silverton, Gaggan Anand, and Franco Pepe, who have all featured on the show and its spin-offs.


Restaurateur and chef Nancy Silverton will be part of Chef’s Table’s inaugural food festival in Park City, Utah, in August.

Photo courtesy of Chef’s Table

Guests can “choose their own adventures” by purchasing different tiered packages and making selections from curated experiences, which include foraging, fly-fishing, butchery, cooking classes and more. Chef’s Table Concierge will then tailor bespoke itineraries for the weekend based on each ticket package.

“We wanted to install chefs into restaurant spaces and let them create entirely new concepts for the weekend for people to enjoy,” Connor says. “I call it the ‘un-festival.’ We’re trying to build something that feels permanent and has all the trappings of permanence, but it really is ephemeral.”

Initially conceived in 2023 as a way to mark the show’s 10th anniversary last year, the festival is a dream for Chef’s Table fans as well as an opportunity for chefs to “take some big swings” creatively, outside the pressures of their own restaurants, show creator David Gelb says.


“The biggest challenge of Chef’s Table is the audience only get to watch and hear the stories, but not taste the food,” Gelb says. “[The festival] is closer to the experience of what eating in these restaurants would be, all brought to one town.”

While the festival promises intimate access and storytelling on and off the plate, cinematic views, and unique surprises, Gelb and Connor hope attendees also walk away with an appreciation and respect for the hard work that extends beyond the back of house.

“It’s a hard industry; it’s difficult to be in the profession right now, but I see it as a celebration of why we come together to eat, why we go to restaurants,” Gelb says. “That’s paramount and one of the most important things that makes us human. We come together around a hearth, we tell stories, and we eat.”

U.S. can hold AI talks with China because ‘we are in the lead,’ Bessent tells CNBC as nations plan safety protocol
Economy


by admin May 14, 2026
written by admin


The U.S. can talk to China about AI because “we are in the lead,” U.S. Treasury Secretary Scott Bessent told CNBC, as the countries unveiled a protocol on best practices for the rapidly improving technology.

“The two AI superpowers are gonna start talking. We’re gonna set up a protocol in terms of how do we go forward with best practices for AI to make sure non-state actors don’t get a hold of these models,” Bessent told Joe Kernen on Thursday, on the sidelines of President Donald Trump‘s two-day meeting in Beijing with Chinese President Xi Jinping.

“The reason we are able to have wholesome discussions with the Chinese on AI is because we are in the lead,” he added. “I do not think we would be having the same discussions if they were this far ahead of us,” he said.

U.S.-based Anthropic has alarmed many in Washington and other countries with the Mythos AI model, which is supposed to have powerful cyberattack capabilities. The company said it would initially release it to select business partners.

BEIJING, CHINA – MAY 14: China’s President Xi Jinping (R) and US President Donald Trump pose for a photo at the Temple of Heaven in Beijing on May 14, 2026. Xi warned Trump that the issue of Taiwan could push their two countries into “conflict” if mishandled, a stark opening salvo as a superpower summit set to tackle numerous thorny issues began in Beijing on May 14. (Photo by Brendan Smialowski – Pool/Getty Images)
China Pool | Getty Images News | Getty Images

Bessent told CNBC he anticipates a big “step-function jump” in upcoming large language model releases from Google‘s Gemini and OpenAI.

Washington has also sought to limit China’s AI development by restricting sales of advanced semiconductors, primarily from Nvidia, to the country. The chipmaker’s CEO, Jensen Huang, joined Trump’s delegation to China as a late addition.

When asked about a Reuters report that Washington had cleared sales of Nvidia’s H200 AI chips to several major Chinese technology firms, Bessent said there had been “a lot of back and forth” on the matter.

Trump and Xi wrapped up their first major meeting of this week’s China trip at 12 p.m. local time Thursday. Beijing’s readout said the Chinese leader emphasized that Taiwan is the most important issue for bilateral relations, and warned against mishandling the issue.

Beijing claims that Taiwan, a democratically self-ruled island, is part of its territory.

Bessent also told CNBC that Trump would say more on the issue of Taiwan “in the coming days.”

“Trump … understands the sensitivities around all this, and anyone who’s been saying otherwise does not understand the negotiating style of Donald Trump,” he added.

Bessent’s week in Asia

Trump’s trip to China this week marks the first visit by a sitting U.S. president since Trump’s own first-term trip in 2017. The summit kicked off Thursday morning and is due to wrap up Friday.

Ahead of the Trump-Xi meeting, Bessent met with Chinese Vice Premier He Lifeng in South Korea on Wednesday.

China’s Commerce Ministry described the preliminary talks as an effort to resolve trade issues and “further expand pragmatic cooperation,” according to a CNBC translation of the Chinese.

In a brief post on X Thursday morning, Bessent shared a picture of himself with He Lifeng, saying they had discussed “the economic and trade relationship between our nations.”


The Treasury Department did not immediately respond to a request for comment on the Seoul meeting.

Bessent also visited Tokyo before joining Trump’s Beijing trip. The Treasury Secretary said in separate X posts that he discussed critical minerals and investment agreements with South Korean President Lee Jae Myung and Japanese Prime Minister Sanae Takaichi.


Android’s latest AI feature predicts what you’ll do next
Tech/AI


by admin May 14, 2026
written by admin

‘Contextual suggestions’ will recommend actions based on your daily habits.


May 14, 2026, 10:37 AM UTC
Google Pixel 10 Lineup
Jess Weatherbed
Jess Weatherbed is a news writer focused on creative industries, computing, and internet culture. Jess started her career at TechRadar, covering news and hardware reviews.

Google is rolling out a new AI-powered “contextual suggestions” feature to Android users that recommends actions based on your daily habits, Android Authority reports. The feature is designed to predict your next action based on your location and habits — such as allowing music streaming apps to suggest your usual playlist when you arrive at the gym for your regular workout.

Contextual suggestions were previously available in the Play Services beta, but Google now seems to have expanded the feature to the stable channel. While Google hasn’t announced an official launch, some reporters at Android Authority and 9to5Google report that it’s available on Pixel 10 series devices running Android 16 and appears to be enabled by default.

We don’t know what the notifications themselves will look like. Screenshots of the settings interface shared by the publications show that users can manage what data is accessed by the feature, such as disabling its ability to use your device location. The privacy section of the contextual suggestions settings says that the feature works “in an encrypted space on your device,” and that your data isn’t shared with Google, apps, or third-party services.

“In this space, AI learns from the data and makes predictions about what might be helpful to you,” Google says in the feature description. “For example, if you often cast sports games to your living room TV on Saturdays, your device can suggest casting at the right time.”

As Android Authority notes, the feature shares some similarities with the Magic Cue feature that launched with Google’s Pixel 10 series, which proactively suggests contextual information such as addresses and contact information that you might want to paste into apps and conversations. Google hasn’t mentioned whether contextual suggestions will be more broadly available on non-Pixel Android phones, but a support page says the feature requires Android 14 or later to support audio and video casting.

We have reached out to Google to clarify the launch timeline and regional/device support. For now, you can check if contextual suggestions are available on your own Android phone by heading to Settings > Google Services > All services > Other.


The shock of seeing your body used in deepfake porn 
Tech/AI


by admin May 14, 2026
written by admin

When Jennifer got a job doing research for a nonprofit in 2023, she ran her new professional headshot through a facial recognition program. She wanted to see if the tech would pull up the porn videos she’d made more than 10 years before, when she was in her early 20s. It did in fact return some of that content, and also something alarming that she’d never seen before: one of her old videos, but with someone else’s face on her body.

“At first, I thought it was just a different person,” says Jennifer, who is being identified by a pseudonym to protect her privacy. 

But then she recognized a distinctly garish background from a video she’d shot around 2013, and she realized: “Somebody used me in a deepfake.”

Eerily, the facial recognition tech had identified her because the image still contained some of Jennifer’s features—her cheekbones, her brow, the shape of her chin. “It’s like I’m wearing somebody else’s face like a mask,” she says. 

Conversations about sexualized deepfakes—which fall under the umbrella of nonconsensual intimate imagery, or NCII—most often center on the people whose faces are featured doing something they didn’t really do or on bodies that aren’t really theirs. These are often popular celebrities, though over the past few years more people (mostly women and sometimes youths) have been targeted, sparking alarm, fear, and even legislation. But these discussions and societal responses usually are not concerned with the bodies the faces are attached to in these images and videos.

As Jennifer, now 37 and a psychotherapist working in New York City, says: “There’s never any discussion about Whose body is this?” 

For years, the answer has generally been adult content creators. Deepfakes in fact earned their name back in November 2017, when someone with the Reddit username “deepfakes” uploaded videos showing faces of stars like Scarlett Johansson and Gal Gadot pasted onto porn actors’ bodies. The nonconsensual use of their bodies “happens all the time” in deepfakes, says Corey Silverstein, an attorney specializing in the adult industry. 

But more recently, as generative AI has improved, and as “nudify” apps have begun to proliferate, the issue has grown far more complicated—and, arguably, more dangerous for creators’ futures. 

Porn actors’ bodies aren’t necessarily being taken directly from sexual images and videos anymore, or at least not in an identifiable way. Instead, they are almost inevitably being used as training data to inform how new AI-generated bodies look, move, and perform. This threatens the livelihood and rights of porn actors as their work is used to train AI nudes that in turn could take away their business. And that’s not all: Advancements in AI have also made it possible for people to wholly re-create these performers’ likenesses without their consent, and the AI copycats may do things the performers wouldn’t do in real life. This could mean their digital doubles are participating in certain sex acts that they haven’t agreed to do, or even that they’re perpetrating scams against fans. 

Adult content creators are already marginalized by a society that largely fails to protect their safety and rights, and these developments put them in an even more vulnerable position. After Jennifer found the deepfake featuring her body, she posted on social media about the psychological effects: “I’ve never seen anyone ask whether that might be traumatic for the person whose body was used without consent too. IT IS!” Several other creators I spoke with shared the mental toll that comes with knowing their bodies have been used nonconsensually, as well as the fear that they’ll suffer financially as other people pirate their work. Silverstein says he hears from adult actors every day who “are concerned that their content is being exploited via AI, and they’re trying to figure out how to protect it.” 

One law professor and expert in violence against women calls these creators the “forgotten victims” of NCII deepfakes. And several of the people I spoke with worry that as the US develops a legal framework to combat nonconsensual sexual content online, adult actors are only at risk of further injury; instead of helping them, the crackdown on deepfakes may provide a loophole through which their content and careers could be stripped from the internet altogether.

How deepfakes cause “embodied harms”

During his preteen years in the 1970s, Spike Irons, now a porn actor and president of the adult content platform XChatFans, was “in love” with Farrah Fawcett. Though Fawcett did not pose nude, Irons managed to get his hands on what looked like pictures of her naked. “People were cutting out faces and pasting them on bodies,” Irons says. “Deepfakes, before AI, had been going around for quite a while. They just weren’t as prolific.”

The early public internet was rife with websites capitalizing on the idea that you could use technology to “see” celebrities naked. “People would just use Microsoft Paint,” says Silverstein, the attorney. It was a simple way to mash up celebrities’ faces with porn. 

People later used software like Adobe After Effects or FakeApp, which was designed to swap two individuals’ faces in images or videos. None of these programs required serious expertise to alter content, so there was a low barrier to entry. That, plus the wealth of porn performers’ videos online, helped make face-swap deepfakes that used real bodies prevalent by the 2010s. When, later in the decade, deepfakes of Gal Gadot and Emma Watson caused something of a broader panic, their faces were allegedly swapped onto the bodies of the porn actors Pepper XO and Mary Moody, respectively.

But it wasn’t just high-profile actors like them whose bodies were being used. Jennifer was “a very minor performer,” she says. “If it happened to me, I feel like it could happen to anybody who’s shot porn.” Since he started his practice in 2006, Silverstein says, “numerous clients” have reached out to report “This is my body on so-and-so.” 

Both people whose faces appear in NCII deepfakes and those whose bodies are used this way can feel serious distress. Experts call this type of damage “embodied harms,” says Anne Craanen, who researches gender-based violence at the UK’s Institute for Strategic Dialogue, an organization that analyzes extremist content, disinformation, and online threats. 

The term reflects the fact that even though the content exists in the virtual realm, it can cause physiological effects, including body dysmorphia. The face-swapped entity occupies the uncanny valley, distorting self-perception. After discovering their faces in sexual deepfakes, many people feel silenced, experts told me; they may “self-censor,” as Craanen puts it, and step back from public-facing life. Allison Mahoney, an attorney who works with abuse survivors, says that people whose faces appear in NCII can experience depression, anxiety, and suicidal ideation: “I’ve had multiple clients tell me that they don’t sleep at night, that they’re losing their hair.” 

Though the impact on people whose bodies are used hasn’t been discussed or studied as often, Jennifer says that “it’s just a really terrible feeling, knowing that you are part of somebody else’s abuse.” She sees it as akin to “a new form of sexual violence.”

The uncertainty that comes with not being aware of what your body is doing online can be highly unsettling. Like Jennifer, many adult actors don’t really know what’s out there. But some devoted followers know the actors’ bodies well—often recognizing tattoos, scars, or birthmarks—and “very quickly they bring [deepfakes] to the adult performer’s attention,” says Silverstein. Or performers will stumble upon the content by chance; some 20 years ago, for instance, the first such client to tell Silverstein her body was being used in a deepfake happened to be searching Nicole Kidman online when she found that one of the results showed Kidman’s face on her porn. “She was devastated, obviously, because they took her body,” he says, “and they were monetizing it.” 

Otherwise, this imagery may be found by an organization like Takedown Piracy, one of several copyright enforcement companies serving adult content creators. US copyright violations can be challenging to prove if someone’s body lacks distinguishing features, says Reba Rocket, Takedown Piracy’s chief operating and marketing officer. But Rocket says her team has added digital fingerprinting technology to clients’ material to help flag and remove problematic videos, often finding them before clients realize they’re online. 

By capturing “tens of thousands of tiny little visual data points” from videos, digital fingerprinting creates unique corresponding files that can be used to identify them, Rocket says—kind of like an invisible watermark. The prints remain even if pirates alter the videos or replace performers’ faces. Takedown Piracy has digitally fingerprinted more than half a billion videos, and the organization has gotten 130 million copyrighted videos taken down from Google alone (though Rocket hasn’t tracked how many of those specifically include someone else’s face on a performer’s body). 
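To see how a fingerprint can survive edits, consider a toy perceptual "difference hash" over a single grayscale frame. This is only an illustrative sketch, not Takedown Piracy's actual technology (which operates across many frames with far richer features), and the synthetic frame values are invented for the example:

```python
# Toy perceptual fingerprint (a "difference hash"): each bit records
# whether a pixel is brighter than its right-hand neighbor, so the
# fingerprint depends on relative structure, not absolute pixel values.
# Illustrative only -- real video-fingerprinting systems are far richer.

def dhash(pixels):
    """Fold an 8x9 grid of grayscale values into a 64-bit fingerprint."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance suggests the same content."""
    return bin(a ^ b).count("1")

# A synthetic 8x9 "frame" of grayscale values (all between 0 and 96).
frame = [[(37 * r + 17 * c) % 97 for c in range(9)] for r in range(8)]

# A uniform brightness shift preserves every brighter-than comparison,
# so the fingerprint is identical -- the mark survives re-grading.
brighter = [[v + 10 for v in row] for row in frame]
assert hamming(dhash(frame), dhash(brighter)) == 0

# Corrupting one pixel flips only the bits that compare against it,
# so the altered copy still sits a short distance from the original.
tampered = [row[:] for row in frame]
tampered[0][0] = 255
assert hamming(dhash(frame), dhash(tampered)) == 1
```

Matching then reduces to comparing fingerprints with a distance threshold rather than byte-for-byte equality, which is why a re-encoded or face-swapped copy can still be flagged.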

Besides copyright, a range of legal tools can be used to try and combat NCII, says Eric Goldman, a law professor at Santa Clara University. For example, victims can claim invasion of privacy. But using these tools isn’t particularly straightforward, and they may not even apply when it comes to someone’s body. If there aren’t, for instance, unique markers indicating that a body in a deepfake belongs to the person who says it does, US law “doesn’t really treat [this content] as invasion of privacy,” Goldman says, “because we don’t know who to attribute it to.”

In a 2018 study that reviewed “judicial resolution” of cases involving NCII, Goldman found that one successful way plaintiffs were able to win cases was to assert “intentional infliction of emotional distress.” But again, that hinges on the ability to clearly identify the person in the content. Relevant statutes, he adds, might also require “intent to harm the individual,” which may be hard to show for people whose bodies alone are featured.

“AI girls will do whatever you want”

In the last few years, Silverstein says, it’s become less and less common to see the bodies of real adult content creators in deepfakes, at least in a way that makes them clearly identifiable. 

Sometimes the bodies have been manipulated using AI or simpler editing tools. This can be as basic as erasing a birthmark or changing the size of a body part—minor edits that make it impossible to identify someone’s image beyond a reasonable doubt, so even porn actors who can tell that an altered image used their body as a base won’t get very far in the legal realm. “A lot of people are like, That looks like my body,” says Silverstein, but when he asks them how, they’ll reply, It just does. 

At the same time, other users are now creating NCII with wholly AI-generated bodies. In “nudify” apps, anyone with a minimal grasp of technology can upload a photo of someone’s clothed body and have it replaced with a fake naked one. “So [much] of this content being created is just someone’s face on an AI body,” Silverstein says.

Such apps have drawn a ton of attention recently, from Grok “nudifying” minors to Meta running ads for—and then suing—the nudify app Crushmate. But there’s been relatively little attention paid to the content being used to train them. They almost certainly draw on the more than 10,000 terabytes of online porn, and performers have virtually zero recourse. 

One reason is that creators aren’t able to demonstrate with any certainty that their content is being used to train AI models like those used by nudify apps. “These things are all a black box,” says Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics. But “given the ubiquity” of adult content, he adds, it’s a “reasonable assumption” that online porn is being used in AI training. 

“It’s just not at all difficult to come up with pornographic data sets on the internet,” says Stephen Casper, a computer science PhD student at MIT who researches deepfakes. What’s more, he says, plenty of shadowy online communities provide “user guides” on how to use this data to train AI, and in particular programs that generate nudes. 

It’s not certain whether this activity falls within the US legal definition of “fair use”—an issue that’s currently being litigated in several lawsuits from other types of content creators—but Casper argues that even if it does, it’s ethically murky for porn created by consenting adults 10 years ago to wind up in those training data sets. When people “have their stuff used in a way that doesn’t respect or reflect reasonable expectations that they had at that time about what they were creating and how it would be used,” he says, there’s “a legitimate sense in which it’s kind of … nonconsensual.” 

Adult performers who started working years ago couldn’t possibly have consented to AI anything; Jennifer calls AI-related risks “retroactively placed.” Contracts that porn actors signed before AI, adds Silverstein, might provide that “the publisher could do anything with the content using technology that now exists or hereafter will be discovered.” That felt more innocuous when producers were talking about the shift from VHS to DVD, because that didn’t change the content itself, just the way it was conveyed. It’s a far different prospect for someone to use your content to train a program to create new content … content that could replace your work altogether. 

Of course, this all affects creators’ bottom line—not unlike the way Google’s AI overviews affect revenue for online publishers who’ve stopped getting clicks when people are content with just reading AI-generated summaries. Performers’ “concern is … it’s another way to pirate [their] content,” says Rocket. 

After all, independent creators aren’t just “having sex on camera,” as the adult content creator Allie Eve Knox says. They’re paying for filming equipment and location rentals, and then spending hours editing and marketing. For someone to then rip off and distort that content “for their own entertainment or financial gain,” she says, “fucking sucks.” 


Tanya Tate, a longtime adult content creator, tells me about another highly unsettling AI-created situation: She was recently chatting with a fan on Mynx, a sexting app, when he asked her if she knew him. She told him no, and “his eyes just started watering,” Tate says. He was upset because he thought she did know him. Turns out he’d sent $20,000 to a scammer who’d used an AI-generated deepfake of Tate to seduce him. 

Several men, Tate subsequently learned, had been scammed by an AI version of her, and some of them began blaming her for their losses and posting false statements about her online. When she reported one particularly aggressive harasser to the police, they told her he was exercising his “freedom of speech,” she says. Rocket, too, is familiar with situations where AI is used to take advantage of fans. “The actual content creator will get nasty emails from these people who’ve been scammed,” she says.

Other porn actors say they fear that their likenesses have been used without consent to do other things they wouldn’t do. One, Octavia Red, tells me she doesn’t do anal scenes, “but I’m sure there’s tons of deepfake anal videos of me that I didn’t consent to.” That could cost her, she fears, if viewers choose to watch those videos instead of subscribing to her websites. And it could cause fans to develop false expectations about what kind of porn she’ll create.

“I saw one AI creator saying, ‘Well, AI girls will do whatever you want. They don’t say no,’” says Rocket. “That horrifies me … especially if they’re training those AI models on real people. I don’t think they understand the damage to mental health or reputation that that can create. And once it’s on the internet, it’s there forever.” 

Efforts to “scrub adult content from the internet”

As AI technology improves, it’s increasingly difficult for people to discern any type of real video from the best AI-generated ones on their own. In one 2025 study, UC Berkeley’s Farid found that participants correctly identified AI-generated voices about 60% of the time (not much better than random chance), while advances like false heartbeats make AI-generated humans tougher than ever to spot.

Nevertheless, most lawyers and legal experts I spoke with said copyright laws are still adult performers’ best bet in the US legal system, at least for getting their face-swapped content taken down. For his clients, Silverstein says, he tries to figure out the content’s origins and then issue takedown requests under the Digital Millennium Copyright Act, a 1998 law that adapted copyright law for the internet era. “Even recently, I had a performer who has an insanely well-known tattoo,” he says, and with a DMCA subpoena he managed to identify the poster of the content, who voluntarily removed it. 

But this way of working is becoming increasingly rare.

These days it’s nearly “impossible,” Silverstein says, to determine who produced a deepfake, because many platforms that host pirated content operate facelessly. They’re also often based in places that “don’t really care about US law when it comes to copyrights,” says Rocket—places like Russia, the Seychelles, and the Netherlands. 

While governments in the EU, the UK, and Australia have said they will ban or restrict access to nudify apps, it’s not an easily executed proposition. As Craanen notes, when app stores remove these services, they often simply reappear under different names, providing the same services. And social platforms where people share NCII deepfakes, argues Rocket, are slacking in getting them removed. “It’s endless, and it’s ridiculous, because places like Twitter and Facebook have the same technology we do,” Rocket says. “They can identify something as an infringement instantly, but they choose not to.”

(Apple spokesperson Adam Dema emailed, “’nudification’ apps are against our guidelines” in the app store, and it has “proactively rejected many of these apps and removed many others,” flagging a reporting portal for users. A Google spokesperson emailed, “Google Play does not allow apps that contain sexual content,” noting it takes “proactive steps to detect and remove apps with harmful content” and has suspended hundreds of apps for violating its policy. A Meta spokesperson shared a blog post about actions it’s taken against nudify apps, but did not respond to follow-up questions about copyrighted material. X did not respond to a request for comment.)

As porn performers are forced to navigate AI-related threats, the only current federal law to address deepfakes may not help them much—and could even make matters worse. The Take It Down Act, which became US law last year, criminalizes publishing NCII and requires websites to remove it within 48 hours. But, as Farid notes, people could weaponize the measure by reporting porn that was made legally and with consent and claiming that it’s NCII. This could result in the content’s removal, which would hurt the performers who made it. Santa Clara’s Goldman points to Project 2025, the Heritage Foundation’s policy blueprint for the second Trump administration, which aims to wipe porn from the web. The Take It Down Act, he argues, “allows for the coordinated effort to scrub adult content from the internet.” 

US lawmakers have a history of hurting sex workers in their attempts to regulate explicit content online. State-level age verification laws are an example: visitors can get around these measures fairly easily, but the laws can still reduce revenue for adult performers, both through lower traffic to those sites and through the high cost of the age-checking services performers have to purchase.

“They’re always doing something to fuck with the porn industry, but not in a way that actually helps sex workers,” says Jennifer. “If they do something, they’re taking away your income again—as opposed to something like giving you more rights to your image, [which] would be tremendously helpful.” 

But as generative AI plays an increasingly large role in NCII deepfakes, the types of images to which adult performers have rights move deeper into a gray area. Can actors lay claim to AI images likely trained on their bodies? How about AI-generated videos that impersonate them, like the one that tricked Tanya Tate’s fan?

The biggest challenge will be creating “legitimate, effective laws that will absolutely protect content creators from abusing their likeness to train and create AI,” Rocket says. “Absent that, we’re just going to have to keep pulling content down from the internet that’s fake.”

In the meantime, a few porn actors tell me, they’re trying to take advantage of copyright laws that weren’t really made for them; they’ve signed with platforms that host their AI-generated duplicates, with whom fans pay to chat, in part so they’ll have contracts that protect ownership of their AI likenesses. When I spoke with the actor Kiki Daire in September 2025 for a story on adult creators’ “AI twins,” she said she “own[ed] her AI” because she’d signed a contract with Spicey AI, a site that hosted AI duplicates of adult performers. If another company or person created her AI-generated likeness, she added, “I have a leg to stand on, as far as being able to shut that down.”  

Even this, though, is not a sure thing; Spicey AI, for instance, shut down several months after I spoke with Daire, so it’s unlikely that her contract would hold. And when I spoke in October with Rachael Cavalli, another adult actor who had signed with an AI duplicate site in hopes it’d help protect her AI image, she admitted, “I don’t have time to sit around and look for companies that have used my image or turned something into a video that I didn’t actually do … it’s a lot of work.” In other words, having rights to your AI image on paper doesn’t make it easier to track down all the potentially infinite breaches of those rights online.

If she had known then what she knows about technology today, Jennifer says, she doesn’t think she would have done porn. The risks have increased too much, and too unpredictably. She now does in-person sex work; it’s “not necessarily safer,” she says, “but it’s a different risk profile that I feel more equipped to manage.” 

Plus, she figures AI is unlikely to replace in-person sex workers the way it could porn actors: “I don’t think there’s going to be stripper robots.” 

Jessica Klein is a Philadelphia-based freelance journalist covering intimate partner violence, cryptocurrency, and other topics.

May 14, 2026
Global

Court overturns Alex Murdaugh’s murder convictions and orders new trial

by admin May 13, 2026

“Both the State and Murdaugh’s defense skillfully presented their cases to the jury as the trial court deftly presided over this complicated and high-profile matter,” the justices wrote. “However, their efforts were in vain because Colleton County Clerk of Court Rebecca Hill placed her fingers on the scales of justice, thereby denying Murdaugh his right to a fair trial by an impartial jury.”
