• Home
  • Investing
  • Global
  • Business
  • Economy
  • Tech/AI
  • Lifestyle
  • About Us
  • Contact
Tech/AI

Google and Pentagon allegedly reach an agreement for ‘any legal’ application of AI

by admin April 28, 2026

The confidential agreement reportedly does not grant Google the power to prohibit how the government utilizes its AI models.


Apr 28, 2026, 11:09 AM UTC
Photo illustration of Sundar Pichai in front of the Google logo
Jess Weatherbed
Jess Weatherbed is a news writer concentrating on creative sectors, technology, and online culture. Jess launched her career at TechRadar, reporting on news and hardware evaluations.

Google has entered into a classified agreement permitting the US Department of Defense to deploy its AI models for “any lawful government purpose,” The Information reports. The pact was disclosed shortly after Google staff requested CEO Sundar Pichai prevent the Pentagon from employing its AI amid apprehensions that it could be utilized in “inhumane or extremely detrimental ways.”

If confirmed, the agreement would put Google in line with OpenAI and xAI, which have also struck classified AI arrangements with the US government. Anthropic was on that list as well, until the Pentagon blacklisted it for declining the Department of Defense’s requests to remove weapon- and surveillance-related safeguards from its AI systems.

Citing a single unnamed source “familiar with the situation,” The Information states that the deal asserts that both parties have concurred that the search giant’s AI systems should not be employed for domestic mass surveillance or autonomous weaponry “without suitable human supervision and control.” However, the contract also indicates it does not grant Google “any authority to oversee or deny lawful governmental operational choices,” implying that the agreed limits are more of a casual agreement than enforceable commitments.

In a statement to Reuters, a Google representative expressed that the firm maintains the view that AI should not be utilized for domestic mass surveillance or autonomous weaponry without proper human oversight. “We consider that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, signifies a responsible approach to supporting national security,” Google informed the outlet.

Tech/AI

Attack of the lethal script kiddies

by admin April 28, 2026
written by admin

Following the events of Mythos, AI-enhanced hobbyist hackers are poised to launch their attacks.

Apr 28, 2026, 11:00 AM UTC

In August of last year, top cybersecurity teams gathered in Las Vegas to showcase their AI-driven bug-finding tools at DARPA’s Artificial Intelligence Cyber Challenge (AIxCC). The tools analyzed 54 million lines of real code that DARPA had seeded with synthetic vulnerabilities. The teams uncovered most of the planted bugs, but their automated systems exceeded expectations: they also found more than a dozen flaws that DARPA had never introduced.

Even before the seismic security shift brought on by Anthropic’s latest model, Claude Mythos, which appears to uncover vulnerabilities in every piece of software it examines, automated systems were already becoming adept at pinpointing flaws in code. Concerns are intensifying that AI could not only identify these flaws but also be used to exploit them, putting hacking capability within reach of anyone in the world.

This situation is far from hypothetical. For years, this kind of low-skill hacker, termed a script kiddie, has created chaos by executing scripts obtained from the web or copied from exploit toolkits. They lacked the understanding or technical skill to generate these scripts on their own. Nevertheless, they managed to deface websites and spread malware.

The current landscape marks a significant escalation, empowering individuals without technical backgrounds to utilize AI to magnify their abilities in ways that were previously unachievable with basic scripts. The implications are likely to be significantly broader.

“A tidal wave is approaching. We can all see it,” stated Dan Guido, CEO and co-founder of cybersecurity firm Trail of Bits, a runner-up in the challenge. “Will you surrender or will you take action?”

Beyond Project Glasswing, Anthropic is actively working to thwart the misuse of its software by those with malicious intent. A week after unveiling Mythos, the firm introduced Claude Opus 4.7, which for the first time integrated safeguards designed to prevent harmful cybersecurity inquiries. (Security experts wishing to employ the model defensively can apply to the company’s Cyber Verification Program.)

The launch of Mythos sent ripples through the industry, but signs of AI’s cybersecurity capabilities were evident before this announcement. In June 2025, the autonomous offensive security platform XBOW surpassed human hackers to lead the leaderboard on HackerOne, a bug bounty platform, highlighting significant advancements in AI’s ability to detect vulnerabilities.

By the time AIxCC took place, “there were already 10 to 20 distinct bug-detecting systems capable of identifying vastly more bugs than we could possibly address,” Guido remarked. “This is not a fresh challenge.”

AI excels at recognizing patterns, and it’s becoming progressively simpler for individuals to uncover variations of both known and unknown bugs. Furthermore, crafting exploits is now more manageable.

“With AI tools and minimal to no human intervention, you can discover a zero-day vulnerability in widely utilized software,” said Tim Becker, senior security researcher at Theori, also a finalist in the contest.

The anxiety is evident across the sector, with advancements in models — along with enhanced comprehension of their functions — occurring at breakneck speed.

Open-weight models, or those with publicly accessible trained parameters (known as weights), also present a risk. In fact, sophisticated threat actors are more inclined to run their deployments privately to keep the exploits concealed from Anthropic or OpenAI servers, Becker remarked, as Anthropic may retain data to prevent abuse. The industry is bracing for the potential fallout of what may follow. Other model creators might not exercise the same caution as Anthropic, potentially making their robust new tools readily available to the public.

“Regardless of Mythos, this is inevitable,” Guido asserts.

Mythos marks a step change in exploit creation, but existing models are already capable. Security researchers are using more readily available models to find and report vulnerabilities to companies before they can be exploited, and the same capability creates the risk of malicious actors turning these models to harmful ends, such as engineering exploits for repressive regimes or compromising sensitive data.

Experts in the field anticipate that advancements in AI security will result in a surge of new exploits. Malicious actors could direct AI to identify bugs in niche software that previously received little attention for exploitation.

“Now, because the effort required is minimal, you can target lower-tier items. You can develop exploits for software used by just one company. You can craft exploits for software that exists in a unique configuration utilized by a single corporation. And you can accomplish this dynamically. For instance, during a breach in a hospital, if there’s a barrier between you and your objective, you can simply direct an LLM to that barrier and command, ‘Identify a flaw here,’ and it will iterate until successful. It will uncover a vulnerability, spot a configuration, and execute an exploit for a weakness that has never been discovered before, all with minimal input from the user… the hacker… the script kiddie,” remarked Guido.

This escalates the abilities of script kiddies, he explains, as they can act spontaneously without needing to memorize weaknesses in various UNIX utilities, instead relying on the pretraining embedded within the tool they utilize. They’ll be capable of rapidly cycling through exploits aimed at targeting vulnerabilities at machine speed, something beyond human capacity — even beyond that of a script kiddie.

Determining the exact extent to which this enhances attacker abilities is challenging, although there clearly appears to be a connection. Security researchers can assist in understanding the magnitude of bugs being discovered.

Before Becker began focusing on automatic bug discovery through AI, he specialized in vulnerability research, locating zero-days and notifying maintainers. He stated that it previously took him weeks or even months to identify a high-impact vulnerability in a new codebase, whereas now it requires mere hours.

“I simply feed the code into our AI bug-detection tool, and within hours, I receive a report containing several potential vulnerabilities, most of which end up being valid issues,” he claimed. “The entry barrier to diving into a new million-line codebase and spotting a bug is far lower than it used to be.”
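The workflow Becker describes, pointing an AI tool at a codebase and getting back a list of candidate vulnerabilities, can be sketched as a loop that sends source code to a model with a review prompt and parses structured findings. Everything below is hypothetical: the prompt, the JSON shape, and the `query_model` stub (which flags unbounded `strcpy` calls so the plumbing runs end to end) stand in for a real model call; Theori has not published its tool's internals.

```python
# Hypothetical sketch of an LLM-driven bug triage loop. `query_model`
# is a stand-in for any chat-completion API call; real tools are far
# more involved (compilation, fuzzing, exploit validation, etc.).
import json
import re

PROMPT = (
    "Review the following C function for memory-safety bugs. "
    "Respond with JSON: {\"findings\": [{\"line\": int, \"issue\": str}]}\n\n"
)

def query_model(prompt: str) -> str:
    # Placeholder: a real implementation would call a hosted or locally
    # run model here. This stub flags unbounded strcpy calls so the
    # surrounding plumbing can be demonstrated without network access.
    code = prompt[len(PROMPT):]
    findings = [
        {"line": lineno, "issue": "unbounded strcpy"}
        for lineno, line in enumerate(code.splitlines(), start=1)
        if re.search(r"\bstrcpy\s*\(", line)
    ]
    return json.dumps({"findings": findings})

def scan_source(code: str) -> list[dict]:
    """Send one compilation unit to the model and parse its findings."""
    return json.loads(query_model(PROMPT + code))["findings"]

sample = 'void greet(char *name) {\n  char buf[16];\n  strcpy(buf, name);\n}'
print(scan_source(sample))  # one finding, on the strcpy line
```

In a real pipeline the findings would still need human or automated validation, which is exactly the triage burden the researchers quoted here describe.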

Each rollout of a new automated tool has triggered a wave of concern regarding potential exploitation, regardless of whether these tools are text-to-image generators or open-source systems like the exploit development and delivery framework Metasploit. The anxiety dates back to 1995, when a free software vulnerability scanner named SATAN (an acronym for Security Administrator Tool for Analyzing Networks) was introduced.

Often, automated tools do not bring about the level of chaos that had been anticipated or foretold, due to preventive measures, low uptake rates among attackers, or other variables.

Joshua Saxe, CTO and co-founder of Security Superintelligence Labs, noted in a blog post that exploits themselves do not initiate cyberattacks, and that the adoption of AI vulnerability research tools has been gradual.

“There seems to be an unspoken understanding that once a new adversarial tool becomes accessible… we will instantaneously witness illicit behavior with it. It’s a mindset that overlooks the need to consider what humans are genuinely doing,” he told The Verge.

Saxe highlights that resistance may arise among various factions of attackers adopting these tools into their existing workflows and organizational cultures. “There’s a substantial human and organizational facet here,” he remarked.

“It’s possible certain attacker groups will swiftly embrace these new tools, or the uptake may be rather sluggish.” Some might continue breaching networks through phishing or leveraging exploits they already possess, while others could begin forming new exploits via these tools.

While the pace of adoption is uncertain, companies can take proactive measures to brace for the impending wave of vulnerability reports.

Katie Moussouris, founder and CEO of Luta Security, coined the term “Vulnapalooza” in a blog entry featuring a concert poster and a festival survival guide for security teams, framing this as a critical moment for organizations to shore up their defenses. Her recommendations mirror conventional best practices: network segmentation, tighter identity and access management, memory-safe code, phishing-resistant authentication, and up-to-date software.

The Cloud Security Alliance published a rapid strategy briefing focused on formulating a “Mythos-ready” security strategy that encapsulates several of these concepts. The report highlighted the necessity of not only addressing vulnerabilities but also determining which ones to prioritize. However, the urgency to keep pace with machine-speed threats is novel, and the volume of bug reports is already surging, necessitating preparation for an uptick in incidents and their swift containment and mitigation.

Moussouris points out that many individuals in cybersecurity roles have faced layoffs due to AI’s efficiencies, even though these efficiencies are precisely the reason human oversight becomes increasingly essential. Organizations will require human threat hunters, threat intelligence analysts, and incident responders to handle the influx of new exploits. Additionally, they’ll need people to determine which patches demand prioritization and execution.

“We lack an equivalent AI-based defense system to automate all of these functions, and I believe we will need substantial staffing increases,” she articulated. Organizations must also construct secure software and secure network architecture to avoid falling into a ceaseless cycle of patching. “More secure software must be developed initially. We cannot simply respond to incidents as a route to resilience.”

Organizations unable to expand their hiring could at least streamline their vendor onboarding procedures to facilitate quicker engagement of personnel or services when necessary. “Being ensnared in a lengthy vendor procurement process while under siege is a situation to avoid,” Moussouris advised.

Despite widespread concerns about vulnerabilities, Moussouris contends that the anticipated “vulnpocalypse” may actually manifest as a “patchpocalypse.”

“The model has already uncovered thousands of vulnerabilities, and the tsunami of patches that will arise from this collaborative effort will present significant challenges,” she noted.

Organizations that delay patching their systems might face unwelcome surprises. Prolonged inaction increases the risk of active attacks targeting vulnerabilities that AI has identified, potentially employing exploits crafted by the models themselves.

“The timeframe from when a vulnerability is disclosed to when exploit code becomes available has effectively shrunk to nearly zero, representing a significant adjustment that individuals must incorporate into their risk assessments and timelines for action,” she elucidated.

There is a chance to leverage AI to accelerate the remediation or mitigation processes. Becker mentioned that Theori is developing a commercial tool named Xint, which has been operational on open-source codebases, manually reporting critical findings to maintainers and providing detailed reports with remediation guidance at its own expense, serving both as a community defense initiative and to showcase the tool’s capabilities. The current version of Xint was able to identify all the bugs Mythos did while analyzing the same codebases. It also detected 12 additional zero-day vulnerabilities absent from Anthropic’s announcement.

However, addressing these bugs won’t be as swift as discovering them, as it necessitates engineers with extensive familiarity with the codebase to ascertain if the patches represent the optimal solutions or if they could compromise maintainability or clarity in the future. Occasionally, a patch may offer a solution but not the most effective one, thus requiring human time and effort to finalize the remedies.

The considerable increase in reported bugs can lead to a lengthy queue of items to be patched, particularly for open-source maintainers, who may find it challenging to manage the influx.

While not every bug holds value for an attacker, sifting through the pile to identify which ones warrant immediate attention can be nearly as challenging as the fixes themselves.

“Much of the prioritization must be contextual,” Moussouris remarked. For instance, a highly adverse bug existing internally that would be difficult for an outsider to access may rank lower in urgency than a less critical bug that is visible on the organization’s perimeter.
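Moussouris’s point, that an exposed medium-severity bug can outrank an internal critical one, can be illustrated with a toy scoring rule. The fields and weights below are invented for illustration; real triage would weigh far more context (asset criticality, compensating controls, exploit maturity).

```python
# Toy contextual prioritization: severity alone doesn't set the patch
# queue; exposure and exploit availability matter too. The weights
# here are illustrative, not any published standard.
from dataclasses import dataclass

@dataclass
class Finding:
    name: str
    severity: float        # 0-10, e.g. a CVSS base score
    internet_facing: bool  # reachable from the perimeter?
    exploit_public: bool   # is exploit code already circulating?

def priority(f: Finding) -> float:
    score = f.severity
    score *= 2.0 if f.internet_facing else 0.5  # perimeter beats internal
    score += 3.0 if f.exploit_public else 0.0   # known exploit jumps the queue
    return score

findings = [
    Finding("internal RCE behind VPN", 9.8, False, False),
    Finding("perimeter auth bypass", 6.5, True, True),
]
queue = sorted(findings, key=priority, reverse=True)
print([f.name for f in queue])  # the perimeter bug comes first
```

Under these (made-up) weights the 6.5-severity perimeter bug scores 16.0 while the 9.8-severity internal one scores 4.9, matching the contextual ordering Moussouris describes.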

In addition to the prioritization of bugs, organizations must decide when to implement patches that might limit functionality and potentially induce downtime, and when it might be prudent to postpone. The fewer security measures they have in place, the more time they will require for patching.

Simply issuing a patch makes it easier for attackers to reverse-engineer the solution and exploit vulnerabilities they may not have previously identified in unpatched devices. This necessitates that consumers adjust to frequent updates of their software as the volume of critical security fixes escalates significantly. Likewise, organizations should invest in secure architectures to minimize the number of patches they need to oversee initially.

However, as Moussouris encapsulates it, there’s no need for despair. “This doesn’t have to be viewed as the worst possible scenario,” she advised The Verge. “You can approach this as a chance to fortify defenses and secure funding to address issues that have been deferred.”

Regardless of the attitude organizations adopt, they must be ready. The stakes are heightened, and even script kiddies now have substantially more potential to discover and exploit vulnerabilities. Companies require a robust strategy to contend with the emerging threat of AI-facilitated assaults.

“The year 2026 is pivotal; it’s the year to either thrive or fail,” warned Guido. Organizations need to safeguard their systems immediately while they still have an opportunity to proactively manage risks. “If they fail to act, we may reach the end of 2026 amidst chaos.”

Yael Grauer
Economy

UPS surpasses Wall Street forecasts on both revenue and earnings

by admin April 28, 2026

A UPS delivery person is seated in their vehicle on April 15, 2026, in the Flatbush area of Brooklyn, New York City.
Michael M. Santiago | Getty Images

United Parcel Service announced its first-quarter earnings on Tuesday, exceeding expectations on both revenue and profits.

Shares of the logistics company fell approximately 3% during premarket trading.

The following details showcase the company’s performance in the first quarter, compared to Wall Street’s forecasts, according to analyst surveys by LSEG:

  • Earnings per share: $1.07 adjusted vs. $1.02 projected
  • Revenue: $21.2 billion vs. $20.99 billion projected

For the quarter ending March 31, UPS recorded a net income of $864 million, translating to $1.02 per share, down from $1.19 billion or $1.40 per share the previous year. After adjusting for extraordinary items, the firm reported a profit of $906 million, or $1.07 per share.
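A quick back-of-the-envelope check confirms the per-share figures are internally consistent: dividing net income by EPS implies roughly the same diluted share count for both the GAAP and adjusted results. (The share count itself is inferred here, not reported in the article.)

```python
# Back out the implied diluted share count from UPS's reported figures:
# net income divided by earnings per share. GAAP and adjusted results
# should imply roughly the same number of shares.
gaap_shares = 864e6 / 1.02      # $864M net income at $1.02/share
adjusted_shares = 906e6 / 1.07  # $906M adjusted profit at $1.07/share
print(round(gaap_shares / 1e6), round(adjusted_shares / 1e6))  # both ≈ 847 (million)
```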

“The first quarter of 2026 represented a pivotal shift for UPS where we needed to flawlessly implement several key strategic initiatives, and we did,” stated CEO Carol Tomé. “Now that we have that behind us, we anticipate a return to revenue growth and profit improvements, alongside expanded operating margins in the second quarter of this year.”

For full-year 2026, the company reaffirmed its guidance: revenue of $89.7 billion and a non-GAAP adjusted operating margin of 9.6%.

Within its domestic sector, UPS indicated a revenue drop of 2.3%, mainly attributed to a predicted decline in volume.

UPS is undergoing a transformation plan and boosting automation within its operations. In the first quarter, the company reported achieving $600 million in savings through its network efficiency plan, with the goal of reaching $3 billion in annual savings by 2026.

Company leaders will conduct a conference call at 8:30 a.m. ET.

Economy

GM increases 2026 outlook following $500 million tariff reimbursement, exceeding Wall Street’s profit forecasts.

by admin April 28, 2026

The global headquarters of General Motors in Detroit’s Hudson area, Michigan, United States, captured on Monday, January 12, 2026.
Jeff Kowalsky | Bloomberg | Getty Images

DETROIT — General Motors has updated its 2026 forecast after significantly exceeding Wall Street’s initial earnings forecasts for the first quarter, thanks to approximately $500 million from the U.S. Supreme Court ruling to cancel and refund specific tariffs imposed under President Donald Trump’s administration.

The company’s performance in the first quarter, contrasted with average projections from LSEG, is as follows:

  • Earnings per share: $3.70 adjusted vs. $2.62 anticipated
  • Revenue: $43.62 billion vs. $43.68 billion expected

GM’s tariff benefit under the International Emergency Economic Powers Act was largely anticipated by Wall Street, but the precise sum it would receive remained unclear. It is part of a total of $160 billion in potential reimbursements to companies after the Supreme Court ruled the tariffs unlawful in February in a 6-3 decision.

The automaker has not yet received the IEEPA reimbursements but expects to, and it chose to account for the benefit in the first quarter.

The Detroit-based manufacturer revised its 2026 forecast to reflect adjusted earnings before interest and taxes ranging from $13.5 billion to $15.5 billion, or $11.50 to $13.50 per share, an increase of $500 million, or 50 cents per share, from prior estimates; net income attributable to shareholders between $10.3 billion and $11.7 billion, up from $9.9 billion to $11.4 billion; and automotive operating cash flow between $19 billion and $23 billion, raised from $16.8 billion to $20.8 billion.

Excluding the tariff adjustment, the company’s adjusted earnings in the first quarter would still have exceeded expectations and marked an increase of about 7.5% from a year prior. GM CEO Mary Barra noted in a letter to shareholders that this quarter exceeded the company’s forecasts.

GM’s first-quarter results for 2025 showed revenues of $44.02 billion, net income attributable to shareholders of $2.78 billion, and adjusted earnings before interest and taxes of $3.49 billion.

Global

US political violence creates a well-known cycle – this time it’s intensified.

by admin April 27, 2026

Erika Kirk, the wife of the conservative activist Charlie Kirk, who was shot and killed last September, was visibly distraught. Congressman Steve Scalise, the House majority leader, who suffered life-threatening injuries in a 2017 shooting at a baseball practice with Republican colleagues, was escorted away by security personnel.

Tech/AI

Put it in pencil: NASA’s Artemis III mission will not launch before late 2027

by admin April 27, 2026

At this point, the earliest Artemis III appears likely to launch is late 2027.

“Both vendors, SpaceX and Blue Origin, have provided responses indicating they can support a late‑2027 rendezvous, docking, and an interoperability test of their landers ahead of a landing attempt in 2028,” Isaacman said Monday.

Each company holds multibillion‑dollar agreements to design and deliver human‑rated landers to NASA for Artemis flights. Both platforms will need on‑orbit refueling to travel to the Moon, a complexity not required for missions confined to Earth orbit.

“Taxpayers are making a substantial investment in the Human Landing System (HLS) capabilities of both SpaceX and Blue Origin,” Isaacman testified to the House Appropriations subcommittee that oversees NASA’s budget. “I’d also emphasize that both firms are investing significantly more of their own resources as well.”

Starship and Blue Moon are much larger than the Apollo lunar lander and could eventually be refueled at the Moon to carry out repeated trips between the surface and orbiting crew and cargo vessels.

“That capability enables not just a return to the Moon, but the actual construction of a lunar base, delivering large amounts of mass to the surface affordably and at scale, along with all the other benefits of a reusable rocket,” Isaacman said. “We’re very thankful for that.”

Preparing Starship and Blue Moon for crewed missions presents major hurdles. On Apollo 9, two astronauts flew the lunar module on a test sortie in low Earth orbit, separating from the command module, where the third crewmember remained, for more than six hours before docking again. To perform a comparable test on Artemis III, Starship or Blue Moon would need an advanced independent life‑support system, human‑rated engines, a proper cockpit and flight controls, and a docking interface. SpaceX and Blue Origin have disclosed few specifics about the status of those systems in development and production.



An artist’s depiction of NASA’s Orion docked to SpaceX’s Starship lunar lander near the Moon.
Credit: NASA/SpaceX

NASA could opt for a scaled‑back Artemis III that includes a rendezvous and docking but omits an independent crewed flight of the lunar lander. Agency leaders must weigh those choices in the coming months, guided by how rapidly and successfully SpaceX advances Starship Version 3 flights and by Blue Origin’s planned uncrewed Blue Moon cargo landing near the Moon’s south pole.

Tech/AI

Elon Musk and Sam Altman are heading to court regarding the future of OpenAI.

by admin April 27, 2026

Following a prolonged legal battle, Elon Musk and OpenAI’s CEO Sam Altman are set to go to trial this week in Northern California over a case that may have significant ramifications. With OpenAI’s much-anticipated IPO on the horizon, the court could decide if the company can continue as a for-profit entity and may even remove its current executive team, including Altman.

Musk is suing OpenAI, claiming that Altman and OpenAI president Greg Brockman misled him into financing the company in its formative years by promising to keep it as a nonprofit focused on developing AI for humanity’s benefit, only to subsequently transition the organization to operate a for-profit subsidiary. Musk was a co-founder of OpenAI with Altman and others in 2015, but departed in 2018 following a contentious power conflict.

Musk is pursuing as much as $134 billion in damages from OpenAI and Microsoft, one of OpenAI’s major financial supporters. He is also requesting the court to dismiss Altman and Brockman from their positions and to revert OpenAI to its nonprofit status. Musk has requested that any awarded damages be allocated to OpenAI’s nonprofit rather than to him directly.

A jury of nine will deliver an advisory verdict, a non-binding recommendation, to help the judge evaluate Musk’s allegations against Altman. Musk, Altman, and Brockman will testify. Former OpenAI chief scientist Ilya Sutskever, former OpenAI CTO Mira Murati, and Microsoft CEO Satya Nadella are also expected to testify. Personal texts, unfiltered diary entries, and behind-the-scenes maneuvering over OpenAI’s founding and growth are expected to surface.

In a sector rife with secrecy, the trial will offer a rare glimpse for the public into the inner workings of the companies creating the most groundbreaking technology ever made.

What is the conflict about?

When OpenAI was first established as a nonprofit, supported by a $38 million contribution from Musk, the organization promised to develop open-source technology for the benefit of the public, free from the pressure to achieve financial returns. However, over time, the organization felt that increasing competition could threaten its ability to disclose how it produces its AI models and that a nonprofit framework would struggle to secure sufficient funding for ongoing AI development. (MIT Technology Review initially reported on the internal disputes within OpenAI regarding its mission.)

The court has previously determined that in 2017, Altman and Brockman aimed to create a for-profit division, while Musk suggested merging OpenAI with his electric vehicle company, Tesla. When Musk threatened to withdraw funding, Altman and Brockman assured him they intended to keep the company a nonprofit. Musk contends that they pursued plans to shift to a for-profit model without keeping him informed. According to OpenAI, Musk acknowledged that the organization required a for-profit branch and even expressed a desire to be its CEO. 

Even if Musk can demonstrate that Altman and Brockman deceived him, he may lack the legal standing to sue them for restructuring the organization to create a for-profit subsidiary. Some legal experts are puzzled that the judge allowed this claim to proceed. “The notion that Elon Musk can sue simply because he was a donor or previously served on the board is quite intriguing,” remarks Jill Horwitz, a law professor specializing in nonprofit law at Northwestern University. “Usually, it’s the responsibility of the attorneys general to bring such claims to uphold charitable objectives. And that process has already occurred.”

In October 2025, the attorneys general of California, where OpenAI is located, and Delaware, where OpenAI is registered, reached an agreement with OpenAI to sanction its new corporate arrangement under a set of conditions. For instance, a committee focused on safety and security at the nonprofit would oversee safety-related choices made by the for-profit subsidiary. Opponents of the restructuring, including Musk, AI safety advocates, and civil society organizations, have attempted to halt it. 

California’s attorney general has refused to participate in Musk’s lawsuit, stating that the office did not find how his action serves the public interest.

However, whether the agreement binds OpenAI to its nonprofit mission remains uncertain. “Elon Musk needs to demonstrate… what the shortcomings are in the agreement made by OpenAI with the attorneys general,” explains Rose Chan Loui, director of UCLA School of Law’s philanthropy and nonprofit program. Even with the established terms, holding OpenAI accountable is contingent on “how effectively they can enforce them and how much visibility they have into OpenAI’s operations.”

More critically, legal specialists assert that the case is being reviewed under the wrong legal framework. Musk contends that Altman and Brockman violated OpenAI’s charitable trust by establishing a proprietary, for-profit subsidiary. Consequently, the court has been examining the claim under trust law. “But OpenAI is not a trust. OpenAI is a corporation. Therefore, they should ideally consider… the law related to charitable nonprofit organizations,” Chan Loui suggests.

What’s at stake?

Despite the legal complexities, the trial’s outcome could shake up the AI industry. Any of the remedies Musk is pursuing could cripple OpenAI as it strives to go public by year’s end. OpenAI, which has a valuation exceeding $850 billion, has labeled the legal battle with Musk a potential threat to its operations. Musk’s competing company xAI, known for the chatbot Grok, is expected to go public as part of his rocket venture SpaceX as soon as June. Should Musk win, xAI, combined with SpaceX, which is valued at $1.25 trillion, could gain a substantial edge in the AI competitive landscape.

Moreover, the trial has highlighted the profound divide between Musk and the organization he co-founded. An OpenAI representative directed MIT Technology Review to a statement issued on X: “This lawsuit has consistently been an unfounded and envious attempt to undermine a competitor.” While Musk’s attorneys did not immediately respond to a request for comment, he has posted on X that “Scam Altman lies as easily as he breathes.”

MIT Technology Review will provide continuous coverage of Musk v. Altman until it concludes. Follow @techreview or @michelletomkim on X or @michelletomkim on Bluesky for the latest updates.

April 27, 2026
Tech/AI

Open-source package with 1 million monthly downloads exfiltrated users’ credentials

by admin April 27, 2026

The maintainers are asking everyone who installed version 0.23.3 to carry out the following steps immediately:

1. Verify which version you have installed:

pip show elementary-data | grep Version

2. If it reports 0.23.3, remove it and install the safe release:

pip uninstall elementary-data

pip install elementary-data==0.23.4

Explicitly pin elementary-data==0.23.4 in your requirements and lockfiles.

3. Clear your package caches (for example, pip cache purge) so stale copies of the compromised release can’t be reinstalled from leftover artifacts.

4. Inspect any machine where the CLI may have run for the malware marker file: if found, the payload ran on that host.

macOS / Linux: /tmp/.trinny-security-update

Windows: %TEMP%\.trinny-security-update

5. Rotate all credentials that could have been accessed from environments where 0.23.3 ran — dbt profiles, warehouse credentials, cloud provider keys, API tokens, SSH keys, and any .env contents. CI/CD runners are especially at risk because they often expose many secrets at runtime.

6. Engage your security team to look for signs of unauthorized use of exposed credentials. The relevant IOCs are listed at the bottom of this post.
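The version check in step 1 and the marker-file check in step 4 can be combined into a small script. This is an illustrative sketch, not official tooling from the maintainers: the package name and marker filename come from the advisory above, while the function names are my own.

```python
import os
import sys
import tempfile
from importlib import metadata
from typing import Optional

COMPROMISED_VERSION = "0.23.3"
MARKER_NAME = ".trinny-security-update"


def installed_version(package: str = "elementary-data") -> Optional[str]:
    """Return the installed version of `package`, or None if it isn't installed."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None


def marker_present() -> bool:
    """Look for the malware marker file in the platform temp directory.

    tempfile.gettempdir() resolves to /tmp on macOS/Linux and %TEMP% on
    Windows, matching the paths listed in the advisory.
    """
    return os.path.exists(os.path.join(tempfile.gettempdir(), MARKER_NAME))


def check() -> int:
    """Print findings and return a nonzero status if any indicator is found."""
    version = installed_version()
    if version == COMPROMISED_VERSION:
        print(f"compromised version {version} installed -- upgrade to 0.23.4")
        return 1
    if marker_present():
        print("marker file found -- the payload ran on this host")
        return 1
    print("no indicators found")
    return 0


if __name__ == "__main__":
    sys.exit(check())
```

Run it on any host where the CLI may have executed; a nonzero exit status means the remediation and credential-rotation steps above apply to that machine.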

Over the last ten years, supply-chain attacks against open source repositories have become more frequent. In many incidents a malicious package has triggered a chain of compromises: first affecting users and then enabling further breaches via the compromised users’ environments.

HD Moore, a hacker with more than forty years of experience and the founder and CEO of runZero, warned that user-created repository workflows, like GitHub Actions, are well known for containing vulnerabilities.

“It’s a major problem for open source projects with open repos,” he said. “It’s very easy to unintentionally create dangerous workflows that an attacker’s pull request can exploit.”

He noted that this package can help detect such weaknesses.

Global

Russian troops validate exit from northern Mali town following separatist assaults

by admin April 27, 2026

In the town of Tessit, located to the south of Gao, JNIM reported that Mali’s military had capitulated to their fighters, as per Reuters. Their announcement specified that they were permitting Malian troops to surrender their arms and retreat safely. The military has yet to respond to these assertions, which the BBC is unable to verify independently.

Tech/AI

The overlooked phase connecting excitement and earnings

by admin April 27, 2026

This piece was first published in The Algorithm, our weekly newsletter focused on AI. To receive stories like this directly in your inbox, subscribe here.

In February, I picked up a pamphlet at an anti-AI demonstration in London. I can’t confirm if the authors intended to reference South Park’s underpants gnomes. However, if they did, they hit the mark: “Step 1: Cultivate a digital super intelligence,” it stated. “Step 2: ? Step 3: ?”

Created by Pause AI, a global activist organization that co-hosted the protest, it concluded with this urgent request to the audience: “Pause AI until we clarify what Step 2 is.” 

In the South Park episode “Gnomes,” first aired in 1998, Kenny, Kyle, Cartman, and Stan discover a clan of gnomes that sneak out at night to steal underpants from drawers. Why? The gnomes unveil their business plan. “Phase 1: Collect underpants. Phase 2: ? Phase 3: Profit.”

The gnomes’ business plan has since become one of the internet’s classic memes, used to parody everything from startup pitches to policy proposals. Memelord supreme Elon Musk once invoked it in a discussion about funding a Mars mission. Today, it captures the state of AI. Companies have built the technology (Step 1) and promised revolution (Step 3). How to get from one to the other remains a major uncertainty.

According to Pause AI, Step 2 should entail some form of regulation. However, what it specifically entails and who will implement it are subjects of discussion.

AI proponents, by contrast, are convinced that Step 3 means deliverance and tend to skip over the intermediate phase. They see us racing toward bright horizons on the momentum of an “economically transformative technology,” as OpenAI’s chief scientist, Jakub Pachocki, put it to me a few weeks back. They have a general idea of where they want to go, mostly: the details are fuzzy and still some way off. And everyone has a different approach. Will they all succeed? Will anyone?

For every grand assertion about the future, there exists a more realistic evaluation of how actual implementation occurs—one that tempers the enthusiasm. Consider two recent analyses. One, by Anthropic, forecasted which job categories will be most impacted by LLMs. (A key insight: Managers, architects, and media professionals should brace for changes; groundskeepers, construction workers, and those in hospitality, not so much.) However, their forecasts are primarily conjectures, based on the tasks LLMs appear adept at rather than their actual performance in real-world settings.   

Another analysis, issued in February by researchers at Mercor, an AI hiring startup, assessed various AI agents powered by leading models from OpenAI, Anthropic, and Google DeepMind on 480 workplace tasks commonly performed by human bankers, consultants, and attorneys. Every agent they evaluated fell short in fulfilling most of its responsibilities.   

Why such a stark divergence in opinion? Several factors contribute. First, it helps to look at who is making the claims (and their motives). Anthropic has a vested interest. Furthermore, many of those asserting that a substantial change is imminent have reached that conclusion largely on the strength of AI coding tools’ rapid progress. But not every task can be solved with code alone. Other studies have shown that LLMs struggle with strategic decision-making, for instance.

Moreover, these tools are not deployed into a pristine environment. They must function in workplaces full of people and established processes. Sometimes adding AI makes things worse. To be sure, those processes may need to be torn down and rebuilt around the new technology before it can be transformative, but that will take time (and courage).

That significant gap? It exists precisely where Step 2 should be. The lack of consensus on what is forthcoming—and how—creates an information vacuum that is filled by the latest outrageous assertion of the week, evidence notwithstanding. We are so detached from any genuine comprehension of what lies ahead and how it will unfold that a single social media post can (and does) disrupt markets.

We require fewer speculations and more empirical data. However, this will necessitate transparency from the model developers, collaboration between researchers and companies, and innovative methods to assess this technology that inform us of what truly happens when it is deployed in real-world scenarios.

The tech sector (and with it the global economy) rests on the expectation that AI really will be transformative. But that is not yet a safe assumption. The next time you encounter bold predictions about the future, remember that most enterprises are still figuring out what to do with their underpants.
