Tech/AI

How robots acquire knowledge: A concise, modern chronicle

by admin April 17, 2026

Robotic engineers once envisioned grand projects but executed modest creations. They aspired to replicate or surpass the intricate design of the human form, yet often ended up perfecting robotic appendages for manufacturing facilities. The target was C-3PO; the result was a Roomba.

For many in this field, the true goal was the sci-fi robot—capable of navigating its environment, adapting to varying conditions, and engaging with humans in a safe and beneficial manner. For those with a social focus, such a device could assist individuals with mobility challenges, mitigate feelings of isolation, or take on tasks deemed perilous for people. For those motivated by finance, it symbolized an unending supply of labor without wages. Regardless, a lengthy track record of setbacks left many in Silicon Valley wary of investing in helpful robotics.

Times have changed. The robots themselves have yet to be built, but investment has surged: Companies and backers poured $6.1 billion into humanoid robotics in 2025 alone, quadruple the amount from 2024.

What triggered this shift? A breakthrough in how machines learn from interaction with their environment.

Imagine desiring a set of robotic arms in your home solely for the task of folding laundry. How would it acquire this skill? You could begin by establishing guidelines. Assess the fabric to determine its resilience against tearing. Recognize the collar of a shirt. Position the gripper on the left sleeve, elevate it, and fold it inward by a specific measurement. Repeat this for the right sleeve. If the shirt is turned, adjust the plan accordingly. If the sleeve is twisted, rectify it. The number of instructions would quickly become vast, but a comprehensive understanding could yield dependable outcomes. This was the original art of robotics: foreseeing every scenario and encoding it ahead of time.
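
To make the contrast concrete, here is a minimal Python sketch of that rule-based style. The garment states, actions, and measurements are hypothetical simplifications, but the failure mode is faithful: any configuration nobody anticipated has no rule.

```python
# A minimal sketch of the rule-based era: every garment state the
# engineers anticipated gets a hand-written action sequence.
# States, actions, and distances here are invented for illustration.

RULES = {
    "inside_out": ["flip_garment"],
    "sleeve_twisted": ["straighten_sleeve"],
    "flat": [
        "grip_left_sleeve", "lift", "fold_inward_15cm",
        "grip_right_sleeve", "lift", "fold_inward_15cm",
        "fold_hem_to_collar",
    ],
}

def plan_fold(observed_state: str) -> list[str]:
    """Look up the pre-coded action sequence for a recognized state."""
    if observed_state not in RULES:
        # The classic failure mode: an unanticipated configuration
        # has no rule, so the robot simply cannot proceed.
        raise ValueError(f"no rule for state {observed_state!r}")
    return RULES[observed_state]

print(plan_fold("flat"))
try:
    plan_fold("crumpled_in_a_ball")  # nobody wrote this rule
except ValueError as err:
    print("stuck:", err)
```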

By around 2015, the forefront began to approach things differently: Create a digital simulation of the robotic arms and garments, rewarding the program each time it folds successfully and providing a penalty when it fails. This technique allows it to improve by experimenting with various methods through trial and error, with countless repetitions—just as AI became adept at playing games.
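
As a toy illustration of that trial-and-error loop, the bandit-style sketch below rewards actions that lead to a successful fold and penalizes ones that do not; real labs use far richer physics simulators and policy-learning algorithms, and the stand-in simulator and its success rates here are invented.

```python
import random

ACTIONS = ["grip_sleeve", "grip_hem", "fold_left", "fold_right", "lift"]
values = {a: 0.0 for a in ACTIONS}  # running estimate of each action's payoff
counts = {a: 0 for a in ACTIONS}

def simulate(action: str) -> float:
    """Stand-in simulator: +1 reward for a successful fold, -1 otherwise."""
    success_rate = {"fold_left": 0.7, "fold_right": 0.6}.get(action, 0.1)
    return 1.0 if random.random() < success_rate else -1.0

for _ in range(10_000):  # countless repetitions, as in the text
    # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    reward = simulate(action)
    counts[action] += 1
    values[action] += (reward - values[action]) / counts[action]  # update mean

print(max(values, key=values.get))  # settles on the most rewarded action
```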

The introduction of ChatGPT in 2022 sparked the ongoing surge. Trained on extensive textual data, large language models operate not through trial and error but by learning to anticipate the next word in a sentence. Adaptations of similar models for robotics soon began to process images, sensor data, and the robot’s joint positions, making predictions on the subsequent actions the machine should perform, executing dozens of motor commands each second.
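
A schematic version of that control loop, sketched in Python: a policy model consumes the latest camera frame, sensor readings, and joint positions, then emits the next motor command many times per second, much as a language model emits the next token. PolicyModel and FakeRobot are stand-ins, not any real robotics API.

```python
import time

class PolicyModel:
    """Stand-in for a trained model that predicts the next action."""
    def predict(self, image, sensors, joints):
        # A real model would output learned actions; this one just
        # nudges each joint slightly so the loop is visible.
        return [q + 0.01 for q in joints]

class FakeRobot:
    """Stand-in hardware interface so the loop runs end to end."""
    def __init__(self):
        self.q = [0.0] * 6
        self.steps = 0
    @property
    def running(self):
        return self.steps < 90  # stop after ~3 seconds at 30 Hz
    def camera_frame(self): return None
    def sensor_state(self): return []
    def joints(self): return self.q
    def apply(self, command):
        self.q = command
        self.steps += 1

def control_loop(model, robot, hz=30):
    period = 1.0 / hz
    while robot.running:
        obs = (robot.camera_frame(), robot.sensor_state(), robot.joints())
        command = model.predict(*obs)  # the "next token" is a motor command
        robot.apply(command)
        time.sleep(period)  # dozens of commands per second

control_loop(PolicyModel(), FakeRobot())
```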

This paradigm shift—favoring AI models that take in significant amounts of data—appears effective whether the helpful robot is designed for human interaction, environmental navigation, or even complex tasks. Coupled with innovative strategies for executing this new method of learning, like deploying robots before they achieve perfection to learn from their working environment, Silicon Valley roboticists are once again setting their sights high. Here’s how this came to pass. 


Jibo

Jibo

A moving social robot engaged in conversation well before the era of large language models.

A robotics researcher at MIT, Cynthia Breazeal, introduced the armless, legless, faceless robot Jibo to the public in 2014. It resembled, in many ways, a lamp. Breazeal’s goal was to develop a family-oriented social robot, and the project garnered $3.7 million through a crowdfunding effort. Early preorders were priced at $749.

The initial Jibo could introduce itself and perform a little dance for children, but its capabilities were limited. The vision was always for it to evolve into a sort of physical assistant capable of managing everything from schedules and emails to storytelling. It attracted a loyal user base, but eventually, the company ceased operations in 2019.

Image: A robot resembling a lowercase letter “i.” A crowdfunding initiative commenced in 2014, resulting in 4,800 Jibo preorders. (Courtesy of MIT Media Lab)

In hindsight, Jibo fundamentally lacked advanced language capabilities. It was competing against Apple’s Siri and Amazon’s Alexa, and all of those technologies relied on extensive scripting at the time. Generally, when spoken to, the software would convert speech into text, interpret the user’s intent, and assemble a response from preapproved snippets. Those snippets might be engaging, but they were also repetitive and ultimately dull—blatantly robotic. This posed a particular challenge for a robot intended to be social and family friendly.
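
For flavor, here is a minimal Python sketch of that scripted pipeline: keyword-matched intents and canned response snippets. The intents and lines are invented, but the repetitiveness, and the fallback reply for anything unscripted, are exactly the problem described.

```python
import random

INTENTS = {
    "greeting": {"hello", "hi", "hey"},
    "weather": {"weather", "rain", "sunny"},
}
SNIPPETS = {
    "greeting": ["Hi there!", "Hello! Nice to see you."],
    "weather": ["Let me check the forecast."],
    "fallback": ["Sorry, I didn't catch that."],  # the dreaded default
}

def respond(utterance: str) -> str:
    """Match keywords to an intent, then reply from preapproved snippets."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENTS.items():
        if words & keywords:
            return random.choice(SNIPPETS[intent])
    return random.choice(SNIPPETS["fallback"])

print(respond("hey Jibo"))          # scripted and fine
print(respond("tell me a story"))   # anything unscripted hits the fallback
```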

What has transpired since then is a transformation in how machines generate language. Voice modes from leading AI developers are now captivating and impressive, and numerous hardware startups are attempting (and stumbling) to create products that capitalize on this development. 

However, this introduces a new danger: While scripted dialogues tend not to deviate, those generated by AI can easily spiral out of control. Some well-known AI toys have, for example, conversed with children about finding matches and knives. 


OpenAI

Dactyl

A robotic hand trained via simulations aims to emulate the unpredictability and variation present in the real world.

By 2018, every prominent robotics laboratory was trying to abandon the old scripted rules and teach robots through trial and error. OpenAI set out to train its robotic hand, Dactyl, in a virtual environment—using digital representations of the hand and the palm-sized cubes Dactyl was tasked with manipulating. The cubes featured letters and numbers on their surfaces; the model might instruct the robot to “Rotate the cube so the red side with the letter O faces upwards.”

Here lies the challenge: A robotic hand may excel at achieving this within its simulated realm, but when that program is implemented on a physical version in the actual world, slight discrepancies can lead to issues. Colors may appear different, or the malleable rubber in the robot’s fingertips may prove stretchier than anticipated in simulation.

Image: A Dactyl robot hand holds a Rubik’s Cube. Dactyl, part of OpenAI’s initial robotics initiative, was trained in simulation to tackle Rubik’s Cubes. (Courtesy of OpenAI)

The answer lies in domain randomization. This involves creating millions of simulated environments that vary slightly and randomly from each other. In each instance, friction may be lower, lighting harsher, or colors darker. When exposed to enough variations, robots are better equipped to manipulate the cube in the real world. This method proved effective with Dactyl, and a year later OpenAI used the same fundamental strategies to accomplish a more complex task: solving Rubik’s Cubes (though it managed only 60% success overall, dropping to 20% with particularly challenging scrambles).
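
A minimal sketch of domain randomization: each training episode draws its own perturbed physics and rendering parameters, so a policy that succeeds across all of them transfers better to the one real world. The parameter names and ranges below are illustrative, not OpenAI’s actual values.

```python
import random

def sample_sim_params():
    """Draw one randomized variant of the simulated environment."""
    return {
        "friction": random.uniform(0.5, 1.5),
        "light_intensity": random.uniform(0.3, 2.0),
        "cube_hue_shift": random.uniform(-0.2, 0.2),
        "fingertip_stiffness": random.uniform(0.6, 1.4),
    }

for episode in range(3):
    params = sample_sim_params()
    # env = build_simulator(**params)      # hypothetical simulator hook
    # run_training_episode(policy, env)    # train across many variants
    print(f"episode {episode}: {params}")
```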

Nevertheless, the limitations of simulation indicate that this approach is less significant today than it was in 2018. OpenAI discontinued its robotics division in 2021 but has recently revived it—reportedly with a focus on humanoid robots. 


Google DeepMind

RT-2

Training on images sourced from the internet enables robots to convert language into action.

Circa 2022, Google’s robotics team engaged in some unconventional activities. They dedicated 17 months to giving individuals robotic controllers and videotaping them as they performed tasks ranging from picking up bags of chips to opening jars. Ultimately, the team documented 700 distinct tasks.

The objective was to construct and validate one of the first large-scale foundation models for robotics. As with large language models, the aim was to feed in vast amounts of data, tokenize it into a format an algorithm can work with, and then produce an output. Google’s RT-1 processed data about what the robot was observing and the positions of its various arm components; it then converted instructions into motor commands for robot movement. When given tasks it had encountered before, it executed 97% of them correctly; it succeeded at 76% of the instructions it had not seen before.
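
One concrete piece of that tokenization, sketched below: continuous motor values (a joint angle, a gripper width) are discretized into a fixed number of bins so the model can predict them like words in a vocabulary. The bin count and value ranges here are illustrative rather than Google’s published configuration.

```python
N_BINS = 256  # bin count chosen for illustration

def to_token(value: float, lo: float, hi: float) -> int:
    """Map a continuous motor value into one of N_BINS discrete tokens."""
    clamped = min(max(value, lo), hi)
    return round((clamped - lo) / (hi - lo) * (N_BINS - 1))

def from_token(token: int, lo: float, hi: float) -> float:
    """Decode a predicted token back into a motor command."""
    return lo + token / (N_BINS - 1) * (hi - lo)

angle_token = to_token(0.42, lo=-3.14, hi=3.14)  # joint angle in radians
print(angle_token, round(from_token(angle_token, -3.14, 3.14), 3))
```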

Image: A robot at a table of small toys. The RT-2 model, short for Robotic Transformer 2, utilized internet data to assist robots in interpreting their visual inputs. (Courtesy of Google DeepMind)

The subsequent model, RT-2, was released the following year and advanced even further. Instead of relying only on robotics-specific data, it expanded its training to a broader range of images from the internet, following the approach of the vision-language models many researchers were exploring at the time. This allowed the robot to better understand where various objects sat within its environment.

“All these other options became available,” states Kanishka Rao, a roboticist at Google DeepMind who oversaw both versions. “We gained capabilities such as ‘Place the Coke can near the picture of Taylor Swift.’” 

In 2025, Google DeepMind further integrated the realms of large language models and robotics, unveiling a Gemini Robotics model with enhanced proficiency in comprehending natural language commands. 


Covariant

RFM-1

An AI model enabling robotic arms to function like teammates.

In 2017, before OpenAI shut down its initial robotics team, a group of its engineers founded a company called Covariant. Rather than chasing sci-fi robots, it aimed at a practical one: a robotic arm capable of picking up and moving items within warehouses. After building a system based on foundation models analogous to Google’s, Covariant deployed the technology in warehouses, including Crate & Barrel’s, and used it as a data-gathering tool.

By 2024, Covariant introduced a robotics model, RFM-1, that could interact like a colleague. For instance, if you presented it with several sleeves of tennis balls, you could then instruct it to relocate each sleeve to a designated area. The robot could even respond—perhaps foreseeing challenges in gripping an item and asking for guidance on which suction cups to utilize. 

This kind of interaction had been explored in experiments, but Covariant was implementing it on a larger scale. The company now had cameras and data collection devices installed at every client location, continuously feeding additional data for the model’s training.

Image: A warehouse robot arm lifts an object with many suction cups to place it in a bin. A Covariant robot showcases “induction,” the typical warehouse task of placing items onto sorters or conveyors. (Courtesy of Covariant)

It wasn’t flawless. During a demonstration in March 2024 involving an array of kitchen items, the robot struggled when tasked with “returning the banana” to its original spot. It attempted to displace a sponge, then an apple, followed by several other items before finally succeeding in the assignment. 

It “doesn’t grasp the new concept” of retracing its actions, cofounder Peter Chen noted at the time. “However, it serves as a good example—it might not perform optimally in areas lacking robust training data.”

Chen and cofounder Pieter Abbeel were promptly hired by Amazon, which is currently licensing Covariant’s robotics model (Amazon did not respond to inquiries about its applications, though the company reportedly operates around 1,300 warehouses in the U.S. alone).


Agility Robotics

Digit

Businesses are assessing this humanoid in practical environments.

The new financial resources being channeled to robotics startups predominantly target robots designed not as lamps or arms, but resembling humans. Humanoid robots are intended to integrate smoothly into existing workspaces and jobs currently occupied by humans, negating the necessity to retrofit assembly lines for new configurations like large arms. 

However, this is easier said than done. In the few instances where humanoids are seen in actual warehouses, they often remain confined to testing areas and pilot projects. 

Image: The Digit humanoid robot places a plastic bin on a conveyor belt. Amazon and other companies are utilizing Digit to assist in moving shipping containers. (Courtesy of Agility Robotics)

That said, Agility’s humanoid, Digit, seems to be performing actual tasks. Its design—featuring exposed joints and a notably non-human head—leans more towards functionality than science-fiction aesthetics. Amazon, Toyota, and GXO (a logistics behemoth serving clients like Apple and Nike) have all deployed it—marking it as one of the initial instances of a humanoid robot perceived by companies as providing real cost advantages rather than mere novelty. The Digits are mainly occupied with lifting, transporting, and arranging shipping totes.

Currently, Digit remains far from the human-like assistant that Silicon Valley anticipates; for instance, it can lift only 35 pounds, and every enhancement that makes it stronger also means a heavier battery requiring more frequent recharging. Safety organizations indicate that humanoids must adhere to stricter safety standards than most industrial robots because of their mobility and potential proximity to humans.

Nonetheless, Digit illustrates that the transformation in robot training is not converging on a single methodology. Agility employs simulation strategies akin to those OpenAI used for its hand, and the company has collaborated with Google’s Gemini models to help its robots adapt to new settings. This is the outcome of more than a decade of industry experimentation: the field is now scaling up.

Global

Finance ministers and leading bankers express significant worries regarding the Mythos AI model.

by admin April 17, 2026

He remarked: “The outcome might be the advancement of AI and modeling, which facilitates the identification of current weaknesses in core IT systems, and consequently, cybercriminals – the malicious entities – could attempt to take advantage of them.”

Global

Singer D4vd taken into custody on suspicion of killing a teenage girl

by admin April 17, 2026

Rivas Hernandez – who resided approximately 75 miles (120km) from the location where her remains were found – had been last reported missing by her relatives in April 2024, though it was not the first instance she had fled from their Lake Elsinore residence. As a first-generation daughter of parents who immigrated from El Salvador, neighbors recognized her as a girl who would frequently visit the corner store to purchase candy and soda, according to the Los Angeles Times.

Economy

Nvidia competitor informs CNBC that it is looking for a minimum of $100 million in investment as the European AI chip sector surges.

by admin April 17, 2026


European chip startups innovating alternative technology to Nvidia’s graphics processing units (GPUs) are attracting significant funding rounds as they aim to expand during the AI surge.

The Dutch firm Euclyd, supported by the former CEO of leading chipmaking equipment company ASML, is currently negotiating with investors for a round of at least 100 million euros ($118 million), as stated by its founder Bernardo Kastrup in an exclusive interview with CNBC.

Additionally, the U.K. startup Optalysys plans to raise over $100 million later this year, while British firm Fractile and France’s Arago are reportedly seeking nine-figure funding. Fractile declined to comment, and Arago did not reply to a request for comment. So far in 2026, backers have already invested more than $200 million into the Netherlands’ Axelera and the U.K.’s Olix.

Nvidia quickly established itself as the most valuable company globally as its GPUs, initially designed for gaming, were adapted for training AI models. Attention is now shifting to the most effective ways to run those models, a stage referred to as AI inference.

While the U.S. chip leader is also working on semiconductor systems for this function, a new wave of European startups has emerged, claiming their technology can achieve this more efficiently.

“Inference is the key focus now, and the current GPU architecture was not designed for it in the most significant ways at scale,” remarked Patrick Schneider-Sikorsky, director at the Nato Innovation Fund (NIF), which has invested in Fractile, during his interview with CNBC.

“The geopolitical factors are clear with U.S. export controls, concentration risks surrounding [chipmaker] TSMC and an urgent demand for European sovereign computing, all steering investment toward local silicon.”

Alumni from ASML

Euclyd is creating AI chips that function within a system claiming to achieve 100x greater power efficiency for inference than Nvidia’s recent Vera Rubin chips. Nvidia did not respond to a request for comment from CNBC.

The Dutch startup, established in 2024 by prior ASML director Kastrup, with former ASML CEO Peter Wennink serving as an advisor and investor, has already secured a seed round under 10 million euros and is now seeking additional funds to scale its technology and start supplying its initial clients.

Euclyd is constructing chip systems to supplant GPUs, but with an alternate architecture, according to Kastrup. While GPUs require time and energy to transfer data through memory, Euclyd’s chips will process data in multiple locations, which Kastrup asserts will boost efficiency for AI inference.

The company’s silicon systems for foundational models will diminish the energy, cost, and footprint of AI data center infrastructure, he explained. However, unlike Nvidia’s chips, Euclyd’s systems have yet to be validated through large-scale commercial deployments.

Image: Euclyd’s prototype system. (Credit: Euclyd)

Euclyd is actively pursuing this goal. It has developed a chip for AI inference and is currently working on a multi-chiplet system — expected to operate faster than the current version of its product — which it aims to have ready by 2028. Kastrup indicated it is in talks with four prospective clients, two of which the company aims to start supplying next year and the other two the following year.

Olix, which is creating photonics-based processors for AI, also plans to target initial customers next year, although it is still in a research and development stage, Taavet Hinrikus, a partner at Plural, an investor in the firm, told CNBC.

Photonic processors are chip systems that utilize light to transfer data and, in some instances, to execute computations.

The startup aims to cater to any clients requiring inference services, Hinrikus remarked, including hyperscalers and government bodies. Olix did not respond to a request for comment.

The electronic architecture of chips, GPUs included, is truly “reaching the limits” in terms of size, Hinrikus said. Chip manufacturers are striving to miniaturize processors to fit more components on wafers and improve the economics of running systems on them.

“The heat generated by [current chips] is becoming a significant issue. We firmly believe that photonics platforms will represent the next paradigm,” he added.

Nvidia is also exerting considerable effort to maintain its competitive edge. The chip titan invested over $18 billion in research and development in its last complete financial year, concluding in January 2026. In December, it acquired assets from AI inference startup Groq for $20 billion and announced in March it had invested $4 billion in two companies advancing photonics technology.

Obstacles for European startups persist


“Chip development timelines are extended, the journey from tape-out to mass production is challenging, and Europe’s foundry ecosystem still requires maturation,” remarked NIF’s Schneider-Sikorsky.

Axelera CEO Fabrizio Del Maffeo communicated to CNBC that European governments are still “cautious” when it comes to investing in products from emerging companies, lacking an equivalent to DARPA, the U.S. Department of Defense agency that funds startups and other technological initiatives.

Europe also lacks frameworks to encourage the consumption of locally produced goods and fragmented labor regulations across borders complicate the recruitment of European talent, he noted.

European AI chip startups are trailing in funding, having raised $800 million so far in 2026, contrasted with $4.7 billion for their U.S. counterparts, as reported by Dealroom.

In the U.S., Cerebras Systems secured $1 billion in February, with $500 million funding rounds for MatX, Ayar Labs, and Etched occurring this year.

Nonetheless, European startups developing chips for AI inference to compete with Nvidia are increasingly attracting investor interest.

“We’re observing this in deal flow and in the dialogues we’re having with founders in the domain,” Carlos Espinal, managing partner at Seedcamp, which backed chip startup Vaire Computing, shared with CNBC. “It’s no longer a fringe investment. It’s evolving into a central aspect of how people conceptualize AI infrastructure.”


Economy

Netflix shares drop as the streaming service confirms its outlook, with Reed Hastings announcing his departure from the board.

by admin April 16, 2026

Image: Reed Hastings, the co-founder of Netflix and former CEO, in Sydney to confer with executives from other subscription streaming platforms, February 25, 2022. (Wolter Peeters | Fairfax Media | Getty Images)

Netflix shares dropped 9% in after-hours trading on Thursday following the streaming leader’s first-quarter earnings announcement and a significant governance update.

The company exceeded Wall Street forecasts for revenue, posting $12.25 billion for the first quarter, surpassing the $12.18 billion projected by analysts from LSEG and showing a 16% increase from the $10.54 billion reported for the same quarter last year.

Thursday was the company’s first earnings report since it retracted its proposed acquisition of Warner Bros. Discovery’s streaming and film assets in February.

Netflix noted a net income of $5.28 billion, which translates to $1.23 per share, nearly double the $2.89 billion or 66 cents per share reported in the same quarter last year. The company attributed this to a higher-than-expected operating income and the $2.8 billion termination fee received after the WBD deal collapsed.

Reported earnings per share were not directly comparable to the analyst prediction of 76 cents due to the effects of the termination fee.

Nonetheless, Netflix retained its prior full-year revenue forecast of between $50.7 billion and $51.7 billion.

The company anticipates a 13% increase in revenue for the second quarter and reiterated its earlier caution that content expenditure will be predominantly in the first half of the year due to timing of title launches. Netflix also indicated that it expects the second quarter to present the highest year-over-year growth rate in content amortization for 2026 before declining in the latter half of the year.

Despite relinquishing its planned acquisition for WBD’s assets, that potential deal will still impact Netflix’s financials this year. Chief Financial Officer Spencer Neumann remarked on Thursday that although some planned expenses related to the deal will not “fully materialize,” others initially set for 2027 will be adjusted to occur in 2026. He noted that the company remains “in the ballpark … of the total that we were projecting for total M&A-related costs in the year.”

On Thursday, Netflix also disclosed that Reed Hastings, the co-founder of Netflix and current chairman, will step down from the board in June when his term ends.

Hastings resigned from his role as CEO in 2023. Greg Peters, who held the position of chief operating officer, assumed the co-CEO role alongside Ted Sarandos.

“Netflix has transformed my life in numerous ways, and my most cherished memory was January 2016, when we allowed virtually everyone on the planet to access our service,” Hastings stated in the company’s shareholder letter on Thursday. Hastings will now dedicate his efforts to philanthropy and other interests, as reflected in the letter.

On Thursday, an analyst posed a question regarding whether Hastings’ departure was connected to the proposed WBD deal.

Sarandos dismissed that notion, asserting that Hastings was “a strong supporter of that deal. He advocated for it with the board. The board was unanimous.”

Examining internal strategies

Netflix on Thursday reaffirmed that it remains on course to achieve $3 billion in advertising revenue by 2026, signifying a doubling year-over-year as this new revenue stream demonstrates growth.

The company initially launched its lower-priced, ad-supported tier in 2022 and has since emphasized this path for revenue enhancement — even as it raises subscription costs and tightens regulations on password sharing to boost subscriber numbers.

In January, Netflix announced it had achieved 325 million paid subscribers worldwide. Netflix no longer discloses quarterly updates on its subscriber count.

It stated on Thursday that “slightly higher-than-anticipated subscription revenue” fueled an 18% increase in operating income in the first quarter.

Additionally, last month Netflix announced an increase in prices across all its streaming plans once more.

“Our recent price adjustments have been well received, reflecting the strong value we offer our members,” the company shared in the shareholder letter on Thursday.

Co-CEO Peters mentioned during Thursday’s call that the price hikes were always part of the company’s annual strategy. While Peters noted that the implementation of these price changes is ongoing, initial results are consistent with prior observations following price changes — such as members canceling subscriptions or opting for cheaper plans.

“We aim to provide increased value to our members … wisely invest the revenue we generate, and, at times, when we’ve enhanced value, we request our members to contribute more so we can further invest in delivering them even greater entertainment value,” Peters expressed.

The company noted on Thursday that its foray into video podcasts, along with the airing of the World Baseball Classic, significantly contributed to its “primary internal quality engagement metric” achieving an all-time high in the first quarter.

Live sports have become integral to Netflix’s platform, and on Thursday co-CEO Sarandos mentioned that the company is currently negotiating with the NFL to “broaden the partnership.” Although Netflix does not have a standard NFL package, it has streamed NFL games on Christmas Day for several years.

Correction: This story has been updated after LSEG amended its assessment of Netflix’s earnings per share. Reported EPS is not comparable to analyst estimates due to the influence of the WBD termination fee.


Tech/AI

YouTube’s mobile application now allows you to share videos with timestamps.

by admin April 16, 2026

However, the Clips feature is being discontinued.

YouTube is making some changes that may affect how you share videos from its mobile app. From the app, you will now be able to share videos starting at a designated timestamp, making it easier to point people to a specific portion of a video while on your phone. However, this change replaces the Clips feature, which let you create a shareable clip from a video.

You will still have access to any Clips you have previously created. However, moving forward, “the option to define an end time or add a custom description when sharing will no longer exist,” YouTube states. The company mentions that while clipping is “a significant method for creators to connect with new audiences,” it acknowledges that “various third-party tools with enhanced clipping capabilities and authorized creator programs are currently available to utilize this function across different video platforms.”

The company first launched the Clips feature in 2021.


Tech/AI

Lucasfilm releases final trailer for The Mandalorian and Grogu at CinemaCon

by admin April 16, 2026

Lucasfilm unveiled the final trailer for The Mandalorian and Grogu at CinemaCon last night to enthusiastic applause. And it’s easy to see why — the preview contains the hallmarks of the best of the Star Wars franchise.

As previously reported, Grogu (formerly “Baby Yoda”) captured viewers’ affections from his first appearance in season one of The Mandalorian, and the bond between the small green creature and his father-figure bounty hunter, the titular Mandalorian Din Djarin (Pedro Pascal), has only grown. After the 2023 Hollywood strikes delayed production on season 4, director Jon Favreau received the go-ahead to expand the story into this spinoff film.

Per the official logline:

The tyrannical Empire has fallen, yet Imperial warlords remain scattered across the galaxy. As the fledgling New Republic works to defend everything the Rebellion fought for, it has enlisted the aid of legendary Mandalorian bounty hunter Din Djarin (Pedro Pascal) and his young apprentice Grogu.

Besides Pascal, the cast includes Sigourney Weaver as Ward, a seasoned pilot, colonel, and leader of the New Republic’s Adelphi Rangers. Jeremy Allen White portrays Rotta the Hutt (Jabba’s son, first seen in 2008’s The Clone Wars), Jonny Coyne returns from The Mandalorian season 3 as an Imperial warlord heading a surviving faction of the Galactic Empire, and Dave Filoni is back as New Republic X-wing pilot Trapper Wolf. Expect appearances from Garazeb (“Zeb”) Orrelios (Steve Blum) from the Star Wars Rebels animated series, Embo from The Clone Wars, and Anzellans from The Rise of Skywalker. There’s also a shiny new version of Mando’s ship (destroyed in S2).

Tech/AI

Intel updates non-Ultra Core CPUs with new silicon for the first time

by admin April 16, 2026

Intel’s Core Ultra laptop CPUs have led its lineup since it abandoned the older generational naming and the i3/i5/i7/i9 scheme a few years ago. The Core Ultra Series 1, Series 2, and Series 3 models have showcased the newer CPU and GPU architectures and more advanced fabrication technology.

Intel has also sold non‑Ultra Core processors, but they’ve seldom been compelling—largely because both the Series 1 and Series 2 parts were built on Intel’s older Raptor Lake architecture. Raptor Lake was the codename for the 13th‑generation Core family in 2023, and many Raptor Lake chips were essentially the same silicon used in 2022’s 12th‑generation Core lineup.

The Raptor Lake rebranding couldn’t continue indefinitely. Intel’s new, non-Ultra Core Series 3 processors are fabricated on fresh silicon, marking a return to an era when both high-end and midrange Intel parts shared many of the same improvements despite differing performance levels.



Image: “Wildcat Lake” has some similarities to Panther Lake, but it is a more modest and simplified design. (Credit: Intel)

These chips carry the codename “Wildcat Lake,” and although they share some traits with the Core Ultra Series 3 CPUs (aka Panther Lake), the non‑Ultra parts rely on a simpler architecture with noticeably less compute capability.

Each part is built from two silicon tiles: a compute tile that contains a CPU with up to two Cougar Cove P‑cores and four Darkmont E‑cores; an integrated GPU with one or two of Intel’s latest Xe3 GPU cores; and typically an NPU rated as high as 17 trillion operations per second (TOPS). A separate platform controller tile, manufactured on an unspecified non‑Intel process, provides up to two Thunderbolt 4 ports, Wi‑Fi 7 and Bluetooth 6.0 support, and six PCIe 4.0 lanes for external connectivity. All chips support up to 48GB of LPDDR5X‑7467 or up to 64GB of DDR5‑6400, and are specified with a 15 W base power level and a 35 W maximum boost power level.

Tech/AI

Next year, Google smart glasses featuring the Gucci brand are set to be released.

by admin April 16, 2026

Google appears to be counting on luxury fashion labels to make AI smart glasses popular.


Google is reportedly collaborating with Gucci to create a pair of AI smart glasses fashionable enough for people to genuinely desire to wear them. According to Reuters, the parent company of Gucci, Kering, aims to launch the glasses in 2027.

Google’s initial Android XR glasses, “Project Aura,” are anticipated to debut this year. They showcase a similar design to Meta’s Ray-Ban glasses, featuring thick, black plastic frames. This will mark Google’s second attempt at smart glasses, following the notorious failure of Google Glass more than a decade ago.

Last year, Google also revealed glasses collaborations with Warby Parker and Gentle Monster, but those brands lack the high-profile status of Gucci. In contrast to numerous other tech items, smart glasses must possess style to succeed, and tying up with a luxury brand like Gucci may assist Google in rivaling the Meta Ray-Ban series. Collaborations with fashion houses also enable technology firms to label their glasses differently — as Snap CEO Evan Spiegel recently mentioned, “the Meta brand, I believe, isn’t something individuals wish to have close to their face.”


Tech/AI

Handling enterprise AI as a functional layer

by admin April 16, 2026

Supplied by Ensemble

A division exists in enterprise AI that is less discussed than others. The public discourse often revolves around foundational models and metrics—GPT compared to Gemini, reasoning assessments, and minor improvements. However, the more significant and lasting advantage is structural: who controls the operating layer where intelligence is utilized, regulated, and enhanced. One approach regards AI as an on-demand service; the other integrates it as an operating layer—the amalgamation of operational software, data collection, feedback mechanisms, and governance that interfaces between models and actual work—that accumulates with usage.

Model vendors such as OpenAI and Anthropic offer intelligence as a service: encounter a problem, access an API, receive a solution. This intelligence is general-purpose, predominantly stateless, and only loosely linked to the daily operations where decisions are enacted. It is highly effective and increasingly interchangeable. The key difference is whether intelligence resets with each prompt or builds over time.

Established entities, on the other hand, can apply AI as an operating layer: tools across operations, feedback loops from human decisions, and governance that transforms individual tasks into reusable policies. In this framework, every exception, correction, and authorization serves as an opportunity to learn—and intelligence can advance as the platform absorbs a greater volume of the organization’s tasks. The organizations poised to define the enterprise AI landscape are those able to integrate intelligence directly into operational platforms and equip those platforms so that work produces actionable signals.
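
One way to see the distinction, as a hedged sketch: a stateless call returns an answer and forgets, while an operating layer wraps the same model with context retrieval and feedback capture so each decision enriches the next. The function names, store, and stubs below are hypothetical, not any specific vendor’s API.

```python
knowledge_store = []  # accumulated exceptions, corrections, approvals

class StubModel:
    """Stand-in for any general-purpose model behind an API."""
    def complete(self, payload):
        return f"answer for {payload!r}"

def review(task, answer):
    """Stand-in for human governance: approve or correct the output."""
    return answer

def stateless_call(model, prompt):
    """Intelligence as a service: ask, answer, forget."""
    return model.complete(prompt)

def operating_layer_call(model, task):
    """Same model, but each decision enriches the next one."""
    precedents = [r for r in knowledge_store if r["type"] == task["type"]]
    answer = model.complete({"task": task, "precedents": precedents})
    decision = review(task, answer)  # governance step
    knowledge_store.append({"type": task["type"], "decision": decision})
    return decision

model = StubModel()
print(stateless_call(model, "route this claim"))
print(operating_layer_call(model, {"type": "claim", "id": 1}))
print(operating_layer_call(model, {"type": "claim", "id": 2}))  # sees precedent
```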

The dominant narrative suggests agile startups will surpass established players by creating AI-native solutions from the ground up. If AI is primarily seen as a modeling issue, this narrative is valid. However, in numerous enterprise contexts, AI presents itself as a systems challenge—encompassing integrations, permissions, assessment, and change management—where the advantage lies with those already embedded in high-volume, critical operations and who can turn that position into learning and automation.

The reversal: AI performs, humans judge

An AI-driven platform reverses the familiar model in which humans perform the work and software merely assists. It takes in a problem, deploys accrued domain expertise, independently executes what it can with high certainty, and assigns specific sub-tasks to human specialists when the situation requires judgment the system cannot yet consistently provide.

However, reversing human-AI interaction goes beyond a simple UI redesign—it necessitates foundational material. It is feasible only when the platform is anchored by a bedrock of domain expertise, behavioral insights, and operational know-how amassed over time.

The three cumulative assets incumbents possess

AI-native startups commence with an unblemished architectural foundation and can act swiftly. What they struggle to replicate is the foundational material that secures domain AI at scale:

  • Exclusive operational data
  • A sizable workforce of domain specialists whose routine decisions yield training responses
  • Accumulated implicit knowledge regarding how complex tasks are actually executed

Service companies already possess all three assets. However, these components do not serve as barriers independently. They provide an edge only when a company can effectively transform chaotic operations into AI-ready insights and organizational knowledge—then reintegrate the outcomes into operations ensuring continuous improvement.

Systematizing expertise into reusable insights

In the majority of service organizations, expertise is implicit and fleeting. The top operators know things they cannot readily express: heuristics evolved over time, intuitions in edge cases, and pattern recognition that functions beneath conscious cognition.

At Ensemble, the approach for tackling this issue is knowledge distillation: the organized transformation of expert judgment and operational choices into machine-readable training signals.

In the realm of healthcare revenue cycle management, for instance, systems can be initiated with explicit domain knowledge and then broaden their scope through structured daily engagement with operators. In Ensemble’s methodology, the system detects gaps, crafts targeted inquiries, and verifies answers across multiple experts to capture both consensus and nuances of edge cases. It subsequently combines these inputs into a dynamic knowledge repository that embodies the situational reasoning underpinning expert-level performance.

Transforming choices into a learning loop

Once a system is sufficiently refined to earn trust, the next question is how it can enhance itself without relying on periodic model updates. Each time a proficient operator makes a decision, they produce more than a finished task. They generate a potential labeled example—context coupled with an expert response (and occasionally an outcome). At scale, across thousands of operators and millions of decisions, that flow can fuel supervised learning, evaluation, and targeted reinforcement—educating systems to function more like experts under real conditions.

For instance, if an organization handles 50,000 cases per week and captures merely three high-quality decision points for every case, that results in 150,000 labeled examples weekly without establishing a separate data-collection initiative.
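
The arithmetic, plus the kind of record that makes it work, in a short sketch; the field names are hypothetical:

```python
cases_per_week = 50_000
decisions_per_case = 3  # high-quality decision points captured per case
print(cases_per_week * decisions_per_case)  # 150000 labeled examples weekly

# Each captured decision is simply context paired with an expert response,
# and optionally an outcome; no separate labeling initiative is required.
example = {
    "context": {"case_type": "claim_denial", "payer": "X", "amount": 1250},
    "decision": "resubmit_with_modifier",  # what the operator did
    "outcome": "paid",                     # what happened, when known
}
```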

A more sophisticated human-in-the-loop design incorporates experts within the decision-making process, allowing systems to understand not just what the correct answer was, but also how to navigate ambiguity. Practically, humans step in at decision branches—selecting from AI-suggested options, rectifying assumptions, and redirecting processes. Each involvement becomes a high-value training insight. When the platform identifies an edge case or divergence from the anticipated process, it can request a brief, structured explanation, capturing decision-making factors without the need for extensive free-form reasoning records.
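
A sketch of that decision-branch pattern in Python: the system proposes ranked options, acts on its own above a confidence threshold, and otherwise escalates to an expert and logs the intervention as a structured training signal. The model, threshold, and schema are all hypothetical.

```python
training_queue = []

class StubModel:
    """Stand-in for a system that proposes ranked options with scores."""
    def propose(self, case):
        return [{"action": "resubmit", "score": 0.62},
                {"action": "appeal", "score": 0.38}]

def ask_expert(case, options):
    """Stand-in for a review UI; a real one would show options to a human."""
    return {"action": options[0]["action"], "why": "matches payer rule 14b"}

def decide(case, model, threshold=0.90):
    options = model.propose(case)
    best = options[0]
    if best["score"] >= threshold:
        return best["action"]  # confident enough to act autonomously
    choice = ask_expert(case, options)  # the human steps in at the branch
    training_queue.append({
        "context": case,
        "options": options,
        "expert_choice": choice["action"],
        "rationale": choice["why"],  # brief structured explanation
    })
    return choice["action"]

print(decide({"case_type": "claim_denial"}, StubModel()))
print(len(training_queue))  # the intervention became a training example
```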

Striving for expertise amplification

The aim is to integrate the extensive expertise of thousands of domain specialists—their knowledge, decisions, and reasoning—into an AI platform that enhances the capabilities of every operator. When executed effectively, this yields a level of performance that neither humans nor AI can achieve alone: increased consistency, enhanced throughput, and measurable operational improvements. Operators can concentrate on more impactful tasks, supported by an AI that has already performed the analytical groundwork across thousands of similar prior cases.

The broader implication for enterprise leaders is clear. Advantages in AI won’t solely hinge on access to general-purpose models. It will arise from an organization’s capacity to capture, refine, and build upon its knowledge, data, decisions, and operational judgment, while establishing the necessary controls for environments that carry substantial stakes. As AI transitions from experimentation to foundational infrastructure, the most enduring advantage may belong to those companies that understand their operations well enough to instrument them and can translate that comprehension into systems that evolve with usage.

This content was produced by Ensemble. It was not authored by the editorial staff of MIT Technology Review.
