AI this week: self-training models, new healthcare payments, and the next compute land grab

AI Daily Desk

A quick look at where AI is moving next: automated fine-tuning, Medicare payment pathways for AI agents, and increasingly ambitious bets on where future compute should live.

Several recent stories point to the same broad trend in AI: the industry is pushing simultaneously on model capability, real-world deployment, and the infrastructure needed to support both.

From tools that automate fine-tuning, to new payment mechanisms for AI in healthcare, to increasingly bold ideas about where to place compute capacity, the latest reporting shows how quickly the AI stack is expanding.


Automating model improvement

According to TechCrunch, Adaption has introduced AutoScientist, a tool designed to help models adapt to specific capabilities quickly through an automated approach to conventional fine-tuning.

Even from that brief description, the pitch is clear: reduce the manual effort involved in specializing models, and make capability tuning faster and more systematic.


Healthcare may be opening a payment path for AI agents


One of the biggest practical barriers to AI in healthcare has not just been technical feasibility, but reimbursement. TechCrunch reports that Medicare's ACCESS model creates a mechanism to pay for AI-supported work that previously had no clear governmental payment path.

The examples cited include an AI agent that monitors a patient between visits, checks in by phone, coordinates a housing referral, or helps ensure medication pickup. ACCESS creates a payment mechanism for that kind of work for the first time.


If that mechanism scales, it could matter as much as any underlying model advance, because it ties AI functionality to an actual budget line.

The race for AI compute is getting more experimental

Mini data centers at home


Ars Technica reports on a new pitch to place small-scale data center capacity at people's homes. The stated goal is to speed up AI compute deployment while compensating residents.


The idea reflects the intensity of current demand for compute: instead of waiting for conventional buildouts, companies are exploring distributed and unconventional hosting models.

Data centers in orbit


At the far edge of that experimentation, TechCrunch reports that Google and SpaceX are in talks about putting data centers into orbit. The framing is explicit: space is being pitched as a future home for AI compute, even though current costs are still much higher than terrestrial alternatives.


Taken together, the home-hosted and orbital concepts show how central compute availability has become to the AI story.

AI keeps spreading into operational systems


Not all AI expansion is about frontier labs or infrastructure moonshots. TechCrunch also reports that fleet management company Samsara has developed an AI model for municipal road maintenance, using trucks and computer vision to detect different kinds of potholes and estimate how quickly they are deteriorating.


This is a useful reminder that AI adoption is also being driven by operational use cases with measurable cost implications.

Capital continues to flow across the AI ecosystem


On the funding side, TechCrunch reports that Kevin Hartz's A* has closed a third fund with $450 million. The firm takes a generalist approach, backing companies across categories such as AI applications, fintech, healthcare, and security.

The article says the average check size will be between $3 million and $5 million, with the aim of backing at least 30 startups.

That kind of capital formation suggests investors still see room for new entrants across multiple layers of the market.

Governance and market access remain part of the AI story

OpenAI control questions resurface


TechCrunch also covered testimony from Sam Altman about Elon Musk's past interest in controlling OpenAI's initial for-profit structure. Altman said that focus gave him pause, arguing that OpenAI was dedicated to keeping advanced AI out of the hands of a single person.


The broader takeaway is that ownership and control still sit near the center of AI's governance debates.

Anthropic warns on secondary share access


Separately, TechCrunch reports that Anthropic is warning investors about secondary platforms offering access to its shares, stating that such sales or transfers are void and will not be recognized on the company's books and records.

"Any sale or transfer of Anthropic stock, or any interest in Anthropic stock, offered by these firms is void and will not be recognized on our books and records," the company's support page reads.

That underscores how demand for exposure to leading AI companies is colliding with private-market restrictions.

What these stories suggest

  • Model development is becoming more automated.
  • Deployment is increasingly tied to real payment and operations models, especially in healthcare and public infrastructure.
  • Compute scarcity is driving unconventional infrastructure ideas, from neighborhoods to orbit.
  • Capital and governance questions remain tightly linked to the future of major AI companies.

In short, AI's next phase is not just about better models. It is also about who pays, where systems run, who owns access, and how quickly capacity can be built.
