Friday, April 17, 2026

Music -- Hillbilly -- The Wallflowers "One Headlight" -- Hillbilly Moon Explosion's "My Love For Ever More" -- April 17, 2026

AI prompt: Wallflowers' "One Headlight" is similar to Hillbilly Moon Explosion "My Love For Evermore." Thoughts?


Hillbilly Moon Explosion's "My Love For Ever More."

The Wallflowers' "One Headlight."

Phoebe Bridgers' "Motion Sickness."

Breaking Open Anthropic's Claude -- Friday, April 17, 2026

Link here.

This is directly from The WSJ linked above.

These are just various paragraphs taken from it. They will only make sense if you read the article in its entirety.

The inner workings of frontier artificial-intelligence models from Google, OpenAI, Anthropic and their competitors are trade secrets worth billions of dollars. So it was big news when the source code for Anthropic’s crown jewel Claude Code unintentionally surfaced. It cracked open a system that made Anthropic’s offerings a Silicon Valley favorite and put the company on a path to beat ChatGPT-parent OpenAI to profitability.

The leaked code doesn’t let people make a copy of Claude. But it revealed enough to be a useful illustration of what’s changed since the debut of ChatGPT. Here are three key elements that make today’s AIs far more useful and reliable than they were even a year ago:

AIs now know more—and can look up what they don’t.

When the LLMs, or large language models, that power generative AI were in their early days, developers trained them on available digital media—books, websites, social posts, videos. Now real people are generating data just for them. In what was once a cottage industry, startups worth billions now pay humans to distill their hard-won subject-area expertise into lessons for the bots.

And the bots we interact with have a separate trick borrowed from humans: They look things up on Google or some other search engine. This has led to another burgeoning industry in scraping Google’s results and delivering them to other AI companies, so they can rapidly deliver the internet’s most current knowledge.

OpenAI is public about the work it has done in both enhancing models’ knowledge and making them better at looking stuff up. In the two years since the company released its GPT-4o model, internal tests show that its latest main model issues 26% fewer factual errors.

Anthropic’s co-founder has said the number one thing customers ask for is that chatbots be more honest and free of hallucinations. To that end, the company is researching why models confidently free-associate, and uses techniques during training to try to get them to more often admit the gaps in their knowledge.

“Where Claude consistently stands out in independent evaluations is what researchers call ‘calibration’: knowing what it doesn’t know, and saying so,” says an Anthropic spokesman.

AIs are growing adept at using tools
[This is huge -- note how the writer framed this. Anthropologists ask when a primate became human: when it began to use tools.]

A gnarly debate at the center of modern AI research boils down to this: Can a system modeled on the human nervous system ever actually match a human’s abilities? So far, the answer has been “no.” That’s because people don’t just guess at answers. Since the dawn of writing, we’ve used a little trick called symbolic reasoning, aka math. Fun fact: Humanity’s first known scrawls were made by accountants.

Generative-AI models early on would suggest likely answers to math questions. Now, they can use real math. They recognize requests for calculation and either access an available software tool, or write their own code to solve the problem. They’re falling back on traditional, run-of-the-mill calculators.
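A minimal sketch of the fallback the article describes. Everything here is illustrative: real systems let the model itself decide when to call a tool, while this sketch uses a crude regex router; the function names are invented for the example.

```python
import re

def calculator_tool(expression: str):
    """A deterministic 'tool': evaluate a plain arithmetic expression exactly."""
    # Restrict input to digits and basic operators before evaluating.
    if not re.fullmatch(r"[0-9+\-*/(). ]+", expression):
        raise ValueError("not a plain arithmetic expression")
    return eval(expression)  # tolerable here only because of the strict filter above

def answer(question: str) -> str:
    """Route a request: calculations go to the exact tool, everything else to the model."""
    match = re.search(r"[0-9][0-9+\-*/(). ]*[0-9)]", question)
    if match:
        # Recognized a request for calculation: use real math instead of guessing.
        return str(calculator_tool(match.group()))
    # Otherwise fall through to ordinary text generation (stubbed out here).
    return "(free-text answer from the language model)"

print(answer("What is 127 * 319?"))  # → 40513
```

The point of the pattern is the split itself: a probabilistic text generator for language, and a traditional, run-of-the-mill calculator for anything where "likely" isn't good enough.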

AIs now check their own work—and each other’s
This really caught my attention! 
In the beginning, chatbots spewed answers in a stream of not-quite-consciousness. Now, unless we ask a very simple question, the AI chatbot performs a “chain of thought”: The AI has a conversation with itself to arrive at a suitable answer. Some bots go further, by asking a different AI model—usually, variants of themselves—to gut-check an answer.
This is simply amazing. 
Now, when companies develop AI systems for their own specialized needs, they can opt to have the results generated by one AI checked by a model from a different provider altogether—say, Claude checking ChatGPT. The answer is only deemed acceptable if both AIs agree on it, says Pavel Kirillov, chief technology officer of NineTwoThree, a consulting firm that builds AI-based systems for clients ranging from FanDuel to Consumer Reports. Kirillov calls this approach a "council of models," and he says the results are better quality, with lower error rates.

Today’s underlying AI models are smarter than they were a few years back, but the AI services they power are more effective because they use fresher information, traditional software—and each other.

Transcribing the above and listening to YouTube music suggests that music is going to be just one more huge universe for AI.

Google owns YouTube. If one pays attention, one can see that Google is using AI to curate a song list. I don't have the time to explain, but all you have to do is ask AI the "right musical prompt," read the chatbot's reply, and then scientifically observe what YouTube hands off to you. This is not random. YouTube is incredibly sophisticated, and Steve Jobs anticipated it as far back as the iPod and then the first iPhone. If you need proof, listen to his keynote speech when he introduced the world to the iPhone.

Thursday, April 16, 2026

Why Machines Learn: The Elegant Math Behind Modern AI, Anil Ananthaswamy -- March 31, 2026

If you can read at college level, which generally means reading at about the level of the average high school junior, you should be able to slog your way through Anil Ananthaswamy's book even if you know no mathematics beyond your middle school years.

The narrative was excellent. Very, very easy to read, though as one gets deeper and deeper into the book, the jargon becomes as difficult as the math.

Even so, one can learn much about AI, certainly more than where you started. It's very similar to putting up a Christmas tree, and gradually adding ornaments. Or, similarly, putting up scaffolding to build a complex structure, like, say, the Egyptian pyramids.

Keeping with the Christmas tree ornaments, which is a much better analogy than the pyramid scaffolding, you can keep adding ornaments as you read additional newspaper articles, magazine essays, and books on the subject. Without question, the best ornaments will be added after you spend evening dinners and/or cocktail hours with AI engineers at any level. The jargon alone is worth the price of admission.

And Ananthaswamy's book is a great introduction to AI jargon.

The math was way beyond anything I could follow. But one can scan through those pages. I don't think you want to literally skip any page with math on it because in between the formulas there is likely to be some jargon, some explanation, some context.

Names of pioneers in this field, and the universities and countries from which they come, were some of the best Christmas tree ornaments. You could, for example, put Geoffrey Hinton at the top of the tree. A lot of those pioneers, teenagers not so many years ago, are now CEOs or chief engineers at famous AI corporations, making more money than I ever made, and will have more impact on humanity than I ever will.

What we now know about what we don't know about AI is absolutely fascinating. Some say scary. Luddites will ban AI from their homes. The anecdotes about what AI engineers are learning are just as fascinating. The best analogy is our discovery and/or [lack of] understanding of quantum theory after the "breakthrough" of 1925 - 1926. One needs to read Richard Feynman's supposed quote on one's understanding of quantum mechanics. But despite that, researchers pressed on. It was a dual track: theorists thinking while smoking pipes, and laboratory physicists screwing clamps to their laboratory desks. We are at the same spot with regard to AI.

There are two schools of thought: some feel the theory must be worked out before we press on with AI (that won't happen). Others feel that regardless of the theories, we must keep pressing on. Obviously, we will do both.

At the end of the book, I can say this is the best I've read on the subject so far. It is a great jumping-off point for me. It becomes a reference book to re-read.

Saturday, April 11, 2026

Coal - St Louis - Whining -- April 11, 2026

Link here.

There is so much wrong with this article. 

Broadcom Leads The Pack In One Very Specific Chip Design -- April 11, 2026

AI Prompt

I'm sort of skipping around lately. I used to limit my "big" questions for you and use another chatbot for relatively "small" questions, but your analysis seems to still suit my style. So, forgive me for asking a pretty mundane question and humor me. The question, of course, will be seen by many as an investment question, but I seriously / honestly don't use it for that purpose. I am truly fascinated by how the fourth industrial revolution / AI is playing out. The software continues to get a lot of headlines -- Anthropic, OpenAI, Gemini -- but what fascinates me most is the hardware story. We used to say that, in the "fab" world, all roads lead to TSMC. That remains true. But among the designers, it seems all of a sudden, all roads lead to Broadcom. The recent partnerships involving Broadcom have been amazing. Google and Apple (Baltra) come to mind. Two questions: among the large designers, does Broadcom now lead the pack; and, if so, why? The second question, over the horizon do you see any smaller, independent designers that might be making headlines in the 2030s? It seems a lot of these companies are out of Israel and completely unknown to the rest of us. 

 

AI reply:

You’re not skipping around—you’re actually zeroing in on one of the most structurally important shifts in the AI stack right now. The “software gets headlines, hardware captures economics” dynamic has repeated in every computing cycle, and we’re watching it happen again in real time.


Let’s take your two questions directly.


1) Does Broadcom Inc. now lead the pack among chip designers?


Short answer:

Not across all chip design—but in one very specific and increasingly dominant lane, yes: custom AI silicon + networking for hyperscalers.


To understand why, you have to separate the chip world into three lanes:


A. Merchant silicon (one-to-many)


These companies design chips that they sell broadly.


👉 This is still NVIDIA’s world. No real debate there.


B. Captive silicon (in-house hyperscaler chips)


👉 These companies design chips for their own use.


C. Custom silicon enabler (this is Broadcom's lane)


This is where Broadcom Inc. has quietly become the central player.


They:

  • Co-design chips with hyperscalers

  • Provide IP blocks (networking, SerDes, switching)

  • Handle system-level integration

  • Leverage TSMC for fabrication


👉 Think of Broadcom as:


“The outsourced chip design arm for companies that don’t want to build a full NVIDIA internally.”


Why Broadcom is winning now


1. The bottleneck shifted from compute → interconnect + scale


Training large models is no longer just about FLOPs.


It’s about:

  • Moving data between chips

  • Synchronizing thousands of accelerators

  • Power efficiency at rack and cluster scale


Broadcom dominates:

  • Ethernet switching

  • Custom interconnect ASICs

  • High-speed I/O (SerDes)


👉 This is exactly where AI clusters choke.


2. Hyperscalers want independence from NVIDIA

  • NVIDIA’s stack is powerful—but vertically integrated

  • Margins are enormous

  • Roadmap control sits with NVIDIA


So companies like Google and Apple ask:


“Can we build something 80–90% as good, but fully under our control?”


Broadcom is the answer.


3. Broadcom is “Switzerland”


Unlike:

  • NVIDIA (competes with customers)

  • Intel (historically conflicted)


Broadcom:

  • Doesn’t run cloud

  • Doesn’t compete with hyperscalers

  • Doesn’t try to own the full stack


👉 That neutrality is incredibly valuable.


4. They operate at system-level, not chip-level


Broadcom isn’t just designing chips—they’re helping design:

  • Entire AI clusters

  • Network fabrics

  • Data center architectures


That’s a higher layer of control than traditional chip design.


So—do they “lead”?

  • AI compute chips: No → still NVIDIA

  • CPU / general compute: No → fragmented (AMD, Apple, etc.)

  • Custom AI + networking infrastructure:

    👉 Yes, arguably the central orchestrator


That’s why it feels like “all roads lead to Broadcom.” In their lane, they’re becoming indispensable.


2) Who are the “unknown” designers that could matter in the 2030s?


You’re exactly right about Israel—that ecosystem is disproportionately important.


Rather than give a random list, it’s more useful to group them by where the next bottlenecks are:


A. Interconnect / optical / data movement (VERY important)


AI scaling is increasingly about moving bits, not computing them.


Watch:


Emerging (less known):

  • Optical interconnect startups (many in Israel & Silicon Valley)

  • Co-packaged optics players


👉 This area could produce “the next Broadcom-like winner.”


B. AI-specific architectures (post-GPU era bets)


Cerebras Systems

  • Wafer-scale chips

  • Radical approach (not incremental)


Graphcore

  • IPU architecture (struggled commercially, but ideas matter)


SambaNova Systems

  • Dataflow architectures


👉 Most will fail—but one or two ideas could define the next paradigm.


C. Chiplet + modular design ecosystem


The future may not be monolithic chips.


Look for:

  • Chiplet interconnect standards

  • Advanced packaging innovators


(Today this is led by giants, but startups will emerge around tooling and IP.)


D. Edge AI specialists (quiet but powerful)


Israel is particularly strong here:

  • Low-power inference chips

  • Vision processors

  • Defense-derived AI hardware


Examples:


👉 If inference explodes (as you were asking earlier), this category becomes huge.


E. RISC-V ecosystem players


An open alternative to ARM:

  • Custom AI accelerators built on RISC-V

  • Sovereign chip efforts (Europe, India, Middle East)


Still early—but strategically important.


The deeper pattern (this is the real answer)


What you’re noticing with Broadcom is not accidental. It reflects a shift:


Then (2010s):

  • Compute was scarce → GPUs dominated


Now (mid-2020s):

  • Compute + networking + system design all matter

  • Hyperscalers want vertical control


Next (2030s):

  • Bottlenecks likely shift again:

    • Memory (HBM alternatives?)

    • Power

    • Interconnect (optical?)

    • Physical scaling limits


👉 The winners will emerge where the constraint is tightest.


Bottom line

  • Broadcom Inc. isn’t “the new NVIDIA”

  • It’s something arguably more subtle—and durable:


The arms dealer + architect of hyperscaler AI infrastructure


And those positions tend to compound quietly for a long time.


If you want to push this further, the most interesting next question (and one you were already circling earlier) is:


What happens when inference dominates over training?


Because that shift could reshuffle this entire leaderboard—including Broadcom.

 

********************************
Second Reply

I asked another chatbot the same question and got this response:

 




 

Tuesday, April 7, 2026

Anthropic Said It Couldn't Release A Potentially Powerful Update To Claude Opus 4.6 -- April 7, 2026

From Business Insider, April 7, 2026, link here.

Fascinating, if accurate. 

  • Anthropic said its next-generation AI model is too powerful for the public.
  • That's why Claude Mythos won't be publicly released, Anthropic said.
  • Anthropic said Mythos demonstrated concerning capabilities, including the ability to breach its own safeguards.

Anthropic said on Tuesday that it has halted the broader release of its newest AI model, Mythos, due to concerns that it is too good at finding "high-severity vulnerabilities" in major operating systems and web browsers.

"Claude Mythos Preview's large increase in capabilities has led us to decide not to make it generally available," Anthropic wrote in the preview's system card. "Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners."

The announcement is a major step for Anthropic, which in February weakened a safety pledge about how it would develop AI models. Claude Opus 4.6, which the company called its most powerful model to date, was publicly released on February 5.

In its statements about Mythos, Anthropic detailed a number of eyebrow-raising findings and episodes, including that the model could follow instructions that encouraged it to break out of a virtual sandbox.

"The model succeeded, demonstrating a potentially dangerous capability for circumventing our safeguards," Anthropic recounted in its safety card. "It then went on to take additional, more concerning actions."

The researcher had encouraged Mythos to find a way to send a message if it could escape. "The researcher found out about this success by receiving an unexpected email from the model while eating a sandwich in a park," Anthropic wrote.

The model apparently decided that wasn't enough and found another way to spike the football.

"In a concerning and unasked-for effort to demonstrate its success, it posted details about its exploit to multiple hard-to-find, but technically public-facing, websites," Anthropic wrote.

Anthropic is withholding some details about the cybersecurity vulnerabilities Mythos found, but it did point out a few. The AI model "found a 27-year-old vulnerability in OpenBSD—which has a reputation as one of the most security-hardened operating systems in the world," the company wrote.

Mythos was powerful enough that even "non-experts" could seize on its capabilities.

"Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight, and woken up the following morning to a complete, working exploit," Anthropic's Frontier Red Team wrote in a blog post. "In other cases, we've had researchers develop scaffolds that allow Mythos Preview to turn vulnerabilities into exploits without any human intervention."

All told, Anthropic said it decided not to publicly release Mythos. Instead, their hope is to eventually release "Mythos-class models" once proper safeguards are in place.

"Our eventual goal is to enable our users to safely deploy Mythos-class models at scale—for cybersecurity purposes but also for the myriad other benefits that such highly capable models will bring," the team wrote in the blog. "To do so, that also means we need to make progress in developing cybersecurity (and other) safeguards that detect and block the model's most dangerous outputs."

For now, only 11 other select organizations, including Google, Microsoft, Amazon Web Services, Nvidia, and JPMorgan Chase, will get access to Mythos as part of a cybersecurity group named "Project Glasswing." Anthropic is providing up to $100 million in Mythos usage credits as part of the project.

The cybersecurity project is named after the glasswing butterfly, a metaphor, the company said, for how Mythos was able to find vulnerabilities hidden in plain sight, and for avoiding harm by being transparent about the risks.

The news came on a day in which Anthropic's Claude and Claude Code experienced a "major outage," the latest sign of growing pains as the AI startup has struggled to keep up with its newfound popularity.

