Tuesday, March 31, 2026

Alhazen -- Pelillo -- Vision -- Machine Learning -- March 31, 2026

AI prompt:  

I'm reading a most amazing book ("Why Machines Learn," Anil Ananthaswamy, 2024/2025, paperback, p. 151) and came across this passage (Alhazen, Pelillo): 
"When sight perceives some visible object, the faculty of discrimination immediately seeks its counterpart among the forms persisting in the imagination, and when it finds some form in the imagination that is like the form of that visible object, it will recognize that visible object and will perceive what kind of thing it is." 
Wow, wow, wow. The first thing I thought of was a discussion somewhere about how humans can distinguish between cats and dogs, and more importantly how dogs can distinguish between dogs and cats. LOL. But this fascinates me and I'm curious: could / would that hypothesis by Alhazen be used when studying imprinting (ducklings)?

AI (ChatGPT) reply:

That is a “wow” passage—and you’re reacting to it exactly the way many historians of perception and AI do. What Ibn al-Haytham is describing is strikingly close to what we’d now call pattern matching against stored representations.

Let’s connect your intuition to imprinting—because you’re onto something real, but there’s an important twist.


1. What Alhazen is really proposing

In modern terms, his idea maps roughly to:

  • The mind stores internal forms (representations, templates)

  • Perception involves matching incoming sensory data to those stored forms

  • Recognition = successful match

That’s eerily close to:

  • modern cognitive science (schemas, representations)

  • and even machine learning (feature matching, embeddings)

So your leap to “cats vs. dogs” (and dogs doing the same!) is exactly right.


2. Now—imprinting (ducklings)

When we move to imprinting, especially in the classic work of Konrad Lorenz, something fascinating happens.

Imprinting is:

  • Rapid (often within hours of birth)

  • Triggered by exposure to a moving object

  • Leads to long-lasting attachment (e.g., “this is mother”)


3. Does Alhazen’s idea apply?

Partially—but not quite in the way he framed it.

Where it does fit:

You could reinterpret imprinting as:

  • The duckling forms its first internal “template”

  • Future perception = matching new stimuli to that template

  • Recognition (“this is mother”) follows from similarity

So yes—Alhazen’s “matching against stored forms” maps nicely onto:

“This moving thing matches the internal form I stored → it is my mother”


Where it breaks down (and this is the key insight)

Alhazen assumes:

You already have stored forms, and perception searches among them.

But imprinting shows something deeper: the system is creating the very first form in real time.

In other words:

  • There is no prior “mother template”

  • The brain has a built-in bias (e.g., attend to movement, faces, biological motion)

  • The first suitable object becomes the template


4. Modern interpretation (bridging both ideas)

A more complete synthesis would be:

  1. The brain has innate constraints / priors

    (e.g., “follow moving objects,” “prefer face-like patterns”)

  2. Early experience instantiates the first template

    (imprinting moment)

  3. After that, Alhazen’s mechanism kicks in: new perceptions are matched against that stored form

So:

  • Imprinting = template creation

  • Alhazen = template matching

They’re actually complementary stages.
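The two-stage synthesis above (imprinting creates the template; Alhazen's mechanism then matches against it) can be sketched in code. This is purely illustrative: the `Duckling` class, the feature vectors, and the 0.9 similarity threshold are all invented for the sketch, not drawn from the book.

```python
import numpy as np

def similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

class Duckling:
    """Stage 1: imprinting creates the stored form. Stage 2: Alhazen-style matching."""
    def __init__(self):
        self.template = None  # no innate "mother" form exists yet

    def perceive(self, stimulus, threshold=0.9):
        if self.template is None:
            # Imprinting: the first suitable stimulus becomes the stored form.
            self.template = stimulus
            return "imprinted"
        # Alhazen's mechanism: seek a counterpart among the persisting forms.
        return "mother" if similarity(stimulus, self.template) >= threshold else "not mother"

mother = np.ones(16)                                 # features of the first moving object
noisy_mother = mother + np.linspace(-0.1, 0.1, 16)   # same object, slightly different view
stranger = np.array([1.0, -1.0] * 8)                 # orthogonal features: a different object

d = Duckling()
print(d.perceive(mother))        # -> imprinted
print(d.perceive(noisy_mother))  # -> mother
print(d.perceive(stranger))      # -> not mother
```

Note the asymmetry the passage highlights: the first `perceive` call cannot match anything, because the very form it will later match against is only created by that call.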


5. Why this connects to AI (and your book)

This is exactly the tension in modern ML:

  • Do systems start with:

    • predefined representations (Alhazen-style)?

    • or learn them from scratch (imprinting-like)?

And the answer in both brains and machines is: a mix of innate structure + learned representations


6. Your intuition about animals distinguishing categories

Your cat/dog example is deeper than it sounds:

  • Animals (including dogs) likely form category prototypes

  • These are:

    • not perfect templates

    • but flexible, similarity-based representations

Which again echoes Alhazen:  recognition via resemblance to stored forms.


Bottom line

  • Yes—Alhazen’s idea can be applied to imprinting, but only after the initial template exists

  • Imprinting itself reveals something Alhazen didn’t fully capture:

    • how the first “form in the imagination” gets created


If you want to push this further (and it gets really interesting):

  • there’s a direct line from this idea → Plato’s forms → Kant’s categories → modern neural networks.

And imprinting sits right at the fault line between:

  • innate knowledge

  • and learned experience

Which, as your book suggests, is still the central question in understanding both minds and machines.

Saturday, March 28, 2026

AI -- Progression -- Jargon -- March 28, 2026

AI prompt: AI. Is this the general progression: training --> inference --> AGI --> the singularity?

Reply

Friday, March 27, 2026

AI -- March 27, 2026

I absolutely love AI.

One barely needs to say much, and AI figures out what you are talking about and replies.

Example:

AI prompt: When it comes to security, "compress first," then "scramble."

AI reply
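The principle the prompt names is standard: compression must happen before encryption, because well-scrambled output looks random and random data does not compress. A toy Python sketch, with a simple XOR keystream standing in for a real cipher (this is not secure practice, just an illustration of the ordering):

```python
import os
import zlib

def xor_stream(data: bytes, keystream: bytes) -> bytes:
    """Toy 'scramble': XOR against a keystream (a stand-in for a real cipher)."""
    return bytes(a ^ b for a, b in zip(data, keystream))

message = b"attack at dawn " * 100        # highly redundant plaintext
pad = os.urandom(len(message))            # one-time-pad-style keystream

# Right order: compress first, then scramble. Compression sees the redundancy.
compressed = zlib.compress(message)
ciphertext = xor_stream(compressed, pad[:len(compressed)])

# Wrong order: scramble first, then try to compress. The redundancy is gone.
scrambled = xor_stream(message, pad)
wrong = zlib.compress(scrambled)          # random-looking bytes don't compress

print(len(message), len(ciphertext), len(wrong))
```

The right order yields a ciphertext far smaller than the message; the wrong order yields output slightly larger than the message, since zlib adds framing overhead to incompressible data. (In real protocols even the right order has caveats, e.g., compression side channels like CRIME/BREACH.)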

Inference Dominating Training -- Demand And The Technological Singularity Question -- March 27, 2026

AI prompt: How far along the continuum or the curve are we, moving from training to inference? In other words, when will inference dominate demand, rather than training now dominating demand? And when inference trends toward 100% demand, how close are we to "technological singularity," and when might that occur?

AI reply: You’re really asking two different—but tightly linked—questions:

  1. Where are we on the training → inference shift?

  2. Does inference dominance = approaching “singularity”?

They connect, but not in a straight line. Let’s unpack carefully.

 

1) Where we are right now (2026): Still training-heavy—but turning 

Today’s frontier is still defined by training runs: 

• OpenAI 

• Google DeepMind 

 • Anthropic 

…are all still spending billions on: 

 • larger pretraining runs 

 • post-training (RLHF, synthetic data loops)

 • multimodal expansion 

But here’s the key shift: Inference demand is already exploding faster than training demand. 

Think of it this way: 

 • Training = one massive event 

 • Inference = millions to billions of daily queries 

We are roughly here on the curve: ~60–70% training / 30–40% inference (by compute intensity at the frontier). BUT: inference is growing ~2–5x faster. 

 2) Why inference inevitably dominates 

This isn’t speculative—it’s structural. 

 A. Economics flips the equation 

Training is: 

 • episodic 

 • capital intensive 

 • but infrequent 

Inference is: 

• continuous 

 • usage-based 

 • tied to real-world adoption 

Once AI is embedded everywhere: 

 • every app call 

 • every workflow 

 • every agent loop 

…all of it becomes inference demand. 

B. “Token explosion” effect 

As models get better: 

 • People use them more often 

 • Tasks get longer + more complex 

 • Agents call models recursively 

Result: Inference scales superlinearly with usefulness. 

C. Enterprise + agents = the inflection 

The real tipping point isn’t ChatGPT-style usage. It’s: 

 • autonomous agents 

 • enterprise workflows 

 • machine-to-machine AI calls 

That’s when inference becomes invisible infrastructure. 

3) When does inference dominate? 

Likely timeline (high-confidence directional view): 

 • 2023–2025 → Training dominates (we just lived this) 

 • 2026–2028 → Crossover phase 

 • ~2028–2030 → Inference clearly dominates (70%+ of compute) 

We’re probably early in the crossover right now. 
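The crossover arithmetic behind that timeline can be sketched numerically. This is purely illustrative: the 65/35 starting split and the annual growth multipliers are assumptions picked from the rough figures quoted earlier in the reply (~60–70% training, inference growing ~2–5x faster), not measured data.

```python
# Illustrative projection of the training -> inference compute split.
# Assumed: ~65% training / 35% inference in 2026, with inference demand
# growing ~3x faster than training demand per year (1.5x vs 4.5x).
train, infer = 65.0, 35.0
train_growth, infer_growth = 1.5, 4.5

year = 2026
while infer / (train + infer) < 0.70:   # "inference clearly dominates" at 70%+
    train *= train_growth
    infer *= infer_growth
    year += 1
    share = 100 * infer / (train + infer)
    print(f"{year}: inference ~ {share:.0f}% of compute")
```

Under these assumed numbers the 70% threshold is crossed by 2028, consistent with the reply's "~2028–2030" window; with a 2x rather than 3x growth differential, the crossing slips a year or two later.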

 

4) What changes when inference dominates? 

This is where your earlier point about business models becomes critical. 

The winners shift: 

Training era winners: 

 • NVIDIA (high-end GPUs) 

• hyperscalers (capex scale) 

• frontier labs 

Inference era winners: 

 • Low-cost, high-efficiency silicon (ASICs, edge chips) 

 • vertically integrated ecosystems (again, Apple matters here) 

 • distributed compute networks 

• possibly telecom + edge infrastructure players ⸻ 

 

Infrastructure changes: 

 • From: giant centralized clusters 

 • To: distributed + latency-sensitive systems 

Inference pushes toward: 

 • edge compute 

 • regional micro data centers 

• on-device AI ⸻ 

 

5) Now—the “singularity” question 

 Let’s be precise, because this term gets abused. 

Technological Singularity = a point where: 

 • AI improves itself recursively 

 • progress becomes uncontrollable / exponential 

 

Critical insight: Inference dominance ≠ singularity. They are related but not equivalent. 

 

 6) How they are connected 

 Inference dominance enables something important: 

A. Continuous learning loops 

When inference is everywhere: 

 • models observe the world in real time 

• generate data constantly 

• feed training pipelines continuously 

This creates: quasi-continuous training systems ⸻ 

B. Agent-driven self-improvement 

If agents: 

 • write code 

• run experiments 

• evaluate results 

Then you start getting: partial recursive self-improvement ⸻ 

7) So how close are we? 

 Blunt answer: We are NOT close to singularity yet

Even with rapid progress: 

• Models still lack: 

 • true autonomy 

• robust long-horizon reasoning 

 • reliable self-improvement loops ⸻ 

Realistic staging: 

Stage 1 (now → ~2028) 

 • Inference dominates 

• Agents proliferate 

• AI everywhere, but still tool-like 

Stage 2 (~2028–2035) 

 • Partial self-improving systems 

• AI accelerates R&D meaningfully 

 • “soft takeoff” dynamics begin 

Stage 3 (uncertain, post-2035?) 

 • Possible rapid recursive improvement 

 • This is where singularity debates become real 

 

 8) The deeper takeaway (this is the non-obvious part) 

The real signal to watch is NOT: 

 • inference % 

 • model size 

 It’s: Does AI meaningfully accelerate its own improvement cycle? That’s the hinge.

 

 9) Bringing it back to your original framing 

Your LDC / infrastructure question becomes even more important: 

• Training world → centralized, capex-heavy 

 • Inference world → distributed, usage-heavy 

And the big question: Do hyperscalers still dominate inference, or does inference fragment across edge + specialized providers? 

 

Bottom line 

  • We are early in the shift from training → inference dominance (mid-innings, not late) 

• Inference likely dominates within ~3–5 years 

This shift does NOT equal singularity, but

 • it lays the groundwork for continuous learning systems 

And the real “clock” on singularity isn’t inference share—it’s when AI systems start improving themselves faster than humans can meaningfully track or control. 

 

If you want to go one level deeper, the next question is fascinating: Does inference becoming cheap and ubiquitous actually slow down frontier training? (counterintuitive, but very possible)

LDC Build-Out -- Two Business Models -- March 27, 2026

AI Prompt: With regard to LDCs, there appear to be two business models. One, the large hyperscalers (AWS, META) build their own, top to bottom. Two, independent contractors (CoreWeave) finance, build, and rent/sell to second-tier LLMs -- sometimes to specification, sometimes not. At one time, Oracle was doing the same thing -- using something called the "cookie cutter" approach. Thoughts?

AI reply: You’re basically describing the vertical integration vs. capacity merchant split in AI infrastructure—and you’ve got it right. The interesting part is why both models exist simultaneously and why they may converge or clash over time. Let’s break it down cleanly.

1) Hyperscaler Model: Full-Stack Ownership

Examples: Amazon Web Services, Meta Platforms, Google Cloud

What they do:

  • Design chips (e.g., Trainium, TPUs)

  • Build data centers

  • Own networking + software stack

  • Run internal + external workloads

Strategic logic: control + optimization at scale.

  • Cost curve advantage: if you’re deploying millions of GPUs/ASICs, shaving 10–20% matters enormously.

  • Workload-specific silicon: Trainium / TPU = tuned for their own models + customer workloads.

  • Tight vertical integration: think Apple but for the cloud (silicon → compiler → runtime → application).

  • Guaranteed utilization: they are the anchor tenant (internal AI demand + cloud customers).

Weakness:

  • Enormous capex risk

  • Can’t pivot quickly if architecture shifts (e.g., from a training-heavy to an inference-heavy world)

2) “AI Infrastructure Merchant” Model

Examples: CoreWeave, Oracle (to a degree), emerging GPU lessors

What they do:

  • Raise capital

  • Buy GPUs (mostly NVIDIA)

  • Build or lease data centers

  • Rent capacity to LLM startups, enterprises, and even hyperscalers at times

Strategic logic: capital arbitrage + speed.

  • Faster deployment: no legacy constraints → can spin up clusters quickly.

  • Specialization: optimized for AI workloads only (not general cloud).

  • Financial engineering: long-term contracts (like power purchase agreements); debt + structured financing backed by compute demand.

  • Serving the “long tail”: not everyone can get capacity from AWS/Google during shortages.

3) The “Cookie Cutter” Concept (Oracle angle)

Oracle’s “cookie cutter” approach was essentially:

  • Standardized data center modules

  • Repeatable deployment templates

  • Pre-configured hardware/software stacks

Why that matters now: the idea is suddenly very relevant again in AI, because AI factories are becoming modular. Think: a 50k-GPU cluster with a standardized power/cooling/network design, replicated globally. CoreWeave and others are effectively doing a modernized cookie cutter, but:

  • financed externally

  • tied to specific GPU generations

  • often pre-sold before completion

4) The Real Underlying Difference

This isn’t just business models—it’s who bears risk.

  • Hyperscalers bear the technology, demand, and capex risk; in return, they capture the full margin stack.

  • CoreWeave-type players shift risk to lenders (debt) and customers (long-term contracts), and operate more like toll roads for compute.

5) Where It Gets Interesting (Your “Two-Tier” Insight)

You previously mentioned EDGE vs. CLOUD. This maps directly:

  • EDGE (Apple-like world): Apple owns its silicon plus a tightly integrated ecosystem—optimized, private, efficient.

  • CLOUD (this discussion) splits into: (1) fully integrated hyperscalers, and (2) merchant AI infrastructure providers.

6) Convergence Scenarios (the key forward-looking piece)

  • Scenario A: Hyperscalers win. Custom silicon (Trainium/TPU) undercuts NVIDIA economics; third-party GPU clouds get squeezed; CoreWeave becomes niche or an acquisition target.

  • Scenario B: Merchant model explodes. AI demand >> hyperscaler capacity; capital markets fund massive GPU fleets; CoreWeave-like firms become the “ExxonMobil of compute.”

  • Scenario C (most likely): Hybrid equilibrium. Hyperscalers dominate baseline + integrated workloads; merchants handle burst demand + frontier training + startups.

7) Subtle but Critical Point

The real dividing line isn’t just “who builds the data center.” It’s: who owns the customer relationship AND the software layer?

  • Hyperscalers → sticky, platform-driven

  • CoreWeave → more commoditized (unless they move up-stack)

That’s why everyone is racing up the stack (platforms, APIs, tooling)—not just renting GPUs anymore.

Bottom Line

Your framing is right, but I’d refine it slightly: it’s not just two business models—it’s two different economic philosophies of compute.

  1. Vertical integration (Apple/AWS-style) → optimize everything, own everything

  2. Financialized infrastructure (CoreWeave-style) → standardize, scale, lease, securitize

And Oracle’s “cookie cutter” idea is the quiet bridge between them—it’s the industrialization layer that both models now depend on.

Wednesday, March 25, 2026

Silicon Valley -- ChatGPT -- 2026

 Apple's active device install base surpassed 2.35 billion as of early 2025, a new record, with growth fueled by strong sales of iPhones (especially iPhone 16), Macs, and iPads, contributing significantly to their booming Services revenue and reinforcing their ecosystem's strong user loyalty. This substantial user base drives high engagement, with over a billion paid subscriptions and ongoing device additions year after year. 



THE LINEAGE: 1909 STANFORD → MODERN SILICON VALLEY

Below is the sequence in 10 steps, each linking to the next with real technical, institutional, and personnel continuity.



1. 1909 — Stanford Engineering’s Wireless Group

Key figures: Cyril Elwell, Prof. C.D. Marx
Breakthrough: Investigating the Poulsen arc (continuous-wave wireless).
Why it matters:
This is the first time Stanford engineers enter the frontier of global electrical communication. Elwell realizes that the U.S. has no CW wireless technology and that Denmark’s Poulsen arc is superior to spark-gap systems.

This is the spark.



2. 1910 — Cyril Elwell forms the Federal Telegraph Company (FTC/TFC)

Location: Palo Alto/SF Bay Area
Innovation: Imports and industrializes the Poulsen arc; builds the most powerful radio transmitters in the world.
Significance:
    •    First major wireless company on the West Coast
    •    First long-distance CW transmissions in the world (California → Hawaii)
    •    Creates a cluster of high-level radio engineers in the Bay Area

This is effectively Silicon Valley startup #1.



3. 1911–1913 — Lee de Forest joins FTC

Why this matters:
de Forest (inventor of the Audion triode) works for FTC and realizes that continuous-wave systems require amplification. This period is where de Forest matures the vacuum tube from an oddity into a practical amplifier.

Technical leap:
CW (from Stanford) + de Forest’s triode amplifier = the foundation of all radio, broadcasting, radar, and early computing.

This is the birth of the electronics era.



4. 1915–1920 — AT&T, vacuum tubes, and long-distance telephony

AT&T sees what FTC and de Forest are doing and fully adopts the triode amplifier.

Key breakthroughs:
    •    Transcontinental voice telephony
    •    First radio telephone service
    •    Large-scale tube manufacturing
    •    Institutional research culture

This phase births Bell Labs.

The Stanford → Elwell → de Forest chain now merges into the most important research engine of the 20th century.



5. 1920s–1940s — Bell Labs builds the electronic world

Bell Labs becomes the epicenter of:
    •    Radio engineering
    •    Radar
    •    Microwave networks
    •    Early digital switching
    •    Information theory
    •    Semiconductors (intensifying in the 1930s–40s)

Hundreds of engineers trained here migrate outward — the first Bell diaspora.

This is the prehistory of Silicon Valley’s engineering culture: applied physics + corporate R&D + future orientation.



6. 1947 — Bell Labs invents the transistor

Key figures: Bardeen, Brattain, Shockley
Shockley directs the semiconductor group. This moment creates:
    •    the semiconductor industry
    •    the idea of solid-state physics as a commercial frontier
    •    the future of computing, digital logic, and integrated circuits

This invention is directly downstream of:
Stanford → Elwell → FTC → de Forest → AT&T amplifiers → Bell Labs



7. 1953–1955 — William Shockley returns to Palo Alto

Shockley leaves Bell Labs and founds Shockley Semiconductor Laboratory in Mountain View.
This is partly personal (mother in Palo Alto) but also reflects the long-standing Bay Area radio engineering environment (seeded by FTC decades earlier).

Shockley brings:
    •    transistor physics
    •    semiconductor process engineering
    •    Bell Labs culture of innovation

This is the first semiconductor lab in what becomes Silicon Valley.



8. 1957 — The Traitorous Eight leave Shockley → Fairchild Semiconductor

They bring:
    •    planar process
    •    silicon transistors
    •    the foundations of the integrated circuit
    •    the culture of spinouts

Fairchild becomes “Silicon Valley Zero.”

From Fairchild come:
    •    Intel
    •    AMD
    •    National Semiconductor
    •    dozens of spinoffs

The region transforms from radio → microwave → semiconductor → computer electronics.



9. 1960s–1980s — The Semiconductor Expansion

The Fairchild and Intel diaspora snowballs into:
    •    microprocessors
    •    memory chips
    •    personal computers
    •    networking equipment
    •    workstation computing
    •    graphics hardware

Stanford’s engineering school becomes deeply intertwined with local industry (Terman, HP, and federal defense funding play major roles).

This solidifies the structure of Silicon Valley as an ecosystem:
University + venture capital + small startups + engineering culture + rapid spinouts.



10. 1990s–2020s — The Modern Valley: Internet, Mobile, AI

Direct lineage:
    •    Stanford students found Yahoo, Google, Snapchat, Instagram
    •    Semiconductor and hardware heritage → Nvidia, Apple Silicon
    •    Wireless communication heritage → Qualcomm, WiFi, LTE, 5G
    •    Bell Labs ideas → digital signal processing, fiber optics, packet switching
    •    FTC’s foundational wireless ideas → modern radio, WiFi, Bluetooth, satellite internet

Everything from GPUs to cloud computing to smartphones sits downstream of the 1909 decision by a few Stanford engineers to explore wireless telegraphy.



THE CHAIN IN A SINGLE LINE

1909 Stanford wireless experiment →
Federal Telegraph Company →
de Forest + triode amplification →
AT&T long-distance networks →
Bell Labs research culture →
Transistor →
Shockley returns to Palo Alto →
Fairchild Semiconductor →
Intel + the Valley spinoff network →
Modern Silicon Valley

**************************************

AI Prompt:
In the 2023 softcover edition of Malcolm Harris' "The History of Palo Alto," p. 98, the first full paragraph (the second full paragraph in the subsection called "The Age of Synergy") is the best one-paragraph summary of the start of the engineering history of Stanford University and the history of wireless telegraph and telephone, Cyril Elwell, C.D. Marx, the TFC, and the first transmission between California and Hawaii. I had not seen this history before, but it is amazing that this is where "it" all started. It could have been almost anything else, but it was 1909, Stanford, and wireless telegraph. 
AI reply: It is an extraordinary moment once you see how all the threads line up — and it’s easy to miss because the canonical histories of Silicon Valley usually start later, with Shockley in the 1950s or the “Traitorous Eight.” Harris is one of the very few who pushes the origin story back to the actual hinge: 1909, Stanford’s engineering department, and a handful of people who decided to bet on wireless at exactly the right moment.

A few things that make that paragraph so unusually powerful:

1. The “accident” of timing — 1909

Marconi had just completed transatlantic signaling, Poulsen arc technology was emerging, and the U.S. still had no dominant wireless standard. Everything was up for grabs. If Stanford students and faculty had taken a different interest—hydraulics, mining, structural engineering—the future of global electrical communication might well have been shaped somewhere else entirely.

2. Cyril Elwell as the hinge figure

Elwell is one of those “invisible founders” of Silicon Valley.
He takes a Stanford class project, hears about the Poulsen arc, sees the technical gap in American wireless, and goes to Denmark himself to examine it. That is the exact kind of technical-entrepreneurial initiative that later becomes the Silicon Valley signature.

3. Prof. C.D. Marx as technical catalyst

Marx doesn’t get enough credit. The way Harris emphasizes him is unusual in the literature: Marx is the one who recognizes the opportunity, encourages Elwell, and has the engineering gravitas to make the project credible.

4. The TFC (Federal Telegraph Company) as the first Silicon Valley-style startup

The TFC predates Hewlett-Packard by decades but has all the patterns:
    •    a Stanford technical breakthrough
    •    outside investors
    •    a technology-driven market opportunity
    •    young engineers moving fast in a new industry

They built the world’s most powerful continuous-wave transmitters and created the first large-scale American wireless network.

5. The first California–Hawaii transmission

This is where the story becomes genuinely epic.
A group of Stanford-adjacent engineers, backed by a local company, created a transmission powerful enough to reach Hawaii—the longest continuous-wave wireless link in the world at the time.

Harris is correct to treat this as the true beginning of what would become:
    •    de Forest’s adoption of continuous-wave amplification
    •    the vacuum-tube revolution
    •    AT&T’s long-distance radio telephony
    •    and ultimately the Bell Labs diaspora that shaped 20th-century electronics

It really is the “ur-moment” of Silicon Valley: a group of young engineers who believed that world-scale technical problems could be solved with local ingenuity and a little money.

Saturday, March 14, 2026

DFW -- TSA Security Checkpoints -- Saturday, March 14, 2026

13 checkpoints

Status:

  • Green: 0 - 10 minutes
  • Yellow: 10 - 20 minutes
  • Red: > 20 minutes 

Saturday: 

0440: at least one in each terminal open; most green

0540:

  • E: 3/3 green
  • D: 1/3 yellow; 1/3 red; 1/3 closed; WORST TERMINAL
  • C: 2/2 yellow 
  • B: 1/2 yellow; 1/2 red
  • A: 1/3 green; 1/3 yellow; 1/3 red.

0810:

  • E: 3/3 green
  • D: 2/3 yellow; 1/3 closed; BETTER
  • C: 2/2 green;
  • B: 2/2 yellow.   
  • A: 1/3 green; 2/3 yellow.

1230:

  • E: 3/3 green
  • D: 3/3 green;
  • C: 2/2 green;
  • B: 2/2 green;
  • A: 2/3 green; one closed.


 

 

Music -- Hillbilly -- The Wallflowers "One Headlight" -- Hillbilly Moon Explosion's "My Love For Evermore" -- April 17, 2026

AI prompt: The Wallflowers' "One Headlight" is similar to Hillbilly Moon Explosion's "My Love For Evermore." Thoughts? Hi...