Tuesday, March 31, 2026

Alhazen -- Pelillo -- Vision -- Machine Learning -- March 31, 2026

AI prompt:  

I'm reading a most amazing book ("Why Machines Learn," Anil Ananthaswamy, 2024/2025, paperback, p. 151) and came across this passage (Alhazen, Pelillo): 
"When sight perceives some visible object, the faculty of discrimination immediately seeks its counterpart among the forms persisting in the imagination, and when it finds some form in the imagination that is like the form of that visible object, it will recognize that visible object and will perceive what kind of thing it is." 
Wow, wow, wow. The first thing I thought of was a discussion somewhere about how humans can distinguish between cats and dogs, and more importantly how dogs can distinguish between dogs and cats. LOL. But this fascinates me and I'm curious: could / would that hypothesis by Alhazen be used when studying imprinting (ducklings)?

AI (ChatGPT) reply:

That is a “wow” passage—and you’re reacting to it exactly the way many historians of perception and AI do. What Ibn al-Haytham is describing is strikingly close to what we’d now call pattern matching against stored representations.

Let’s connect your intuition to imprinting—because you’re onto something real, but there’s an important twist.


1. What Alhazen is really proposing

In modern terms, his idea maps roughly to:

  • The mind stores internal forms (representations, templates)

  • Perception involves matching incoming sensory data to those stored forms

  • Recognition = successful match

That’s eerily close to:

  • modern cognitive science (schemas, representations)

  • and even machine learning (feature matching, embeddings)

So your leap to “cats vs. dogs” (and dogs doing the same!) is exactly right.
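Alhazen's "seek its counterpart among the forms persisting in the imagination" can be sketched in a few lines of modern code. This is a minimal illustration, not anything from the book: the stored feature vectors are invented, and cosine similarity stands in for whatever "likeness" measure a real system would use.

```python
import numpy as np

# Hypothetical stored "forms in the imagination": one feature vector
# per known category. The feature values are invented for illustration.
stored_forms = {
    "cat": np.array([0.9, 0.1, 0.8]),
    "dog": np.array([0.2, 0.9, 0.7]),
}

def cosine(a, b):
    """Similarity between two feature vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def recognize(stimulus):
    # Alhazen-style recognition: find the stored form most like the stimulus.
    return max(stored_forms, key=lambda name: cosine(stimulus, stored_forms[name]))

print(recognize(np.array([0.85, 0.15, 0.75])))  # closest to the "cat" form -> "cat"
```

Swap the toy vectors for learned embeddings and this is essentially nearest-neighbor recognition as used in modern ML.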


2. Now—imprinting (ducklings)

When we move to imprinting, especially in the classic work of Konrad Lorenz, something fascinating happens.

Imprinting is:

  • Rapid (often within hours of hatching)

  • Triggered by exposure to a moving object

  • Leads to long-lasting attachment (e.g., “this is mother”)


3. Does Alhazen’s idea apply?

Partially—but not quite in the way he framed it.

Where it does fit:

You could reinterpret imprinting as:

  • The duckling forms its first internal “template”

  • Future perception = matching new stimuli to that template

  • Recognition (“this is mother”) follows from similarity

So yes—Alhazen’s “matching against stored forms” maps nicely onto:

“This moving thing matches the internal form I stored → it is my mother”


Where it breaks down (and this is the key insight)

Alhazen assumes:

You already have stored forms, and perception searches among them.

But imprinting shows something deeper: the system is creating the very first form in real time.

In other words:

  • There is no prior “mother template”

  • The brain has a built-in bias (e.g., attend to movement, faces, biological motion)

  • The first suitable object becomes the template


4. Modern interpretation (bridging both ideas)

A more complete synthesis would be:

  1. The brain has innate constraints / priors

    (e.g., “follow moving objects,” “prefer face-like patterns”)

  2. Early experience instantiates the first template

    (imprinting moment)

  3. After that, Alhazen’s mechanism kicks in: new perceptions are matched against that stored form

So:

  • Imprinting = template creation

  • Alhazen = template matching

They’re actually complementary stages.
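The two stages above can be put side by side in a toy model. Everything here is an illustrative assumption (the class name, the similarity threshold, the feature vectors), not a biological claim: the point is only that template *creation* and template *matching* are distinct steps.

```python
import numpy as np

class Duckling:
    """Toy two-stage model: imprinting creates the template,
    Alhazen-style matching uses it afterwards. All names and
    numbers are illustrative assumptions."""

    def __init__(self, threshold=0.9):
        self.template = None        # no innate "mother" form exists yet
        self.threshold = threshold  # similarity required for recognition

    def perceive(self, stimulus):
        stimulus = np.asarray(stimulus, dtype=float)
        if self.template is None:
            # Stage 1 (imprinting): the first suitable stimulus
            # becomes the stored form.
            self.template = stimulus
            return "imprinted"
        # Stage 2 (Alhazen): match new input against the stored form.
        sim = float(np.dot(stimulus, self.template) /
                    (np.linalg.norm(stimulus) * np.linalg.norm(self.template)))
        return "mother" if sim >= self.threshold else "not mother"

d = Duckling()
d.perceive([1.0, 0.0, 0.5])         # first moving object -> becomes the template
print(d.perceive([0.9, 0.1, 0.5]))  # similar -> "mother"
print(d.perceive([0.0, 1.0, 0.2]))  # dissimilar -> "not mother"
```

Note that before the first `perceive` call there is no template at all, which is exactly the gap in Alhazen's account.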


5. Why this connects to AI (and your book)

This is exactly the tension in modern ML:

  • Do systems start with:

    • predefined representations (Alhazen-style)?

    • or learn them from scratch (imprinting-like)?

And the answer, in both brains and machines, is a mix of innate structure and learned representations.


6. Your intuition about animals distinguishing categories

Your cat/dog example is deeper than it sounds:

  • Animals (including dogs) likely form category prototypes

  • These are:

    • not perfect templates

    • but flexible, similarity-based representations

Which again echoes Alhazen: recognition via resemblance to stored forms.
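A flexible, similarity-based prototype (rather than a rigid template) can be sketched as the average of exemplars seen so far. This is a standard textbook construction, not something specific to the book; the feature vectors are invented for illustration.

```python
import numpy as np

# Toy prototype model: a category's "form" is the mean of its exemplars,
# so it flexes as new examples arrive instead of being a fixed template.
# All feature vectors are invented for illustration.
exemplars = {
    "dog": [np.array([0.8, 0.2]), np.array([0.7, 0.3]), np.array([0.9, 0.1])],
    "cat": [np.array([0.2, 0.9]), np.array([0.3, 0.8])],
}
prototypes = {name: np.mean(vs, axis=0) for name, vs in exemplars.items()}

def classify(x):
    # Recognition via resemblance: the nearest prototype wins.
    return min(prototypes, key=lambda name: np.linalg.norm(x - prototypes[name]))

print(classify(np.array([0.75, 0.25])))  # near the dog prototype -> "dog"
```

A new stimulus need not match any single stored exemplar exactly; it only has to resemble the category's central tendency.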


Bottom line

  • Yes—Alhazen’s idea can be applied to imprinting, but only after the initial template exists

  • Imprinting itself reveals something Alhazen didn’t fully capture:

    • how the first “form in the imagination” gets created


If you want to push this further (and it gets really interesting):

  • there’s a direct line from Plato’s forms → Alhazen’s stored forms → Kant’s categories → modern neural networks.

And imprinting sits right at the fault line between:

  • innate knowledge

  • and learned experience

Which, as your book suggests, is still the central question in understanding both minds and machines.
