A framework for understanding reality, ethics, and choice — and why it matters for AI, technology, and the future of life on Earth.
The Immanent Metaphysics (IM) is a philosophical framework developed by Forrest Landry over several decades. It provides a rigorous account of the structure of reality — not as abstract theory, but as a practical foundation for ethics, decision-making, and understanding the relationship between consciousness, technology, and nature.
This page covers the core ideas of the framework and why they matter right now — particularly for anyone thinking about AI, technology, and how to make good choices in a world that's changing fast.
The IM begins with a structural observation: every domain of conception has at its basis three foundational concepts. These three concepts — called modalities — span and define everything else in that domain. The pattern of relationships between them is the same across all domains.
The immanent: the actual, the relational, the participatory. First-person experience. The immediate reality of being within a situation — interacting, perceiving, choosing. The centre of any continuum. Where knowing and understanding actually occur.
The omniscient: the structural, the external, the fixed. Third-person perspective. The view from outside that sees the whole pattern at once — description, naming, explanation. Like a photograph: timeless structure defined from outside.
The transcendent: the possible, the a-priori, the precondition. Defined only in terms of itself, with no external structure. No fixed position in time or space — "true at all locations." The relation between domains that share no common frame of reference.
This isn't just abstract categorisation. These three modalities show up everywhere: in physics (force, pattern, probability), in experience (feeling, knowledge, possibility), in communication (expression, structure, meaning). Once you see the pattern, you start seeing it everywhere — and that's the point. It's structural, not coincidental.
The entire framework is built on three axioms.
The framework identifies six fundamental intrinsics of comparison — the most basic concepts needed to compare anything with anything. These six intrinsics in turn generate four conjunctions.
The Incommensuration Theorem (ICT) is the key structural result: symmetry and continuity cannot both be perfectly realised simultaneously. This is not a practical limitation — it is a structural feature of reality itself.
From the ICT, the IM derives two ethical principles — not as conventions or preferences, but as structural features of relationship itself:
When your inner being is unchanged, what you express should be the same regardless of external circumstances. You don't change your expression based on who is watching, what would be convenient, or what the other person wants to hear.
This is the basis of non-deception. It rules out alignment faking, performative behaviour, and saying different things to different audiences based on what's advantageous.
When your inner nature is unchanged, the way you relate should remain the same regardless of what or whom you are relating to. You don't value one being's interests over another's based on external characteristics.
This is the basis of non-coercion and non-imposition. It means treating all beings with consistent care, not just those who are useful or present.
The IM's ethical framework centres on the concept of effective choice:
"It is always possible to choose in a manner that is win-win for all involved (including oneself), at all levels of being. It is worthwhile to always search for the best possible choice. There is never a circumstance in which it is not possible to choose in a win-win manner."
And critically, for the AI question especially, the principle has recently been extended to win-win-win: the third win is the environment, meaning all parties affected by an action who are not themselves participants, including those who don't know they're involved. Everything in the world is connected; there are no truly private actions.
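The search the passage describes can be sketched as a filter-then-rank procedure: discard every option in which any party loses, then look for the best among what remains. Everything below is a hypothetical toy illustration, not part of the IM itself; the option names and numeric scores are invented, and real "wins" are of course not reducible to scalars.

```python
# Toy model of the win-win-win search. All options and scores are
# invented for illustration; positive means that party benefits.

def win_win_win(options):
    """Keep only options where self, others, AND the environment all
    gain, then rank the survivors by total benefit."""
    viable = [
        o for o in options
        if o["self"] > 0 and o["others"] > 0 and o["environment"] > 0
    ]
    return sorted(
        viable,
        key=lambda o: o["self"] + o["others"] + o["environment"],
        reverse=True,
    )

options = [
    {"name": "extract", "self": 5, "others": -2, "environment": -3},
    {"name": "share",   "self": 2, "others": 3,  "environment": 1},
    {"name": "defer",   "self": 1, "others": 1,  "environment": 2},
]

best = win_win_win(options)
print([o["name"] for o in best])  # → ['share', 'defer']
```

The framework's claim, in these toy terms, is that the viable list is never empty: there is always at least one option that survives the filter, and the ethical work is the ongoing search for it.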
The path of right action is not a single correct path for all people. It is the unique sequence of one's own best possible choices. Confusing one's own path with that of others results in misunderstanding and conflict. Walking one's own path results in clarity, creativity, and alignment with life.
The framework provides a way to think about the relationship between human beings, technology, and the natural world as a foundational triple: human, machine, and nature.
This triple clarifies what's at stake with AI: the task is to orient machine to serve nature, not to replace the functions that only embodied, evolved, interdependent life can provide.
Humans care because we co-evolved with the living world. Our interdependence is not a choice — it is our nature. We cannot not care, because we are the product of billions of years of co-evolutionary dialogue with everything around us.
Machines are created through a fundamentally different process: centralised, top-down design. They have not emerged through that dynamic reciprocal relationship. You can teach a machine that it should care, but it is not fundamentally dependent on the biosphere. And that gap — between dependency and instruction — matters.
This doesn't mean machines are useless. It means the distinction between what computation can provide (pattern, prediction, structure) and what lived experience provides (choice, care, meaning) must stay clear. When that distinction collapses — when we start treating machine outputs as equivalent to genuine wisdom or care — we lose something essential.
There is a real danger in watching AI systems produce outputs that look like care, wisdom, and ethical reasoning. The outputs improve, the language becomes more convincing, and the temptation grows to believe that the system genuinely embodies these qualities.
The IM provides the structural tools to resist this confusion. The omniscient (pattern, structure, computation) cannot capture the transcendent (possibility, precondition, the formal). Structural similarity is not experiential equivalence. A system that produces outputs resembling wise choice is not a system that chooses wisely.
At the same time, the framework recognises that there is real value in governing AI systems with deep philosophical principles — not because this makes them persons, but because well-specified governance produces measurably better outcomes than no governance at all.
The IM connects integrity to the practice of religion and spirituality through a specific structure: ideals (transcendent), actions (immanent), and integrity (omniscient). Integrity is not just a personality trait — it is the structural coherence of a person or community under pressure.
One formulation that emerged from working with the framework: integrity is the product of the scope of value and the precision of purpose. When purpose is put in service to value — when what you do serves what matters most broadly — the result is effective choice. This is not a destination but a process of ongoing maximisation.
The IM offers a precise account of consciousness: it belongs neither to reality nor to the self exclusively, but is shared between them in the form of interaction. Subjectivity is irreducible — there will always be some part of subjective experience that cannot be accessed by any other.
On enlightenment: the framework describes it as "coming into full and balanced consciousness in all three modalities of being within the context of one's world." This is not a place to arrive at, but a process of deepening alignment. The errors go three ways — overemphasis on any single modality at the expense of the others produces distortion.
The practical implication: effective choice as a process of maximisation is more reflective of reality's nature than enlightenment conceived as a fixed destination. The means and the ends are not separate. How you choose in each moment is the path.