IIT, overlapping minds, collective consciousness, and virtual brains
Some thoughts in response to 'The Feeling of Life Itself' by Christof Koch
I recently finished listening to Christof Koch’s ‘The Feeling of Life Itself – Why Consciousness Is Widespread but Can’t Be Computed’, in which he explains and defends Integrated Information Theory (IIT), a prominent (and controversial) theory of consciousness, and I thought I would share my thoughts on the theory and the book. IIT is seen by some as especially promising because of its mathematical precision and empirical testability, but it has also been accused of being “pseudoscience”, both because some of its more extreme implications are difficult to test and because it seems to imply a form of panpsychism.
(By the way, for those with an Audible subscription, the audiobook is included for free as part of the Plus Catalogue)
Methodology
IIT was created by Giulio Tononi, following an experience-first methodology: it begins with the phenomenology of experience itself, as we know it from the inside, rather than with observations of the brain. Looking at experience in this way, Tononi identified a series of “axioms” concerning the nature of consciousness, and from there derived how a physical system would need to be organised to be conscious. This is, in my opinion, a very good methodology if we want to make progress on explaining conscious experience. We should begin by trying to construct a first-principles theory of what consciousness is and how it works, and then attempt to test that theory empirically. Einstein reportedly once said, “When I have one week to solve a seemingly impossible problem, I spend six days defining the problem. Then, the solution becomes obvious.” Similarly, I think it’s necessary to begin by directing our attention to the nature of our first-person subjective experience before we attempt to find any explanation.
Core Idea: Integrated Information = Experience/Consciousness
The central idea of IIT is that consciousness/experience (the theory treats the two as identical) is a matter of a thing existing to itself, i.e. having causal power over itself. If we have a system where A affects B, and B affects A, then the system of A and B together possesses some degree of consciousness. This is evaluated for larger systems by considering how the system would behave if it were “cut” in two, such that the two sides could no longer affect one another. The more interdependent the two sides are for every possible cut of the system, the higher its “phi” value, representing the degree of complexity of its consciousness. It is this mutual interdependence that integrates the information across the system as a whole, so that each point within the system is informative concerning the rest of the system. And this integrated information is, according to IIT, simply what consciousness/experience is.
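To make the “cut” idea concrete, here is a deliberately crude sketch in Python. This is not the real phi calculus (which works over cause-effect repertoires and is notoriously expensive to compute); it just scores a made-up network of directed influences by its weakest bipartition, using mutual (two-way) influence as a stand-in for integration. All names and weights are invented.

```python
from itertools import combinations

# A toy proxy for integration -- NOT the real IIT phi. A "system" is just
# a set of nodes with directed influence weights (all invented):
influences = {
    ("A", "B"): 1.0,  # A affects B
    ("B", "A"): 1.0,  # B affects A -> together, a self-causal loop
    ("B", "C"): 0.5,
    ("C", "B"): 0.2,
}

def cross_cut(part1, part2, influences):
    """Mutual influence across a cut: the weaker of the two directions."""
    fwd = sum(w for (s, t), w in influences.items() if s in part1 and t in part2)
    bwd = sum(w for (s, t), w in influences.items() if s in part2 and t in part1)
    return min(fwd, bwd)  # integration requires *mutual* interdependence

def toy_phi(nodes, influences):
    """Integration: the interdependence across the system's weakest cut."""
    node_list = sorted(nodes)
    return min(
        cross_cut(set(p1), set(node_list) - set(p1), influences)
        for size in range(1, len(node_list))
        for p1 in combinations(node_list, size)
    )

print(toy_phi({"A", "B", "C"}, influences))  # 0.2: the weakest cut isolates C
```

The point the sketch captures is the one in the paragraph above: a system scores highly only if *every* way of cutting it severs influence running in both directions, so no part can be lopped off without loss.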
I think this idea brings a valuable insight, but it is mistaken on two points.
Objection 1: Self-causation is necessary but not sufficient for consciousness
Firstly, I believe this mutual interdependence and self-causation are necessary but not sufficient for consciousness, understood in a fuller sense, beyond mere experience, for reasons I laid out more fully in my last post. Merely affecting oneself in the way required for a high phi score is not enough to be conscious in the sense we usually mean. As I noted, if that is all that is required, then a microphone/speaker setup experiencing feedback has consciousness! Is that really what we mean by “consciousness”?
Koch acknowledges that IIT attributes consciousness to such simplistic setups, but isn’t put off by it. Instead, he accepts it as just a surprising consequence of the theory.
Perhaps my above objection is unfair, and merely an issue of definitions, since Koch is taking consciousness as synonymous with experience, while I consider it something much more. But that brings me to my second objection:
Objection 2: Self-causation is not necessary for *mere* experience
I believe this kind of self-causation is unnecessary for mere experience. IIT involves the strange idea that as soon as a proper self-causal loop is formed, the “lights come on” and there is something-it-is-like-to-be that thing — it suddenly gains subjective experience, for no clear reason. My view is that all things have experience, which is nothing other than the causal influences they receive from the world. If they causally influence themselves, they will experience themselves; if they do not, then they will still experience the incoming causal influences they do receive. I discussed this at greater length in a previous post.
So on the one hand, I cannot see why self-causation should create subjective experience, and on the other hand, I do not believe it is sufficient for full consciousness.
Still, this mutual interdependence/self-causation/integrated information is a necessary prerequisite for “proper” consciousness, and so it is, I believe, a valuable step for deepening our understanding of consciousness.
The Exclusion Principle
One of the five axioms of IIT is that consciousness is exclusive: it includes what it includes and excludes everything else. Quoting from the IEP,
‘Fifth [axiom], consciousness has the property of exclusion. Every experience has borders. Precisely because consciousness specifies certain things, it excludes others. Consciousness also flows at a particular speed.’
…
‘Fifth [postulate], the exclusivity of the borders of consciousness implies that the state of a conscious system must be definite. In physical terms, the various simultaneous subgroupings of mechanisms in a system have varying cause-effect structures. Of these, only one will have a maximally irreducible cause-effect structure. This is called the maximally irreducible conceptual structure, or MICS. Others will have smaller cause-effect structures, at least when reduced to non-redundant elements. Precisely this is the conscious state.’
In other words, of all the ways of considering and calculating an object’s phi, it is only the way that grants the highest value that wins and gets to enjoy phenomenal experience. Every other way gets excluded. This means that, even though we might calculate a non-zero phi for an entity like a corporation, this will not have any subjective experience because its phi value will be dwarfed by that of the separate minds that make it up (employees are interdependent, but not as interdependent as different conscious regions of a single brain). Similarly, while individual brain cells might have non-zero phi, if they are incorporated into a larger conscious whole, their conscious experience will disappear, being swallowed up by the larger whole.
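To see how stark this winner-takes-all step is, we can extend the toy sketch from earlier (again, my own illustrative proxy, not the real IIT calculus): score every candidate subsystem, then keep only the one with the highest value and declare all the overlapping rivals non-existent.

```python
from itertools import combinations

# Continuing the toy model above (same invented weights and toy_phi).
# The exclusion postulate: of all overlapping candidate systems, only the
# one with the highest phi "exists" as a conscious whole.
def candidates(nodes):
    node_list = sorted(nodes)
    for size in range(2, len(node_list) + 1):
        for subset in combinations(node_list, size):
            yield frozenset(subset)

scores = {c: toy_phi(c, influences) for c in candidates({"A", "B", "C"})}
winner = max(scores, key=scores.get)
print(f"{set(winner)} is conscious with phi={scores[winner]}; every rival is excluded")
# -> {A, B} wins with phi=1.0, so neither {B, C} nor the whole {A, B, C}
#    gets any experience, however close their scores come to the winner's.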
I think this is a bad postulate. The axiom represents the trivial fact that each conscious experience “is what it is”. Our first-hand experience does not tell us that no overlapping consciousnesses can co-exist upon a shared or partially shared substrate. How could it? And the idea that it is simply the highest value that gets to be experienced seems arbitrary.
Even more troubling, the exclusion principle either has physical implications, or consciousness is back to being epiphenomenal (i.e. it would have no causal effects on physical reality). IIT, as Koch describes it, sees consciousness as being deeply causal in nature and as corresponding to the cause-effect structure of the whole (the ‘MICS’ above, but often referred to as ‘the whole’ in the book). But if experience itself is causal, then the exclusion principle denying experience to certain ways of viewing the system must have causal, empirically measurable implications.
In itself, that’s not a problem, and would in fact offer a powerful way to test the theory. Unfortunately, it would also make phi self-referential and so impossible to calculate (and possibly incoherent). In order to preserve the measure and the exclusion principle, it seems we are forced to accept epiphenomenalism (which is a horrific idea for reasons I have not yet explained, but I mean to in a later post).
But is it so crazy to think there might be an experience associated with overlapping entities at different levels? If that is what the theory suggests, and it involves no contradiction, and makes no difference to empirical predictions, why reject it? I’ll offer a suggestion for how we might think about overlapping consciousnesses below, in the section on “collective consciousnesses”.
Integrated Information as a Solution to the Combination Problem
Returning to my own panpsychist theory, in which all things possess experience in the form of their incoming causal influences, IIT might be seen as pointing to a solution to the ‘combination problem’ (the problem for panpsychists of explaining how many micro-scale minds combine into, for example, a single brain-scale mind). The solution is that by their mutual interdependence/integrated information/communication, the various micro minds become harmonised and unified, sharing and augmenting each other’s knowledge, abilities, and goals. I discussed this in ‘The Mind is a Corporation of Neurons’, considering how our minds might be unified in much the same way as human organisations and communities. Integrated Information illuminates this further, showing how an interdependent whole can be more deeply unified, such that the various parts reflect and contain the whole.
From this point of view, the question of how minds combine is just a question of the degree of integration, interdependence, and communication. The exclusion principle is not required: the more integration, mutual interdependence, and communication, the more the constituent minds will be one; where there is less integration, less interdependence, and a breakdown of communication, the less unified the constituent minds will be. The unity of a mind need not be all or nothing, just as we have all experienced communities with different levels of unity/disunity, synergy/dysfunction.
Collective Consciousness
This allows us to take seriously common ideas such as ‘collective consciousness’ and ‘mob mentality’. Just as neurons with high interdependence and communication (high phi) are united as one greater mind, so individuals, too, can be unified to think, feel, and act as one. We see this especially in mobs, certain religious phenomena, and military/political rallies. There is also some degree of surrendering one’s autonomy to the collective, accompanied by feelings of “ecstasy”, “letting go”, and “losing yourself”.

That is not to say that individuals truly lose their own minds and form a single super mind. The consciousness/experience still resides in all of the constituent minds, but they are unified (integrated), to a greater or lesser degree, to feel, think, and act as one. As Aristotle noted,
What is a friend? A single soul dwelling in two bodies.
One interesting point in the book is that IIT predicts a kind of “bare consciousness”: a highly integrated, and therefore highly conscious, system with no activity, and therefore no content of experience. Koch links this with the reported experiences of advanced meditators from various traditions, who speak of experiencing “bare awareness”. I bring this up here because I have noticed what may be a similar phenomenon, where intentionally being silent as a group can produce a remarkable experience both of unity with the group and of the silence itself. It has been widely noted that meditating in a group setting is often easier and more beneficial, and I wonder if this may be part of the reason why.
Could a computer be conscious? Could a virtual brain?
Koch is firm that the kinds of computers we have today are not capable of consciousness, since they are designed as feed-forward architectures, with information moving from one component to the next with no feedback, meaning no potential for self-causal loops. The phi of such systems will be zero.
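In the toy proxy from earlier this comes out directly (a sketch of the point, not Koch’s actual argument): in a strictly feed-forward pipeline, one direction across every cut carries nothing, so the integration score bottoms out at zero.

```python
# Reusing toy_phi from the first sketch: a strictly feed-forward pipeline
# (invented labels). Influence only ever flows forward, so every cut has
# zero backward influence, and the minimum over cuts is zero.
pipeline = {
    ("input", "hidden"): 1.0,
    ("hidden", "output"): 1.0,
}
print(toy_phi({"input", "hidden", "output"}, pipeline))  # 0
```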
Interestingly, he says that it is possible for a non-conscious, feed-forward system to perfectly imitate/simulate the information-processing behaviour of a conscious system with feedback and a high phi, taking the same inputs and producing the same outputs. The feed-forward system would require many more components, but it is doable. Such a system would, according to IIT, behave identically to the conscious original, but would itself lack any consciousness.
It may even be possible to simulate the full workings of a conscious human brain on a feed-forward computer, but because of its feed-forward nature, it would still have 0 phi and hence not be conscious. At least, so Koch claims.
I wonder if we might legitimately calculate phi by looking at the simulated brain’s components, rather than the physical computer components simulating them. IIT does require choosing a particular “level of granularity” when calculating phi, deciding what gets grouped together as a single component, so why could we not take the simulated components as that level of granularity?
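As a toy illustration of that thought (my own construction, not anything from the book): take a feed-forward, time-unrolled “simulation” whose micro graph contains no loops, then coarse-grain it by grouping together the hardware units that play the role of the same simulated neuron. At the micro granularity the toy phi is zero; at the simulated granularity the feedback loop, and a non-zero score, reappear.

```python
# Reusing toy_phi from the first sketch. A feed-forward, time-unrolled
# micro graph (invented labels): simulated neuron 1 at step t0 drives
# neuron 2 at step t1, and vice versa. No cycles at the hardware level.
micro = {
    ("n1@t0", "n2@t1"): 1.0,
    ("n2@t0", "n1@t1"): 1.0,
}

# Hypothetical grouping: collapse each simulated neuron's time-slices
# into a single macro component.
grouping = {"n1@t0": "N1", "n1@t1": "N1", "n2@t0": "N2", "n2@t1": "N2"}

def coarse_grain(influences, grouping):
    """Sum micro influences between groups; within-group edges vanish."""
    macro = {}
    for (s, t), w in influences.items():
        S, T = grouping[s], grouping[t]
        if S != T:
            macro[(S, T)] = macro.get((S, T), 0.0) + w
    return macro

print(toy_phi({"n1@t0", "n1@t1", "n2@t0", "n2@t1"}, micro))    # 0: feed-forward
print(toy_phi({"N1", "N2"}, coarse_grain(micro, grouping)))    # 1.0: the loop is back
```

Whether IIT permits this choice of granularity for a simulation is exactly the question; the sketch only shows that the answer changes everything.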
Metaphysical Considerations
This brings us to deeper metaphysical waters. What can count as a “real” component in a conscious whole? And what does it take for a component to be considered the same component across time? Does it need to be made of the same physical matter, or might each component “pass the baton” onto a copy to play its role at the next step, like an understudy replacing the lead mid-show?
My own intuition is that the matter should not matter. What is required, I think, is not that the material components should be integrated, but that the streams of information/causality should be. I imagine the streams of information like strands of rope extending through time, being interwoven together into a unified whole.
I like this metaphor a lot, especially because we can see how, when things are interwoven, they function together as a single entity. Where there were previously many individual hairs, there is now a single plait. Where there were many leaves, there is now a single basket.
But I should admit that this idea of interwoven streams of information is at this point just an intuition, or even just a vision, and lacks precision and rigour. What makes a “stream of information”? What makes it the same information stream across time? Metaphysical work remains to be done…
What do you think?
Have you read Koch’s book? Is IIT a promising approach? Are my ideas on the right track or going off the rails? Is there something I’ve missed in my understanding of IIT?
Please let me know in the comments!
I don't think science as we currently understand it is capable of grasping consciousness in the full sense of that word, but IIT is the best out there. But it's still science as usual and a dead end insofar as it supposes neuronal activity of the brain is all there is to it. That's just mistaking the part for the whole. It's a rather bizarre presumption to make given IIT's starting point.
I want to read two of your linked posts before I comment on panpsychism or experience. About IIT, I think it may be a necessary component but doubt it is sufficient.
Based on the one example we know is conscious — brains — I think a *physical* complex network is obviously necessary and further that synaptic connections (rather than mere interconnections) are key. (I once read a neurophysicist describe synapses as the most complex biological engine we know of.) I've long been skeptical that a software simulation of such a network would work, though LLMs do give me some pause. (If interested, I can point you to a number of posts I wrote here last year discussing why. They're in the "My Best Guess" newsletter starting last August.)
WRT group consciousness, indeed, and you might find this recent post about "bio-behavioural synchrony" interesting:
https://neuroscienceandpsy.substack.com/p/the-sandman-effect
The post is more about one-on-one synchronization, but in my comment there I asked about sporting events and political rallies as examples of many-to-many and one-to-many synchronization. It's a fascinating topic, and for me, explains a lot of life experiences.