Consciousness, Part I

Consciousness has traditionally been a philosophical question, largely because so little was known about the specific details of brain function and how they correlate with everyday experience. With models and techniques from experimental neurobiology and paradigms from the complexity and information sciences, neuroscientists have begun to produce empirical results that should inform any philosophical speculation.

While there is no universal definition of consciousness, there are both theoretical and clinical definitions. Once the semantics are clarified, the real curiosity for the general public tends to be the theoretical question of how organized clumps of matter (us) can feel and experience things; that is, how subjective experience arises (the Hard Problem). Not only is this of interest to the general public, it would also help ethics boards for animal experiments (IACUC in the US) decide how experimental protocols should be handled. In 2011, IACUC protocol for tadpoles and frogs required that, under anesthesia, the forebrain be removed before further surgery and experimentation to prevent unintentional suffering of the animal. If there were a quantitative test for consciousness, ethics boards could more easily determine which animals can experience suffering and under what conditions.

Clinically, members of the medical field are interested in determining the state of wakefulness of patients for various tests and treatments, as well as the level of consciousness of coma patients; anesthesia can sometimes fail because of variation among patients. Here, we will focus on the theoretical aspect (which can still provide a clinically meaningful quantification of consciousness, as discussed below). Aside from subjective experience, most other aspects of consciousness have readily available analogs: a well-programmed robot can be made to “adapt” and “learn” and score high on intelligence tests. One may even call some robots autonomous, but few would dare to conjecture that robots (or any non-living systems) are having a subjective experience.

The following theories have emerged from the scientific literature (peer-reviewed articles are linked in the titles). This list is by no means complete; it reflects only my own familiarity with the literature.

Towards a cognitive neuroscience of consciousness: basic evidence and a workspace framework

Three fundamental empirical findings on consciousness:

  1. Cognitive processing is possible without consciousness
  2. Attention is a prerequisite of consciousness
  3. Consciousness is required for specific mental operations

Consciousness as Integrated Information: a Provisional Manifesto

  1. …the quantity of consciousness corresponds to the amount of integrated information generated by a complex of elements
  2. …the quality of experience is specified by the set of informational relationships generated within that complex
  3. Supporting Evidence:
    1. Breakdown of cortical effective connectivity during sleep
    2. A Theoretically Based Index of Consciousness Independent of Sensory Processing and Behavior
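Claim 1 above can be illustrated with a toy calculation. The sketch below is my own simplification, not Tononi's actual Φ (which is defined over cause-effect repertoires and a search over system partitions): it uses the mutual information between the two halves of a two-unit system as a crude proxy for "integration," showing that correlated units carry information as a whole that their parts taken independently do not.

```python
from itertools import product
from math import log2

# Toy joint distributions over two binary units (A, B).
# "Integrated": the units' states are correlated.
# "Segregated": the units are statistically independent.
integrated = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}
segregated = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

def mutual_information(joint):
    """I(A;B) in bits -- a crude stand-in for integration across the
    only bipartition of a two-unit system. (IIT's real Phi is defined
    over cause-effect repertoires, not this shortcut.)"""
    pa = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
    pb = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
    return sum(p * log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

print(mutual_information(integrated))   # correlated units: > 0 bits
print(mutual_information(segregated))   # independent units: 0 bits
```

In this toy picture, the integrated system generates information "above and beyond" its parts, while the segregated one does not; IIT's claim is that the quantity of consciousness tracks a (much more elaborate) version of this difference.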

The brainweb: Phase synchronization and large-scale integration

The emergence of a unified cognitive moment relies on the coordination of scattered mosaics of functionally specialized brain regions

The free-energy principle: a unified brain theory?

…if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.
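The link between free energy and surprise in the quoted passage can be written out explicitly. In the standard variational notation (observations o, hidden states s, generative model p, approximate posterior q), free energy decomposes into surprise plus a non-negative divergence:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{-\ln p(o)}_{\text{surprise}}
  \; + \; \underbrace{D_{\mathrm{KL}}\!\left[q(s)\,\big\|\,p(s \mid o)\right]}_{\geq\, 0}
```

Because the KL term is non-negative, minimizing F minimizes an upper bound on surprise; and because that term vanishes when q matches the true posterior, the same minimization performs approximate Bayesian inference. This is why "surprise (prediction error, expected cost)" keeps emerging as the single optimized quantity across the brain theories Friston surveys.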