
Consciousness and the mechanism of thinking in general have remained opaque to science overall, and specifically to researchers in the area. Here I attempt to lay out the fundamental underpinnings that support consciousness, as well as other related mental activity, and then place consciousness and related function into the context so established. I make a concerted effort not to lapse into jargon.

About the Title

As it turns out, “Theory of Mind” has some prior associations, so please note the title is intended only as a description of the content here, not a claim of connection to that earlier work.

I will present a description of how the brain operates. Not a metaphor — metaphors tell you what things are like, not what they are — but my conclusion as to how the brain, and therefore the mind, actually works.

I’m working backwards on this, as are we all; but after almost forty years of examining the problem, I have come up with a model that satisfies every question I have about thought and consciousness in what I can only describe as a manner satisfactory to myself. That is, I think, in itself notable, if for no other reason than that everything I have previously come up with, or read about, has utterly failed to do so. So, dear reader, please come along as I try to explain myself. Literally.

As an overview, what I’m going to describe is the idea that the brain is a collection of nearly independent networks, networks where all the fundamental operating mechanisms are implemented by the mundane cells via leverage of chemistry, electricity, and topology. I am going to describe to you why I think these networks are mostly independent; how they arise and how they become independent, how they are organized for two distinctly different modes of operation, and what that independence means to, and for, the other networks in your brain. I will use what I hope are familiar experiences for most readers. Here we go.

Networks

I’m going to use the term “network” quite a bit here; it is central to the concept, so let me expand upon exactly how I mean the term. Generally, network means some things connected to each other. One connection, many connections — connected. When I talk about a network in the brain, I’m specifically referring to the chemical and electrical communication or interaction between cells; and I’m incorporating the idea that such a network is topologically spread out over three dimensions, not necessarily all in one closely associated lump of cells, although that may occur as well.

Neurology knows of networks of cells that take the signals from your eyes as they respond to light and convert them into the electrical and chemical states needed to stimulate your visual cortex, another network of cells specialized to resolve the color, contrast, and stereo spatial information inherent in the signals flowing from both eyes. In a process internal to the network, the visual cortex creates a signal-based representation of that input, one that can be associated with previous visual information stored in similar form in your memory, and provides many types of outputs that indicate matching, not matching, moving, and so forth, which in turn are passed along to other networks that do other things.

So when I say “network” in the exposition that follows, that’s what I’m talking about. A number of cells that have connectivity, in and out, that serve to create a useful output upon being presented with relevant input by virtue of the evaluation of that input by the network.

Brain operations (“brainops,” as I’ll call them from here on) are:

  • Electrical
  • Chemical
  • Topological

Why not Quantum?

Quantum effects only manifest at a very small scale as far as we know. Scales much smaller than the area of a neuron. Such effects tend not to have variable large scale consequences, although we’ve been able to leverage them to do that in some cases by making particular structures that are very tiny themselves, and then designing larger structures that are intentionally sensitive to very small changes in those smaller elements, basically balanced on a knife-edge so the quantum effect can push the larger scale structure one way or the other. When we do this, the results tend to be novel — and obvious — because large scale structure can no longer explain its own actions. However, we have not observed this in the behavior of neurons.

There are several very good bits of evidence that point away from quantum modulation as a component of brainops.

Perhaps most importantly, we can simulate neural activity successfully: we can make a simulated neuron’s output do exactly what an organic neuron does given comparable stimulation at its inputs. Further, we can simulate groups of neurons, and the groups do exactly what their organic counterparts do. In particular, we can have these groups of simulated neurons do very brain-like things, such as identify items in complex images, recognize parts of speech, and maintain balance. These are all very strong arguments for the position that we have a complete understanding of the relevant operations of the neuron without having to account for any quantum (or other) as-yet-undiscovered effect modulating network activity within the brain. Relatedly, you might enjoy looking over OpenWorm, a fascinating project.
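To make “simulate neural activity” concrete, here is a minimal sketch of one standard simplified model, the leaky integrate-and-fire neuron. This is my choice of toy model for illustration, with hypothetical parameter values; serious simulations often use richer models such as Hodgkin–Huxley.

```python
# A minimal leaky integrate-and-fire neuron: a standard simplified model,
# not a biophysically complete one. All parameter values are illustrative.
def simulate_lif(input_current, steps=100, dt=1.0,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Return the list of time steps at which the neuron fires."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Membrane potential leaks toward rest while integrating input.
        v += dt * (-(v - v_rest) / tau + input_current(t))
        if v >= v_thresh:        # threshold crossed: fire, then reset
            spikes.append(t)
            v = v_reset
    return spikes

# A constant drive produces regular, repeatable spiking: the same input
# always yields the same output, with no exotic physics required.
spikes = simulate_lif(lambda t: 0.2)
```

The point is the determinism: given comparable stimulation, the simulated neuron’s output is entirely reproducible, which is the behavior the paragraph above leans on.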

I suggest that the brain’s quantum components act as fundamental low-level elements that only manifest stably as invariant effects at higher levels. This would be at least partially due to being random in character, occurring in very large numbers, and statistically averaging out to steady overall conditions.

Time and Timing?

Certainly temporal order and frequency are deeply part of network operation and communications, but timing is managed throughout the brain by the same three fundamental elements: electricity, chemistry, and topology. I’m looking at the issues one level up from there: what the networks themselves do when working as a network, why they are that way, and how that may work.

In no way am I minimizing the role of any time-related factor. It is simply that just as I have very little in particular to say about axons and dendrites and so forth, time-related factors are another characteristic of communications between sensory inputs, networks, more networks, and outputs to the body that are managed by electricity, chemistry, and topology.

More on timewise NN modeling

I suggested adding timewise information to NNs in my remarks on a Google+ AI group post. Later, Numenta offered a paper on neural modeling incorporating timewise information.

These networks are known to operate using electrical potentials and chemical triggers; one cell is connected to another, and via these connections, when the signal intensity or sequencing crosses a level set by the network’s timing, chemical states, and ultimate topology, the sending cell can provide one or more triggers to the cell it is connected to. Other connections from other points do the same, and when all of those triggers add up to a sufficient stimulus to the receiving cell, it in turn fires and signals other cells. There’s a specific functionality that arises consequent to all of this: networks of brain cells can take input signals, process them, and respond with output signals that vary according to both the inputs and the conditions and connections within the network itself.

The preceding paragraph is basic science; in no way is it speculation — and it is the essential basis for what follows.
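The summation-and-threshold behavior just described can be sketched in a few lines. This toy network is my own illustration, with hypothetical weights and thresholds; the point is only that topology and connection strengths together determine what the network computes.

```python
# Illustration of summation-and-threshold cells: each cell fires only when
# the triggers arriving on its connections add up past its threshold, and
# its firing in turn feeds the cells it connects to.
def fires(inputs, weights, threshold):
    """A cell fires when its weighted input sum reaches its threshold."""
    return sum(i * w for i, w in zip(inputs, weights)) >= threshold

# Two sensory inputs feed two middle cells, which feed one output cell.
# The weights and thresholds here are invented for the example.
def tiny_network(sensory):
    a = fires(sensory, weights=[1.0, 1.0], threshold=1.5)   # needs both inputs
    b = fires(sensory, weights=[1.0, 1.0], threshold=0.5)   # needs either input
    out = fires([a, b], weights=[-2.0, 1.0], threshold=0.5)
    return out  # fires only when exactly one sensory input is active

print([tiny_network([x, y]) for x in (0, 1) for y in (0, 1)])
# -> [False, True, True, False]
```

Nothing in these cells is clever on its own; the exclusive-or behavior emerges purely from who connects to whom and how strongly.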

Just as an example, scientists have taken a petri dish containing about 25,000 rodent brain cells, and, by training the cells through feedback to the network and using outputs of the cells to control flight systems, created a cellular network that can fly an F-22 flight simulator. This is precisely the type of network specialization, learning, input, and problem solving that I am thinking of when I say that one simple word in the context of the brain: network. The implications include the chemical, electrical, and topological (spatial) connectivity of the network internally, and where its inputs come from and its outputs go, “outside” the network that actually solves the problem.
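A far simpler artificial analogue of that kind of training-by-feedback (a sketch of mine, not the actual cell-culture setup) is the classic perceptron rule: a single threshold unit whose connection weights are nudged every time its output disagrees with the feedback signal.

```python
# Training a single threshold unit purely by feedback (perceptron rule).
# Everything here is an illustrative toy, not a model of the experiment.
def train(samples, lr=0.1, epochs=20):
    w0 = w1 = bias = 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w0 * x0 + w1 * x1 + bias >= 0 else 0
            err = target - out        # the feedback: how wrong were we?
            w0 += lr * err * x0       # strengthen or weaken each connection
            w1 += lr * err * x1
            bias += lr * err
    return w0, w1, bias

# Teach it a "both inputs present" rule (logical AND) from feedback alone.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train(samples)

def predict(x0, x1):
    return 1 if w0 * x0 + w1 * x1 + bias >= 0 else 0
```

No rule for AND is ever written down; repetition plus feedback leaves behind a set of connection strengths that simply solves the problem, which is the essence of “practice makes networks.”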

Brainops — networks as instances of specific, dedicated problem solving mechanisms

When you first learn to ride a bicycle, you focus on the task as much as you can. Even this is rarely, if ever, enough, the first few times you try. The shaking of the bicycle’s front wheel and the right to left shimmy of the bike’s body are ample testament to your brain trying to apply counter-forces to what it sees as impending failure — a crash. Not to mention the actual crashes themselves. There’s no network built to handle this initially; you’re trying to think your way through it at a level far too close to analysis, and too far away from resolved understanding. In this mode, you are slow. Far too slow. Hence the feedback problems, the crashes, the skinned knees.

But as you repeat the various components of the experience, your brain begins to establish a specialized network dedicated to solving that problem — and pretty much only that problem. This much input from the ear canal, that much correction to the body position. No thinking. Just established relationships and reactions to those. Later, the network becomes even more sophisticated. With the distraction of impending failure removed, it picks up on the fact that as you pedal, you are moving your weight around anyway, and that this motion only has to be slightly modified to create a perpetual state of moving between controlled extremes that are not, in fact, all that extreme and can be put to use in a balancing rhythm. Now you really know how to ride, to steer. And… you don’t have to think about it.

Let me repeat that; it’s important. You don’t have to think about it. It’s no longer a conscious act; you can (and probably have done, if you’re like most people) ride down the street while listening to, and carefully considering, a story from someone riding near you, or singing along with a song on your earbuds. Well, since we generally think of ourselves — represent our consciousness — as the thinking we’re doing: who, or what, is controlling that bike?

The answer appears to me to be that very network you trained to do the job. It’s trained now, doesn’t need your input in other than the most general fashion (where do you want to go?), and it can, and does, even obviously does, do so without you prodding it. You’ve built a network of brain cells that solves that particular problem. Solves it well, too. Practice makes perfect? No, practice makes networks! And networks do very explicit things for you with no more than what are, compared to the networks themselves, extremely abstract and sparse inputs.

This applies to learning an instrument; throwing a baseball; enormous portions of martial arts; pretty much anything you can imagine that requires more than a moment’s cognition. Time pressure inevitably does the job. You can’t play an instrument well by thinking about it. You can’t throw a baseball well by solving the equivalent batch of simultaneous equations, or guessing. These things must be solved in such short periods of time that you must develop networks that can present you with the physical “do this” solutions directly, without the need for consciousness to guide the process, as this requires the consumption of time in trying to comprehend the problem in high level terms — like the metaphor provided by a mathematical model, or a language-based sequence of steps, or instructions. And — obviously — we do this thing, generate solutions free of high level metaphor or lists of steps. Or something so like it as to make little if any difference in the describing of it all.

You can probably walk, chew gum, avoid potholes, stay out of traffic, listen to music, all at once, all without engaging your consciousness. Doing all those things, each driven by its already well trained network, you can think about something entirely different. This is possible because those networks proliferate until you have, all on your own, “canned” the patterns and responses to stimuli that are required to solve those problems.

You probably can’t explain the majority of your networks’ operations or principles, either. When asked, “how do you stay balanced on that bike?”, what would you tell someone? You don’t know. That’s because that network doesn’t expose that information — it doesn’t need to. You’d have to actually develop new access to that network in order to be able to explain it, and in the process, you might even disrupt its function. Try explaining how you’re balancing on a bike while you’re doing it, and let hilarity ensue when you see what happens to your moment-to-moment balance. Because it is funny, as long as you don’t actually crash.

And if something comes along that you have not managed to integrate into that network — say, riding over a patch of oil, slipping out past what would normally, given traction, be the sideways limit of your cycloid balancing act, but which turns out to be unrecoverable because the traction that is the root fulcrum of pulling your weight back the other way is suddenly and surprisingly missing in action — your bike-riding network will emit an attention-getting signal that in essence screams “SOLVE THIS! I CAN’T! YER GONNA CRASH!” Or perhaps it’ll be an emergency signal from your balance-monitoring networks, with the same result. Instantly, all else you were thinking about is void, and ALL you are thinking about is that you are falling. And because, just as when you were initially learning, your thinking isn’t fast enough to solve the problem the first few times, you are probably going to crash. It will suck, you may bleed, you may feel really, really stupid. But your bike-riding network is already working on a solution. These issues relate to that specific network, and so it’s the one that gets the input; the actions you took that helped, or didn’t, are being taken into account, and that network will now evolve a bit more. A couple more slips, and it’s foot out, tension the knee, lean the bike down, step off. No sweat.

I’ll change gears now. No pun intended. Ok, untrue. Anyway:

For example, when I play music (I’m a guitarist) and I’m relaxed and comfortable, I am not thinking about the notes I am playing, my hand motions, or really anything along those lines. What I am doing is mostly bathing in the abstract feelings that the sound of the music and the feel of the neck and the vibration of my skin bring to me, and letting those things guide, again in a very abstract way, what happens next.

Which, again, I couldn’t possibly tell you in real time, or perhaps even at all. The network(s) engaged in the process are solving those problems. Not my consciousness, the part of me that takes into account an abstract integration of the whole, and (slowly, oh so slowly) can drag things up out of my memory to compare, to analyze, to draw metaphors and analogies and try and understand something if, and it’s a big if, I am so inclined.

In fact, if I start trying to explain what I’m playing, even just to myself in my head, my playing will stutter and fail to the great bemusement of anyone listening. My music-playing networks operate just fine until I try to get at them directly. If someone asks me how a particular song I “know” (have integrated into a related network) is played, I generally can’t tell them. I don’t know which chords are in it, I don’t know what scales I employ, I don’t know when or where the accents and decorations go… because if that stuff were at the top level, that is, in the network that “is” my consciousness, I couldn’t play it at all. I have to play the tune to get the details to come out, and if I try to explain them then, I fumble the playing.

Learning a song, I think, establishes a dedicated network that is at a higher level than “how to play”; it seems to me to be nothing more than a very sparse input to the network(s) that do know how to play. It still takes time to establish; time and repetition, practice makes perfect, or at least not execrable, and so on. It’s information of the form of this chord, these notes, this timing, while the lower level network knows how to take those abstracts and make them concrete. I certainly don’t learn how to play a particular chord each time I learn a song; no, that’s all already there, and quite solidly, too.

So, when it comes to our daily pursuits, what I’m trying to describe is a veritable plethora of nearly independent networks dedicated to solving the various challenges we present ourselves with, or are presented to us outside of our own choices. This accounts not for a little, but, I think, most of human and animal activity. Learn it, do it, not thinking about it. If you’re thinking at all in the sense of working analytically, it’s probably not about what you’re doing anyway. I can’t tell you how many times, as an engineer, I’ve solved a problem I was having trouble with while I was playing guitar. And probably playing pretty well, though I’m usually alone at such times. If I was playing poorly, that would distract me, you see, so the fact that I can work the technical problem tells me my playing was at least somewhat euphonic.

These ideas also account for other brain activities we generally don’t have direct access to. Regulating your heartbeat (at the brain level… there are other, lower level systems that regulate as well in the case of the heart, particularly if your brain stops doing its part of the job… it’s a biological backup system, which is fascinating), your breathing, your hormone balances, the general tension level of our bodies, balance, how to move both eyes in such a way as to obtain the desired field of view in stereo, maintaining proprioceptive awareness of our surroundings so we keep our body parts from running into things… and so on for quite a long list.

They account for those things we learn, they account for the early establishment and later sophistication of a baby’s ability to initially see facial components, but later to only accept them as a “face” if they’re in a particular topological configuration, they account for learning to walk and then later on using that ability without thinking about it, only directing it to get you “over there” and similar very high level guidances.

This can even account for instinct — all that is required there is that the network be established by growth instead of stimulus. But that also requires evolutionary pressure, and with a thinking being, our derived solutions, I think, are superior to our built-in ones.

For instance, you have some instincts that know to pull your hand back from a hot stove, but a trained, peak level martial artist has such superior threat avoidance skills as to make your yanking back from a hot surface reveal itself as wholly unsophisticated and clumsy. Because, quite simply, practice, practice, practice. You don’t practice with the stove, you see. So your responses are simplistic — but sufficient.

Brainops. I think it is how we get through the majority of the day. Those networks, solving problems they already know how to solve, quietly and without bothering unrelated networks hardly at all. And… this leaves your top level networks, your consciousness, without much interference.

Thinking

Now, what is thinking?

Self-image

I submit to you that most of the brain’s learned networks are accessible on demand, though only as to the general inputs they use as guidance. You have, elsewhere than in the network itself, a general idea of the competence of any particular problem-solving network you have developed. Are you a good bike rider? A good guitarist? Do you whistle well? Can you defend yourself? You know the answers to these things because you — that is, a network well away from the networks themselves (and I’m going to characterize it as above, or topologically superior, shorthand “above”) — have established more networks that have been doing nothing but going “that works… that doesn’t work… that always fails…” and so on.

When I ask you if you are good at something, that’s where the answer comes from: not from the network that does it — because that’s not a language/communications-oriented network — but from another that looks at network competence in general, and is able to express this in a form you can access. This is true for many other things I ask you about yourself.

Sometimes, though, there’s no supervisor. Is your heart beating ok? (blank look) Because… no supervisor. That’s an essentially independent network, of which you also have many. You can think about it, but you can’t access it worth a damn. Not without practice, practice, practice (think Yogi), and that’s practice you — most of you — don’t have.

But up top, you have these general inputs from the evaluation networks of I’m good with this, bad with that, am not concerned with that, and so on. That’s your self-image, right there. A distinct part of consciousness coming from a network different from the others only in what it deals with.

Intuition

Over life, you establish many networks. Some are bike riders, some are spitters, some are lookers, some are stone skippers, and some are walkers. Some let you do math, some let you anticipate inertial effects, some just manage your swallowing and digestion and so on.

Most of the learned networks are readily accessible; you know what they are, and when it is relevant, the associative ties to them bring them front and center pretty quickly. Patterns are a huge part of these networks. And when a network’s pattern has a good match with the problem half of something that your mind is building a model or response to in another network, an associative link can be established very quickly, and you have an “aha!”

You just had a topological metaphor moment — recall, and association, are based upon similarity from top to bottom. You get to one memory by hooking into another across something(s) they have in common: That summer day comes back when you smell the daisies, and so on.
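That kind of similarity-based recall can be caricatured in a few lines of code. The memories and features below are hypothetical, invented purely for illustration; the point is only that a cue retrieves whichever stored memory it overlaps most.

```python
# Toy similarity-based recall: memories are sets of features, and a new
# experience retrieves the stored memory sharing the most features with it.
# All memories and features here are invented for the example.
memories = {
    "that summer day":   {"daisies", "sun", "grass", "picnic"},
    "grandma's kitchen": {"bread", "cinnamon", "warmth"},
    "first concert":     {"crowd", "guitar", "noise", "lights"},
}

def recall(cue_features):
    """Return the memory with the largest feature overlap with the cue."""
    return max(memories, key=lambda m: len(memories[m] & cue_features))

# Smelling daisies on a sunny walk pulls up the summer day.
print(recall({"daisies", "sun", "sidewalk"}))  # -> that summer day
```

An “aha!” moment, on this picture, is just such an overlap happening to land on a network that already solves the problem at hand.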

The smell of nail polish puts me in a very romantic mood, and I probably don’t need to explain that even if it isn’t anything that would affect you the same way. The association is very strong. When a hand that has fresh nail polish on it nears my nose, I experience light intuition — guidance, if you will — that someone is interested, that romance is up in potential. And as it turns out, that’s usually quite right.

Induction

Networks can be considered reserves of method. Many different types, utilizing many different inputs depending on what it is they are required to do. They have outputs, often right to our limb-controlling networks, and from there to the limbs. When you are taught a mathematical procedure, you, eventually and perhaps with much pain, establish a network that takes symbols — numbers or more abstract symbols — applies a method to them, and hands back the result. After much practice, a new network is established that is good at that. And it’s as nicely generalized as the one that allows you to ride a bike on any even slightly similar surface.

If the problem you assemble in your consciousness — no matter what it is — strongly resembles a problem that’s already been generally solved, then associative memory of the pattern used to call up the solved network comes into play, and you have the tool you need, or one close to it, right to hand.

Sometimes it’s just memory. For instance, I can tell you the exact powers of two without calculation up to about a million. I can also tell you the first 14 or so digits of pi. Not because I calculate them or use induction to figure them out — I just know them. It’s rote memory. And good grief, is it ever useful. But it’s not induction.

If, on the other hand, you ask me what, say, XXX times YYY is, I will have to access a network that can apply the methods needed to solve that problem. Those methods come in small steps, so I solve the problem that way. Answers to such things are not memorized, and I’m not, unfortunately for me, one of those people who has been able to establish precision networks that solve problems in more complex steps. I muddle through my math, even though I use a lot of it — because it works, because I repeat the actions a lot, and that’s my form of practice: both reinforcement that these are the networks to use, and no excursions into some other way (which might work a lot better, judging by my friends who are math geeks.) The process runs in my topmost level network, which is to say it has my full attention, and other things fade away as I plod through the steps required to get the actual answer.
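Those “small steps” can be made explicit. This is just a sketch of ordinary digit-by-digit long multiplication, the sort of stepwise method such a network applies; the example numbers are arbitrary.

```python
# "Small steps" multiplication: break each factor into digits, multiply
# digit by digit, and accumulate the partial products -- the plodding,
# attention-consuming method, as opposed to rote recall of the answer.
def multiply_in_steps(a, b):
    total = 0
    for i, da in enumerate(reversed(str(a))):      # each digit of a
        for j, db in enumerate(reversed(str(b))):  # times each digit of b
            partial = int(da) * int(db) * 10 ** (i + j)
            total += partial                       # one small step at a time
    return total

print(multiply_in_steps(347, 86))  # -> 29842
```

Each inner step is trivial and already “canned”; it’s the serial chaining of many such steps that occupies the top-level network and crowds everything else out.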

I submit to you that this is the same for anything you are thinking about that does not have a canned network solution. You hold the concept as a chunk of memory that is associated with every other memory that your mind is topologically organized to map it to. From these, you collect meaning, or lack thereof. You do this for all parts of the problem, and the process is both familiar — you do it for everything new — and may even be comfortable or enjoyable as it generally results in a more effective you, able to do more because you have more solved networks available after every success. Or it may irritate you if this isn’t really what you would prefer to be thinking about right then. I can definitely think of some personal examples that apply, I’m sure you can as well.

I think that the network that feels like “you” is the network, or perhaps networks, at the highest (as before, topologically speaking, superior) level: the one with the most abstract links downward, retrieval-wise, and therefore the least distraction of detail from what you’re thinking of, while the associative mechanisms are testing those concepts for similarity to already-functional networks.

To think, then, or that process we generally call induction, is to consider, to turn over and examine, to compare, to mull. It’s a network manipulating established memory regions — the ideas at hand — in such a way as to try and fit them to established networks.

Morals and Behavioral Choices

I think that as we learn, we establish a series of memories we accept as axioms of what is good, and what is bad. For some, these layer up with ever more sophisticated nuance, for others, not so much. But for most of us, it is clear that what we perceive as benefiting us is “good”, and what does not, “bad.” It’s not a huge step (although that almost ignores the question of why some people never make it, I suppose) to the idea that if I benefit you, you will likely, or perhaps certainly in some cases, benefit me back. Sooner, later, whatever… the point is, you probably will. Or, at least, you’ll be more inclined to consider me when your decisions are made, as the benefits I extend to you are best if continued, as with any source of benefits. From this we can derive obvious, simplistic, yet very useful philosophical basics.

Our biology contributes actively to this, particularly in matters of strong sensory input. But what I like, and what you like, can differ in an almost polar fashion. Do you like liver? I can’t stand to be in the same circulated air where it is being cooked. I run to the door and I am gone. No excuses (talking requires breathing that stuff), no delay, just up and gone. Worst smell I can imagine. But I enjoy the smell of a skunk up to a certain point (beyond which, I also want to run because it burns my nose.) But when I’m sitting there and thinking how wonderful and interesting it is that a shambling beast like a skunk has such a profoundly off-putting odor for most, everyone else has already done what I do when I catch the scent of liver: run away.

As we grow more complex, the things we relate to the things we are experiencing become more and more a part of the new experience. I mentioned what the smell of nail polish evokes for me; some others that affect me similarly are the smells of lipstick, certain perfumes, and nylon. The one memory retrieves the other, and off I go on wings of association.

Good or bad, these things color what we do, what we like, what we don’t like, what disgusts us and so on. In turn, those reactions tend to color higher level responses and induction; I’ve no problem with some things, yet others react to those same things in ways I consider incomprehensible at best.

We can be presented with an idea and immediately feel its “rightness”, as it fits well with our associations, our perceptions of good and bad, and so on. These are the easiest ideas to understand — they feel natural to us, they “just fit.”

Other ideas — ones we might, eventually, really agree with and be willing to adopt as our own — may be so new or so at odds with our current associations that we don’t even “get” them…. until we suddenly do. Instead of association, we have to establish new networks.

In this way we build up what is eventually, usually, a very complex set of analytical networks that can (relatively) quickly take the abstracts of a situation and hand back an emotional trigger. In many cases, other networks — ones we have little direct access to — then proceed to dump chemicals almost directly into our systems, and we experience a physical reaction as well, which is duly integrated into the whole of the experience by the top level network: the consciousness.

Love

Actually, I think I just explained most of love. Added to the general very strong association with goodness is an association with the idea that losing this, even for short periods, constitutes such a loss of goodness that it is undesirable; hence we instantiate behaviors ranging from constant attention signals (flowers, compliments, kisses, rides when someone could just as easily have walked, etc.) to extreme (and often counter-productive) attempts to reserve the other person (jealousy, interference, structuring for avoidance of the perceived threat, etc.)

When both parties feel the same, there is a bout of closeness to which most other experiences compare poorly; it really is unique, in that there are few, or no, doubts about what is going on that make it to the top level, and as long as those attentions are obviously and essentially unchallenged, love tends to be a very positive thing.

It can go south quickly, though, if a perception arises that one’s person has been forsaken for another interest. In my humble opinion, it is the ability to discriminate a real threat from those natural relaxations of attention that arrive due to comfort, that separate long term loves from those that tend to go up in associative nuclear fire.

Attention

At any time, lower networks can demand or request attention from the top-level network when they process something that they either can’t handle or consider better handled at a higher level.

Pain. Hearing. Vision. Smell. Balance. Association. Color. Topology. Many, many more. These can command your attention shift to them — essentially by intrusion from a lower level network. No matter what you were concentrated upon previously, your attention can be shifted without your conscious consent, given sufficient intensity on the part of the intrusion.

How long you focus on any one thing, sensory or network-wise, is your conscious attention span. But if you’ve established a network to deal with whatever it is, or things sufficiently similar, the problem solving can go on without you — and you may (as I have) wake up in the morning with a “new” idea, wholly germinated and inexplicably — because we have no access to the process — crowned with an already blooming flower, metaphorically speaking.

This has happened to me quite a bit; not as much as I would like, mind you, because it is amazing, rewarding, and often turns out to be financially lucrative in one way or another, but still, often enough that I immediately recognize it when it happens. I’ve apparently got a network that goes, “oh yeah, it’s another one of THOSE” which in turn just reinforces the whole recognition process.

Or it could be recognized topologically, that is, as an associative pattern of what it all feels like, rather than as a stimulus that gets analyzed by a custom system. It is difficult to say which, but the end result is incontrovertible: every time it happens, I know it has happened before I can even get the whole idea consciously organized, or well enough associated with my communications networks to articulate to myself what has happened overall.

Consciousness

This is your highest-level network(s), meant in the sense that they are the most complex and capable — not in that consciousness dominates or is superior to the other networks, because clearly, that’s not the case. Consciousness takes the most abstract inputs from other networks as they make themselves known and associates them with our established memories, as well as running them through language networks to give us a usable handle on them; this gives them the depth, detail, and importance they appear to manifest with. The essence of qualia, the broader impression sensory perception imparts to us, is the combination of the new input and whatever your mind associates with that input, which together create an experience with (usually) more than a single focus.

As the senses register something, memory sends a signal to bring it to the network’s notice; meanwhile, the dull hum of signals of low associated criticality but of general interest (warm sun on your skin, the smell of a book you’re reading, the contented purring of the cat on your lap) continues apace.

Your consciousness is engaged in integrating all that together, to no purpose, to some purpose, with great intention… however the network, your consciousness, happens to be operating at the moment.

The more concentrated on a problem this network(s) is, the quieter the “hum” gets. It can disappear altogether. Whatever has not gone away at any point… that’s you, your consciousness. Your focus. Your induction at work (or not). That’s me.

And, just as an aside, that’s the cat’s state of mind too. It’s just that the cat’s networks aren’t all that similar to yours and mine in their level of abstraction. They’re just as fast and effective as ours — more so in some cases. Just watch one hunt sometime. Or leap six times its own body length up into the air and land on the top edge of an open door without exhibiting any balance issues or concerns about stability. They’re geniuses, physically speaking. But I digress.

Consciousness demonstrates fundamental variation in network types

Consider that in order to change the topology of a network, nerves have to at least physically alter existing connections, and may even require the growth of new cells; in the case where we have observed this (see the link to the article on the F-22 simulator-flying petri dish, above), such alteration takes a not insignificant amount of time. Very reminiscent of (and, I would suggest, probably not significantly different from) our learning to ride a bicycle. You’re not going to “get it” on the first try.

Yet we know that some parts of our brains are much faster than this, and agreeably general purpose. Short-term memory is one of these; clearly, when someone tells you their name is so-and-so, that goes into memory that is electrical, chemical, or both in nature, simply because it cannot be topological — neural topology simply cannot change that quickly. Long-term memory, however, is almost certainly topological, which would account for why it is so much more difficult to establish, and to erode once established.

Tricks to enhance memory often fall into the “associate it with something” category. “Her name is Susan and she’s a lawyer.” Associating “Sue” with “lawyer” ties this new, need-to-remember item, presently in your short-term memory, to something already fixed in long-term memory — and for most who use the technique, that seems to go a long way toward ensuring you’ll be able to recall the details the next time you need them.
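The two-store model just described can be caricatured in a few lines of code. This is a hypothetical sketch of my own, not a claim about actual neural machinery; the stores, the consolidation rule, and the pre-existing “lawyer” entry are all invented for illustration:

```python
# Caricature of the model above: a fast, volatile short-term store and a
# slow long-term store that (in this toy version) only accepts items tied
# to something already established in it.

class Memory:
    def __init__(self):
        self.short_term = {}                  # fast: electrical/chemical analog
        self.long_term = {"lawyer": set()}    # slow: topological, pre-existing

    def notice(self, item, association=None):
        """New input lands in short-term memory, optionally tagged with
        an association to existing long-term content."""
        self.short_term[item] = association

    def consolidate(self):
        """Transfer: items associated with established long-term content
        move over; unassociated items simply fade."""
        for item, assoc in self.short_term.items():
            if assoc in self.long_term:
                self.long_term[assoc].add(item)
        self.short_term.clear()

mem = Memory()
mem.notice("Susan", association="lawyer")  # "Her name is Susan, she's a lawyer"
mem.notice("random face")                  # nothing to tie it to
mem.consolidate()
print(mem.long_term)                       # Susan survives; the face is gone
```

The mnemonic trick, in these terms, is simply supplying the `association` argument so that consolidation has somewhere to hang the new item.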

Likewise, consciousness exhibits rates of activity that prove a relatively fast, inherently flexible network, based on something other than topology, is in play — the speed at which we reason far exceeds any observed topological rate of change in the brain, so that leaves only faster mechanisms as potential candidates for the actual operating methodology.

There’s an implication here: that of networks that are themselves established in such a way as to be as general purpose as possible, within the bounds of what they are supposed to do. These may be a matter of nature — heritable (it seems so… there are definitely genealogical lines that produce certain levels of intelligence on a somewhat predictable basis) — or they may be a matter of early learning, that is, nurture. Or both. Whatever the case, once established, these networks don’t have to change much further for most operations they are concerned with; I suspect further change would require a very significant alteration in the manner in which one pursued one’s life and thinking methods. It may not be going too far to speculate that most of this development is complete by the teen years.

Short-term memory may exactly replicate long-term memory topology, using electrical and chemical analogs. That could explain how the transfer to long-term memory can be made relatively cleanly. If the relationship is more indirect than that, the effect is likely the same in any case, and any analog (AI, enhancement, etc.) could use such an approach.

So we can infer that the network(s) that support consciousness have to be stable topologically, while supporting a great deal of sophisticated and concurrent operation involving almost anything at all for which symbolism (direct, indirect, or newly formed) has been or can be established: constant input and output via associative memory, constant conceptual conversion to and from language, monitoring of sensory information, and control of the relative attention and power applied to the various items currently of sufficient interest to receive the attention of the network(s).

So again, we have something that has established a relatively stable topology, but that can, eventually at least, process information using the more malleable elements of network variability alone. This speaks strongly to the power of the model used by animal brains: topology is one solution, and when the need is relatively static, it can be the only solution, carrying with it the benefits of extreme long-term stability. Other solutions demand faster mechanisms, and there, chemical and electrical means must suffice, operating upon a topology sufficient to support anything required of it, which, in the case of consciousness, spans a truly massive range of tasks.

For the vast majority of people, this network (or these networks) is the “me”, the “I” of our selves: the aware portion of the brain, the system that tracks what is happening right now, relates it to past experience, and extrapolates future probabilities from it. And that constantly changing landscape of events, in turn, is the experience of “being yourself.” The internal narration is the language network(s) feeding familiar symbols back as words; the sensory experiences have their own sets of symbols and associations, as well as their own limits and intensity controls; qualia is the degree of processing consciousness gives to these sensory inputs in the more-or-less instantaneous sense.

Implications for Human Learning

If my ideas here accurately describe how the brain and memory work, then repeating a learning process of any kind while being mindful of what you are doing will serve to reinforce it and add detail. Likewise, associating part or all of the issue at hand with other similar tasks you already know how to perform should seat the knowledge more firmly in your mind. To facilitate moving your newly acquired short-term information into long-term memory, you’ll want decent blood sugar levels and adequate amounts of sleep.

Within the context of the model I suggest, long-term learning of physical competencies will require topological change and growth, which take nutrient supply and time; acquiring procedure-based knowledge will additionally require time to make the long-term topological modifications against a short-term memory that is no longer changing, which requires sleep; and both require repetition, repetition, repetition.

Back to considering neural nets for a moment, we know that training them, that is, getting them to learn, requires many repetitions, and that the network must be in a very plastic state, that is, it must be amenable to change. We “freeze” the network once we feel that it has learned what we want it to — that’s the direct equivalent of altering a short term, plastic network state into a long-term, much more rigid state. These characteristics correlate well with the network model of overall memory for tasks and facts presented here.
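The train-then-freeze idea can be made concrete with a tiny perceptron. This is illustrative only — the learning task (logical AND) and the `plastic` flag are placeholders of my own, standing in for the plastic-then-rigid transition described above:

```python
# A tiny perceptron trained while "plastic", then frozen, mirroring the
# short-term-plastic / long-term-rigid distinction drawn in the text.

class TinyNet:
    def __init__(self):
        self.w = [0.0, 0.0]
        self.b = 0.0
        self.plastic = True            # amenable to change while learning

    def predict(self, x):
        return 1 if self.w[0] * x[0] + self.w[1] * x[1] + self.b > 0 else 0

    def train(self, samples, epochs=20, lr=0.1):
        if not self.plastic:
            raise RuntimeError("frozen network: weights are fixed")
        for _ in range(epochs):        # repetition, repetition, repetition
            for x, target in samples:
                err = target - self.predict(x)
                self.w[0] += lr * err * x[0]
                self.w[1] += lr * err * x[1]
                self.b += lr * err

    def freeze(self):
        self.plastic = False           # the "long-term", rigid state

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
net = TinyNet()
net.train(and_data)                    # many repetitions while plastic
net.freeze()                           # now stable; further training refused
print([net.predict(x) for x, _ in and_data])  # -> [0, 0, 0, 1]
```

Once frozen, the network still computes perfectly well; it just can no longer learn — which is exactly the trade-off between stability and plasticity the model turns on.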

Implications for AI

One of the hand-waving assumptions that is commonly bandied about is to take an approximate neuron count of the brain (a very large number) and then posit that AI faces the need to build something of similar complexity.

Clearly this is not so, and has never been so: since we can do many things better than the body’s neural systems can — from sound reproduction to precise memory of images and other observational specifics — copying the neural solutions is not just unnecessary, but actually contraindicated.

This argues for use of conventional (non-neural) technologies in service to the rest of the project, and that in turn reduces the complexity of both the system and the questions to be resolved to get the system going.

As far as the insights here apply, any network that is specialized to solve a particular problem, or any area that is as yet not specialized at all, is not part of the actively cognitive, conscious system of the human brain.

Here’s what I mean by that. If you were to attempt to approximately model my brain’s consciousness, for instance, you wouldn’t need my bicycle-riding skills, my ability to catch a baseball, my ability to ski, my ability to drive a powerboat, and so on. My consciousness exists apart from these skills, and does just fine across the majority of my life experience when none of them is called upon in any way. It follows that said skills simply aren’t required.

Might the resulting consciousness be different from mine? Certainly — I am aware of my competencies and that definitely affects my self-image — but it wouldn’t affect my ability to be conscious, and that’s my point. The end goal in AI, the line we want to cross, is the creation of a consciousness within which an intelligence may prosper. Not a carbon copy of any particular human being.

It is clear that large parts of the brain’s neural inventory are engaged in encoding and solving various tasks, in storing memories, and in regulating bodily systems, and that to at least some extent, additional resources likely sit unused, ready for the next procedure to be learned. It follows that the topologically stable network that supports our consciousness utilizes proportionally fewer neurons.

It isn’t just human-comparable AI that wins here; aiming at a middle ground, for instance the consciousness of a small animal, is suddenly revealed to be a lot less complex in terms of hardware required than some would have us presume.

The task facing us is definitely much, much less complex than “re-creating an entire brain’s neuron count.” That’s good news!

In Summary

I may (already have, in fact) come back to this and expand upon some of the ideas, perhaps explore a few more aspects of the sensorium and cognition — I’ve obviously only touched upon a few, and am depending upon the reader’s ability to generalize for the rest.

I like to give this type of speculative writing a while and then approach it somewhat refreshed.

So far, I feel comfortable with this on every level that I’ve laid out here. Although one should not expect a perfectly stable read over time. I like to fix things once built, as I see the shortcomings — there are inevitably shortcomings.

Anyone who reads this and has an opinion is welcome to express it, as long as you stay in line with the site policies, a link to which is found above every page here.

In particular, if you think I’ve missed addressing an issue that doesn’t seem to be accounted for by the general idea behind all this, please do let me know. I will be pleased to entertain any such notions.

Thanks for reading.

–Ben