We urgently need a third cognitive revolution, and perhaps we are close. Most of our industrial elites have yet to notice the significance of the second revolution and so, like the delayed effects of irreversible CO2 emissions, have unwittingly set their own meltdown in motion (setting the inertia of oligarchies aside for now).
Understanding the third revolution will be essential for understanding the nature of innovation generally and corporate innovation in particular. As it happens, innovation might not be the right word because most of what we do today under the rubric of innovation is actually something else.
This is mostly because we are miles from any cognitively viable “Theory of Product”. Most of what gets done in product design is barely removed from guesswork, often disguised within a framework (like Lean) or a “school” of thought. The frameworks and schools in themselves seem plausible. That’s mostly how they “work.”
However, we are entering an age where what little cognitive science we know, which is very little and still “Pre-Galilean”, is enough to show that most of what we do in business follows deeply rooted cognitive illusions that become even deeper once converted into stories, intuitions and conventions, such as “The Customer is Always Right,” and so many more.
Of course, the customer isn’t always right. In fact, they are mostly wrong and probably a poor source of information.
That said, everyone is wrong in the internalist sense of semantics wherein there is no “product meaning” to begin with. There are as many product concepts as there are customers. And I don’t mean in the wishy-washy “persona” sense or so-called user-centric design.
Ultimately, this lack of external meaning might prove to be the undoing of any attempts to find a product process that actually stands a chance of empirical validation, which seems a goal worth pursuing.
Unfortunately, in a world where it is so easy to share ideas and for some of them to gain traction (mostly by chaotic processes), we are inundated with new stories on a seemingly hourly basis.
The field of product design, to take one example, is riddled with faux concepts where the use of the word “method” is enough to fool us into thinking that there actually is a method. We readily adopt words like method from science to give a sense of quasi-scientific legitimacy. Ideas like “minimally viable” sound attractive, but are mostly nonsensical.
Even were we to find a product method that is actually rooted in cognitively sound principles, whatever they are, we might quickly become all “meta” about it, to quote the inter-kids, which is to say that we follow the “method” in outward form only, mostly for social reasons, whilst internally we continue to follow our own narratives, usually to do with personal agendas and motives, or, worse still, intuitions.
Indeed, it is easy to argue, though difficult to prove, that perhaps 99.999% of tasks undertaken in the corporate world are motivated by each actor’s personal goals and biases merely reframed as legitimate corporate ones, mostly, and ironically, thanks to the myriad faux business concepts in which it is so easy to disguise our own agendas. Game Theory no doubt has much to say about this, were we to take it seriously.
So what is the third cognitive revolution and what has it got to do with innovation?
Firstly, the cognitive revolutions I am talking about here are related to how we interpret the world and our role in it. This in turn frames how we go about our business.
Let’s begin with the first cognitive revolution.
I do not mean what story-teller Yuval Harari mentions in his playful historical analysis, Sapiens: A Brief History of Humankind. He describes the first cognitive revolution as that time when humans acquired language, a process or event still mostly unknown, both in terms of its origins and true nature.
Of course, this “cognitive revolution” marked the birth of modern humans, so it rightly forms a pivotal part of Harari’s narrative. For me, it is not worth denoting as a revolution because it refers to the emergence of the cognitive capacity rather than doing anything useful, or revolutionary, with it.
Any cognitive revolution mentioned here will concern itself with genuine paradigm shifts regarding how we use or understand our cognitive capacity, however it came about, to think about the world and then act upon it radically differently. Indeed, I could have more usefully referred to revolutions as paradigm shifts, but that has a management-speak quality that somehow renders the term as meaningless as Red Oceans.
For me, per Chomsky’s frequent elucidation, the first cognitive revolution must surely begin with the birth of science, or, more accurately, the emergence of the naturalist tradition: our first attempts to see the world for how it really is rather than how we accounted for it using common sense, intuition and superstition.
Indeed, I hope to convince you that the role of intuition radically diminishes with each revolution in turn and that the role that intuition currently plays in business is a sign of how antiquated our management theories are. Perhaps the endless debate about “entrepreneurialism” and whether you have it or learn it is a sign of our complete lack of actually useful theories. It’s certainly a sign that business is more like religion or magic than science.
When the third cognitive revolution unfolds, corporate managers who insist upon intuition, or their business “gut”, as lauded in so many business fables, will most likely become irrelevant. I hope so. Then we can put an end to involving so many smart workers in quasi-religious fables that are surely a massive waste of human creativity and potential.
The most striking example of the naturalist breakthrough was Newton’s postulation of invisible forces, namely gravity, as an account for the nature of mechanics in a world hitherto considered to be mechanical. Oddly enough, mechanics is the wrong word (even though we still use it) because Newton showed, much to his own horror, that the true nature of things is distinctly non-mechanical, even non-physical.
Prior to his insights, philosophers had assumed a mechanical explanation of the world much like today we increasingly assume a computational explanation for the world, which I will get to shortly.
Newton was greatly troubled by his own explanation because it so remarkably defied intuition. As physical beings (whatever that means) our intuitions are guided by our reflexive behaviors and perceptions, including our physical interaction with the world as we find ourselves in it.
For example, in our daily experiences we can only bring about movement by mechanical force. We touch things to move them. We do not move objects (yet) with our minds, although many an X-Men fan would relish the chance. I will return to telekinesis because it is a useful illustration of limited human scope.
It is worth pointing out that our brains stopped evolving some time ago. Indeed, per Tooby, each of us walks this Earth with “a stone age mind inside a modern skull.” Leaving aside the objections about evolutionary psychology, it is still true to say that most of our cognitive capabilities, as in the ones pre-wired in our brains, were mostly formed before the emergence of language. These pre-wired tendencies are what often form our intuitions – i.e. attitudes, judgments and behaviors that are mostly reflexive but then projected onto social and cultural concepts, including businesses and the fable of markets. We simply can’t help thinking on “autopilot” and somehow imagining that we are the pilot.
Our discovery of the world, including science, is an overlap of what our minds can comprehend about the world and what we can observe and prove about the world itself – i.e. presumably what happens in the “real” world (outside of ourselves). Our ordinary interaction with the world is based on intuitions. For example, if we see another human being, we automatically know what to expect within the bounds of norms: that we can interact with them in certain ways, exchange ideas, become attracted, and so on. Such intuitions would largely be correct.
And when we see an apple (the fruit, not the machine) we know that we can eat it. Its physical reality – i.e. that it is a mass of molecules that reflect light in a certain way and interact with our taste buds in a certain way and experience gravity in a certain way – does not come to mind, never mind its quantum reality.
Actually, even something as seemingly concrete and simple as an apple can provide a number of conundrums. For example, what if we genetically engineered a pear to look and taste like an apple? Is it an apple?
However, as we shall see, in the same way that we cognitively integrate an apple’s “intuitive” meaning to be mostly a food object, what we see when we “see” a business (or a market) is also a gross approximation that fits neatly within our intuitions but says little of its true nature (assuming it has one and is not merely an honorific term). I will return to this because it is an idea that sits at the heart of the current cognitive revolution.
Our intuitions are what we used to rely upon to explain the world. For example, when the moon looks bigger near the horizon (the moon illusion) it must be, per intuitions and experiences, actually bigger. Of course, it is not. It only appears bigger.
Only later, when we understood optics and human vision, did we understand the illusion. Nonetheless, whenever we see this illusion, our brains see a bigger moon and intuitively we think that the moon is bigger. We can’t help it. The remedy is data, the data that tells us about the actual nature of the moon. And there is a hint: data is more useful than intuitions, perhaps.
Similarly, when we see an object move, our intuitions assume some kind of natural motive or power accounting for the movement, or some kind of causal agent. Historically, when we could not account for such agents we typically ascribed occult powers, like Apollo moving the sun because for the sun to move it must surely have something to move it.
The sun is a special case that required, or allowed for, an occult explanation because we didn’t know what the sun was. We could not touch it, could barely look at it, and could not account for its magnificent power (to light the world and grow crops). However, we know what an apple is (or kind of) even when it falls. And the falling isn’t puzzling because that’s what apples do when they ripen. Actually, it ought to have been a deeply puzzling mystery.
Prior to Newton, scientists had not considered the fall of an apple a mystery at all. Rather, per Aristotle’s elucidation, objects go to their natural place. Notice that this is really a type of intuition again because we have a natural experience with objects belonging somewhere. Clouds belong in the sky, trees belong on the ground, the water in rivers belongs in the sea, hands belong at the end of our arms, and so on.
Indeed, according to Lakoff, we can account for most observations in the world by drawing parallels to our own physical experiences or embodiment. This is the theory that our only real cognitive capability is metaphor. I find the case for this quite compelling and in ways that we haven’t yet fully realized.
The driver of the first cognitive revolution (and all cognitive insights subsequently) is the ability to be puzzled by something rather than let our intuitions explain it away or relegate it to background data unworthy of explanation.
Chomsky’s quote: “Discovery is the ability to be puzzled by simple things” is firmly affixed to my office door.
This brings me nicely to the second cognitive revolution, often referred to as The Cognitive Revolution, which was the shift from an externalist world to an internalist one. Namely, in trying to account for human capabilities, mostly cognitive ones and especially language, we should stop looking at external traits, like behavior and environment (nurture) and instead assume that there is some kind of computation happening inside of our heads that acts upon information, but in ways constrained by natural endowment.
The classic example, which to this day remains controversial, is to explain language as a module inside of our brains that is pre-wired (from birth) to interpret and compute the kinds of structures that underpin language, namely non-linear tree structures where words pick out other words in entirely non-obvious ways that our minds cope with effortlessly.
Without this capacity, we could not understand or use language. This is contrary to the historical behaviorist concept, and what many still assume, which is that humans can use language because we learn it (i.e. nurture) by listening to our parents and the world around us.
In this externalist world, we learn language by hearing words and applying the learned rules of grammar, which are external rules embodied in the language (on a language by language basis).
It turns out, per Chomsky’s great linguistic claim, that our brains already know how to compute the various structures that underpin all languages, and that language acquisition is mostly learning which particular set of words (say the English or French vocabulary) and which general order (subject-predicate or predicate-subject) applies to our mother tongue. The brain can do either: a child raised among speakers who say, in effect, “she him loves” learns that order, and the same child raised in England would instead have learned to say “she loves him.”
But once the language engine is activated for “she loves him” the speaker could just as easily say “she loves the dog” or “she loves jam” or an infinite number of other variations, without – and this is key – ever having heard or uttered those particular combinations in the past, just as I can now write: “she loves triadic chords” without ever having heard that sentence before (as far as I know) in order to know how to construct it. I just did it. This is the so-called productive creativity aspect of language.
Indeed, the vast majority of sentences that even a child utters are unlearned. They are computed extemporaneously. So too are the sentences uttered by adults. I just wrote that last sentence as I thought it, without any (conscious) forethought.
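The productive creativity point can be made concrete with a toy example. The sketch below is a deliberately simplified context-free grammar, not Chomsky’s actual formalism, and the rules and lexicon are my own invention for illustration. The point is that a handful of rules generates sentences nobody “taught” the program, including “she loves triadic chords”:

```python
import itertools

# A toy grammar: a few rewrite rules and a tiny lexicon.
# Real syntax is vastly richer; this only illustrates generativity.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "VP": [["V", "NP"]],
    "NP": [["she"], ["him"], ["the dog"], ["jam"], ["triadic chords"]],
    "V":  [["loves"]],
}

def expand(symbol):
    """Yield every word sequence the grammar can derive from `symbol`."""
    if symbol not in GRAMMAR:  # terminal word
        yield [symbol]
        return
    for rule in GRAMMAR[symbol]:
        # Combine every expansion of each child symbol (Cartesian product).
        for parts in itertools.product(*(list(expand(s)) for s in rule)):
            yield [word for part in parts for word in part]

sentences = [" ".join(words) for words in expand("S")]
print(len(sentences), "sentences, e.g.:", sentences[:3])
```

With five noun phrases and one verb this yields 25 sentences from four rules; add one recursive rule (say, an NP containing an S) and the set becomes infinite. That combinatorial explosion from finite means is, in miniature, the phenomenon the paragraph above describes.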
One of the radical consequences of the internalist view is that there are no word-world relations. In other words (forgive the pun) the meanings of words do not exist in the physical world. They exist only inside of our heads. In case you haven’t grasped what I’ve just said, let me be plainer: there is no such thing as an apple. The meaning of that word belongs entirely in our heads and is not in any way encoded in a real-world object.
Of course, this sounds ridiculous. Without fail, every time I have tried to explain this notion to folks in person, they reject it.
Well, I don’t mean to discuss the merits of the internalist view (versus the referentialist one) here but would rather ask you to consider a more important point, which is to consider the objections you might have to internalism. If you can take a moment to introspect, you might be convinced that your objections are based solely on your experiences with words, such as assuming, all along, that the meaning of the word apple is tied up in that object invariably called “apple.”
In other words, you are using your intuitions.
And now consider the historical reaction to Newton’s claim that objects move under the influence of hidden forces – i.e. “action at a distance.” Using intuition alone, this is an equally absurd claim as the notion that words only have meaning inside of our heads. It simply doesn’t fit with our experiences.
Hence the importance of the first cognitive revolution where we learned that the naturalist viewpoint is an attempt to see things for how they really are and not how our experiences account for them.
So what, then, is the third cognitive revolution?
Well, that is the subject of the next post now that I have laid some of the groundwork for understanding the nature and significance of the first two cognitive revolutions. Stay tuned…