Hello MetaModel fans and friends,
At INLPTA, we have knowledge questions that we ask our trainers to answer as part of their preparation; they summarize
the diverse variations of NLP you can find at different schools and institutes.
We try to be scientific and methodical in our approach and make definitions and distinctions
- so that NLP becomes comprehensible, understandable, and acceptable.
One question is: What is the meta model 1, 2, and 3?
We define 3 phases of the meta model:
MM 1: The Structure of Magic - nine distinctions, ca. 1975
MM 2: manuals not published - order of application: distortion - generalization - deletion - 10-11 distinctions, sometimes with presuppositions included, early 80s
MM 3: Chr. Hall, Eric Robbie, et al. - 12-14 distinctions, since the mid-to-late 80s
MM 3 is what is taught today.
A lot of NLP trainers today are not able to make the necessary distinctions between the language, the thinking, the linguistic structures, or the transformational categories. Sameness and fuzziness are clouding the understanding of many NLP trainers and NLP students.
So I want to help by giving some background and some distinctions and explanations.
The Metamodel of Language
The MM was inspired by Noam Chomsky’s Transformational Grammar (TrG)
and linguistics does have a logical and scientific structure - which means that distinctions do exist.
In Chomsky’s TrG, the three transformations change a complete sentence into a shorter incomplete one.
In TrG, the deep structure and the surface structure are language.
The deep structure is the full, complete, and correct linguistic expression we have in our head - the surface structure is the language we use when we think and communicate - but it is transformed, short, and reduced.
So by reversing the transformation from the surface structure back into the deep structure, you would create a very long sentence.
NLP (Grinder et al.) started to shift from a pure linguistics view (original TrG) to a psychological view.
In NLP, the deep structure is now the experience (VAKOGAd) that will be transformed into language,
so it explains how non-verbal experience (thinking / map / model of the world) is transformed into verbal expression and thinking.
The meta-model of language (MM) would be the correct full name.
It is a model about (meta) a model of language. („Meta“ means aboutness:
metamathematics is about mathematics, and a meta-communication is something _about_ the communication.)
The MM is a model of how verbal language is mentally created.
The MM can be used to identify the model of the world through the use of language.
You can learn to ask precision-inducing (intelligent) questions without knowing about the content.
MM questions can be used to create more precision in the use of language.
MM questions can be used to reduce misunderstanding and errors.
MM questions can be used to change the model of the world.
MM questions can be used as verbal self-defense.
The verbal language is generated in a transformation process from the deep structure.
The deep structure contains our model of the world, map, experience, and pre-verbal thinking.
The description of experience is the verbalization process.
e.g.
„I want to buy a new phone.“ is a generated verbal sentence transformed from the deep structure.
In the experience, I might see and experience how my old phone is slow or does not work/charge well.
The thinking goes on - I am unhappy - I don't like to be unhappy - I want to be happy - I can become happy again when I have other experiences - to achieve that, I think I need another phone. A new phone is faster and works better.
A volition is formed that creates the sentence: I want to buy a new phone.
That sentence is harmless - maybe? - so we leave it.
„My supervisor does not like me“ - this is more critical - from an ecological assessment, the consequences might be severe (being unmotivated, interpreting the decisions/evaluations as unfair, reduced communication, etc.).
So the mind reading should be challenged, e.g.: What did you experience that makes you assume that you are not liked?
The assumption could be false or true. „I was not invited to the birthday party“ - but some coworkers were not invited either, and some were, … so maybe it was false:
New, repaired sentence: „Maybe my supervisor does like me“ - or maybe even: „My supervisor likes me“.
The new sentence creates more options and has better consequences. It also changes the model of the world / map.
Back to: „I want to buy a new phone“ - if the person is in financial trouble and can barely buy food - the consequences are relevant.
A good question here would be to ask for the motivation.
That would be a modal operator of volition (not in the current MM), like: What did you experience that motivates you to buy a new phone?
In the end, leasing one, getting a used one, or even just replacing the battery might be the solution.
In the moment of verbalisation, a transformation happens - which includes the three categories.
MM questions (MMQ) are used to compare the deep structure with the surface structure for repair.
Three transformation categories
The three transformation categories deletion, generalization, and distortion are different and have different effects on the sentences they produce.
The three categories are a fundamental cognitive structure („hard-wired“) and happen all the time.
Sometimes the transformations are correct and sometimes they are wrong (or not useful or helpful).
The metamodel questions (MMQ) help to repair wrong/unuseful ones and can check if statements are true or false (regarding the map).
( When I teach the metamodel, I claim that the three categories are the main functions necessary for intelligence. )
So the three categories are useful and necessary - they are just sometimes applied when they should not be.
If there is a misapplication, NLP wants to challenge them with a Metamodel question.
The goal is to have a language (surface structure) that is close to the experience (deep structure).
Distinctions and definitions of the three categories:
Deletion: is when a piece of information is missing - one instance
e.g. I go for a walk. - Where? (There needs to be a location; you cannot walk without a location.) I go for a walk in the park.
Generalisation: is when a single experience is multiplied - multiple instances
e.g. He always comes late. - Really - always? He was late the last 3 times. (Or possibly: yes - he was late _all_ three times he was here.)
Now why are modal operators in that category? Because they “generalise“ a single experience into a “rule“ -
I cannot call my friend so late. (Single experience: the phone is off or she is asleep - rule: not possible.) - What would happen if you did? I could wake her up.
The GRI - generalised referential index - a plural without a universal quantifier: Dogs are dangerous -
Metamodel question: Which dogs are you referring to? Pitbulls.
(The question „All dogs?“ does not help here, because it is not all. And even what usually helps to repair universal quantifiers - offering counterexamples: “this golden retriever is totally harmless” - does not help, because it is just an exception; dogs per se remain dangerous. So only the question that connects the surface structure with the deep structure helps: referencing the experience behind the generalisation.)
Distortion: is a simplification or a reduction of complexity - structural change
e.g. He wants to go home. - How did you come to the impression that he wants to go home? - He looked at his watch twice.
Cause-effect and complex equivalence go here too.
Some cause-effects are correct: Showering makes me wet. Some are not: Reading books makes me smart. Both use „makes me“, the signifier for a cause-effect. The second one is incorrect, but I probably would not challenge it, because it is a useful wrong causality.
NLP does have a utilitarian premise („if it is useful or working, it is good or okay“) - which should be challenged too, or at least be something we are consciously aware of.
Coming back to the usefulness of the three categories and their role as the basis of intelligence:
Deletion:
If someone were not able to delete a lot of information from experience when talking, he would not be smart. One of the definitions of intelligence is: the ability to select the relevant information out of the total amount of information. Needle in a haystack :-). And selecting means deleting the irrelevant information. That is smart. Sometimes we overdelete - the hard-wired transformation category deletes relevant information - then a Metamodel question can retrieve it.
Generalisation:
If someone learns something - how to change a tire - and cannot apply it to other tires, he lacks the ability to transfer („generalise“) what was learned.
So generalisation is necessary for intelligence. Sometimes we overgeneralise - we transfer or apply single experiences or learnings to other things where it is not useful - then an MMQ can repair that.
Distortion:
All scientific models use assumptions (physics: the mass of a pendulum is concentrated in one point; chemistry: a gas is equally distributed; economics: higher demand leads to higher prices). These are all distortions of reality, but they are a way to reduce complexity and create easy mathematical formulas - so it is simple to calculate. Without distortion there would be no science and no religion. Sometimes we oversimplify and create unuseful or misleading connections (beliefs, assumptions, untruths, conspiracy theories, etc.).
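To make the pendulum example concrete: under the simplifying (distorting) assumptions that the whole mass sits in a single point and that the swings are small, the period of a pendulum reduces to the short standard formula

T ≈ 2π · √(L / g)

where L is the length of the pendulum and g is the gravitational acceleration. The shape of the bob, the mass of the string, and air resistance are all left out - and exactly that omission is what makes the formula simple enough to calculate with.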
BTW - presuppositions belong to distortion
If we take our common metaphor of the map, then:
Deletions are one speck on the map, a white spot - a single something is missing.
Generalisations are when a symbol has changed its meaning (e.g. the viewpoint symbol now stands for parking) - it affects multiple places.
Distortions are when the legend colours are changed (brown = mountains becomes brown = swamp) - it affects the structure of the whole map.
This is also the reason why the impact of the metamodel questions on the person’s model of the world decreases from distortions to generalisations to deletions.
How about nominalizations? Are they distortions or deletions?
In linguistics, nominalizations are verbs or processes that are turned into nouns. A nominalization is like a frozen verb / process.
Give me security
Give me a glass of water
Both have the same grammatical structure - the glass I can give you - the other is more complicated, because we have to thaw the “nominalization ice”.
How or when will you feel secure? Being secure - when is that the case? In the nominalization “security“, the dynamics, the time, and the people involved are
missing or deleted - hence, deletion.
Why do some people sort it under distortion? Because to thaw the nominalization you can also ask for the complex equivalence:
What does security mean to you? But that evokes another process and would mix up the structures of nominalization and complex equivalence.
When you want to use MMQ efficiently, you need to check for the ecology too.
Ecology checking is the exploration of consequences. In most cases, increased variety/options leads to increased ecology, and decreased variety/options leads to decreased ecology or even an unecological situation.
Q & A
Common Questions regarding the MM:
Q: When should I ask an MMQ?
A: Two conditions need to apply:
a) You recognise a linguistic indicator that an MM pattern has occurred.
b) The verbal expression is possibly wrong, hurtful, or unuseful - or, in other words: it is ecologically bad. If it is wrong but useful or helpful (ecological), you could leave it.
Q: Are the three transformation categories - deletion, distortion, generalisation (also the meta model violations) - bad?
A: No, they are good and even necessary for intelligence. Overdoing them, or applying them to create false or incomplete statements with negative consequences, is bad. Such sentences should / could be challenged with an MMQ to be corrected.
Q: Why do people get stressed or annoyed when asked MMQs?
A: The MMQ challenges the person’s thinking and model of the world: they have to check and compare the experience/memory/map with the verbal expression for accuracy (truthfulness), and they have to correct themselves. The MMQ will also be interpreted as not being trusted or understood, hence a weakening of rapport.
Q: How do I know an MMQ is effective?
A: The other person needs to start a thinking process (watch the eye-accessing cues) to re-check the sentence, and then respond with a corrected sentence.
Q: Is it possible that a sentence has more than one MM element?
A: Yes - highly probable. „Some say they want chaos.“ can lead to at least 8 questions.
I hope that was useful.
Bert Feustel