Mine was for my flatmate who was also (at the time) my little brother’s girlfriend of four years.
And it was rubbish.
Even now it makes me cringe.
This spectacular failure always troubled me because by any classic ‘spirit’ or ‘energy’ model I should have really nailed it. The girl in question was practically family. Contagion or whatever… I had all the background information I needed. I knew her intimately. Plus she was sitting right across from me.
At the time there were few people in the world I knew better and my prediction quite simply could not have been more wrong.
What with all the investigation into prediction and randomness that has been occupying me of late, it’s just possible I may have landed on an answer that doesn’t rely on circular explanations to do with yet more spirits or yet more energy.
It appears to be a mathematical inevitability brought to you today by Henri Poincaré (and the letter G).
I got to thinking… What is a piece of divinatory technology?
It’s a data generating machine. It is a model of the universe that spits out pictures of what it thinks the future will look like.
And these machines can vary in complexity from a binary ‘coin flip’ all the way up to an enormous Greek temple economy in which indentured women were slowly poisoned to death by breathing in toxic fumes and then speaking in riddles.
Complexity and accuracy
Runes, I Ching, geomancy, necromancy, cartomancy and so on. What you have is a spectrum of technology ranging from the supremely simple to the extremely complicated all performing the same function.
Does it stand to reason that more complex systems produce more accurate results?
Well, not inherently.
Because more complex models are typically built around the notion that if you increase the amount of relevant seed data you start with, you will end up with a more accurate picture of the future. More seed data can mean:
- An increase in potential options
- An increase in potential alternatives and… most dangerously…
- An increase in presumed knowledge of the subject in question.
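To see why piling on seed data can hurt rather than help, here is a toy sketch (pure Python; every number in it is invented for illustration, this is not data from any study). A predictor judges each new case by finding the most similar past case. Only one thing it knows is actually informative; everything else is ‘presumed knowledge’ that turns out to be irrelevant noise. The more of that presumed knowledge you feed it, the worse its predictions get:

```python
import math
import random

random.seed(1)

def make_point(label, noise_dims):
    # One genuinely informative coordinate; everything else is irrelevant noise
    informative = (1.0 if label else -1.0) + random.gauss(0, 1)
    return [informative] + [random.gauss(0, 3) for _ in range(noise_dims)]

def nearest_neighbour_accuracy(noise_dims, n_train=100, n_test=400):
    # "Seed data" for the predictor: a pile of past cases with known outcomes
    train = [(make_point(lab, noise_dims), lab)
             for lab in (random.random() < 0.5 for _ in range(n_train))]
    correct = 0
    for _ in range(n_test):
        label = random.random() < 0.5
        x = make_point(label, noise_dims)
        # Predict by copying the outcome of the most similar past case
        _, guess = min((math.dist(x, past), lab) for past, lab in train)
        correct += (guess == label)
    return correct / n_test

for dims in (0, 5, 20, 100):
    acc = nearest_neighbour_accuracy(dims)
    print(f"{dims:>3} irrelevant features: accuracy {acc:.2f}")
```

With zero irrelevant features the predictor does respectably; as the irrelevant ‘knowledge’ grows, the one useful signal drowns and accuracy slides back towards coin-flipping. More input, worse picture.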
As it stands, over the last forty years every single attempt to apply empirical methods to quintessentially non-empirical subjects like economics, weather prediction, national security, commodity prices and so on has resulted in a decrease in the accuracy of predictions. (Or, more specifically, accuracy has not improved at all while observed effects have become more extreme, meaning predictions fall further and further from the mark.)
What would we call non-empirical? Any field that deals with the future/predictions and bases its findings on the non-repeatable past.
Let’s just restate that.
The more data you put in at the beginning, the less accurate the picture you end up with.
Don’t be a friendship expert
In these non-empirical fields (and your friends’ lives count as non-empirical fields) the most dangerous thing you can be is an expert.
Because experts don’t know what they don’t know.
“The problem is that our ideas are sticky: Once we produce a theory, we are not likely to change our minds… When you develop your opinions on the basis of weak evidence you will have difficulty interpreting subsequent information that contradicts these opinions, even if this new information is obviously more accurate… Remember that we treat ideas like possessions and it (is) hard for us to part with them.”
– Nassim Nicholas Taleb, The Black Swan
Think about this in comparison to what you know about the friend you could be reading for versus a complete stranger. With your friend, you don’t know what you don’t know… With the stranger, you do.
It appears you are vastly more likely to miss crucial information with a friend than you are with a stranger.
In 1965, Stuart Oskamp ran an experiment in which he gave a group of psychologists a case file in successive instalments, each adding more background information about the patient. The additional information did not make their judgements more accurate. Instead it made them much more confident in their initial judgements…
So confidence in their own predictions increased with additional knowledge, while their accuracy stayed below 30%.
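Oskamp’s pattern can be sketched as a toy simulation (pure Python; the 70% figure and everything else here is made up for illustration, it is not Oskamp’s data). Each extra ‘fact’ in the file is really just a restatement of the same noisy first impression, but a naive judge counts it as fresh, independent evidence. Confidence climbs; accuracy doesn’t move:

```python
import random

random.seed(0)

TRIALS = 10_000
CUE_ACCURACY = 0.7  # each piece of background info points at the truth 70% of the time (invented figure)

def run(num_cues):
    correct = 0
    total_confidence = 0.0
    for _ in range(TRIALS):
        truth = random.random() < 0.5
        # Every extra "fact" is a copy of ONE noisy observation: redundant, not independent
        base = truth if random.random() < CUE_ACCURACY else (not truth)
        cues = [base] * num_cues
        # A naive judge multiplies in each cue as if it were independent evidence
        odds = 1.0
        for cue in cues:
            if cue:
                odds *= CUE_ACCURACY / (1 - CUE_ACCURACY)
            else:
                odds *= (1 - CUE_ACCURACY) / CUE_ACCURACY
        p_true = odds / (1 + odds)
        guess = p_true > 0.5
        correct += (guess == truth)
        total_confidence += max(p_true, 1 - p_true)
    return correct / TRIALS, total_confidence / TRIALS

for n in (1, 3, 5, 9):
    acc, conf = run(n)
    print(f"{n} pieces of info: accuracy {acc:.2f}, confidence {conf:.2f}")
```

Whatever the file size, the judge is right about 70% of the time, because the extra pages add no new information. But the reported confidence marches up towards certainty. More file, same accuracy, much more conviction.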
Your prediction will fall through the enormous gulf between what you know and what you think you know about your friend. This is how Black Swans are created in the first place. The multi-trillion dollar global finance industry was filled with ‘experts’ who didn’t know what they didn’t know. (It still is, of course. But we can only hope global finance will be a swan-free zone for a while.)
The implication of all this is that the more data you have available to you during a structured divination, the less accurate it will be, through no fault of your own.
The good news
You have probably got here before me, but there is some good news.
The above findings are about prediction methods that are based on available data at the time of prediction.
But divination doesn’t require much in the way of available data to work: really just a topic area. There are implications for the people you choose to read for (yourself included, in many cases), but for the most part this serves to heighten the competitive advantage that being good at prediction can offer.
It also means you can avoid the awkwardness of having to tell your friend that the cards say he’s not going to get a girlfriend until he drops twenty pounds and moves out of his mother’s house.
Good news all around, then.