Much of an engineer’s working life is spent predicting the future, whether in the physical world of design, or in project management, predicting timelines and costs.
This is a subject that has interested me over the past few years. Why is forecasting so difficult when we can design a bridge with no such prediction issues?
Three books that have helped my understanding are:
- Superforecasting – Philip E. Tetlock
- Thinking Fast and Slow – Daniel Kahneman
- The Black Swan – Nassim Nicholas Taleb
While many factors, including organisational pressures, affect prediction, the key driver in my opinion is complexity.
Bridge design obeys laws of physics that are predictable and whose variations follow the bell curve. Predicting construction activities, by contrast, involves many moving parts and is non-linear; it doesn’t behave like a bridge deck. We humans aren’t good at understanding the non-linear.
The more inter-dependencies and the longer the process, the harder it is to predict. This is why it is easier to predict the cost of a drainage line installation in a paddock than a drainage line on a large project with other activities happening around it.
More nodes, more inter-dependencies, more complex cause and effect relationships.
So as Engineers how do we deal with this and start to improve our forecasting?
My current thinking on the matter is to approach prediction in a combined “bottom up” and “top down” approach.
By completing a first-principles forecast based on industry knowledge and benchmarking it against historical data, we can use the technique of Bayesian reasoning to aid our prediction capability.
Bayesian reasoning is a relatively simple way to acknowledge historical data from past events while still adjusting for the uniqueness of the current situation. “We tend to either dismiss new evidence, or embrace it as though nothing else matters. Bayesians try to weigh both the old hypothesis and the new evidence in a sensible way.” – Nate Silver, The Signal and the Noise
It allows you to nominate a “base rate” (what a particular project or activity has cost in the past) and then adjust up or down from this figure based on your first-principles forecast.
Actual or historical data (productivities) can provide “base rates” for prediction (which have the complexity of the unknown somewhat “built in” by their nature). Then, adjustments based on what we currently know and what we believe we don’t know for the uniqueness of the project can be factored in.
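The weighing described above can be sketched in code. A minimal illustration, assuming (for simplicity) that both the historical base rate and the first-principles estimate can be treated as normal distributions, and using hypothetical numbers throughout: each estimate is weighted by its certainty (inverse variance), so the more confident figure pulls the forecast harder.

```python
def bayesian_update(prior_mean, prior_sd, estimate_mean, estimate_sd):
    """Combine a historical base rate (the prior) with a project-specific
    first-principles estimate, weighting each by its certainty
    (inverse variance). Returns the posterior mean and standard deviation."""
    w_prior = 1 / prior_sd ** 2
    w_est = 1 / estimate_sd ** 2
    post_mean = (w_prior * prior_mean + w_est * estimate_mean) / (w_prior + w_est)
    post_sd = (w_prior + w_est) ** -0.5
    return post_mean, post_sd

# Hypothetical figures: similar drainage lines have historically cost
# $120/m (sd $30/m); our first-principles estimate for this project is
# $150/m (sd $40/m, reflecting its extra uncertainty).
mean, sd = bayesian_update(120, 30, 150, 40)
print(f"Forecast: ${mean:.0f}/m \u00b1 ${sd:.0f}/m")  # → Forecast: $131/m ± $24/m
```

Note how the forecast lands between the base rate and the new estimate, but closer to the base rate because the historical data is (in this sketch) the more certain of the two, and how the combined uncertainty is tighter than either input alone.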
To effectively implement this technique, we need to significantly improve as an industry at collecting meaningful actual productivity data. The more we have, the more robust our base rates become.
To help with this, FSC are currently developing a simple productivity application that we hope will help embed this discipline, making the project manager’s predictions more like bridge design. Stay tuned for more updates.
Leave a comment here to discuss, as I would love to develop my thinking further in this space.
Have a great day 🙂