I read an article today (How to Make Better Predictions When You Don't Have Enough Data) about making predictions and data - the word prediction always attracts me because I don't like it, and I like to see how others interpret and use it. This article made some sense:
Thus in order to stay relevant, statisticians will have to get out of the purist position of fitting models that are based solely on direct historical data, and to enrich their models with recent data from similar domains that could better capture current trends.
This is known as Transfer Learning, a field that helps to solve these problems by offering a set of algorithms that identify the areas of knowledge which are “transferable” to the target domain. This broader set of data can then be used to help “train” the model. These algorithms identify the commonalities between the target task, recent tasks, previous tasks, and similar-but-not-the-same tasks. Thus, they help guide the algorithm to learn only from the relevant parts of the data ...
... To avoid critical mistakes in prediction, data analysts need to adopt new methods that will enable them to translate knowledge from different time periods and domains.
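The quoted idea - enriching a sparse target dataset with down-weighted data from a similar domain - can be sketched in a few lines. This is a minimal, hypothetical illustration, not the article's actual method: the datasets are synthetic, and the 0.1 weight on source points is an arbitrary choice standing in for the "identify what is transferable" step the article describes.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical setup: the target domain has very little data,
# while a similar (but not identical) source domain has plenty.
X_target = rng.uniform(0, 1, size=(10, 1))                  # only 10 target points
y_target = 2.0 * X_target[:, 0] + rng.normal(0, 0.3, 10)    # true slope 2.0

X_source = rng.uniform(0, 1, size=(500, 1))                 # 500 source points
y_source = 1.8 * X_source[:, 0] + rng.normal(0, 0.3, 500)   # similar trend, slope 1.8

# Purist model: fit on direct target-domain data only.
purist = Ridge(alpha=0.1).fit(X_target, y_target)

# Transfer-style model: pool both domains, but down-weight the source
# points so they inform the fit without dominating the target signal.
X_all = np.vstack([X_target, X_source])
y_all = np.concatenate([y_target, y_source])
weights = np.concatenate([np.ones(10), np.full(500, 0.1)])
transfer = Ridge(alpha=0.1).fit(X_all, y_all, sample_weight=weights)

print("purist slope:  ", purist.coef_[0])
print("transfer slope:", transfer.coef_[0])
```

With so few target points, the pooled model's estimate is typically steadier across random draws, at the cost of a small bias toward the source domain's trend - which is the trade-off the article is gesturing at.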
I think predictions can be of value when they use a short-term timeframe - such as elections, which this article uses as an example. And the concept of Transfer Learning is a good one because it reminds us to move beyond our ingrained, habituated thinking modes and seek information and data in new places. The aim is to expand our understanding of the issue rather than closing down our thinking to what we know already.
Looking for both confirming and disconfirming evidence is how we strengthen our thinking about the future, whether that is by trying to predict the future (not recommended) or anticipating the future by learning to think in multiples and possibilities.