Leveraging Simplicity in ML Model Selection

In this nascent moment for massive LLMs and transformer-based neural networks (NNs), it is tempting to reach for powerful, ultra-deep NNs to solve difficult problems, from time-series analysis to computer vision.

However, these techniques come with significant disadvantages, ranging from impractical training times to the huge quantities of data required for training and validation. In most real machine learning (ML) contexts these models are simply not feasible, even if their theoretical results would be stronger.

At Henesis, we advocate a bottom-up approach to ML solutions. Start simple, find a model that beats a baseline, then incrementally increase its complexity, always weighing the cost of continuing against the potential benefit to the model.

Often, even simple statistical models perform very effectively, especially in time-series forecasting. Alternatively, mathematical know-how, such as frequency analysis, filtering, or dynamic time warping, can bring powerful tools to the table where an equivalent NN would be costly to develop and deploy. And simple models almost always yield deeper insights than a black-box NN.
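To make the bottom-up approach concrete, here is a minimal sketch on a toy time series: a naive last-value baseline versus a slightly richer moving-average model, compared by mean absolute error. The data and function names are illustrative, not from a real Henesis project; the point is the workflow, where a more complex model is only adopted if it actually beats the baseline.

```python
def naive_forecast(series):
    """Baseline: predict each value as the previous one."""
    return series[:-1]

def moving_average_forecast(series, window=3):
    """Slightly richer model: predict the mean of the last `window` values."""
    return [sum(series[i - window:i]) / window
            for i in range(window, len(series))]

def mae(actual, predicted):
    """Mean absolute error between aligned sequences."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(predicted)

# Toy series with a mild upward trend plus noise (illustrative data).
series = [10.2, 10.8, 11.1, 11.9, 12.3, 13.0, 13.4, 14.1, 14.6, 15.2]

# Align each forecast with the actual values it predicts.
baseline_error = mae(series[1:], naive_forecast(series))
ma_error = mae(series[3:], moving_average_forecast(series, window=3))

print(f"naive baseline MAE: {baseline_error:.3f}")
print(f"moving average MAE: {ma_error:.3f}")

# Only keep the more complex model if its error beats the baseline
# by enough to justify the added cost.
```

On this trending toy series the moving average actually lags behind and loses to the naive baseline, which is exactly the kind of result that stops you from paying for complexity that adds nothing.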

Interested in this bottom-up approach to ML?

» Get in touch!