
The #1 Mistake Companies Make When Creating Their Data Science Foundation

Imagine you just finished training an excellent neural network after months of hard work.

It works well on the training data, test data and passes all your validation tests.

But as you move it to production, you start to notice it doesn't perform nearly as well.

If this sounds familiar, you're not alone. According to Gartner, 85% of data science projects fail. A majority of projects fail despite companies having more affordable resources today than ever before.

According to Nirman Dave, CEO at Obviously AI, the reason is surprisingly simple: companies don't realize the power of simple models.

Most companies are obsessed with technologies such as deep learning. Only after spending most of their budget, time, and effort do they realize that it's not a silver bullet.

Why should you choose a simple model over complex alternatives?

Simple models such as logistic regression offer great value for businesses despite their simplicity. They can solve roughly 80% of common business problems.

Simple models do well for several reasons. Notably, they are more explainable, require less data preparation, and are easy to retrain as high-volume data changes.
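To make this concrete, here is a minimal sketch of a simple model on a tabular business problem. The churn-style dataset, feature names, and values below are all made up for illustration; a real project would load its own data.

```python
# A minimal sketch: logistic regression on a tiny, synthetic churn-style dataset.
from sklearn.linear_model import LogisticRegression

# Hypothetical features: [monthly_spend, support_tickets]; label: churned (1) or not (0).
X = [[20, 5], [25, 4], [30, 6], [90, 0], [85, 1], [95, 0]]
y = [1, 1, 1, 0, 0, 0]

model = LogisticRegression()
model.fit(X, y)

# Predict for two new customers: low spend with many tickets vs. the opposite.
print(model.predict([[22, 5], [88, 1]]))
```

A handful of lines, trained in milliseconds, and every coefficient in `model.coef_` can be read off and explained to a stakeholder.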

On the other hand, complex models require a ton of preparation, and training them consumes time and a massive chunk of your budget. Not to mention, complex neural networks are hard to explain.

This isn't to say you should never use deep learning. It is probably the most significant advancement in AI. Yet there are more factors to consider before choosing it. It is often better to start with a simpler model and add complexity as the problem demands.

Simple models are more explainable.

A trained decision tree can be plotted to show exactly how individual features contribute to a prediction. That is what makes simple models more explainable and easier to understand.

People who aren't familiar with machine-learning terminology find this helpful because they can follow what goes into each step of the algorithm without prior knowledge.
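As a small sketch of what that looks like in practice, scikit-learn can print a trained tree's decision rules as plain text. The feature names and data here are hypothetical:

```python
# Print a trained decision tree's rules with scikit-learn's export_text.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[1, 0], [2, 0], [8, 1], [9, 1]]   # hypothetical: [visits_per_week, is_subscriber]
y = [0, 0, 1, 1]                        # 1 = likely to renew

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# export_text renders the learned if/else rules as readable text.
rules = export_text(tree, feature_names=["visits_per_week", "is_subscriber"])
print(rules)
```

The printed rules read like a flowchart a non-technical stakeholder can follow; `sklearn.tree.plot_tree` produces the same thing as a diagram.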

“Decision-makers want to know the reasons behind an AI-based decision, so they have the confidence that it is the right one.” — Google Cloud

Furthermore, an ensemble of several simple models can beat betting all your money on one complicated model, or risking not getting adequate results from ML techniques at all. Like diversifying investments, this strategy spreads risk across many models rather than relying solely on one.

Deep neural networks are challenging to understand because of their complexity. Nontechnical stakeholders find them hard to interpret, and even data scientists who work with them regularly find them harder to debug than simpler models. When something goes wrong, you need someone with experience to fix it.

Simple machine-learning models work well for high-volume data.

To stay ahead of the data curve, you need real-time insights.

However, by the time those insights are derived, new information may have invalidated what was learned earlier. This can trap companies in a vicious cycle where they never quite know how far behind the competition they are until it's too late.

“We show deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive.” — MIT-IBM Watson AI Lab

Simple machine-learning models are much quicker to train and can be applied in environments where deep neural networks would fail. They also take fewer resources to train than deep-learning models.

Deep learning takes significantly more computational power than a conventional machine-learning model, so it's hard to train these complex algorithms alongside other workloads.

That said, many studies have demonstrated that even minor modifications can cause problems when experimenting with different types of artificial neural networks (ANNs).

Complex models require a ton of upfront investment.

Preparing data for machine learning is an arduous task because the data is often in different formats across multiple storage solutions. Each change can ripple through the entire pipeline, which means you carry an immense responsibility to organize and clean your information before trying out new techniques or algorithms.
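A toy illustration of that burden: records arriving from two systems with different field names and types, normalized into one schema. All field names and values here are hypothetical.

```python
# Normalize records from two hypothetical source systems into one schema.
records = [
    {"name": "Ada", "signup": "2021-03-01", "spend": "120.5"},         # CSV-style strings
    {"full_name": "Grace", "signup_date": "2021/04/15", "spend": 80},  # other system
]

def clean(record):
    """Map differing field names and types onto a single schema."""
    name = record.get("name") or record.get("full_name")
    signup = (record.get("signup") or record.get("signup_date", "")).replace("/", "-")
    spend = float(record.get("spend", 0))
    return {"name": name, "signup": signup, "spend": spend}

cleaned = [clean(r) for r in records]
print(cleaned)
```

Two sources already need name mapping, date normalization, and type coercion; every new source or schema change multiplies that work across the pipeline.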

Deep learning models are a tough sell.

“Time required to deploy a model is going up, with 64% of organizations taking a month or longer.” — George Lawton, Data Science Journalist.

Deep models demand enormous power and time. It's not just the graphics cards but an entire infrastructure needed to train deep neural networks on tons of data before they have any hope of predicting your future needs or wants.

Cloud services such as AWS, Azure, and Google Cloud help reduce the infrastructure cost required to train deep learning models. These lower costs give teams with limited funds access to high-end AI infrastructure without sacrificing their budget or implementation timeline.

Although cloud services are an excellent way to get started with deep learning training, there is always a risk that the bill turns out to be huge.

Final thoughts

Becoming more data-driven is easier than it appears. Yet most data science projects fail because companies are obsessed with the sexy terms in the data science space.

Companies can solve most of their problems with models such as decision trees and logistic regression. These require less data to train and relatively cheap infrastructure. Most interestingly, they are easy to retrain, yielding more up-to-date insights. A complex model such as a deep neural net, on the other hand, takes much longer to incorporate new learnings.

Deep learning is, however, a super helpful technique; how and when you use it makes all the difference. Here's an article I wrote that might help you decide. Furthermore, techniques such as transfer learning can drastically reduce these problems. But that's for a future post.

Thanks for the read, friend. It seems you and I have lots of common interests. Say Hi to me on LinkedIn, Twitter, and Medium.

Not a Medium member yet? Please use this link to become a member; I earn a commission for referring you, at no extra cost to you.
