{"id":297,"date":"2021-10-26T00:00:00","date_gmt":"2021-10-26T00:00:00","guid":{"rendered":"https:\/\/tac.debuzzify.com\/?p=297"},"modified":"2023-06-22T09:27:08","modified_gmt":"2023-06-22T09:27:08","slug":"transfer-learning","status":"publish","type":"post","link":"https:\/\/www.the-analytics.club\/transfer-learning\/","title":{"rendered":"Transfer Learning: The Highest Leverage Deep Learning Skill You Can Learn"},"content":{"rendered":"\n\n\n

Training a deep learning model can take days, weeks, or even months.<\/p>\n\n\n\n

Transfer Learning can solve this problem. It\u2019s a machine learning method in which a trained model is reused as the starting point for a new task. This speeds up training and often improves performance on related tasks.<\/p>\n\n\n\n

It is one of the most popular methods in Deep Learning because it saves time and money by reusing pre-trained models from tasks with a structure similar to your own. In this post, you\u2019ll learn how transfer learning works and how you can use it to speed up your deep learning training process!<\/p>\n\n\n\n

What is transfer learning?<\/h2>\n\n\n\n

Transfer learning is a machine learning technique in which a model trained on a specific task is reused as part of the training process for another, different task.<\/p>\n\n\n\n

Here is a simple analogy to help you understand how transfer learning works: imagine that one person has learned everything there is to know about dogs, while another person has learned everything about cats. If both people are asked, \u201cWhat\u2019s an animal with four legs and a tail that barks?\u201d the person who knows all about dogs would answer \u201cdog,\u201d while the person who knows everything about cats would say \u201ccat.\u201d<\/p>\n\n\n\n

Since both people already know much of what they need to solve the problem at hand, each one only has to fill in their missing information before answering correctly. This is how transfer learning works in machine learning: combining the features one model has already learned with another model\u2019s knowledge of other features lets you solve a new task.<\/p>\n\n\n\n

Related: Machine Learning Vs. Artificial Intelligence; What’s the difference?<\/em><\/a><\/strong><\/p>\n\n\n\n

How to use Transfer Learning?<\/h2>\n\n\n\n

Now that you know how transfer learning works, you probably wonder how to make it work for your own machine-learning models. There are two different ways to do this: feature extraction and fine-tuning.<\/p>\n\n\n

\n
[Image: Transfer Learning]<\/figure><\/div>\n\n\n

Feature Extraction: If you want to reuse the knowledge in a pre-trained model without re-training that large model on your own data set, feature extraction is the best way to do this. You freeze the pre-trained model, take the features it has learned, and train only a much smaller model on top of them. Used in conjunction with fine-tuning, this process can give you outstanding results in a short amount of time.<\/p>\n\n\n\n
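As a minimal sketch of feature extraction in Keras (MobileNetV2, the input shape, and the 10-class head are illustrative assumptions, not choices from this article), the pre-trained base is frozen and only a small head is trained:

```python
import tensorflow as tf

# weights=None skips the ImageNet download for this sketch;
# in practice you would use weights="imagenet".
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights=None
)
base.trainable = False  # freeze the base: it only extracts features

# Only this small head gets trained on your own dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. 10 classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Because the frozen base never updates, each training step only has to fit the small head, which is what makes this approach so fast.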

Fine-Tuning: If you are already training your own deep learning models or want to adapt an existing model to your dataset, this approach could be a good fit for you. You unfreeze some of the top layers of the pre-trained model and continue training them, at a low learning rate, on your own data. This way, you benefit from the work already done by the larger model without having to train it from scratch, and the result is typically more accurate than feature extraction alone.<\/p>\n\n\n\n
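A hedged sketch of fine-tuning in Keras (MobileNetV2, the number of unfrozen layers, and the learning rate are all illustrative assumptions): unfreeze only the top layers of the pre-trained base and continue training at a low learning rate:

```python
import tensorflow as tf

# weights=None skips the ImageNet download for this sketch;
# in practice you would use weights="imagenet".
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3), include_top=False, weights=None
)
base.trainable = True
for layer in base.layers[:-20]:   # keep everything but the last 20 layers frozen
    layer.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
# A low learning rate keeps the update steps from destroying
# the knowledge stored in the pre-trained weights.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss="sparse_categorical_crossentropy",
)
```

The early layers stay frozen because they capture generic features (edges, textures) that transfer well as-is; only the later, more task-specific layers need adjusting.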

Related: How to Train Deep Learning Models With Small Datasets<\/a><\/em><\/strong><\/p>\n\n\n\n

Steps to transfer learning from a pre-trained model to a new one.<\/h3>\n\n\n\n

A typical transfer learning procedure involves five steps. Here they are, along with what to do if you\u2019re using Keras to build your deep nets.<\/p>\n\n\n\n

Related: How to Evaluate if Deep Learning Is Right For You?<\/em><\/a><\/strong><\/p>\n\n\n\n

Extract layers from a pre-trained model.<\/h3>\n\n\n\n

These layers contain the knowledge needed to achieve the task in general settings; typically, they were pre-trained on huge datasets. In Keras, you can easily drop the model\u2019s final classification layers by specifying <code>include_top=False<\/code> when loading a pre-trained model.<\/p>\n\n\n\n