Build powerful applications with limited resources using the magic of transfer learning

Getting started making AI apps with fast.ai

Google search trends for “deep learning”

What is transfer learning?

One of the main points in the first couple of fast.ai lessons is that you don’t need a ton of data or compute to do compelling things with machine learning. Transfer learning is when you take a model trained for one problem and use it on another. In other words, you start from a pre-trained model and fine-tune it for your particular use case. It’s the initial training that takes most of the resources; once a pre-trained model is available, transfer learning lets you do a lot with a little.
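Here’s a minimal sketch of what that looks like in fastai, assuming a folder of labeled images; the folder name, model choice, and epoch count are just placeholders, not the exact setup from the lessons:

```python
# Minimal transfer learning sketch with fastai: start from an
# ImageNet-pretrained resnet18 and fine-tune it on your own images.
from fastai.vision.all import *

path = Path("my_images")  # assumed layout: one subfolder per class

dls = ImageDataLoaders.from_folder(
    path,
    valid_pct=0.2,          # hold out 20% of images for validation
    item_tfms=Resize(224),  # resize everything to the size the model expects
)

# vision_learner downloads pretrained weights; fine_tune trains the new head
# first, then unfreezes the rest of the network and trains it all briefly.
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
```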

Tune the model and make your own classifier

Given that you can train an image classifier from image search results, what would you make? I went through a few ideas and settled on doors: given an image of a door, is it open or closed? I can search for open doors and closed doors to get my two sets of labeled data (a rough sketch of that download step is below). There isn’t much to it, and I’m not entirely moved by the idea, but hey, it’s something to learn with.
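The download step can look roughly like this, in the spirit of the course’s birds example. I’m assuming the duckduckgo_search package plus fastai’s download helpers, and the queries and folder names are just illustrative:

```python
# Build the two labeled sets (open vs. closed doors) from an image search.
from pathlib import Path
from duckduckgo_search import DDGS
from fastai.vision.all import download_images, verify_images, get_image_files

def search_images(term, max_images=100):
    # Return a list of image URLs for a search term.
    with DDGS() as ddgs:
        return [r["image"] for r in ddgs.images(term, max_results=max_images)]

path = Path("doors")
for label in ("open door", "closed door"):
    dest = path / label.replace(" ", "_")
    dest.mkdir(parents=True, exist_ok=True)
    download_images(dest, urls=search_images(f"{label} photo"))

# Drop anything that didn't download as a valid image.
failed = verify_images(get_image_files(path))
failed.map(Path.unlink)
```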

A note on GPU stuff

In picking a cloud GPU provider with a notebook interface, I tried out Kaggle, then Paperspace, then Google Colab Pro. I started with the free Colab and, after a half hour of training, realized I needed something better. For Paperspace I got on the paid plan, but then all of the GPUs required additional $$, which came to about $3/day at my usage on top of the monthly cost. So I switched to Colab Pro. At first I used the premium GPUs, but they burned through the credits quickly, so I moved on to the regular ones, which work fine.

Build the classifier

Working off of the earlier birds example, I created a model for classifying doors. In the second lesson we learned about data augmentation, a technique that lets you get more out of a small data set: you warp, stretch, crop, and otherwise transform the training images slightly to get a larger effective training set. There’s also an interesting technique of building the classifier before cleaning the data. Because the images were just downloaded from a search, some of them aren’t actually good images of doors, for instance, ones that show signs. After training, you can list the images where the classifier was least confident to see where the data needs cleaning.

Some of these aren’t what a door looks like, so we remove them in the cleaning
Are these open or closed?
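A rough sketch of those two steps, assuming the doors folder from the earlier download: aug_transforms handles the augmentation, and plot_top_losses plus ImageClassifierCleaner surface the least confident predictions for cleanup.

```python
# Augment a small data set, train, then review the model's weakest predictions.
from fastai.vision.all import *
from fastai.vision.widgets import ImageClassifierCleaner

dls = DataBlock(
    blocks=(ImageBlock, CategoryBlock),
    get_items=get_image_files,
    splitter=RandomSplitter(valid_pct=0.2, seed=42),
    get_y=parent_label,            # label comes from the folder name
    item_tfms=Resize(224),
    batch_tfms=aug_transforms(),   # random warps, crops, flips, etc.
).dataloaders(Path("doors"))

learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)

# Show the images the model got most wrong / was least sure about.
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(9)

# Interactive notebook widget to delete or relabel bad search results.
cleaner = ImageClassifierCleaner(learn)
cleaner
```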

Deploy the classifier

You can host your machine learning apps on Hugging Face Spaces. It includes a Gradio integration that makes building the app a bit quicker. I’d never used it before, and this tutorial helped. You can see my classifier here.
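For reference, the app.py a Space runs can be as small as this; I’m assuming the fine-tuned model was exported to a doors.pkl file, and the names here are placeholders rather than the exact ones from my Space:

```python
# Minimal Gradio app for a Hugging Face Space serving the fastai model.
import gradio as gr
from fastai.vision.all import load_learner, PILImage

learn = load_learner("doors.pkl")   # the exported fine-tuned model
labels = learn.dls.vocab            # e.g. ("closed", "open")

def classify(img):
    # Run the uploaded image through the model and return class probabilities.
    pred, idx, probs = learn.predict(PILImage.create(img))
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

demo = gr.Interface(
    fn=classify,
    inputs=gr.Image(),
    outputs=gr.Label(num_top_classes=2),
)
demo.launch()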

Okay v2 will have better styling 😅

What’s inspiring me about this

Working with transfer learning helped me understand in a more tangible way what makes generative AI so powerful. Because of transfer learning, I can take these models and focus them on a particular application.
