Introduction to Self-Supervised Learning

Basics of Self-Supervised Learning
self-supervised learning
unsupervised learning
Published

August 24, 2020

In general, if we observe the way humans learn, it is evident that they don't need huge amounts of training data with explicit labels. Most human learning happens largely in an unsupervised fashion, i.e. without labels. Hence it is natural to explore whether unsupervised learning can be used to improve the efficiency of deep learning algorithms. Technically, this means asking whether unsupervised learning can be used to learn more useful representations from unlabelled data.

Let's look at the ideas of Supervised Learning, Unsupervised Learning and Self-Supervised Learning below.

Supervised Learning : This refers to training learning algorithms when inherent supervision is available in the data. In other words, labels available in the data are used to train the algorithm. Classification is a common example of supervised learning.
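As a minimal sketch of this idea (toy data and helper names are made up for illustration, not from any particular library), here is a tiny 1-nearest-neighbour classifier trained on explicitly labelled examples:

```python
# Supervised learning sketch: labels are provided with the data,
# and the algorithm uses them directly to make predictions.

def predict(train, point):
    """Return the label of the training example closest to `point`."""
    nearest = min(train, key=lambda ex: abs(ex[0] - point))
    return nearest[1]

# Labelled data: (feature, label) pairs -- this is the "supervision".
train = [(1.0, "small"), (1.2, "small"), (9.0, "large"), (9.5, "large")]

print(predict(train, 1.1))  # classified using the given labels
print(predict(train, 9.2))
```

The key point is that the labels ("small", "large") come from outside the algorithm; a human had to provide them.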

Unsupervised Learning : In unsupervised learning, no labels are available in the data. Instead, the algorithm uses only the features to learn the data's inherent structure. Clustering is a common example of unsupervised learning.

Self-Supervised Learning : This can be thought of as a subset of unsupervised learning in which the supervision is created from the data itself via pretext tasks or a proxy loss. The hope is that the pretext task or proxy loss helps the model learn better representations, thus improving the accuracy of downstream learning algorithms. Further, it is also believed that this helps learning algorithms generalize better.
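The core trick can be sketched in a few lines. Below, a made-up pretext task ("was this sequence reversed?") manufactures pseudo-labels from unlabelled data; the task itself and the helper name are illustrative assumptions, not a standard recipe from the post:

```python
# Self-supervised learning sketch: create (input, pseudo-label) pairs
# from unlabelled data via a pretext task. Here, each unlabelled
# sequence yields two training pairs for free: the original order
# (label 0) and the reversed order (label 1).
import random

def make_pretext_pairs(sequences, seed=0):
    rng = random.Random(seed)
    pairs = []
    for seq in sequences:
        pairs.append((seq, 0))        # original order -> pseudo-label 0
        pairs.append((seq[::-1], 1))  # reversed order -> pseudo-label 1
    rng.shuffle(pairs)
    return pairs

unlabelled = [[1, 2, 3], [4, 5, 6]]
pairs = make_pretext_pairs(unlabelled)
print(len(pairs))  # training pairs created without any human labels
```

A model trained to solve this pretext task must learn something about the structure of the sequences, and that learned representation is what we hope transfers to downstream tasks.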

Below are some of the advantages of using self-supervised learning, following from the ideas above:

- No need for large, explicitly labelled datasets, which are expensive to create.
- Abundant unlabelled data can be put to use via pretext tasks.
- The representations learned can improve the accuracy of downstream learning algorithms.
- It is believed to help learning algorithms generalize better.

In the next blog post, we will look at some interesting literature along with concepts that help us dive deeper into self-supervised learning.
