Photo by Nagara Oyodo on Unsplash

Introduction

In this article, we will dive into non-contrastive learning and discuss model collapse, one of the major impediments associated with it. You can find the accompanying GitHub repository here.

Without further ado, let’s get coding!
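
As a quick illustration of what collapse looks like in practice: if the encoder maps every input to (nearly) the same vector, the per-dimension standard deviation of its l2-normalized outputs falls toward zero, whereas a healthy d-dimensional representation hovers around 1/sqrt(d). The snippet below is a minimal sketch of that check, not code from the repository; encoder and images are stand-ins.

import torch
import torch.nn.functional as F

def collapse_indicator(embeddings: torch.Tensor) -> float:
    """Mean per-dimension std of l2-normalized embeddings.

    Values near 1/sqrt(d) suggest a spread-out representation;
    values near 0 signal (partial) collapse.
    """
    z = F.normalize(embeddings, dim=1)
    return z.std(dim=0).mean().item()

# Hypothetical usage, with `encoder` and `images` as placeholders:
# print(collapse_indicator(encoder(images)))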

Non-Contrastive Learning

A brief overview of self-supervised learning by Meta AI can be read here, although for…

Photo by Hana Oliver on Unsplash

Introduction

In this series, we will implement SimSiam in PyTorch: a remarkably simple yet competitive self-supervised learning (SSL) technique that dispenses with the complicated tricks prevalent in other SSL algorithms, such as BYOL’s momentum encoder, SimCLR’s negative pairs, and SwAV’s online clustering. You can find the accompanying GitHub repository here.
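
Before the series proper, a rough sketch may convey how little machinery SimSiam actually needs. The names below (backbone, projector, predictor) are placeholders rather than the repository's actual modules, but the structure (two augmented views, a shared encoder, a prediction head on one branch, and a stop-gradient on the other) follows the SimSiam paper.

import torch
import torch.nn.functional as F

def simsiam_loss(p: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
    # Negative cosine similarity; detaching z is the stop-gradient
    # that keeps the network from collapsing to a constant output.
    return -F.cosine_similarity(p, z.detach(), dim=1).mean()

def training_step(backbone, projector, predictor, view1, view2):
    # Two augmented views of the same images pass through the same weights.
    z1 = projector(backbone(view1))
    z2 = projector(backbone(view2))
    p1, p2 = predictor(z1), predictor(z2)
    # Symmetrized loss, as in the SimSiam paper.
    return 0.5 * (simsiam_loss(p1, z2) + simsiam_loss(p2, z1))

Notice there is no momentum encoder, no negative-pair bookkeeping, and no clustering step.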

Audience: Because SimSiam…

Photo by Enrico Da Prato on Unsplash

Introduction

In this article, we will look at how SimCLR extracts features from images, computes their similarity, and optimizes the model to differentiate between positive and negative pairs.

Without further ado, let’s get coding!
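
As a preview of the similarity computation and the objective built on top of it, here is one common formulation (an NT-Xent-style loss) that scores every pair in a batch with cosine similarity and pushes each positive pair together relative to all negatives. Treat it as an illustrative sketch; the temperature value and variable names are placeholders rather than the article's final code.

import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss for a batch where z1[i] and z2[i] are views of the same image."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d)
    sim = z @ z.t() / temperature                         # pairwise cosine similarities
    # A sample is never its own negative, so mask the diagonal.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float('-inf'))
    # The positive for row i is its other augmented view; every remaining row is a negative.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)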

The Encoder

Older contrastive methods like AMDIM designed customized CNNs for the encoder to constrain the network’s receptive fields…
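
SimCLR, by comparison, gets away with an off-the-shelf backbone. A minimal sketch of wrapping a standard torchvision ResNet-50 as the encoder might look like the following; the projection head is left out here, and nothing below is tied to the repository's exact code.

import torch.nn as nn
from torchvision.models import resnet50

class Encoder(nn.Module):
    """A plain ResNet-50 whose classification head is removed."""

    def __init__(self):
        super().__init__()
        backbone = resnet50()                        # randomly initialized; no labels needed
        self.feature_dim = backbone.fc.in_features  # 2048 for ResNet-50
        backbone.fc = nn.Identity()                  # expose the pooled 2048-d features
        self.backbone = backbone

    def forward(self, x):
        return self.backbone(x)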

Photo by Maksym Tymchyk on Unsplash

Introduction

In this two-part series, we will talk about SimCLR, a straightforward yet effective framework for self-supervised learning (SSL) that teaches the model useful visual patterns without labels. It takes advantage of a technique known as contrastive learning, in conjunction with a few other tricks, to attain competitive performance on downstream…
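
To make "contrastive learning in conjunction with a few other tricks" slightly more concrete, the sketch below builds two augmented views of the same image, which is how positive pairs are formed (views of different images act as negatives). The specific transforms and their parameters are illustrative and not necessarily those used later in the series.

from torchvision import transforms

# Two random views of one image form a positive pair; views of
# different images serve as negatives.
simclr_augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
])

def two_views(image):
    """Return a positive pair built from a single PIL image."""
    return simclr_augment(image), simclr_augment(image)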

Photo by Rohit Tandon on Unsplash

Introduction

In this article, we will write the model’s backward pass, add modules for updating its parameters, and lay the foundation for gradient descent. You can find the GitHub repository here.

Without further ado, let’s get coding!

Linear Layer

Since we’ve already implemented the linear forward pass, the backward pass should be no…
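
As a rough preview of where this article is headed, the sketch below pairs a linear layer's backward pass with a plain gradient-descent update. It is written in NumPy for brevity, its names (Linear, step) are placeholders rather than the repository's actual API, and it assumes the forward pass caches its input.

import numpy as np

class Linear:
    """Fully connected layer with a hand-written backward pass."""

    def __init__(self, in_features: int, out_features: int):
        self.weight = np.random.randn(in_features, out_features) * 0.01
        self.bias = np.zeros(out_features)

    def forward(self, x: np.ndarray) -> np.ndarray:
        self.x = x                          # cache the input for the backward pass
        return x @ self.weight + self.bias

    def backward(self, grad_out: np.ndarray) -> np.ndarray:
        # Gradients of the loss w.r.t. the parameters and the layer's input.
        self.grad_weight = self.x.T @ grad_out
        self.grad_bias = grad_out.sum(axis=0)
        return grad_out @ self.weight.T

    def step(self, lr: float = 0.1):
        # Plain gradient descent update.
        self.weight -= lr * self.grad_weight
        self.bias -= lr * self.grad_bias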

Photo by Kirill Pershin on Unsplash

Introduction

In this article, we will finish our neural network’s forward pass by writing ReLU, a sequential module for stacking arbitrary layers on top of each other, and the mean squared error function. You can find the GitHub repository here.

Without further ado, let’s get coding!
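
Before diving in, here is a compact sketch of the three pieces just mentioned, written in NumPy with placeholder names rather than the repository's actual classes; it is meant only to frame what follows.

import numpy as np

class ReLU:
    """Elementwise max(0, x)."""
    def forward(self, x):
        return np.maximum(0.0, x)

class Sequential:
    """Runs arbitrary layers back to back."""
    def __init__(self, *layers):
        self.layers = layers

    def forward(self, x):
        for layer in self.layers:
            x = layer.forward(x)
        return x

def mse(pred, target):
    """Mean squared error over all elements."""
    return np.mean((pred - target) ** 2)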

ReLU

Previously, we saw the ease…

Borna Ahmadzadeh

Machine Learning Engineer at Lixr
