Blog posts

2023

Fine-tuning a pretrained model from Hugging Face Transformers with Flax

8 minute read

Pre-trained models are great. They’re trained on a lot of data that we normies probably couldn’t compile ourselves, and they take a lot of compute to train from scratch. Ever since BERT was released, the NLP community has been fine-tuning pre-trained models on its own datasets. It’s a great way to leverage the power of these models without having to train them yourself. Read more
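
For a taste of the setup, here’s a minimal sketch (illustrative, not the post’s exact code) of loading a pretrained checkpoint with its Flax weights via Hugging Face Transformers; the "bert-base-uncased" checkpoint and the two-label head are placeholder choices:

```python
from transformers import AutoTokenizer, FlaxAutoModelForSequenceClassification

# Illustrative checkpoint; the post may fine-tune a different one.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxAutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g. a binary classification head
)

# NumPy tensors feed straight into the Flax model.
inputs = tokenizer("Pre-trained models are great.", return_tensors="np")
logits = model(**inputs).logits  # shape (1, 2); the new head starts untrained
```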

Model Checkpointing using Orbax

9 minute read

So say you’ve trained a model using Flax: it trained fine, has a nice learning curve (train vs. validation), and now you want to save it. Or you want to save checkpoints of the model at specific stages of training and later use the best one for inference. Technically, all Flax modules are dataclasses, and the params (part of the model state in Flax) are what actually store the model, so checkpointing comes down to persisting the params. Read more
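
As a quick flavor of the idea, here’s a minimal sketch (assuming a toy Flax module and an illustrative checkpoint path, not the post’s exact code) of persisting a params PyTree with Orbax and restoring it:

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
import orbax.checkpoint as ocp

class MLP(nn.Module):
    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(16)(x))
        return nn.Dense(1)(x)

# Initialize to get the params PyTree (the thing we actually persist).
params = MLP().init(jax.random.PRNGKey(0), jnp.ones((1, 4)))["params"]

ckptr = ocp.PyTreeCheckpointer()
ckptr.save("/tmp/mlp_checkpoint", params)        # write the PyTree to disk
restored = ckptr.restore("/tmp/mlp_checkpoint")  # read it back for inference
```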

2022

Fedora post-installation steps

6 minute read

I use Fedora on my workstation to keep my home and lab computers consistent with each other. This is a collection of the post-installation steps I followed. If you plan to try Fedora sometime in the future and are looking for a guide, you can use this one as a reference. Read more

2020

Demystifying Autocomplete

13 minute read

How many times has the autocomplete feature on your phone’s keyboard app saved (or sometimes ruined, depending on what you wanted to type) your conversations? Judging from the number of texts and emails we send every day, the count would be staggering, and you may not even register it as significant because you’ve grown so used to this often overlooked and underrated piece of technology. You may even go so far as to say, “Eh, what’s so special about it anyway? It’s so simple: I write a word and it predicts the next thing.” Read more
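
To make “I write a word and it predicts the next thing” concrete, here’s a toy sketch of the idea (a simple bigram frequency model; the post itself may build something more elaborate):

```python
from collections import Counter, defaultdict

counts = defaultdict(Counter)

def train(corpus: str) -> None:
    # Count how often each word follows each other word.
    words = corpus.lower().replace(".", " ").split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

def suggest(word: str, k: int = 3) -> list:
    # Return the most frequent continuations seen after `word`.
    return [w for w, _ in counts[word.lower()].most_common(k)]

train("i am on my way. i am almost there. on my way home.")
print(suggest("on"))  # ['my']
print(suggest("i"))   # ['am']
```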