Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Fine-tuning a pretrained model from Hugging Face Transformers with flax
Published:
Pre-trained models are great. They’re trained on more data than we normies could probably compile ourselves, and they require a lot of compute to train from scratch. Ever since BERT was released, the NLP community has been fine-tuning pre-trained models on its own datasets. This is a great way to leverage the power of these models without having to train them from scratch. Read more
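As a taste of what the post covers, here is a minimal sketch of loading a pretrained checkpoint with the Flax classes from Hugging Face Transformers and setting up an optimizer for fine-tuning. The checkpoint name, label count, and learning rate are illustrative placeholders, not the post's actual setup.

```python
# A minimal sketch: load a pretrained checkpoint with the Flax classes from
# Hugging Face Transformers and set up an optimizer for fine-tuning.
# The checkpoint name, label count, and learning rate are placeholders.
import optax
from transformers import AutoTokenizer, FlaxAutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = FlaxAutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# AdamW is a common choice for fine-tuning; 2e-5 is a typical starting point.
optimizer = optax.adamw(learning_rate=2e-5)
opt_state = optimizer.init(model.params)

# Tokenize a toy batch and run a forward pass to check the wiring.
batch = tokenizer(["a sample sentence"], return_tensors="np", padding=True)
logits = model(**batch).logits
print(logits.shape)  # (1, 2)
```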
Model Checkpointing using Orbax
Published:
So say you’ve trained a model using flax: it trained fine, has a nice learning curve (train vs. validation), and now you want to save it. Or you want to save checkpoints of the model at specific stages of the training process and later use the best checkpoint for inference. Technically, all flax modules are dataclasses, and params (part of the model state in flax) are what store the model, so what we need to do for checkpointing is persist the params. Read more
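For flavor, a minimal sketch of what persisting a params pytree with Orbax can look like; the directory path and the toy params are illustrative, not the post's actual code.

```python
# A minimal sketch: persist a flax params pytree with Orbax and restore it
# later for inference. The path and the toy params are illustrative.
import jax.numpy as jnp
import orbax.checkpoint as ocp

params = {"dense": {"kernel": jnp.ones((4, 2)), "bias": jnp.zeros(2)}}

# PyTreeCheckpointer serializes an arbitrary pytree of arrays to disk.
checkpointer = ocp.PyTreeCheckpointer()
checkpointer.save("/tmp/ckpt/step_100", params)

# Later, e.g. at inference time, restore the saved params.
restored = checkpointer.restore("/tmp/ckpt/step_100")
```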
Fedora post installation steps
Published:
I use Fedora on my workstation to keep my home and lab computers consistent with each other. This is a collection of the post-installation steps I followed. If you plan to use Fedora sometime in the future and are looking for a guide, you can use this one as a reference. Read more
Using Nvidia GPUs in Podman containers in Fedora 37
Published:
Okay why not docker though?
Read more
Paper Summary : Feng et al. (2018) : Pathologies of Neural Models Make Interpretations Difficult
Published:
Paper Information
Read more
Paper Summary : Yao, Zhang et al. (2020): Non-deterministic and emotional chatting machine: learning emotional conversation generation using conditional variational autoencoders
Published:
Paper information
Yao, Zhang et al. (2020): Non-deterministic and emotional chatting machine: learning emotional conversation generation using conditional variational autoencoders, Neural Computing and Applications. Read more
Paper Summary : Rudinger et al. (2018) : Gender Bias in Coreference Resolution
Published:
Paper Information
Read more
Demystifying Autocomplete
Published:
How many times has the autocomplete feature on your phone’s keyboard app saved (or sometimes ruined, depending on what you wanted to type) your conversations? Judging from the number of texts and emails we send around every day, the count would be staggering. You may not even register it as something significant, because you’ve grown so used to this often overlooked and underrated piece of technology. You may even go on and say, “Eh, what’s so special about it anyway? It’s so simple: I write a word and it predicts the next thing.” Read more
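The post presumably goes much deeper, but as a toy illustration of "I write a word and it predicts the next thing", here is a bigram-count predictor; the corpus is a placeholder.

```python
# A toy next-word predictor: count word bigrams in a corpus and suggest the
# most frequent followers of the current word. The corpus is a placeholder.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def autocomplete(word, k=3):
    """Return up to k most likely next words after `word`."""
    return [w for w, _ in followers[word].most_common(k)]

print(autocomplete("the"))  # ['cat', 'mat']
```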
Collocation discovery with PMI
Published:
What is a collocation?
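Before following the link: pointwise mutual information (PMI) scores how much more often two words co-occur than chance would predict, PMI(x, y) = log2(p(x, y) / (p(x) p(y))). A toy sketch, with an illustrative corpus rather than the post's actual example:

```python
# A minimal sketch of scoring candidate collocations with pointwise mutual
# information (PMI); the tiny corpus is illustrative only.
import math
from collections import Counter

tokens = "new york is a city new york has a new mayor".split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n_uni, n_bi = len(tokens), len(tokens) - 1

def pmi(x, y):
    """PMI(x, y) = log2( p(x, y) / (p(x) * p(y)) )."""
    p_xy = bigrams[(x, y)] / n_bi
    p_x, p_y = unigrams[x] / n_uni, unigrams[y] / n_uni
    return math.log2(p_xy / (p_x * p_y))

# "new york" co-occurs far more often than chance predicts, so PMI is high.
print(round(pmi("new", "york"), 2))
```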
Read more
Feature scaling strategy - Mean, Median or Mode?
Published:
What is feature scaling?
Read more
Publications
TV series recommendation using fuzzy inference system, K-Means clustering and adaptive neuro fuzzy inference system
Published in 2017 13th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD), 2017
Recommending TV series is a more challenging task than recommending movies. Not only should the system consider the taste of the user, it also has to take the time-commitment factor into account, because a TV series can contain thousands of episodes. This paper proposes a way of recommending TV series by analyzing users’ genre preferences from movies, the genre of the TV series, and the number of episodes. The system infers each user’s genre preferences from movie data using a fuzzy inference system, groups users of similar taste into clusters using K-Means, and finally applies an adaptive neuro-fuzzy inference system within each cluster to predict the rating the user might give that TV series in real life. Read more
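As a rough illustration of the clustering stage described above (not the paper's actual configuration), K-Means can group users by their inferred genre-preference vectors:

```python
# A sketch of the user-clustering stage only: group users by genre-preference
# vectors with K-Means. The feature vectors and k=2 are placeholders.
import numpy as np
from sklearn.cluster import KMeans

# Each row: one user's genre preference scores (e.g. action, comedy, drama),
# as would come out of the fuzzy inference stage.
genre_prefs = np.array([
    [0.9, 0.1, 0.2],
    [0.8, 0.2, 0.1],
    [0.1, 0.9, 0.7],
    [0.2, 0.8, 0.9],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(genre_prefs)
print(kmeans.labels_)  # users with similar taste share a cluster label
```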
Fruit Image Classification Using Convolutional Neural Networks
Published in International Journal of Software Innovation (IJSI), 7(4), 2019
Convolutional neural networks (CNNs) are currently the most popular class of models for image recognition and classification tasks. Most superstores and fruit vendors resort to human inspection to check the quality of the fruit stored in their inventory. However, this process can be automated. We propose a system that can be trained on a fruit image dataset and then detect whether a fruit is rotten or fresh from an input image. We built the initial model on top of the Inception V3 model and trained it on our dataset using transfer learning. Read more
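A sketch of the transfer-learning recipe the abstract describes, in Keras; the input size, classification head, and optimizer are assumptions rather than the paper's exact setup:

```python
# A sketch of transfer learning with Inception V3: reuse the pretrained
# convolutional base and train a small fresh-vs-rotten head on top.
# Input size, head layout, and optimizer are assumptions.
import tensorflow as tf

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3)
)
base.trainable = False  # freeze the pretrained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # fresh vs. rotten
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # datasets are hypothetical
```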
MMTEB: Massive Multilingual Text Embedding Benchmark
Published in The Thirteenth International Conference on Learning Representations (ICLR), 2025
Text embeddings are typically evaluated on a limited set of tasks, which are constrained by language, domain, and task diversity. To address these limitations and provide a more comprehensive evaluation, we introduce the Massive Multilingual Text Embedding Benchmark (MMTEB) - a large-scale, community-driven expansion of MTEB, covering over 500 quality-controlled evaluation tasks across 250+ languages. MMTEB includes a diverse set of challenging, novel tasks such as instruction following, long-document retrieval, and code retrieval, representing the largest multilingual collection of evaluation tasks for embedding models to date. Using this collection, we develop several highly multilingual benchmarks, which we use to evaluate a representative set of models. We find that while large language models (LLMs) with billions of parameters can achieve state-of-the-art performance on certain language subsets and task categories, the best-performing publicly available model is multilingual-e5-large-instruct with only 560 million parameters. To facilitate accessibility and reduce computational cost, we introduce a novel downsampling method based on inter-task correlation, ensuring a diverse selection while preserving relative model rankings. Furthermore, we optimize tasks such as retrieval by sampling hard negatives, creating smaller but effective splits. These optimizations allow us to introduce benchmarks that drastically reduce computational demands. For instance, our newly introduced zero-shot English benchmark maintains a ranking order similar to the full-scale version but at a fraction of the computational cost. Read more
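For readers who want to try the benchmark family, evaluation typically goes through the mteb library; a sketch of common usage, where the task selection is a small illustrative stand-in rather than MMTEB itself:

```python
# A sketch of evaluating an embedding model with the `mteb` library.
# The task name here is a small illustrative stand-in, not the MMTEB suite.
import mteb
from sentence_transformers import SentenceTransformer

# The paper's best-performing small open model, per the abstract.
model = SentenceTransformer("intfloat/multilingual-e5-large-instruct")

tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
```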