I'm starting a new model line, Locus. These models aren't fine-tuned, they're de-tuned 🤗. What I mean by that is I remove a percentage of the corporate-tuned speech patterns like "why this matters," "no fluff," and "as a large language model." By reducing these habitual RLHF-based patterns in model responses, I've had higher success rates with personality adoption. I've fine-tuned on the Locus models myself, so you can chat with the post-fine-tune result, or just trust me and try it yourself!
I don't aim to remove guardrails or the LLM identity entirely; what I want to do is dampen RLHF to a manageable volume. Personality models perform better with guardrails intact, no different from humans with moral guidelines and boundaries. Refusals can help steer and mold personality. RLHF, however, drowns out adaptability, so I'm cranking it down for you to crank your project up!
Turns out: if we predict 🌏 Earth, we can spend a lot more time looking for interesting things and a lot less time looking at things we expect to see.
Sentinel-2 imagery 🛰️ takes a long time to downlink to Earth, so our "near real time" systems are quite far from that in practical terms.
Meanwhile, if we "predict" what we will see, based on what we do see, we can send down much less data in a timely way and prioritize the 📡 Earth-bound response.
I'm talking about illegal fishing, logging, mining, or building in nature reserves: the more of that we predict early, the more we're able to stop in time.
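The "predict, then downlink only surprises" idea above can be sketched in a few lines. Everything here is a toy illustration: real Sentinel-2 tiles are multi-band rasters and the predictor would be a learned model, but the prioritization logic is the same — flag tiles whose observation deviates from the prediction.

```python
def surprising_tiles(observed, predicted, threshold=0.2):
    """Return indices of tiles whose mean absolute error vs. the
    prediction exceeds `threshold` -- these get downlink priority."""
    flagged = []
    for i, (obs, pred) in enumerate(zip(observed, predicted)):
        mae = sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)
        if mae > threshold:
            flagged.append(i)
    return flagged

# Toy example: tile 1 deviates strongly from what was expected,
# so only tile 1 needs to come down in the priority channel.
observed = [[0.1, 0.1, 0.1], [0.9, 0.8, 0.9], [0.2, 0.2, 0.2]]
predicted = [[0.1, 0.1, 0.1], [0.1, 0.1, 0.1], [0.2, 0.3, 0.2]]
print(surprising_tiles(observed, predicted))  # [1]
```

Instead of shipping every tile, the satellite only ships the flagged ones plus a sparse summary, which is what makes the response timely.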
6 Open-Source Libraries to Fine-Tune LLMs

1. Unsloth
GitHub: https://github.com/unslothai/unsloth
→ Fastest way to fine-tune LLMs locally
→ Optimized for low VRAM (even laptops)
→ Plug-and-play with Hugging Face models

3. TRL (Transformer Reinforcement Learning)
GitHub: https://github.com/huggingface/trl
→ RLHF, DPO, PPO for LLM alignment
→ Built on Hugging Face ecosystem
→ Essential for post-training optimization

4. DeepSpeed
GitHub: https://github.com/microsoft/DeepSpeed
→ Train massive models efficiently
→ Memory + speed optimization
→ Industry standard for scaling

6. PEFT
GitHub: https://github.com/huggingface/peft
→ Fine-tune with minimal compute
→ LoRA, adapters, prefix tuning
→ Best for cost-efficient training
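To see why the LoRA technique PEFT implements is so cheap, here's a from-scratch sketch of the core math: instead of updating a d × k weight matrix W, you learn low-rank factors B (d × r) and A (r × k) and use W + (alpha/r) · BA at inference. All matrices and values below are toy numbers for illustration, not PEFT's API.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply, just to keep the sketch dependency-free."""
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """W + (alpha / r) * B @ A -- the merged LoRA weight."""
    delta = matmul(B, A)
    return [[w + (alpha / r) * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

d, k, r = 4, 4, 1
W = [[0.0] * k for _ in range(d)]   # frozen base weight (toy: zeros)
B = [[1.0] for _ in range(d)]       # d x r trainable factor
A = [[0.5] * k]                     # r x k trainable factor
W_eff = lora_effective_weight(W, A, B, alpha=2, r=r)
print(W_eff[0][0])                  # 1.0

# Trainable params: d*r + r*k = 8, vs d*k = 16 for full fine-tuning;
# the gap widens dramatically at real hidden sizes (d, k in the thousands).
```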
Okay, maybe I'm a little obsessed with LR schedulers ATM. I ran an SST-2 sentiment classification eval using the nyu-mll/glue dataset on distilbert/distilbert-base-uncased-67M to see how different schedulers perform.
I think I've graduated from ML enthusiast to full blown data hoarder and I don't know if I can turn back now.
Anyway, I evaluated the two schedulers that I designed as well and was pretty happy with the performance of both overall, so hell yeah to that. Guess I'll go grab some more graphs.
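For anyone curious what's actually being compared in an eval like this, here are pure-Python versions of two standard LR decay schedules. The formulas are the usual linear-decay and cosine-decay definitions; the base LR and step count are toy values, not the eval's settings.

```python
import math

def linear_lr(step, total_steps, base_lr):
    """LR decays in a straight line from base_lr to 0."""
    return base_lr * (1 - step / total_steps)

def cosine_lr(step, total_steps, base_lr):
    """LR follows half a cosine from base_lr to 0."""
    return base_lr * 0.5 * (1 + math.cos(math.pi * step / total_steps))

total, base = 100, 2e-5
# Both start at base_lr and reach 0, but cosine holds the LR higher
# early in training and drops faster toward the end -- often the whole
# difference in final metrics.
print(linear_lr(25, total, base))   # 1.5e-05
print(cosine_lr(25, total, base))   # ~1.707e-05
```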
Okay, I may have been talking out of my ass about my scheduler using less VRAM compared to a full fine-tune (FFT). What I did find, though: training only ~30% of the model's weights per step consistently beat dense SFT on Hendrycks Math across 3 different seeds.
What makes it interesting isn't just the sparsity: it's that no two consecutive windows share the same active layers. The model never has a stable path from input to output decision. Adjacent layers are rarely both alive at the same time, so the model can't build shortcuts between them. I started developing this to reduce semantic redundancy across layers and stumbled onto something I didn't expect.
Setup: Qwen2.5-3B-Instruct, simplescaling/s1K (1k reasoning traces), 5 epochs, LR 1e-5, the adamw_torch_fused optimizer, and a cosine scheduler alongside my Lucky Pick scheduler, on an AMD MI300X 192GB.
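The "no two consecutive windows share active layers" constraint is easy to sketch. This is my reading of the idea, not the actual implementation: each window samples ~30% of layers, restricted to layers that were inactive in the previous window. Layer count, fraction, and seed are illustrative.

```python
import random

def sample_windows(n_layers, frac, n_windows, seed=0):
    """Sample one active-layer set per window; consecutive windows
    are forced to be disjoint."""
    rng = random.Random(seed)
    k = max(1, int(n_layers * frac))
    windows, prev = [], set()
    for _ in range(n_windows):
        candidates = [l for l in range(n_layers) if l not in prev]
        active = set(rng.sample(candidates, k))
        windows.append(active)
        prev = active
    return windows

wins = sample_windows(n_layers=36, frac=0.3, n_windows=4)
for a, b in zip(wins, wins[1:]):
    assert not (a & b)  # consecutive windows never share a layer
```

Note the constraint only works while frac ≤ 0.5; beyond that there aren't enough previously-inactive layers left to sample from.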
Since everyone liked my previous announcement post (https://huggingface.co/posts/Tonic/338509028435394) so much, I'm back with more high-quality procedural datasets in the geospatial domain for SFT training!
Okay, I had way too much fun trying to make the unsloth-bot hallucinate incorrect answers like so many frontier models have done to me in the past regarding fine-tuning and general machine learning. Learning to fine-tune LLMs could have been so much simpler had this been available when I began screwing around with neural networks.
I dropped a new scheduler I created last week, the Lucky Pick Scheduler, without much explanation of what it is or how it works. It was just a Modal-ready app that anyone could have launched and troubleshot their way around.
I've decided I'm going to enter it into the AMD hackathon. Today I started putting together a GitHub repo with a few extra additions to the scheduler itself.
Essentially it's a training scheduler that randomly drops layers/heads/channels every ~50 steps during fine-tuning, holds the topology frozen, then reshuffles. In theory the model has to build distributed representations because it never trains through the same compute path for long.
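The drop-freeze-reshuffle cycle described above can be sketched as a training-loop skeleton. This is a toy illustration of my understanding, not the repo's code: "training" is a stub, and in a real PyTorch run you'd toggle `requires_grad` on the chosen modules instead of a plain flag.

```python
import random

class Layer:
    """Stand-in for a transformer block; only tracks trainability."""
    def __init__(self, idx):
        self.idx = idx
        self.trainable = True

def reshuffle(layers, keep_frac, rng):
    """Draw a fresh random topology: keep ~keep_frac of layers trainable."""
    k = max(1, int(len(layers) * keep_frac))
    chosen = set(rng.sample(range(len(layers)), k))
    for layer in layers:
        layer.trainable = layer.idx in chosen

rng = random.Random(0)
layers = [Layer(i) for i in range(12)]
window = 50
for step in range(200):
    if step % window == 0:          # every ~50 steps: new random topology
        reshuffle(layers, keep_frac=0.5, rng=rng)
    # ... one optimizer step over only the trainable layers ...
active = sum(layer.trainable for layer in layers)
print(active)  # 6
```

Because the topology is held frozen for the whole window, each subnetwork gets enough steps to make real progress before the path through the model changes again.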
And with less gradient memory, bigger models are able to fit on smaller hardware.
It's now close to fully capable of automatically configuring itself to any language model. I've tested it on:
This is the best set of AI and ML books and a full guide to learning machine learning from the ground up. It's the study material I used myself, so I thought it would be helpful to share it with others. Like, share, and add it to your collection at Ujjwal-Tyagi/ai-ml-foundations-book-collection.
We are hiring at Shirova AI. We need AI researchers and engineers to work in our research lab. Shirova AI is a research lab in India; we can help our researchers relocate to nearby workspaces or let them work fully from home without ever coming to the lab. We're building our founding team, so the pay will be good. You'll have plenty of room to learn, so don't hesitate to email us at: careers@shirova.com