Train Your Own LLM from Scratch

(github.com)

308 points | by kristianpaul 8 hours ago

22 comments

  • jvican 7 hours ago
    If you're interested in this resource, I highly recommend checking out Stanford's CS336 class. It covers this entire curriculum in a lot more depth, introduces you to a lot of the theoretical aspects (scaling laws, intuitions) and to systems thinking (kernel optimization/profiling). For that, you have to do the assignments, of course... https://cs336.stanford.edu/
  • NSUserDefaults 6 hours ago
    Been doing it since the day I was born. The beginnings were hard but I’m getting there.
    • hliyan 5 hours ago
      You've actually been primarily training a physics model, with an LLM attached to it.
      • falcor84 20 minutes ago
        Good point, and I'm actually not sure that there is a clear dividing line. I expect that once we achieve capable world models and are able to analyze their internals, we'll find that the prediction mechanisms for purely physical and for verbal/behavioral responses to the agent's actions are at least partially colocated.

        As particular motivation for my intuition: I expect there was evolutionary pressure to adapt the defense mechanisms we use for predicting the movements of predators and prey so that they also handle human opponents.

  • JoeDaDude 5 hours ago
    Coincidentally, I just started on Build a Large Language Model (From Scratch), a repo/book/course by Sebastian Raschka [0][1][2]. Maybe it's a good problem to have, having to decide which learning resource to use.

    [0] https://github.com/rasbt/LLMs-from-scratch

    [1] https://www.manning.com/books/build-a-large-language-model-f...

    [2] https://magazine.sebastianraschka.com/p/coding-llms-from-the...

    • gchadwick 2 hours ago
      I really enjoyed the book. Great for people who want to understand the real nuts and bolts and have worked examples of all of the calculations.
  • antirez 5 hours ago
    Context: he is one of the MLX developers, a skilled ML researcher.
    • thrww26 10 minutes ago
      Source? I think that's not correct.
  • y42 3 hours ago
    shameless plug:

    A series of Jupyter notebooks explaining the whole machinery of machine learning, from the ground up:

    https://github.com/nickyreinert/DeepLearning-with-PyTorch-fr...

    and, of course, how to build an LLM from scratch:

    https://github.com/nickyreinert/basic-llm-with-pytorch/blob/...

  • kriro 5 hours ago
    I did this back in the day when fast.ai was relatively new, using ULMFiT. This must have been when BERT was SOTA. The architecture lets you train a base language model and then specialize it with a head. I used all of Wikipedia for the base and then some GBs of tweets I had collected through the firehose. I had access to a lab with 20 game-dev computers, which must have had roughly RTX 2080s. One training cycle took about half a day for the tokenized Wikipedia, so I hyperparameter-tuned by running a different setting on each computer and then moving on with the winner as the starting point for the next day. It was always fun to come to work the next morning and check the results.

    The engineering was horrible and very ad hoc, but I learned a lot. The results were OK-ish (I classified tweets), but it gave me a good perspective on the sheer GPU power (and engineering challenges) one would need to do this seriously. I didn't fully grasp the potential of generating output, but I spent quite some time chuckling at generated tweets (I was just curious to try it).
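
    For anyone curious what that base-plus-head workflow looks like, here is a minimal sketch in today's fastai API (the dataframe and column names are hypothetical, and the original run predates this version of the library):

      from fastai.text.all import *

      # 1. Fine-tune a Wikipedia-pretrained AWD-LSTM language model on the tweet corpus.
      dls_lm = TextDataLoaders.from_df(tweets_df, text_col='text', is_lm=True)
      lm_learn = language_model_learner(dls_lm, AWD_LSTM, metrics=Perplexity())
      lm_learn.fine_tune(1)
      lm_learn.save_encoder('tweet_encoder')

      # 2. Reuse that encoder under a classification head and fine-tune on the labels.
      dls_clas = TextDataLoaders.from_df(tweets_df, text_col='text', label_col='label',
                                         text_vocab=dls_lm.vocab)
      clas_learn = text_classifier_learner(dls_clas, AWD_LSTM, metrics=accuracy)
      clas_learn.load_encoder('tweet_encoder')
      clas_learn.fine_tune(3)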

  • Miles_Stone 1 hour ago
    This is a really interesting direction. Thanks for sharing!
  • ofsen 6 hours ago
    This looks like an exact copy of this Andrej Karpathy video ( https://youtu.be/kCc8FmEb1nY ), but in written form. Am I wrong?
    • mellosouls 6 minutes ago
      The page describes its relationship to nanoGPT:

      ...nanoGPT targets reproducing GPT-2 (124M params) and covers a lot of ground. This project strips it down to the essentials and scales it to a ~10M param model that trains on a laptop in under an hour...

    • drcongo 3 hours ago
      Yes, you are.
  • fabian_shipamax 3 hours ago
    If anyone is interested, I'm giving short courses via AI Study Camp with a walkthrough on how to train your own LLM from scratch.
  • steveharing1 5 hours ago
    The documentation is helpful enough to get started.
  • hiroakiaizawa 7 hours ago
    Nice. What scale does this realistically reach on a single machine?
    • lynx97 6 hours ago
      Model: 36L/36H/576D, 144.2M params

      runs on a Blackwell 6000 Max-Q, using 86GB VRAM. Training supposedly takes 3h40m
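
      For context, that parameter count is consistent with the usual rule of thumb of roughly 12·L·d² weights in the transformer blocks (4·d² for attention plus 8·d² for a 4x MLP per layer), ignoring embeddings and norms. A quick back-of-envelope check in Python:

        n_layers, d_model = 36, 576
        block_params = 12 * n_layers * d_model**2   # attention (4*d^2) + MLP (8*d^2) per layer
        print(f"{block_params / 1e6:.1f}M")         # ~143.3M, close to the reported 144.2M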

  • iamnotarobotman 8 hours ago
    This looks great for a first introduction to training LLMs, and it looks simple enough to try this locally. Great job!
  • baalimago 7 hours ago
    Train your LM from scratch*

    I doubt you have a machine big enough to make it "Large".

    • utopiah 3 hours ago
      If you have a credit card with a "normal" ceiling, you can probably rent enough compute on neocloud providers like HuggingFace or Mistral Forge.

      I'm not saying it's worth it but you don't need to buy a GPU yourself to be able to train.

      • busfahrer 1 hour ago
        This is the whole point of Karpathy's nanochat, which OP refers to: training a GPT-2-level LLM for under $100 by renting an 8xH100 VM.
    • mips_avatar 6 hours ago
      You can fully train a 1.6B model on a single 3090. That's a reasonably big model.
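
      Whether that fits in a 3090's 24 GB depends a lot on the optimizer setup. A rough, assumption-laden budget (activations and framework overhead not counted):

        n_params = 1.6e9
        adam_fp32 = n_params * (4 + 4 + 4 + 4)      # fp32 weights, grads, Adam m and v
        adam_8bit = n_params * (2 + 2 + 1 + 1 + 4)  # bf16 weights/grads, 8-bit m/v, fp32 master weights
        print(f"{adam_fp32 / 2**30:.1f} GiB vs {adam_8bit / 2**30:.1f} GiB")  # ~23.8 vs ~14.9 GiB
        # Plain fp32 Adam barely fits before counting activations, so tricks like 8-bit optimizers,
        # gradient checkpointing, or small batches are what make it practical on a 24 GB card.
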
    • nucleardog 7 hours ago
      Hey now! I've got a half terabyte of RAM at my disposal! I mean, it's DDR4 but... it's RAM!

      And it's paired with 48 processor cores! I mean, they don't even support AVX512 but they can do math!

      I could totally train a LLM! Or at least my family could... might need my kid to pick up and carry on the project.

      But in all seriousness... you either missed the point, are being needlessly pedantic, or are... wrong?

      This is about learning concepts, and the rest of this is mostly moot.

      On the pedantic-or-wrong note: what is the documented cut-off for a "large" language model? Because GPT-2 was, and still is, described as a "large" language model. It had 1.5B parameters. You can just about get a consumer GPU capable of training that for about $400 these days.

      • baalimago 4 hours ago
        Yeah, it's just a semantic pet peeve. Let me ask you this: what is a "Language Model", if this is a "Large Language Model"? Conversely, if a 1.5B model is "Large", then what are the recent 1T-param models? "Superlarge"?

        In my own very humble opinion, it becomes "Large" when it's out of reach of non-specialized hardware. So currently, a model which requires more than 32 GB of VRAM is large (as that's roughly where high-end gaming GPUs top out).

        And by the way, there is no way you can train a language model on a CPU, even with DDR5, unless you're willing to wait a whole week for a single training cycle. Give it a go! I know I did, and it's an order of magnitude away from being feasible.
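
        For a rough sense of the gap, here is a back-of-envelope estimate using the common ~6·N·D FLOPs approximation for training; the throughput numbers are assumptions, not measurements:

          n_params, n_tokens = 10e6, 1e9         # assumed ~10M-param model, 1B training tokens
          total_flops = 6 * n_params * n_tokens  # ~6*N*D rule of thumb for one training run
          gpu_flops, cpu_flops = 50e12, 0.5e12   # assumed sustained FLOP/s for a decent GPU vs a CPU
          print(f"GPU: {total_flops / gpu_flops / 3600:.1f} h, CPU: {total_flops / cpu_flops / 3600:.0f} h")
          # ~0.3 h vs ~33 h under these assumptions: one to two orders of magnitude apart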

        • joefourier 28 minutes ago
          Calling anything "large" in computing is problematic, since hardware keeps improving. GPT-1 was an LLM in 2018 and had 117M parameters; when did it stop being large?

          GPT would have been a better term than LLM, but unfortunately became too associated with OpenAI. And then, what about non-transformer LLMs? And multimodal LLMs?

          Maybe we should just give up, shrug and call it "AI".

      • Malcolmlisk 5 hours ago
        Then rewrite the title and call it "learn how to build a non-usable LLM from scratch".
        • improbableinf 5 hours ago
          Opus 4.7 is non-usable for the tasks I have — but it’s considered an LLM.

          And no one is stopping anyone from tweaking a few parameters in this repo to go above 10M parameters.

          • skinfaxi 1 hour ago
            What tasks is it non-usable for?
  • DeathArrow 3 hours ago
    I would start with linear algebra, some calculus and statistics, and with understanding how a neural network - which really is just one type of ML - works; then learn the basics of CNNs and RNNs, then transformers and LLMs.

    But that's just me. I think it's more useful to understand the hows and whys before training an LLM.

  • yjaspar 5 hours ago
    That’s actually super interesting
  • rithdmc 3 hours ago
    I know it's a bit of a joke, but "I Built a Neural Network from Scratch in SCRATCH" gave me, a complete outsider, a lot of insight into how neural networks work.

    https://www.youtube.com/watch?v=5COUxxTRcL0
