Llama 3.1 is a family of open-source, instruction-tuned models from Meta. These multilingual models offer a 128K-token context length, state-of-the-art tool use, and strong reasoning capabilities. The 8B variant is a lightweight, ultra-fast model that can run almost anywhere.