GPT-4 is great at coding, but the competition is getting closer!
Last week, Meta released Code Llama, a fine-tuned version of the open-source Llama 2. Code Llama is incredible at one job: generating code… Surprise 🙂
In fact, Meta released nine versions of the model: three sizes (7B, 13B, and 34B) in three variants:
- Code Llama ➡️ the foundational model.
- Code Llama - Instruct ➡️ fine-tuned to follow human instructions.
- Code Llama - Python ➡️ specialized for Python code.
And I'm most excited about the Python version!
But today, we'll use the smallest base model: Code Llama 7B. By the end of this article, you'll have learned:
• How to use a GPU on Colab
• How to get access to Code Llama from Meta
• How to create a Hugging Face pipeline
• How to load and tokenize Code Llama with Hugging Face
• And finally, how to generate code with Code Llama! 🙂
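To preview where we're headed, here's a minimal sketch of loading Code Llama 7B with a Hugging Face `pipeline`. The checkpoint name `codellama/CodeLlama-7b-hf` and the generation settings are illustrative assumptions; the model is gated, so you need to request access from Meta first.

```python
# Sketch only: assumes you have access to the gated
# "codellama/CodeLlama-7b-hf" checkpoint and a GPU runtime.
import torch
from transformers import pipeline

model_id = "codellama/CodeLlama-7b-hf"
prompt = "def factorial(n):"  # a code prefix, not an instruction

if torch.cuda.is_available():
    generator = pipeline(
        "text-generation",
        model=model_id,
        torch_dtype=torch.float16,  # half precision to fit on a T4
        device_map="auto",
    )
    result = generator(prompt, max_new_tokens=64, do_sample=False)
    print(result[0]["generated_text"])
```

We'll build this up step by step in the sections below.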
The base Code Llama is not an instruction-tuned model.
What does that mean for us?
We do NOT prompt it with natural-language instructions, such as "Write a function that returns the factorial of a given number."
Instead, we prompt it with code, like this: "def factorial(n):". Code Llama then completes the code for us.
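For instance, given that prefix, a plausible completion (written out here as plain Python for illustration, not actual model output) would look like this:

```python
def factorial(n):
    # A typical completion: recursive factorial with a base case.
    if n <= 1:
        return 1
    return n * factorial(n - 1)

print(factorial(5))  # → 120
```

The model only ever sees the first line; everything after the colon is what it generates.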
Note: At the end of this article, you'll find all the useful links, including a ready-to-go Colab notebook with the project described here (and a video version of this guide).
Let's dive in and have some fun!
Before we move on to the code, you'll need to invest two minutes in these three necessary steps:
- Make sure you've switched your Colab runtime to GPU for the best performance. In the top menu, go to Runtime -> Change runtime type and select "T4 GPU".
- Create an account…