Meta Unveils Llama 2, the Enhanced Suite of Text-Generating Models for Better Assistance
July 19, 2023
Meta has launched Llama 2, a new generation of AI models intended to rival chatbots like OpenAI’s ChatGPT and Bing Chat. Llama 2 is pretrained and available for fine-tuning on AWS, Azure, and Hugging Face’s AI model hosting platform, and it is easier to run: Meta has optimized it for Windows as well as for smartphones and PCs powered by Qualcomm’s Snapdragon system-on-chip. Llama 2 comes in two versions, Llama 2 and Llama 2-Chat, each available in three sizes depending on the parameter count. It was trained on two trillion tokens, far more than its predecessor, and is designed to generate text more effectively. Despite potential issues such as biases in the training data and toxicity, the overall aim is for Llama 2 to contribute to the development of safer and more beneficial generative AI.
What does it mean?
- Pretrained: This refers to AI models that have been previously trained on a large dataset. These pretrained models can be further fine-tuned for a specific task, saving a lot of computation resources and time.
- Fine-tuning: This is a process of training where a pretrained model is further trained (i.e., its prelearned patterns are refined) on a new task that is similar to, but not identical to, the one it was originally trained on.
- Hugging Face: A company that has developed an open-source library for natural language processing (NLP) tasks, including training and using AI models to understand and generate human language.
- System-on-chip (SoC): A type of integrated circuit that includes all the components of a computer or other system on a single chip. It's commonly used in mobile devices, where it can perform many functions while taking up less space and using less power.
- Snapdragon: A suite of system-on-chip semiconductor products for mobile devices designed and marketed by Qualcomm.
- Parameter count: In the context of AI models, parameters are the parts of the model that are learned from historical training data. The parameter count refers to the number of these parameters, which often correlates with the size and complexity of the model.
- Tokens: In natural language processing, a token is the smallest unit of text a model processes. A token is often a whole word, but it can also be part of a word or a punctuation mark. NLP models are trained on tokens to understand and generate human language.
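To make the token concept above concrete, here is a toy sketch in plain Python. It is not Llama 2's actual tokenizer (real tokenizers use learned vocabularies built with methods like BPE); the vocabulary and the greedy longest-match rule here are illustrative assumptions only:

```python
# Toy subword tokenizer: common pieces stay whole, rare words are split
# into smaller known pieces. The vocabulary below is made up for the demo.
VOCAB = {"un", "break", "able", "the", "cat", "s"}

def toy_tokenize(word, vocab):
    """Greedy longest-match split, a simplified sketch of how real
    tokenizers break text into tokens."""
    tokens, i = [], 0
    while i < len(word):
        # Try the longest possible piece starting at position i first.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])  # unknown character: fall back to one char
            i += 1
    return tokens

print(toy_tokenize("unbreakable", VOCAB))  # ['un', 'break', 'able']
```

The point is simply that a "token" is whatever unit the model's vocabulary contains, which is why training-set sizes (such as Llama 2's two trillion tokens) are counted in tokens rather than words.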
Does reading the news feel like drinking from the firehose?
Do you want more curation and in-depth content?
Then, perhaps, you'd like to subscribe to the Synthetic Work newsletter.
Many business leaders read Synthetic Work, including:
CEOs
CIOs
Chief Investment Officers
Chief People Officers
Chief Revenue Officers
CTOs
EVPs of Product
Managing Directors
VPs of Marketing
VPs of R&D
Board Members
and many other smart people.
They are turning the most transformative technology of our times into their biggest business opportunity ever.
What about you?