Today (18/4), Meta announced the first two models of the next generation of Llama, Meta Llama 3, ready for broad use. This new version includes pre-trained and instruction-fine-tuned language models with 8B and 70B parameters, capable of meeting a wide range of needs. Following its tradition of supporting the open source community, Meta made Llama 3 openly available. This also means that Meta AI, the artificial intelligence assistant present on the company's social networks, will be improved as well, including the ability to create images in real time in WhatsApp. See more!
Meet Llama 3

With Llama 3, the company intends to develop open models that rival the best proprietary language models currently available, such as GPT-4. The developer experience has been prioritized to improve the overall usability of Llama 3 while, according to Meta, "maintaining a commitment to leadership in the responsible use and deployment of Large Language Models (LLMs)".
Following open source principles, an early-and-frequent release approach was adopted, allowing the community to access and contribute to the development of these models in real time. The text-based models introduced today are just the first in the Llama 3 series. The vision for the future includes expanding Llama 3 to offer multilingual and multimodal capabilities, increase context length, and continually improve performance across key aspects of LLMs, such as reasoning and coding.
In line with the design approach of Llama 3, Meta opted for a decoder-only transformer architecture, which is a standard choice. Llama 3 employs a tokenizer with an expanded vocabulary of 128 thousand tokens, resulting in more efficient language encoding and, consequently, improved model performance. To optimize inference efficiency, the company incorporated grouped query attention (GQA) in both model sizes: 8B and 70B. During training, the sequences used have a length of 8,192 tokens, and a mask is applied to ensure that self-attention does not cross document boundaries.
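To give an intuition for grouped query attention, here is a minimal NumPy sketch. It is illustrative only, not Llama 3's actual implementation: the shapes, function name, and the absence of masking and scaling details beyond the basic formula are all assumptions. The key idea is that several query heads share one key/value head, which shrinks the key/value cache at inference time.

```python
import numpy as np

def grouped_query_attention(q, k, v, n_kv_heads):
    """Toy grouped-query attention (no masking, single sequence).

    q: (n_heads, seq, d)        -- one query projection per attention head
    k, v: (n_kv_heads, seq, d)  -- fewer key/value heads, shared within groups
    """
    n_heads, seq, d = q.shape
    group = n_heads // n_kv_heads            # query heads per shared KV head
    out = np.empty_like(q)
    for h in range(n_heads):
        kv = h // group                      # the KV head this query head shares
        scores = q[h] @ k[kv].T / np.sqrt(d)
        # row-wise softmax over key positions
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        out[h] = w @ v[kv]
    return out
```

With 8 query heads and 2 KV heads, heads 0-3 attend over the first KV head and heads 4-7 over the second, so only 2 of the 8 key/value tensors need to be cached.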

To train the best language model, it is essential to have a large, high-quality training dataset. According to Meta, considerable investment went into pre-training data for Llama 3. The model is pre-trained on more than 15 trillion tokens, all from publicly available sources. Its training dataset is seven times larger than the one used for Llama 2 and includes four times as much code.
In preparation for future multilingual use cases, more than 5% of the Llama 3 pre-training dataset consists of high-quality data in languages other than English, covering more than 30 languages. However, Meta does not expect to achieve the same level of performance in these languages as that achieved in English.
To ensure that Llama 3 is trained on the highest-quality data, a series of data-filtering pipelines was developed. These pipelines include heuristic filters, filters for inappropriate content, semantic deduplication techniques, and text classifiers that assess data quality. Previous versions of Llama proved effective at identifying high-quality data, so Llama 2 was used to generate the training data for the text-quality classifiers that power Llama 3.
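The pipeline stages described above can be pictured with a small sketch. Everything here is a toy assumption, not Meta's pipeline: the thresholds, the length/alphabetic heuristics, and the hash-based deduplication (real semantic dedup would compare embeddings, and the quality classifier would be a trained model rather than a passed-in function).

```python
import hashlib
import re

def heuristic_filter(doc: str) -> bool:
    """Toy heuristics: drop very short docs or docs that are mostly non-letters."""
    if len(doc) < 200:
        return False
    alpha = sum(c.isalpha() for c in doc)
    return alpha / len(doc) > 0.6

def dedup_key(doc: str) -> str:
    """Exact-duplicate key over normalized text (stand-in for semantic dedup)."""
    norm = re.sub(r"\s+", " ", doc.lower()).strip()
    return hashlib.sha256(norm.encode()).hexdigest()

def filter_corpus(docs, quality_score, threshold=0.5):
    """Run docs through heuristics, dedup, then a quality-classifier score."""
    seen, kept = set(), []
    for doc in docs:
        if not heuristic_filter(doc):
            continue
        key = dedup_key(doc)
        if key in seen:
            continue
        seen.add(key)
        if quality_score(doc) >= threshold:
            kept.append(doc)
    return kept
```

The ordering mirrors the article: cheap heuristics first, deduplication next, and the (comparatively expensive) quality classifier last, so it only scores documents that survived the earlier stages.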
Additionally, Meta ran experiments to determine the best ways to mix data from different sources into the final pre-training dataset. These experiments allowed the company to select a data mix that ensures Llama 3 performs well in a variety of use cases, including trivia questions, STEM (science, technology, engineering and mathematics), coding, and historical knowledge, among others.
Comparison with Llama 2

The new Llama 3 models, with 8B and 70B parameters, represent an advance over Llama 2, setting a new standard for LLMs at these scales. Meta claims that, thanks to improvements in both pre-training and post-training, its pre-trained and instruction-fine-tuned models are currently the undisputed leaders at the 8B and 70B parameter scales.
Optimizations in post-training procedures substantially reduced false refusal rates, in addition to improving alignment and increasing diversity in model responses. Significant improvements were also observed in capabilities such as reasoning, code generation, and instruction following, making Llama 3 even more adaptable and steerable.
During the development of Llama 3, the model's performance was analyzed both on standard benchmarks and in real-world scenarios. To ensure effective optimization for practical applications, a new high-quality human evaluation suite was created. This set consists of 1,800 prompts covering 12 main use cases: asking for advice, brainstorming, classification, answering closed-ended questions, coding, creative writing, extraction, impersonating characters/personas, answering open-ended questions, reasoning, rewriting, and summarization.
To prevent accidental overfitting of the models to this evaluation set, even Meta's own modeling teams do not have access to it. The graph presented shows the aggregated results of human evaluations across these categories and prompts, comparing the performance of Llama 3 against Claude Sonnet, Mistral Medium and GPT-3.5.
How Llama 3 Improves Meta AI

Thanks to the progress made with Meta Llama 3, the company is announcing the international expansion of Meta AI, which until now was exclusive to the United States. Coming in as a strong competitor to existing assistants, Meta AI is now available to more people around the world, allowing users of Facebook, Instagram, WhatsApp and Messenger to enjoy this free technology to perform a variety of actions, create content, and access information in real time.
Meta AI was initially revealed during Connect last year, and now users in countries such as Australia, Canada, Ghana, Jamaica, Malawi, New Zealand, Nigeria, Pakistan, Singapore, South Africa, Uganda, Zambia and Zimbabwe can also enjoy its benefits. As part of the expansion, people will now also be able to access Meta AI on the web, at meta.ai.
Are you planning to go out at night with friends? Ask Meta AI to recommend a restaurant with an amazing view and even vegan options. Are you organizing a trip for the weekend? Ask Meta AI to find shows for Saturday night. Are you preparing for a test? Ask Meta AI to explain how hereditary traits work. Are you moving into your first apartment? Ask Meta AI to “imagine” the aesthetic you want and the assistant will generate some inspiration photos for your furniture.
Meta explains a little of how the AI can help you

Meta AI will also be available directly in the search function of Facebook, Instagram, WhatsApp and Messenger. This means you can access information from the internet in real time without having to switch between applications. For example, imagine you're planning a trip with friends in a Messenger group chat. Using Messenger search, you can ask Meta AI, powered by the new Llama 3, to find flights from your origin to your destination and discover the least crowded weekends to visit. And of course, all without having to leave the Messenger app.
People will also be able to access Meta AI while browsing their Facebook Feed. If you come across a post that piques your interest, you can ask Meta AI for more information right from that post, much like a regular Google search, only without leaving Facebook.
Meta is accelerating the imaging process to enable people to create real-time images from text using Meta AI's Imagine feature. The rollout of this feature began in beta today, available on WhatsApp and the Meta AI web experience in the United States. When they start typing, people will see an image appear instantly. This image will evolve with every few letters you type, allowing you to watch as Meta AI brings your idea to life in real time.

According to the company, these generated images offer improved sharpness and quality, with a better ability to include text in images. Additionally, the assistant will provide suggestions for improving the image, allowing you to keep refining it from your initial starting point. When you find an image you like, simply ask Meta AI to animate it, adapt it to a new format, or even turn it into a GIF to share with your friends.
While these updates are specific to Meta AI on Facebook, Instagram, WhatsApp, Messenger, and the web, it's important to note that Meta AI is also available in the United States on the Ray-Ban Meta smart glasses, and soon on Meta Quest, Meta's virtual reality devices.
Transparency with open source and security

The potential of generative AI technology can truly improve the experience of Meta's products and the broader ecosystem. Still, another point that must be addressed is ensuring that this is done in a responsible and safe way. Therefore, the company is taking measures to assess and mitigate risks at all stages of AI development and deployment. This includes integrating safeguards into the design and release process of the Llama base model, as well as supporting the developer ecosystem to promote responsible practices.
Therefore, with Llama 3, a systematic approach was adopted that integrates safeguards at all stages of development. This means that special care has been applied to training and tuning processes, in addition to offering tools that enable developers to implement models responsibly.
This approach not only strengthens efforts in responsible AI, but also reflects the vision of open innovation, empowering developers to safely customize their products to benefit their users. Meta also offers the Responsible Use Guide, an important resource for developers, providing guidelines for building products.
As Meta explained when it released Llama 2, it's important to be intentional in designing these measures: some can only be implemented effectively by the model provider, while others only work effectively when implemented by the developer as part of their specific application.
How Meta has strengthened Meta AI
Since launching Meta AI last year, the brand has continually improved the experience in several areas:
- Meta AI's responses to political and social issues have been refined, incorporating specific guidelines for these topics. The goal is to offer a variety of relevant points of view on a topic, while respecting the user's intentions when they ask specific questions.
- Specific instructions and responses have been included to make the assistant more useful, using reward models to guide its behavior.
- Meta AI's performance is evaluated in benchmarks and through testing with human experts, addressing any issues identified in an ongoing process.
- Prompt- and response-level safeguards, including filters and classifiers, ensure that interactions are aligned with guidelines and secure.
- Tools let users share feedback on their experiences, allowing Meta to continually improve Meta AI's performance.
And you, what did you think of the news? We can't wait for it to arrive in Brazil, right? Tell us what you think in the comments!
See also:
Google Photos makes free AI image editing available to all users.
With information from: Meta [1], [2] and [3].
Reviewed by Glaucon Vital on 18/4/24.