This Monday, the 18th, NVIDIA held its long-awaited GTC 2024 (GPU Technology Conference) presentation, billed by the company as "the leading AI conference for developers." This annual event brings together experts, researchers, developers and hobbyists to explore and debate the latest trends and innovations in high-performance computing, artificial intelligence, machine learning, gaming and a wide range of other areas.
During this year's event, the company revealed its latest architecture, Blackwell, dedicated to advancing artificial intelligence. It also presented its new supercomputer, the DGX SuperPOD, news about Omniverse Cloud, and other notable announcements. Check out NVIDIA's announcements at GTC 2024 below.
NVIDIA's New Blackwell Architecture

Blackwell is NVIDIA's innovative new GPU architecture, one that promises to define a new era in computing and generative artificial intelligence (AI). With capabilities that are revolutionary for the market, Blackwell is set to play a key role in a wide range of industries, from data processing and automation to quantum computing.
Generative AI is the defining technology of our time. Blackwell is the engine driving this new industrial revolution. Working with the world's most dynamic companies, we will realize the promise of AI for every industry.
Jensen Huang, founder and CEO of NVIDIA.
Promising up to 25 times lower operating cost and energy consumption than its predecessor, Blackwell introduces revolutionary new technologies that enable training of large language models (LLMs, such as ChatGPT) at scales of up to 10 trillion parameters, considerably increasing the ability to interpret data and provide intelligent answers and solutions.
First, the company touts it as "the most powerful chip in the world." With an incredible 208 billion transistors, Blackwell GPUs are produced using a custom TSMC 4NP process, setting a new standard for power and efficiency. The architecture takes a unique approach with two reticle-limit dies per GPU, interconnected via a 10 TB/s chip-to-chip link, resulting in a single unified GPU with unprecedented processing capability.
Powered by new micro-tensor scaling support and advanced dynamic-range management algorithms, the Blackwell architecture doubles compute and model sizes, delivering new 4-bit floating-point (FP4) AI inference capabilities.
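The core idea behind per-block scaling can be sketched in a few lines. The snippet below is purely illustrative, not NVIDIA's implementation: it quantizes values to signed 4-bit integers with one scale factor per block, which is the general principle; real FP4 is a floating-point format and Blackwell's micro-tensor scaling lives inside its Transformer Engine.

```python
import numpy as np

def quantize_4bit_blocks(x, block_size=32):
    """Quantize a 1-D float array to signed 4-bit ints with per-block scales.

    Each block of `block_size` values shares one scale chosen from that
    block's own dynamic range, so small- and large-magnitude regions of
    the tensor are both represented well.
    """
    pad = (-len(x)) % block_size
    x = np.pad(x, (0, pad))
    blocks = x.reshape(-1, block_size)
    # Map each block's max magnitude onto the signed 4-bit range [-7, 7].
    scales = np.abs(blocks).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0  # avoid division by zero for all-zero blocks
    q = np.clip(np.round(blocks / scales), -7, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    """Recover approximate float values from 4-bit codes and block scales."""
    return (q.astype(np.float32) * scales).ravel()

x = np.random.randn(128).astype(np.float32)
q, s = quantize_4bit_blocks(x)
x_hat = dequantize(q, s)[: len(x)]
print(np.abs(x - x_hat).max())  # worst-case error is bounded by half a scale step
```

Because each scale is local to its block, the rounding error in any element is at most half of that block's step size, which is why per-block (rather than per-tensor) scaling preserves accuracy at such low precision.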
Fifth-generation NVLink is another crucial technology, providing an innovative bidirectional throughput of 1.8 TB/s per GPU. This enables high-speed communication among up to 576 GPUs, accelerating complex AI models with multiple trillions of parameters and mixture-of-experts designs.

In addition, Blackwell GPUs include a dedicated RAS (Reliability, Availability and Serviceability) engine to ensure exactly that. The architecture also uses AI-based preventive maintenance to diagnose and predict reliability issues, maximizing system uptime and reducing operating costs in large-scale AI deployments.
Security is also a priority: the Blackwell architecture provides advanced confidential computing capabilities, protecting AI models and customer data without compromising performance, with support for new native interface encryption protocols. This is especially important for privacy-sensitive industries such as healthcare and financial services.
Finally, a dedicated decompression engine accelerates database queries to deliver the highest performance in data analytics and data science. This capability is crucial given the growing volume of data and the demand for fast, efficient analysis.
Together, these technologies position the Blackwell architecture as an undisputed leader in power, efficiency and security for AI, HPC and data-analytics applications in the future of the processing market.
Blackwell-based products will reach the market through strategic partnerships later this year. Among the first to offer Blackwell-powered cloud services are the giants AWS, Google Cloud, Microsoft Azure and Oracle Cloud, as well as participants in NVIDIA's cloud partner program such as Applied Digital, CoreWeave, Crusoe, IBM Cloud and Lambda. Additionally, AI cloud platforms such as Indosat Ooredoo Hutchison, Nebius, Nexgen Cloud and Oracle EU Sovereign Cloud, among others, will also provide cloud services and infrastructure built on Blackwell.

Furthermore, NVIDIA also presented a new superchip made possible by Blackwell: the NVIDIA GB200 Grace Blackwell Superchip. It connects two NVIDIA B200 Tensor Core GPUs to an NVIDIA Grace CPU via an ultra-low-power 900 GB/s NVLink chip-to-chip interconnect. The platform acts as a single GPU with 1.4 exaflops of AI performance and 30 TB of fast memory, and is a building block for the latest DGX SuperPOD, which we cover below.
The new GB200 will be part of the NVIDIA GB200 NVL72 system for even more intensive compute workloads. It combines 36 Grace Blackwell Superchips, comprising 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink, and promises up to 30 times the performance of the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads, while reducing cost and energy consumption by up to 25x.
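The NVL72 composition quoted above is simple arithmetic, which the sketch below lays out explicitly (figures are the ones from NVIDIA's announcement; nothing here is measured):

```python
# Back-of-envelope check of the GB200 NVL72 configuration described above.
superchips = 36           # GB200 Grace Blackwell Superchips per NVL72 rack
gpus_per_superchip = 2    # each superchip pairs two B200 GPUs with one Grace CPU
cpus_per_superchip = 1

total_gpus = superchips * gpus_per_superchip   # 72 Blackwell GPUs
total_cpus = superchips * cpus_per_superchip   # 36 Grace CPUs

# Fifth-generation NVLink: 1.8 TB/s bidirectional per GPU, so the rack's
# aggregate NVLink bandwidth scales linearly with the GPU count.
nvlink_tbs_per_gpu = 1.8
aggregate_nvlink_tbs = total_gpus * nvlink_tbs_per_gpu

print(total_gpus, total_cpus, aggregate_nvlink_tbs)
```

The naming also becomes self-explanatory: "NVL72" refers to the 72 NVLink-connected GPUs in the rack.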
The next-generation AI supercomputer

Among NVIDIA's announcements around the Blackwell architecture is the DGX SuperPOD, the company's next supercomputer, with generative AI computing power at the trillion-parameter scale. Powered by GB200 Grace Blackwell Superchips, the new DGX SuperPOD is built from NVIDIA DGX GB200 systems and provides 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of memory, with the ability to scale further with additional racks.
The DGX SuperPOD comprises eight or more DGX GB200 systems, with the ability to expand to tens of thousands of GB200 Superchips interconnected via NVIDIA Quantum InfiniBand. To create a large shared-memory space for next-generation AI models, customers can deploy a configuration that connects the 576 Blackwell GPUs across eight DGX GB200 systems via NVLink.
The new DGX SuperPOD combines the latest advances in accelerated computing, networking and software from NVIDIA to enable every company, industry and country to refine and generate their own AI.
Jensen Huang, founder and CEO of NVIDIA.
The new DGX SuperPOD is a complete AI supercomputer designed for data center scale that seamlessly integrates with high-performance storage provided by NVIDIA-certified partners to meet the demands of generative AI workloads. Each unit is assembled, connected and tested at the factory, resulting in fast and efficient deployment in customers' data centers.
Additionally, the supercomputer comes equipped with advanced predictive-management capabilities, continuously monitoring thousands of data points across hardware and software. This allows it to predict and correct potential sources of downtime and inefficiency, saving time, energy and computational cost.
The integrated software can detect potential concerns in a system, plan maintenance, flexibly adjust computing resources, and even automatically save and resume jobs to avoid interruptions, even without system administrators present. If a component needs replacing, the cluster can activate its reserve capacity to ensure that work in progress finishes on time.
In addition to the new supercomputer, NVIDIA also presented the DGX B200, a unified AI supercomputing platform designed for model training, fine-tuning and inference. The DGX B200 is the sixth generation of rack-mounted DGX designs, and includes eight NVIDIA B200 Tensor Core GPUs and two 5th-generation Intel Xeon processors.
It delivers up to 144 petaflops of AI performance, 1.4 TB of GPU memory and 64 TB/s of memory bandwidth, providing 15x faster real-time inference on trillion-parameter models than the previous generation. DGX B200 systems also feature advanced networking, including NVIDIA ConnectX-7 NICs and BlueField-3 DPUs with up to 400 gigabits per second of bandwidth per connection, ensuring fast AI performance with the NVIDIA Quantum-2 InfiniBand and NVIDIA Spectrum-X Ethernet networking platforms.
The NVIDIA DGX SuperPOD with DGX GB200 and DGX B200 systems will be available to NVIDIA's global partners in 2024.
X800 series, NVIDIA's new network switches

NVIDIA also announced the new X800 series of switches, built for massive-scale AI processing. The Quantum-X800 InfiniBand and Spectrum-X800 Ethernet lines are the world's first capable of 800 Gb/s end-to-end throughput, pushing the limits of network performance for compute and AI workloads.
These switches feature advanced software that further accelerates AI, cloud computing, data processing and high-performance computing (HPC) applications in all types of data centers. They are designed to integrate seamlessly with NVIDIA's recently launched Blackwell-based product line, ensuring exceptional performance across the board.
Early adopters of these innovations include cloud computing giants Microsoft Azure and Oracle Cloud, highlighting the technology's relevance to the advancement of AI on a global scale. CoreWeave, a leading cloud infrastructure company, is also among the early adopters, demonstrating how quickly the industry is embracing these advances.
This new switch series sets a new standard for AI-dedicated infrastructure, offering top performance and advanced features to meet the ever-increasing demands of cloud and enterprise AI applications. With significant improvements in the speed of processing, analyzing and executing AI workloads, it promises to accelerate the development and deployment of AI solutions worldwide.
NVIDIA announces AI weather simulator

NVIDIA also announced at GTC 2024 the launch of its latest weather simulator, Earth-2, marking a breakthrough in climate modeling. Built with cutting-edge artificial intelligence technology, Earth-2 offers a cloud platform for simulating and visualizing the global climate at unprecedented scale.
One of Earth-2's most impressive features is its set of APIs employing advanced AI models, including the groundbreaking CorrDiff, which generates images at 12.5 times the resolution of current numerical models in a fraction of the time and energy. This ability to produce high-resolution simulations with unprecedented speed and energy efficiency is a remarkable achievement in climate modeling.
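To put that 12.5x figure in concrete terms (using the 25 km to 2 km downscaling example NVIDIA has published for CorrDiff; the numbers below are just that worked example):

```python
# CorrDiff-style downscaling: refine a coarse weather grid to a finer one.
coarse_km = 25   # typical coarse numerical-model grid spacing
fine_km = 2      # target high-resolution grid spacing

linear_factor = coarse_km / fine_km   # 12.5x finer in each horizontal dimension
cells_factor = linear_factor ** 2     # cell count grows with the square of that

print(linear_factor, cells_factor)  # 12.5 156.25
```

A 12.5x linear refinement means over 150 times more grid cells covering the same area, which is why generating it with AI in "a fraction of the time and energy" of a numerical model is significant.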
Furthermore, Earth-2 uses NVIDIA's DGX Cloud to provide full acceleration for climate and weather solutions, including optimized AI pipelines and GPU acceleration for numerical weather prediction models. Users of Earth-2 thus have access to a wide range of tools and resources for creating accurate, detailed climate simulations at different scales, from the global atmosphere to specific local weather events such as typhoons and turbulence.
Companies like The Weather Company are exploring ways to integrate Earth-2's meteorological data with their visualization tools, letting customers better understand the impact of real weather conditions on their operations and planning. Others, such as Spire and Meteomatics, are leveraging Earth-2 to improve the accuracy of their weather forecasts and deliver more precise insights to their customers.
Ultimately, Earth-2 represents a step toward a deeper, more accurate understanding of the global climate and extreme weather events. With its cutting-edge technology and wide range of potential applications, the new simulator has the potential to change how we understand and prepare for the climate challenges of the 21st century.
Availability of Omniverse Cloud APIs

At GTC 2024, NVIDIA announced the availability of Omniverse Cloud APIs for developers and companies, providing greater integration with the leading design and automation software on the market.
With Omniverse Cloud APIs, developers can now easily integrate core Omniverse technologies directly into existing software applications, powering the creation, simulation and operation of physics-based digital twins. This represents a significant milestone in companies' ability to design, test and validate products and processes virtually, before they are ever built in the physical world.
The five new Omniverse Cloud APIs, which can be used individually or collectively, are:
- USD Render – generates fully ray-traced NVIDIA RTX renders of OpenUSD data.
- USD Write – lets users modify and interact with OpenUSD data.
- USD Query – enables scene queries and interactive scenarios.
- USD Notify – tracks USD changes and provides updates.
- Omniverse Channel – connects users, tools and worlds to enable collaboration across scenes.
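As a purely hypothetical sketch of how these five APIs might be chained in a digital-twin workflow (the API names match the list above, but the request structure, field names and scene paths are illustrative assumptions, not NVIDIA's documented interface):

```python
import json

def build_request(api, scene, **params):
    """Assemble an illustrative request body for one Omniverse Cloud API call.

    This is a mock structure for explanation only; the real APIs define
    their own payloads.
    """
    return {"api": api, "scene": scene, "params": params}

# A digital-twin update loop might: write a change, query its effects,
# subscribe to further changes, then render the result.
requests = [
    build_request("usd-write", "factory.usd", prim="/Robot", attr="pose"),
    build_request("usd-query", "factory.usd", query="collisions"),
    build_request("usd-notify", "factory.usd", watch="/Robot"),
    build_request("usd-render", "factory.usd", camera="/Cameras/Main"),
]
print(json.dumps(requests[0]))
```

The point of exposing the five capabilities as separate APIs is exactly this kind of composition: each step of a simulate-and-visualize loop maps onto one call, usable from any existing application.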
Big industry names are already adopting Omniverse Cloud APIs in their software portfolios. Companies such as Siemens, Ansys, Cadence and Dassault Systèmes, among others, are integrating Omniverse technologies to offer customers an even more immersive and functional experience.
These advances not only promise to revolutionize the way companies design, build and operate industrial products and processes, but also have the potential to boost the competitiveness, resilience and sustainability of companies around the world.
Companies Adopt NVIDIA DRIVE Thor in Transportation

Finally, NVIDIA also commented in its GTC 2024 presentation on the adoption of DRIVE Thor across the transportation sector.
DRIVE Thor is much more than a simple car computer: it is a key piece in the transformation of the transportation sector, powering everything from passenger vehicles to long-haul trucks, robotaxis and autonomous delivery vehicles. Thor offers not only advanced cockpit capabilities but also safe automated and autonomous driving, all on a centralized platform built on NVIDIA's new Blackwell-architecture processors, enabling LLM and generative AI processing for decision making.
Several leading electric vehicle companies have already adopted DRIVE Thor for their next-generation projects. BYD, a global giant of the automotive industry, is expanding its collaboration with NVIDIA by incorporating Thor into its electric vehicle fleets. Hyper and XPENG are also among the companies that chose Thor to power their future fleets of autonomous vehicles.
Furthermore, DRIVE Thor is gaining prominence in freight transportation and logistics. Companies like Nuro, Plus, Waabi and WeRide are leading adoption of this advanced AI system in their autonomous driving projects, from developing driving technologies for commercial and consumer vehicles to creating autonomous trucking solutions at scale, relying on the power and effectiveness of DRIVE Thor to drive innovation in their respective fields.
With its launch in production vehicles scheduled for next year, DRIVE Thor promises to revolutionize the automotive industry with its performance and its ability to deliver reliable autonomous driving.
Source: NVIDIA.
Reviewed by Glaucon Vital on 3/18/24.