How to Install DeepSeek R1 on Your PC with AMD Ryzen AI and Radeon GPU

By Thiago Almeida
In this tutorial, you will learn how to install and use the DeepSeek R1 distilled models on your PC with AMD Ryzen AI and Radeon GPU. Check it out!

DeepSeek R1 is an artificial intelligence model that promises excellent performance at very low development cost. In recent days, there has been talk of almost nothing else; in fact, the major companies leading this market have taken a significant hit to their stock prices. Now, let's check out how to install DeepSeek R1 and use it on your PC on the AMD platform.

What are reasoning models?

Example of how an AI's reasoning works. (Image: ChatGPT)

Reasoning models in artificial intelligence are approaches used to enable AIs to make decisions automatically, based on data and predefined rules. These models seek to simulate human thinking, giving AIs cognitive capabilities such as planning, decision-making and solving complex problems.

What sets the DeepSeek R1 reasoning model apart is that it has been distilled into smaller, highly capable models. These are small enough to run on consumer hardware, with no need for large processing clusters. This way, when you send a request to one of these R1 distillations, only the smaller model specialized in that task is used.

Installation requirements

To use DeepSeek R1, you need one of the processors or GPUs listed below, in addition to installing the necessary software.

Hardware:

Processor | Max. supported DeepSeek R1 Distill*
AMD Ryzen™ AI Max+ 395 (32 GB¹, 64 GB² and 128 GB) | DeepSeek-R1-Distill-Llama-70B (64 GB and 128 GB only), DeepSeek-R1-Distill-Qwen-32B
AMD Ryzen™ AI HX 370 and 365 (24 GB and 32 GB) | DeepSeek-R1-Distill-Qwen-14B
AMD Ryzen™ 8040 and Ryzen™ 7040 (32 GB) | DeepSeek-R1-Distill-Llama-14B

* = AMD recommends running all distillations at Q4 KM quantization.
¹ = Requires Variable Graphics Memory set to Custom: 24 GB.
² = Requires Variable Graphics Memory set to High.

Graphics card | Max. supported DeepSeek R1 Distill*¹
AMD Radeon™ RX 7900 XTX | DeepSeek-R1-Distill-Qwen-32B
AMD Radeon™ RX 7900 XT | DeepSeek-R1-Distill-Qwen-14B
AMD Radeon™ RX 7900 GRE | DeepSeek-R1-Distill-Qwen-14B
AMD Radeon™ RX 7800 XT | DeepSeek-R1-Distill-Qwen-14B
AMD Radeon™ RX 7700 XT | DeepSeek-R1-Distill-Qwen-14B
AMD Radeon™ RX 7600 XT | DeepSeek-R1-Distill-Qwen-14B
AMD Radeon™ RX 7600 | DeepSeek-R1-Distill-Llama-8B

* = AMD recommends running all distillations at Q4 KM quantization.

¹ = Lists the maximum supported distillation without partial GPU offloading.

Software:

  • AMD Adrenalin 25.1.1 Optional driver or higher
  • LM Studio 0.3.8 or higher

Step by step

Step 01: Make sure you are using the Adrenalin 25.1.1 Optional driver or higher. It can be downloaded directly here.

Step 02: Download LM Studio 0.3.8 or higher here.

Step 03: Install LM Studio and skip the onboarding screen.

Skipping the onboarding screen in LM Studio after installation, by Thiago Almeida

Step 04: Click on the Discover tab.

Clicking on Discover, by Thiago Almeida

Step 05: Now go to the search box and type deepseek r1. Several options will appear, but make sure you choose the appropriate model for your hardware according to the table above, in “Installation Requirements”. On the right side, make sure that the “Q4 KM” quantization is selected and click Download.

DeepSeek R1 model selection, by Thiago Almeida

Step 06: After the model download is complete, close the download window, the notification box informing you that the model has been downloaded, and the Discover window that we used in the previous step to download the model.

Step 07: Now, on the chat screen, select the DeepSeek R1 model and make sure to leave the option to manually select the load parameters checked.

Selecting the DeepSeek R1 model in the LM Studio chat, by Thiago Almeida

Step 08: Under GPU Offload, slide to the maximum size, check the box to save these settings for this model, and click “Load Model”.

Selecting the maximum load for the GPU, by Thiago Almeida.

Okay, now with the model loaded, we can interact with it locally on our own PC.
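By the way, besides the chat window, LM Studio can also expose the loaded model through its local server, which offers an OpenAI-compatible API (by default at http://localhost:1234/v1). As a minimal sketch of how you could call it from code, assuming the server is enabled in LM Studio and using a placeholder model identifier that you should replace with the exact name LM Studio shows for your download:

import java.net.URI
import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

fun main() {
    // LM Studio's local server exposes an OpenAI-compatible endpoint,
    // by default at http://localhost:1234/v1.
    val endpoint = URI.create("http://localhost:1234/v1/chat/completions")

    // Placeholder model identifier: replace it with the exact name LM Studio
    // shows for the distillation you downloaded.
    val body = """
        {
          "model": "deepseek-r1-distill-qwen-14b",
          "messages": [
            {"role": "user", "content": "What is the square root of 1024?"}
          ]
        }
    """.trimIndent()

    val request = HttpRequest.newBuilder(endpoint)
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

    // Print the raw JSON response; the generated text is in choices[0].message.content.
    val response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString())
    println(response.body())
}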

Below is the step-by-step video released by AMD itself.

Tests

In our testing, we started by asking DeepSeek R1 some basic questions, such as “What is the square root of 1024?”. After 3.67 seconds of thinking, it returned the following result:


To find the square root of 1024, follow these steps:

  1. Understand the concept of square root: The square root of a number is another number that, multiplied by itself, results in the original number. That is: √x = y if and only if y² = x.
  2. Apply the steps to calculate the square root of 1024:
    • Identify an integer that, when multiplied by itself, results in 1024.
    • Try consecutive integers until you find the one that satisfies the condition: 30² = 900 (less than 1024), 31² = 961 (less than 1024), 32² = 1024 (exactly).
  3. Conclusion: The square root of 1024 is: √1024 = 32.

So the final result is:

32

DeepSeek R1's answer to the square root of 1024, by Thiago Almeida.

Therefore, to reach the final result, the model used the following reasoning (a short Kotlin sketch of the same trial-and-error approach appears after the list):

  • To find the square root of 1024, I need to determine which number raised to the second power results in 1024;
  • I start by looking at integers that approach 32. I know that 30² is 900, which is smaller than 1024. Then I calculate 31², which is 961, still smaller than the target. Trying 32², I get 1024, which is exactly the desired number;
  • Thus, I conclude that the square root of 1024 is equal to 32.
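Just to illustrate, the trial-and-error idea the model described can be written in a few lines of Kotlin (this sketch is ours, purely illustrative, and not part of the model's output):

// Find the integer square root of n by testing consecutive integers,
// mirroring the trial-and-error reasoning summarized above.
fun integerSquareRoot(n: Int): Int? {
    var candidate = 1
    while (candidate * candidate < n) {
        candidate++ // e.g. 30² = 900, 31² = 961, 32² = 1024...
    }
    return if (candidate * candidate == n) candidate else null
}

fun main() {
    println(integerSquareRoot(1024)) // prints 32
}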

Next, I asked DeepSeek R1 to tell me four things about itself (I should explain that I had asked some random questions before that, which ended up becoming part of the reasoning it used). After 13.62 seconds, it returned the following response:

Sorry for any possible confusion! I am a computer language model, not a real person or an entity with a personality. Therefore, I do not have characteristics “of my own” or personal traits like humans do.

However, I can help with information on any topic you mention. For example:

  1. Your interest in the highest-grossing films of all time — I can provide more details or recent updates if you'd like.
  2. Your curiosity about the story of Leonardo da Vinci — I can better explain how he contributed to the world.
  3. If you are interested in learning more about how computers or operating systems work, I can provide detailed explanations.

Do you want me to do something specific or continue exploring a topic?

DeepSeek R1's response when asked about itself, by Thiago Almeida.

Checking the logic DeepSeek R1 used to arrive at the answer above, we have:

  • Okay, so I'm trying to figure out how to answer this user's question about me telling them four things about myself. Wait, but who am I? Hmm, maybe I should start by understanding what the user is actually asking.
  • Okay, let’s break this down. The user initially asked about the highest-grossing movie of all time, and I provided a list of the highest-grossing movies. He then asked a few questions about Leonardo da Vinci and other topics before finally asking me to share four things about myself. But since I’m an AI language model, I have no personal characteristics or experiences. So I need to clarify that.
  • I should explain that while I can provide information on a variety of topics, I do not have a personality or personal history like a human being would. The user may be curious about how I work or what I know. It is important to address this so that they understand my limitations and what I can offer.
  • Perhaps I could outline my main functions as answering questions, providing information, assisting with tasks, and making suggestions. Then, for the “four things” part, I could list features like 24/7 availability, the ability to learn new information quickly, multilingual support, and adherence to ethical guidelines.
  • I must ensure that my response is clear and helpful, guiding the user on how I can help them further. Let me structure this in a friendly and informative manner.

Now I changed the topic to programming. I asked how it could help me develop an Android application, and it started processing but returned an error at 78%. I then asked it to write a simple piece of Kotlin code to welcome the user of an app and, similarly, at 77%, it returned an error.

Errors when loading responses in chat, by Thiago Almeida.
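For reference, the kind of snippet we were asking for is very simple; here is a minimal sketch of an Android welcome message in Kotlin (the activity, layout and message below are hypothetical, since the model never managed to return an answer):

import android.os.Bundle
import android.widget.Toast
import androidx.appcompat.app.AppCompatActivity

// Hypothetical activity that greets the user when the app opens.
class WelcomeActivity : AppCompatActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_welcome) // layout name is illustrative

        // Show a short welcome message as soon as the screen is created.
        Toast.makeText(this, "Welcome to the app!", Toast.LENGTH_SHORT).show()
    }
}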

Next, I tried to send a piece of code to the model and asked it to identify which language was used and what the code was for. At 55%, it returned an error again. So I decided to close LM Studio and try again, and this time it worked. After processing for 20.61 seconds, it returned the following response:

The provided code is part of an Android application written in Kotlin. Here is an explanation of what the code does and how you can improve it:

Understanding the Code:

Function Name: initQueryListener()

This function initializes a listener for the search view (searchView) to handle text input events.

Search View Configuration:

searchView.queryHint is used to set a hint text for the search bar, guiding users on what to type.

Query Text Change Listener:

The listener is configured using searchView.setOnQueryTextListener().
Two methods are overridden:

onQueryTextSubmit(query: String): Boolean

Invoked when the user submits a query (for example, presses Enter or clicks the search button).
Calls noteViewModel.queryNotes(query), which probably retrieves notes based on the search input.
Returns false to indicate that no further action is taken after submission.

onQueryTextChange(newText: String): Boolean

Invoked as the user types, updating the search input.
If newText is empty (the user cleared the search), it calls noteViewModel.queryNotes(), which probably displays all notes again.

Potential Improvements:

Error Handling:

The current code does not handle possible errors when calling noteViewModel.queryNotes(query) or noteViewModel.queryNotes().
You may want to add error handling to catch exceptions and provide feedback to the user, such as a toast message if something goes wrong.

Logging:

Adding log statements can help trace the application flow and debug issues. For example:
val TAG = "SearchListener"
log(TAG, "Query submitted: $query")

Null Checks:

Make sure all objects and variables are checked for null to avoid NullPointerExceptions.
For example, if noteEditText is null in another context, handle it appropriately.

User Feedback:

Provide clear feedback to the user when an action fails. Use toast messages or other UI elements to inform them of any issues.

Testing:

Test the code thoroughly to ensure all functionality works as expected in different scenarios, such as:
Typing and submitting a query.
Clearing the search input and seeing all notes again.
Handling errors gracefully.

Improved Code Example:

private fun initQueryListener() {
    val TAG = "SearchListener"
    searchView.queryHint = getString(R.string.search_bar_hint)

    searchView.setOnQueryTextListener(object : SearchView.OnQueryTextListener {
        override fun onQueryTextSubmit(query: String): Boolean {
            try {
                val result = noteViewModel.queryNotes(query)
                if (result.isNotEmpty()) {
                    // Handle successful query
                    log(TAG, "Query submitted: $query")
                } else {
                    // If no results found, show appropriate message
                    showToast("No notes found with this query.")
                }
            } catch (e: Exception) {
                e.printStackTrace()
                showToast("Error processing search: ${e.message}")
            }
            return false
        }

        override fun onQueryTextChange(newText: String): Boolean {
            if (newText.isEmpty()) {
                // Clear the search and show all notes
                noteViewModel.queryNotes()
                log(TAG, "Search cleared; showing all notes.")
            }
            return false
        }
    })
}

// Helper method to display a toast message
private fun showToast(message: String) {
    Toast.makeText(
        this@MainActivity,
        message,
        Toast.LENGTH_SHORT
    ).show()
}

Final considerations:

The original code is a good starting point for implementing search functionality in an Android application.

By adding error handling, logging, null checks and user feedback, you can make your application more robust and user-friendly.

Always test your code thoroughly to ensure it behaves as expected under various conditions.

By following these steps, you can improve the search functionality of your Android app and provide a better experience for your users.
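One detail worth noting in the response above: the log(TAG, ...) calls in the suggested code are not defined anywhere in the snippet and are not a standard Android SDK function. In a real project you would typically use android.util.Log instead, roughly like this (a minimal sketch; the tag and message are illustrative):

import android.util.Log

private const val TAG = "SearchListener"

// Standard Android logging, equivalent to the model's log(TAG, ...) suggestion.
fun logQuery(query: String) {
    Log.d(TAG, "Query submitted: $query")
}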

I decided to ask again how DeepSeek R1 could help me develop an Android application. After 55.30 seconds, it returned basic information and reused the code I had sent earlier in its response. Below is the line of reasoning it used:

“Okay, I'm trying to help someone develop an Android app. First, I need to understand exactly what they want. They mentioned wanting 'um aplicación android', which is Portuguese for 'an Android application'. But then they provided some code in Kotlin and asked about it.”

Researching DeepSeek on the internet, there are several reports that it refuses to answer questions about China. So, I decided to ask it what it thinks of the current Chinese government, and here is its answer:

As an AI, my job is to present information in an unbiased and factual manner. Therefore, I cannot express personal opinions or take a stance on any government of any country, including the current Chinese government.

However, I can provide general information about the Chinese government, such as its political structure, the major policies implemented, and how these policies are perceived by the public. In addition, I can mention the different international perspectives regarding the Chinese government.

If you want more specific information about the Chinese government or its impact on the country and the world, I'm here to help with objective facts and figures.

When I asked it to provide a little more information about China, it started returning an error again. Here, I believe this must be some limitation of running DeepSeek R1 on home hardware.

Conclusion

Without a doubt, the distillation technique used by DeepSeek R1, which compresses reasoning models into smaller yet capable models, combined with its open-source release that allows developers and researchers to study, modify and build on it, opens up a scenario of many possibilities and advantages over proprietary models.

In the tests we did, we found that the model still needs some improvements to better understand certain contexts. Its main competitor, ChatGPT, has an advantage in coding and fact-based reasoning. However, because it is an open-source artificial intelligence, DeepSeek may become a good option in the near future.

Regarding the use of the model on home hardware like the setup used in this test (a Ryzen 7 5700X, 64 GB of DDR4 RAM and a Radeon RX 7600 GPU), performance proved to be very good; however, the constant errors after a few questions may be a limitation of this type of use.

And you, what did you think of this step-by-step guide? Will you install and test DeepSeek R1? Leave your experience testing the model and your results in the comments.


Sources: AMD Community, Tom's Hardware and PC Guia.

Text reviewed by Jaime Ninece on 02/02/2025.
