For anyone wondering how to set up their own locally run Gen-AI tools, AMD has published a fairly useful guide to getting one up and running, in this case, a chatbot.

AMD Ryzen AI and Radeon Graphics Meta Llama 3

For this community-focused effort, Team Red is taking two directions, one aimed at developers and another at coding novices. The former is an introductory walkthrough of the Ryzen AI SDK, which bundles the base files, tools, and runtime libraries needed to set up a machine-learning environment that targets the NPU for inferencing.

The guide walks through the installation process with screenshots, and AMD also provides pre-quantized, ready-to-deploy models on its Hugging Face repo, so devs can get a taste of Gen-AI app building in a matter of minutes.
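Under the hood, the Ryzen AI software stack runs these pre-quantized ONNX models through ONNX Runtime, which can dispatch work to the NPU via its Vitis AI execution provider. The sketch below is an illustration rather than AMD's official sample code; the model filename and the exact provider priority are assumptions about a typical setup.

```python
# Hedged sketch: loading a pre-quantized ONNX model with ONNX Runtime,
# preferring the NPU (Vitis AI execution provider) and falling back to CPU.
# The model path and provider ordering here are illustrative assumptions.

def npu_provider_list(prefer_npu=True):
    """Build the execution-provider priority list: NPU first, CPU fallback."""
    providers = ["CPUExecutionProvider"]
    if prefer_npu:
        providers.insert(0, "VitisAIExecutionProvider")
    return providers

def load_session(model_path):
    """Create an inference session for a (hypothetical) quantized model file."""
    # Deferred import: requires the onnxruntime build shipped with the
    # Ryzen AI SDK for the Vitis AI provider to actually be available.
    import onnxruntime as ort
    return ort.InferenceSession(model_path, providers=npu_provider_list())

if __name__ == "__main__":
    session = load_session("llama3-quantized.onnx")  # hypothetical filename
    print(session.get_providers())
```

If the NPU provider is unavailable, ONNX Runtime silently falls back to the next provider in the list, which is why CPU is kept as the last entry.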

AMD Ryzen AI and Radeon Graphics Meta Llama Demo

On the other hand, for those who are familiar with the big names like ChatGPT and Gemini and thus prefer a GUI-based approach, setting up a Meta Llama 3-based chatbot through LM Studio should do the trick.

The pre-trained, open-source model is available in 8B and 70B parameter sizes to satisfy different consumer tiers, provided you have the hardware to run 70B in the first place; on average, around 300GB of RAM and 800GB of GPU VRAM are needed. Once installation is complete, you can start harnessing your own LLM for your mundane tasks.
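Beyond the chat GUI, LM Studio can also run a local server that speaks an OpenAI-compatible API, by default on http://localhost:1234. As a rough sketch (the model identifier, port, and prompt are assumptions that depend on your own setup), a script could drive the local Llama 3 model like this:

```python
# Hedged sketch: querying a locally hosted model through LM Studio's
# OpenAI-compatible local server. The URL and model name are assumptions.
import json
import urllib.request

def build_chat_request(prompt, model="meta-llama-3-8b-instruct"):
    """Assemble an OpenAI-style chat-completion payload."""
    return {
        "model": model,  # hypothetical identifier; use the one LM Studio shows
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask_local_llm(prompt, url="http://localhost:1234/v1/chat/completions"):
    """POST the payload to the local server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Only works once LM Studio's local server is started.
    print(ask_local_llm("Summarise this paragraph for me."))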

Although setting up your own chatbot is a nice learning experience, the multi-format support offered by the likes of ChatGPT's GPT-4o and Google's Gemini is on a completely different level. For privacy reasons, though, a custom-prepared "Made For You" solution for dealing with sensitive data may be more up your alley.
