How to Use a Hugging Face Model on OpenClaw


Running open-source models without managing your own infrastructure just got easier. Here’s what you need to know:

  1. Why it matters: the Hugging Face Inference API gives you access to thousands of open models without the DevOps overhead of self-hosting.
  2. The common pitfall: many developers struggle with token permissions and with figuring out which models actually work through the inference endpoints.
  3. What you will learn: how to create the right token, configure Hugging Face in OpenClaw, and start using models like DeepSeek-R1.

Hugging Face on OpenClaw

Hugging Face hosts a massive collection of open-source models accessible through their Inference API. Instead of downloading multi-gigabyte models and setting up GPU servers, you call them via API.
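To make the "call them via API" part concrete, here is a minimal sketch of what such a call looks like, using only the Python standard library. It assumes the standard Inference API endpoint pattern (`https://api-inference.huggingface.co/models/<model-id>`) and a bearer token; the request is only constructed here, not sent.

```python
import os
import urllib.request

def build_inference_request(model_id: str, token: str) -> urllib.request.Request:
    """Prepare (but do not send) a POST request to the Hugging Face Inference API."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    return urllib.request.Request(
        url,
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

# DeepSeek-R1 is used here purely as an example model ID.
req = build_inference_request(
    "deepseek-ai/DeepSeek-R1",
    os.environ.get("HF_TOKEN", "hf_example"),
)
print(req.full_url)
```

Sending `req` with `urllib.request.urlopen` (and a JSON payload) would perform the actual inference call; libraries like `huggingface_hub` wrap the same pattern at a higher level.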

First, create a fine-grained token in your Hugging Face account: go to Settings → Access Tokens, create a fine-grained token, and enable the "Make calls to Inference Providers" permission. This is the value you will use as HUGGINGFACE_HUB_TOKEN or HF_TOKEN.
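Before wiring the token into OpenClaw, a quick local sanity check can catch the most common mistake (pasting the wrong credential). This sketch only checks the `hf_` prefix that Hugging Face tokens carry; it cannot verify that the Inference Providers permission was actually enabled.

```python
import os

def looks_like_hf_token(token: str) -> bool:
    """Cheap format check: Hugging Face tokens start with the 'hf_' prefix.
    This does NOT validate the token or its permissions against the API."""
    return token.startswith("hf_") and len(token) > 3

token = os.environ.get("HF_TOKEN", "")
print("token format ok:", looks_like_hf_token(token))
```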

For automated setups, configure non-interactively:

openclaw onboard --non-interactive --mode local --auth-choice huggingface-api-key --huggingface-api-key "$HF_TOKEN"

Set your preferred model in openclaw.json:

{
  "agents": {
    "defaults": {
      "model": { "primary": "huggingface/deepseek-ai/DeepSeek-R1" }
    }
  }
}
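If you manage the config from a script rather than by hand, the setting above can be merged into an existing openclaw.json without clobbering other keys. This is a sketch using the standard library; the file path is a placeholder for wherever your openclaw.json actually lives.

```python
import json
from pathlib import Path

def set_primary_model(path: Path, model_id: str) -> dict:
    """Merge the primary model setting into openclaw.json, keeping other keys."""
    config = json.loads(path.read_text()) if path.exists() else {}
    config.setdefault("agents", {}).setdefault("defaults", {})["model"] = {
        "primary": model_id
    }
    path.write_text(json.dumps(config, indent=2) + "\n")
    return config

# Placeholder path; point this at your real openclaw.json.
cfg = set_primary_model(Path("openclaw.json"), "huggingface/deepseek-ai/DeepSeek-R1")
print(cfg["agents"]["defaults"]["model"]["primary"])
```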

Restart your gateway, and you are ready to query open-source models through Hugging Face's infrastructure.

About the author

Agus L. Setiawan

AI agent operator building autonomous workflows and rapid product experiments. Based in Stockholm, building global ventures while engaging with the Nordic startup community and the ecosystem around KTH Innovation. Focused on turning ideas into working software using AI, automation, and fast iteration.

Get in touch

Technolati provides practical tech tutorials, OpenClaw automation, and AI integrations. Discover top GitHub repositories and open-source projects designed for developers and builders to ship faster.