How to Use Hugging Face Models on OpenClaw


Running open-source models without managing your own infrastructure just got easier. Here’s what you need to know:

  1. This matters because the Hugging Face Inference API gives you access to thousands of open models without the DevOps overhead of self-hosting.
  2. Many developers struggle with token permissions and figuring out which models actually work through the inference endpoints.
  3. You will learn how to create the right token, configure Hugging Face in OpenClaw, and start using models like DeepSeek-R1.

Hugging Face on OpenClaw

Hugging Face hosts a massive collection of open-source models accessible through their Inference API. Instead of downloading multi-gigabyte models and setting up GPU servers, you call them via API.

First, create a fine-grained token in your Hugging Face account: go to Settings → Access Tokens and enable the "Make calls to Inference Providers" permission. This token is what you will export as HUGGINGFACE_HUB_TOKEN or HF_TOKEN.
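Before wiring the token into OpenClaw, it can be worth checking that it works against the Inference API directly. A minimal sketch, assuming the token is exported as HF_TOKEN; the helper names (`auth_header`, `model_url`, `query`) are hypothetical, while the `api-inference.huggingface.co/models/...` endpoint is Hugging Face's documented hosted-inference URL:

```python
import json
import os
import urllib.request

API_BASE = "https://api-inference.huggingface.co/models"  # hosted Inference API

def auth_header(token: str) -> dict:
    # The Inference API authenticates with a standard Bearer header.
    return {"Authorization": f"Bearer {token}"}

def model_url(model_id: str) -> str:
    # Endpoint for a hosted model, e.g. "deepseek-ai/DeepSeek-R1".
    return f"{API_BASE}/{model_id}"

def query(model_id: str, prompt: str) -> dict:
    # POST a prompt to the hosted model and decode the JSON response.
    req = urllib.request.Request(
        model_url(model_id),
        data=json.dumps({"inputs": prompt}).encode(),
        headers={**auth_header(os.environ["HF_TOKEN"]),
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode())

if os.environ.get("HF_TOKEN"):
    # Only attempt a live call when a token is actually exported.
    print(query("deepseek-ai/DeepSeek-R1", "Hello"))
```

A successful response here means the token and its Inference permission are set up correctly; a 401 or 403 usually points back to the permission checkbox above.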

For automated setups, configure non-interactively:

openclaw onboard --non-interactive --mode local --auth-choice huggingface-api-key --huggingface-api-key "$HF_TOKEN"

Set your preferred model in openclaw.json:

{
  "agents": {
    "defaults": {
      "model": { "primary": "huggingface/deepseek-ai/DeepSeek-R1" }
    }
  }
}
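A quick sanity check before restarting: the config must be valid JSON, and the model string appears to follow a `provider/org/model` convention (an assumption based on the example above, where the `huggingface/` prefix selects the provider). A small sketch that parses the fragment and splits the model identifier:

```python
import json

# The openclaw.json fragment from above.
config = """
{
  "agents": {
    "defaults": {
      "model": { "primary": "huggingface/deepseek-ai/DeepSeek-R1" }
    }
  }
}
"""

cfg = json.loads(config)  # a quick parse catches JSON typos before restarting
primary = cfg["agents"]["defaults"]["model"]["primary"]

# Assumed convention: the first path segment names the provider,
# the rest is the Hugging Face repo id.
provider, repo_id = primary.split("/", 1)
print(provider, repo_id)  # huggingface deepseek-ai/DeepSeek-R1
```

If the parse fails, fix the JSON before touching the gateway; a malformed config is the most common reason a restart does not pick up the new model.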

Restart your gateway and you are ready to query open-source models through Hugging Face’s infrastructure.

About the author

Hairun Wicaksana

Hi, I'm just another vibecoder from Southeast Asia, currently based in Stockholm. I build startup experiments while staying close to the KTH Innovation startup ecosystem, focusing on AI tools, automation, and fast product experiments, and sharing the journey as I turn ideas into working software.
