AI development today is less about starting from scratch and more about building on what's already working. OpenAI made that shift easier by making powerful language models accessible to everyone. But the real twist came when the open-source space stepped up. Now, Hugging Face is leading a different kind of conversation — one that's more open, more flexible, and just as capable. And with the new Messages API support on Hugging Face, that shift from OpenAI to open LLMs isn’t just possible — it’s smoother than ever.
Let’s take a closer look at what this shift looks like and how the Messages API fits into it.
Before we even get to the Messages API, it’s helpful to understand what OpenAI made easy — and where things were missing. OpenAI’s ChatGPT API gave developers a straightforward way to build chat-based tools. You didn’t have to structure prompts manually or engineer complex tokens. You just sent a list of messages, and it worked.
Here’s an example of how simple it looked:
json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "What's the capital of France?" }
]
That structure — roles like "system", "user", and "assistant" — isn’t just cosmetic. It allows for a natural back-and-forth format that’s closer to real human conversation. You can maintain context, inject behavior instructions, and scale up without rethinking your prompt every time.
But OpenAI’s API has one catch: it’s closed. You can’t run it locally, you can’t fine-tune it without paying extra, and you’re always bound by their infrastructure and pricing.
That’s where open models come in. But until recently, replicating the same simple chat API structure with an open model wasn’t easy.
Hugging Face’s new Messages API makes this entire shift a lot more natural. It introduces a chat-style interface for open models — without needing you to hack together your own conversation format. You send messages with roles, just like you would with OpenAI, and the model responds accordingly.
This means you can now choose from a growing list of open LLMs — like Meta’s LLaMA, Mistral, or Zephyr — and use them in a familiar way.
Here’s a quick look at how that works in code:
python
from huggingface_hub import InferenceClient

client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.2")

response = client.chat_completion(
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a function to check for prime numbers in Python."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
No need to think about how to format the prompt, and no special tokens to handle. The Messages API takes care of that behind the scenes and returns an OpenAI-style completion object, with the generated text available at response.choices[0].message.content.
With OpenAI, the model is fixed. You’re using GPT-3.5 or GPT-4. But with Hugging Face, you can plug into any model that supports the Messages API. That means the power is in your hands — and the options are expanding fast.
Some popular models that support the new format:
Meta’s Llama 3: A strong general-purpose model with wide community support.
Mistral 7B / Mixtral: Light, fast, and surprisingly strong at reasoning tasks.
Zephyr: Chat-tuned and well-suited for assistant-style responses.
Phi-3: Good performance on smaller hardware, optimized for code and conversation.
Each of these models has its own style, strengths, and quirks — but they can all be plugged into the Messages API the same way. No rewriting your application logic every time you want to switch.
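Because the call shape never changes, switching models can be as small as editing one string. Here’s a minimal sketch, assuming both model IDs below are available through Hugging Face’s hosted inference (swap in whichever chat models you prefer):

python
from huggingface_hub import InferenceClient

def ask(model_id: str, question: str) -> str:
    # Same messages, same call; only the model ID changes.
    client = InferenceClient(model_id)
    response = client.chat_completion(
        messages=[{"role": "user", "content": question}],
        max_tokens=256,
    )
    return response.choices[0].message.content

# Ask the same question of two different open models.
for model in ["mistralai/Mistral-7B-Instruct-v0.2", "HuggingFaceH4/zephyr-7b-beta"]:
    print(model, "->", ask(model, "Explain recursion in one sentence."))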
This flexibility is especially useful for developers working in regulated or cost-sensitive environments. You can self-host the models, fine-tune them, or use Hugging Face’s hosted endpoints. Either way, you’re not tied down.
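It also helps that the Messages API is OpenAI-compatible at the HTTP level. If you self-host a model with Text Generation Inference (TGI), you can keep using the official openai client and simply point it at your own endpoint. A minimal sketch, assuming a TGI server is running locally on port 8080:

python
from openai import OpenAI

# Point the OpenAI client at a self-hosted TGI endpoint instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="-")

response = client.chat.completions.create(
    model="tgi",  # TGI serves a single model, so this value is a placeholder
    messages=[{"role": "user", "content": "What's the capital of France?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)

Existing code written against OpenAI’s chat endpoint keeps working; only the base URL and key change.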
If you're ready to move from OpenAI’s API to open models using Hugging Face, the process is simple. Here’s what to do.
Start by installing the Hugging Face Hub client:
bash
pip install huggingface_hub
If you plan to use hosted endpoints, make sure you have a Hugging Face account and an access token. You can get one from hf.co/settings/tokens.
Head to the Hugging Face Models page and filter for models that support the Messages API. Look for "chat" models or ones with instruction tuning.
Pick one that suits your use case — whether it's coding help, summarization, or general Q&A.
Set your token using the CLI:
bash
huggingface-cli login
Or programmatically:
python
from huggingface_hub import InferenceClient
client = InferenceClient("your-model-id", token="your-access-token")
Now, you're ready to send messages in a chat format.
python
response = client.chat_completion(
    messages=[
        {"role": "system", "content": "You are a travel assistant."},
        {"role": "user", "content": "Suggest a 3-day itinerary for Kyoto."},
    ],
    max_tokens=512,
)
The response comes back in the same structure as OpenAI’s chat completions, so you can read the generated text from response.choices[0].message.content and plug it directly into your application.
From here, you can add more turns to the conversation, maintain history, and experiment with different models. Because the structure is consistent, switching from one model to another is as easy as changing the model name in the client.
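For instance, carrying the conversation forward is just a matter of appending each reply to the message list before the next call. A short sketch, continuing the travel-assistant example above:

python
messages = [
    {"role": "system", "content": "You are a travel assistant."},
    {"role": "user", "content": "Suggest a 3-day itinerary for Kyoto."},
]
first = client.chat_completion(messages=messages, max_tokens=512)

# Append the assistant's reply, then ask a follow-up with full context.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "Swap day 2 for a day trip to Nara."})

follow_up = client.chat_completion(messages=messages, max_tokens=512)
print(follow_up.choices[0].message.content)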
And if you want more control, many of these models are also available for local use with libraries like transformers and vLLM, letting you keep everything offline and tune behavior more deeply.
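As a rough sketch of the local route, recent versions of transformers accept the same role-based message lists directly through the text-generation pipeline (assuming your hardware can hold the model weights):

python
from transformers import pipeline

# Load the model locally; the pipeline applies the chat template for you.
chat = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What's the capital of France?"},
]
result = chat(messages, max_new_tokens=64)

# The pipeline returns the conversation with the new assistant turn appended.
print(result[0]["generated_text"][-1]["content"])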
The Messages API on Hugging Face doesn’t just make open models easier to use — it brings them closer to the simplicity that made OpenAI’s tools so attractive in the first place. You now get the same chat format, the same role-based interaction, and a wide pool of models to choose from. Whether you're experimenting or building something serious, this shift puts you in charge of the tools you use — not the other way around.