AI development today is less about starting from scratch and more about building on what's already working. OpenAI made that shift easier by making powerful language models accessible to everyone. But the real twist came when the open-source space stepped up. Now, Hugging Face is leading a different kind of conversation — one that's more open, more flexible, and just as capable. And with the new Messages API support on Hugging Face, that shift from OpenAI to open LLMs isn’t just possible — it’s smoother than ever.
Let’s take a closer look at what this shift looks like and how the Messages API fits into it.
Before we even get to the Messages API, it's helpful to understand what OpenAI made easy, and where things were missing. OpenAI's ChatGPT API gave developers a straightforward way to build chat-based tools. You didn't have to assemble prompts by hand or manage special formatting tokens. You just sent a list of messages, and it worked.
Here’s an example of how simple it looked:
```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "What's the capital of France?" }
]
```
That structure — roles like "system", "user", and "assistant" — isn’t just cosmetic. It allows for a natural back-and-forth format that’s closer to real human conversation. You can maintain context, inject behavior instructions, and scale up without rethinking your prompt every time.
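A follow-up turn, for example, just appends to the same list, and the earlier context travels with it:

```json
[
  { "role": "system", "content": "You are a helpful assistant." },
  { "role": "user", "content": "What's the capital of France?" },
  { "role": "assistant", "content": "The capital of France is Paris." },
  { "role": "user", "content": "And what's its population?" }
]
```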
But OpenAI’s API has one catch: it’s closed. You can’t run it locally, you can’t fine-tune it without paying extra, and you’re always bound by their infrastructure and pricing.
That’s where open models come in. But until recently, replicating the same simple chat API structure with an open model wasn’t easy.
Hugging Face’s new Messages API makes this entire shift a lot more natural. It introduces a chat-style interface for open models — without needing you to hack together your own conversation format. You send messages with roles, just like you would with OpenAI, and the model responds accordingly.
This means you can now choose from a growing list of open LLMs — like Meta’s LLaMA, Mistral, or Zephyr — and use them in a familiar way.
Here's what that looks like in code, using the client's chat_completion method:
```python
from huggingface_hub import InferenceClient

client = InferenceClient("mistralai/Mistral-7B-Instruct-v0.2")

# Note: a few chat templates (Mistral-Instruct's included) reject the "system"
# role; for those models, fold the instruction into the first user message.
response = client.chat_completion(
    [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a function to check for prime numbers in Python."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```
No need to think about how to format the prompt. No extra handling of special tokens. The client applies the model's chat template behind the scenes and returns a response in the same shape as OpenAI's, with the generated text ready to pull out.
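If you're curious what "behind the scenes" means, you can render a message list through a model's chat template yourself with transformers. This is just a sketch; the output shown in the comment is Mistral's instruction format and varies by model:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")

messages = [{"role": "user", "content": "What's the capital of France?"}]

# The chat template turns role-based messages into the special-token
# prompt format the model was actually trained on
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # e.g. "<s>[INST] What's the capital of France? [/INST]"
```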
With OpenAI, the model is fixed. You’re using GPT-3.5 or GPT-4. But with Hugging Face, you can plug into any model that supports the Messages API. That means the power is in your hands — and the options are expanding fast.
Some popular models that support the new format:
- Meta's LLaMA 3: Strong general-purpose model with wide community support.
- Mistral 7B / Mixtral: Light, fast, and surprisingly strong at reasoning tasks.
- Zephyr: Chat-tuned and well-suited for assistant-style responses.
- Phi-3: Good performance on smaller hardware, optimized for code and conversation.
Each of these models has its own style, strengths, and quirks — but they can all be plugged into the Messages API the same way. No rewriting your application logic every time you want to switch.
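Here's a minimal sketch of that idea: one helper, two different Hub model IDs (both real at the time of writing; check the Hub for current versions), and no other changes:

```python
from huggingface_hub import InferenceClient

def ask(model_id: str, question: str) -> str:
    """Send one user message to any chat-capable model on the Hub."""
    client = InferenceClient(model_id)
    response = client.chat_completion(
        [{"role": "user", "content": question}],
        max_tokens=256,
    )
    return response.choices[0].message.content

# Same application logic, different backends:
print(ask("mistralai/Mistral-7B-Instruct-v0.2", "Explain recursion in one line."))
print(ask("HuggingFaceH4/zephyr-7b-beta", "Explain recursion in one line."))
```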
This flexibility is especially useful for developers working in regulated or cost-sensitive environments. You can self-host the models, fine-tune them, or use Hugging Face’s hosted endpoints. Either way, you’re not tied down.
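For the hosted route, the same client can point at a dedicated Inference Endpoint instead of a Hub model ID. The URL below is a hypothetical placeholder for your own deployment:

```python
from huggingface_hub import InferenceClient

# Hypothetical endpoint URL; replace with your own deployment's address
client = InferenceClient(
    "https://your-endpoint.endpoints.huggingface.cloud",
    token="your-access-token",
)
```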
If you're ready to move from OpenAI’s API to open models using Hugging Face, the process is simple. Here’s what to do.
Start by installing the Hugging Face Hub client:
```bash
pip install huggingface_hub
```
If you plan to use hosted endpoints, make sure you have a Hugging Face account and an access token. You can get one from hf.co/settings/tokens.
Head to the Hugging Face Models page and filter for models that support the Messages API. Look for "chat" models or ones with instruction tuning.
Pick one that suits your use case — whether it's coding help, summarization, or general Q&A.
Set your token using the CLI:
```bash
huggingface-cli login
```
Or programmatically:
```python
from huggingface_hub import InferenceClient

client = InferenceClient("your-model-id", token="your-access-token")
```
Now, you're ready to send messages in a chat format.
```python
response = client.chat_completion(
    [
        {"role": "system", "content": "You are a travel assistant."},
        {"role": "user", "content": "Suggest a 3-day itinerary for Kyoto."},
    ],
    max_tokens=512,
)
```
The response comes back in the same shape as OpenAI's chat completions, so the generated text sits at response.choices[0].message.content, and you can plug it directly into your application.
From here, you can add more turns to the conversation, maintain history, and experiment with different models. Because the structure is consistent, switching from one model to another is as easy as changing the model name in the client.
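Here's a sketch of that multi-turn loop, reusing the client from the previous step:

```python
# Keep the message list as state; append each reply before the next turn
history = [
    {"role": "system", "content": "You are a travel assistant."},
    {"role": "user", "content": "Suggest a 3-day itinerary for Kyoto."},
]
reply = client.chat_completion(history, max_tokens=512)
history.append({"role": "assistant", "content": reply.choices[0].message.content})

# The follow-up question carries the full history, so context is preserved
history.append({"role": "user", "content": "Swap day 2 for a day trip to Nara."})
reply = client.chat_completion(history, max_tokens=512)
print(reply.choices[0].message.content)
```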
And if you want more control, many of these models are also available for local use with libraries like transformers and vLLM, letting you keep things offline and tweak behavior more deeply.
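For the local route, here's a minimal sketch with transformers, assuming a recent version with chat-template support and hardware that can hold a 7B model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "user", "content": "Write a function to check for prime numbers in Python."},
]

# Same role-based messages; the chat template builds the prompt locally
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```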
The Messages API on Hugging Face doesn’t just make open models easier to use — it brings them closer to the simplicity that made OpenAI’s tools so attractive in the first place. You now get the same chat format, the same role-based interaction, and a wide pool of models to choose from. Whether you're experimenting or building something serious, this shift puts you in charge of the tools you use — not the other way around.