Fireworks AI is a platform for building AI product experiences with open-source AI models. You can run and customize models with just a few lines of code.

Using the API, you can access popular open-source models such as Llama and DeepSeek. The example below generates text through an OpenAI-compatible chat completions API endpoint.

In this guide, you will create an API key, set up your development environment, and make your first call to the Fireworks API.

Get an API key

Sign up for or log in to your Fireworks account. Generate an API key by navigating to the API Keys page and clicking ‘Create API key’. Store the key in a safe location.

Set up your developer environment & call the Fireworks API

1. Install the SDK

Before installing, ensure that you have a supported version of Python installed. Optionally, set up a virtual environment to isolate the SDK's dependencies.

pip install --upgrade fireworks-ai

The Fireworks Build SDK provides a declarative way to work with Fireworks resources and is OpenAI API-compatible.
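Because the API is OpenAI-compatible, the underlying request body follows the familiar chat completions shape. A minimal sketch of that payload (the fully qualified model path shown here is an assumption for illustration):

```python
import json

# Hedged sketch: the JSON body an OpenAI-compatible chat completions
# endpoint expects. The fully qualified model path is an assumption.
payload = {
    "model": "accounts/fireworks/models/llama4-maverick-instruct-basic",
    "messages": [{"role": "user", "content": "Say this is a test"}],
    "max_tokens": 64,
}

print(json.dumps(payload, indent=2))
```

Any client that can serialize this shape and send it with your API key in the `Authorization` header can talk to the endpoint; the SDK handles that plumbing for you.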

2. Configure your API key

Set your API key as an environment variable so the SDK can authenticate your requests; the exact steps depend on your OS platform.
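For example, on macOS or Linux (the variable name FIREWORKS_API_KEY is assumed here; substitute your actual key):

```shell
# macOS / Linux (bash or zsh): set the key for the current shell session.
# FIREWORKS_API_KEY is the variable name this sketch assumes the SDK reads.
export FIREWORKS_API_KEY="your-api-key-here"

# Verify it is set
echo "$FIREWORKS_API_KEY"
```

On Windows PowerShell, the equivalent is `$Env:FIREWORKS_API_KEY = "your-api-key-here"`. To persist the variable across sessions, add the line to your shell profile (e.g. `~/.zshrc` or `~/.bashrc`).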

3. Send your first API request

You can quickly instantiate the LLM class and call the Fireworks API. The Build SDK handles deployment management automatically.

from fireworks import LLM

# Basic usage - SDK automatically selects optimal deployment type
llm = LLM(model="llama4-maverick-instruct-basic", deployment_type="auto")

response = llm.chat.completions.create(
    messages=[{"role": "user", "content": "Say this is a test"}]
)

print(response.choices[0].message.content)

You can also pass the API key directly to the LLM constructor: LLM(model="llama4-maverick-instruct-basic", deployment_type="auto", api_key="<FIREWORKS_API_KEY>")
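The resolution order can be pictured as a small helper: an explicit argument wins, otherwise the key is read from the environment. This is purely illustrative; `resolve_api_key` is a hypothetical name, and FIREWORKS_API_KEY is the assumed environment variable:

```python
import os

# Hypothetical sketch of how an SDK might resolve its API key:
# an explicit argument takes precedence over the environment variable.
def resolve_api_key(explicit_key=None):
    return explicit_key or os.environ.get("FIREWORKS_API_KEY")

os.environ["FIREWORKS_API_KEY"] = "env-key"
print(resolve_api_key())           # env-key
print(resolve_api_key("arg-key"))  # arg-key
```

Passing the key explicitly is convenient for quick experiments, but environment variables keep credentials out of source code and version control.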

Explore further