[Beta] Monitor Logs in Production
This is in beta. Expect frequent updates as we improve based on your feedback.
LiteLLM provides an integration to let you monitor logs in production.
👉 Jump to our sample LiteLLM Dashboard: https://admin.litellm.ai/
Debug your first logs
1. Get your LiteLLM Token
Go to admin.litellm.ai and copy the code snippet with your unique token
2. Set up your environment
Set it in your environment
import os
os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
Turn on the LiteLLM client
import litellm
litellm.use_client = True
3. Make a normal completion() call
import litellm
from litellm import completion
import os
# set env variables
os.environ["LITELLM_TOKEN"] = "e24c4c06-d027-4c30-9e78-18bc3a50aebb" # replace with your unique token
os.environ["OPENAI_API_KEY"] = "openai key"
litellm.use_client = True # enable logging dashboard
messages = [{ "content": "Hello, how are you?","role": "user"}]
# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
Your completion() call prints a link to your session dashboard (https://admin.litellm.ai/<your_unique_token>)
In the above case it would be: admin.litellm.ai/e24c4c06-d027-4c30-9e78-18bc3a50aebb
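The dashboard URL follows directly from your token. A minimal sketch of how it is composed, assuming the URL format shown above:

```python
import os

# The dashboard link is the base URL plus your unique token
# (token shown here is the example from this guide -- replace with your own)
token = os.environ.get("LITELLM_TOKEN", "e24c4c06-d027-4c30-9e78-18bc3a50aebb")
dashboard_url = f"https://admin.litellm.ai/{token}"
print(dashboard_url)
```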
Click on your personal dashboard link. Here's how you can find it 👇
👋 Tell us if you need better privacy controls
4. Review the request log
Oh! Looks like our request was made successfully. Let's click on it and see exactly what got sent to the LLM provider.
Ah! So we can see that this request was made to Baseten (see litellm_params > custom_llm_provider) for a model with ID 7qQNLDB (see model). The message sent was "Hey, how's it going?"
and the response received was "As an AI language model, I don't have feelings or emotions, but I can assist you with your queries. How can I assist you today?"
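To make those fields concrete, here is a hypothetical sketch of what such a log entry might look like as a dict, using only the fields called out above (litellm_params > custom_llm_provider and model); the exact schema of the dashboard's log entries may differ:

```python
# Hypothetical log-entry structure mirroring the fields shown in the dashboard
log_entry = {
    "model": "7qQNLDB",  # the model ID shown under "model"
    "litellm_params": {"custom_llm_provider": "baseten"},  # provider the call went to
    "messages": [{"role": "user", "content": "Hey, how's it going?"}],
}

# Pull out the provider and model ID the same way you'd read them off the dashboard
provider = log_entry["litellm_params"]["custom_llm_provider"]
model_id = log_entry["model"]
print(f"Request sent to {provider}, model ID {model_id}")
```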
🎉 Congratulations! You've successfully debugged your first log!