# Showing full traces of LLM calls with Langfuse

Valohai LLM provides a visual overview of each of your evaluations, but what if you want to dive deeper into the traces produced by each evaluation?

We offer an integration with Langfuse that lets you open the traces tied to your evaluations directly from the Valohai LLM interface. You will need to have Langfuse set up to capture traces; Valohai LLM takes care of connecting each result to its trace.

### 1. Install Langfuse and Valohai LLM

```bash
pip install langfuse valohai-llm
```

### 2. Set environment variables

```bash
# Langfuse credentials (from your Langfuse project settings)
export LANGFUSE_PUBLIC_KEY="pk-..."
export LANGFUSE_SECRET_KEY="sk-..."
export LANGFUSE_HOST="https://cloud.langfuse.com"  # or your self-hosted URL

# Valohai LLM credentials
export VALOHAI_LLM_API_KEY="your-jwt-key"
export VALOHAI_LLM_URL="https://llm.valohai.com"  # or your instance
```
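
Since both SDKs read these variables at import time, a quick sanity check before running an evaluation can save a confusing failure later. The snippet below is an optional sketch (not part of either SDK); the variable names match the exports above — adjust them if your deployment uses different ones.

```python
import os

# Variable names from the exports above; adjust for your deployment.
REQUIRED_VARS = [
    "LANGFUSE_PUBLIC_KEY",
    "LANGFUSE_SECRET_KEY",
    "LANGFUSE_HOST",
    "VALOHAI_LLM_API_KEY",
    "VALOHAI_LLM_URL",
]

def missing_vars(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Call `missing_vars()` at the top of your evaluation script and fail fast if it returns anything.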

### 3. Write your evaluation code

Your evaluation code should be set up to include Langfuse tracing. Valohai LLM will capture the trace URLs automatically and include them in the results that get posted to Valohai LLM.

#### Posting individual results

This example evaluates an LLM's summarization ability by asking GPT-4 to summarize an article and scoring the output. In practice you'd run this in a loop over many articles, posting one result per article. Note that posting a result to Valohai LLM works exactly as it does without Langfuse tracing.

```python
import valohai_llm  # importing this installs the Langfuse hook automatically
from langfuse.openai import openai  # Langfuse's drop-in OpenAI wrapper

# This call is traced by Langfuse automatically
response = openai.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize this article: ..."}],
)

# Compute your evaluation metrics however you like
# (evaluate_summary is your own scoring function, not part of either library)
accuracy = evaluate_summary(response.choices[0].message.content)

# Post to Valohai LLM — the langfuse_last_trace_url is included automatically
valohai_llm.post_result(
    task="summarization-eval",
    metrics={"accuracy": accuracy},
    labels={"model": "gpt-4", "dataset": "cnn-dailymail"},
)
```
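The example above leaves `evaluate_summary` undefined, since how you score a summary is up to you. As a purely illustrative placeholder, a toy scorer might measure keyword coverage; in practice you would use an LLM judge or a metric such as ROUGE. The function name and default keywords here are assumptions, not part of any SDK:

```python
def evaluate_summary(summary, reference_keywords=("economy", "inflation", "policy")):
    """Toy accuracy: fraction of reference keywords mentioned in the summary."""
    summary_lower = summary.lower()
    hits = sum(1 for kw in reference_keywords if kw in summary_lower)
    return hits / len(reference_keywords)
```

Whatever scorer you use, it just needs to return the numbers you put in the `metrics` dict.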

#### Running as a task

If your evaluations are configured as tasks in Valohai LLM (with parameters and datasets), the integration works the same way. The `task.run()` method calls `post_result()` internally for each evaluation, so trace URLs are captured automatically.

This example evaluates correctness across a dataset: the task defines which models to test and which dataset items to run against, and the library handles the iteration. Each item's prompt is sent to the LLM and the response is checked against an expected answer.

```python
import valohai_llm
from langfuse.openai import openai

task = valohai_llm.get_current_task()

def evaluate(*, params, item):
    # The Langfuse-instrumented client captures the trace URL automatically
    response = openai.chat.completions.create(
        model=params["model"],
        messages=[{"role": "user", "content": item["prompt"]}],
    )
    correct = 1 if response.choices[0].message.content == item["expected"] else 0
    return {"correct": correct}

task.run(evaluate, item_labels=["category"])
```

Each invocation of `evaluate` produces a Langfuse trace and a Valohai LLM result, linked together automatically.

> The trace URL in global state gets **overwritten** on each new span start, not accumulated. If your evaluation function makes one LLM call per invocation, the URL will correctly correspond to that call's trace. If you make multiple LLM calls within a single evaluation, the metadata will only hold the URL for the last trace that started.

### 4. View your results in Valohai LLM

When you open Valohai LLM, you will find a direct link to each result's trace in the rightmost column of the results table. Click the icon to open the trace in Langfuse.

<figure><img src="https://4109720758-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Ff3mjTRQNkASbnMbJqzJ2%2Fuploads%2FaE7la569WN16k4qVG7pZ%2FScreenshot%202026-03-04%20at%2014.22.17.png?alt=media&#x26;token=594e7b41-df83-4209-8f28-bf056d0d4bf3" alt=""><figcaption></figcaption></figure>
