NEW in v0.12.0 - Built-in tracing infrastructure for debugging, analyzing, and monitoring agent behavior.
“A trace is to a web agent what a commit history is to a Git repo.”
from sentience import SentienceBrowser, SentienceAgent
from sentience.llm_provider import OpenAIProvider
from sentience.tracing import Tracer, JsonlTraceSink
from sentience.agent_config import AgentConfig
# 1. Create a tracer with JSONL file sink
tracer = Tracer(
    run_id="shopping-bot-run-123",
    sink=JsonlTraceSink("trace.jsonl")
)
# 2. Configure agent behavior (optional)
config = AgentConfig(
    snapshot_limit=50,         # Max elements per snapshot
    temperature=0.0,           # LLM temperature
    max_retries=1,             # Retries on failure
    capture_screenshots=True,  # Include screenshots in traces
    screenshot_format="jpeg",  # jpeg or png
    screenshot_quality=80      # 1-100 for JPEG
)
# 3. Create agent with tracing enabled
browser = SentienceBrowser()
llm = OpenAIProvider(api_key="your-key", model="gpt-4o")
agent = SentienceAgent(browser, llm, tracer=tracer, config=config)
# 4. Use agent normally - all actions are automatically traced
with browser:
    browser.page.goto("https://amazon.com")
    agent.act("Click the search box")
    agent.act("Type 'magic mouse' into search")
    agent.act("Press Enter")

# Trace events are written to trace.jsonl
Each action generates multiple trace events saved to the JSONL file:
Event Types:
step_start - Agent begins executing a goal
snapshot - Page state captured
llm_query - LLM decision made
action - Action executed (click, type, press)
step_end - Step completed successfully
error - Error occurred
Example trace.jsonl:
{"v":1,"type":"step_start","ts":"2025-12-26T10:00:00.000Z","run_id":"run-123","seq":1,"step_id":"step-1","data":{"step_index":1,"goal":"Click the search box","attempt":0,"pre_url":"https://amazon.com"}}
{"v":1,"type":"snapshot","ts":"2025-12-26T10:00:01.000Z","run_id":"run-123","seq":2,"step_id":"step-1","data":{"url":"https://amazon.com","element_count":127,"timestamp":"2025-12-26T10:00:01.000Z"}}
{"v":1,"type":"llm_query","ts":"2025-12-26T10:00:02.000Z","run_id":"run-123","seq":3,"step_id":"step-1","data":{"prompt_tokens":1523,"completion_tokens":12,"model":"gpt-4o","response":"CLICK(42)"}}
{"v":1,"type":"action","ts":"2025-12-26T10:00:03.000Z","run_id":"run-123","seq":4,"step_id":"step-1","data":{"action":"click","element_id":42,"success":true,"outcome":"dom_updated","duration_ms":234,"post_url":"https://amazon.com"}}
{"v":1,"type":"step_end","ts":"2025-12-26T10:00:03.000Z","run_id":"run-123","seq":5,"step_id":"step-1","data":{"success":true,"duration_ms":3142,"action":"click"}}
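Because each line is a standalone JSON object, a trace file can be analyzed with nothing but the standard library. A minimal sketch (the sample events are abbreviated versions of the lines above; field names follow the event schema shown in this document):

```python
import json
from collections import Counter

# Sample trace lines, abbreviated from the trace.jsonl format above
trace_lines = [
    '{"v":1,"type":"step_start","run_id":"run-123","seq":1,"data":{"goal":"Click the search box"}}',
    '{"v":1,"type":"snapshot","run_id":"run-123","seq":2,"data":{"element_count":127}}',
    '{"v":1,"type":"llm_query","run_id":"run-123","seq":3,"data":{"response":"CLICK(42)"}}',
    '{"v":1,"type":"action","run_id":"run-123","seq":4,"data":{"action":"click","success":true}}',
    '{"v":1,"type":"step_end","run_id":"run-123","seq":5,"data":{"success":true}}',
]

events = [json.loads(line) for line in trace_lines]

# Count events by type, and collect any failed actions
counts = Counter(e["type"] for e in events)
failed = [e for e in events if e["type"] == "action" and not e["data"]["success"]]
```

In a real script, replace `trace_lines` with `open("trace.jsonl")` and iterate line by line.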
Configure agent behavior with AgentConfig:
from sentience.agent_config import AgentConfig
config = AgentConfig(
    # Snapshot settings
    snapshot_limit=50,         # Max elements to include (default: 50)
    # LLM settings
    temperature=0.0,           # LLM temperature 0.0-1.0 (default: 0.0)
    max_retries=1,             # Retries on failure (default: 1)
    # Verification
    verify=True,               # Verify action success (default: True)
    # Screenshot settings
    capture_screenshots=True,  # Capture screenshots (default: True)
    screenshot_format="jpeg",  # "jpeg" or "png" (default: "jpeg")
    screenshot_quality=80      # 1-100 for JPEG (default: 80)
)
agent = SentienceAgent(browser, llm, config=config)
Compute fingerprints to detect when page state hasn't changed:
from sentience import snapshot
from sentience.utils import compute_snapshot_digests
snap = snapshot(browser)
# Compute both strict and loose digests
digests = compute_snapshot_digests(snap.elements)
print(digests["strict"]) # sha256:abc123... (changes if text changes)
print(digests["loose"]) # sha256:def456... (only changes if layout changes)
# Use for loop detection: compare digests from successive snapshots
previous_digest = digests["strict"]
# ... agent acts, then take a new snapshot ...
current_digest = compute_snapshot_digests(snapshot(browser).elements)["strict"]
if current_digest == previous_digest:
    print("Agent is stuck in a loop!")
Digest Types:
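The loop check above can be extended to watch a window of recent digests instead of only the last one. A self-contained sketch: the `digest()` function here is a stand-in for `compute_snapshot_digests` (its exact hashing scheme is an assumption), but the windowing logic is independent of it:

```python
import hashlib
from collections import deque

def digest(elements):
    # Hypothetical strict digest: hash role/text/position of every element
    payload = "|".join(f"{e['role']}:{e['text']}:{e['x']},{e['y']}" for e in elements)
    return "sha256:" + hashlib.sha256(payload.encode()).hexdigest()

recent = deque(maxlen=3)  # keep the last 3 snapshot digests

def record_and_check(elements) -> bool:
    """Return True if the page state has been identical for 3 snapshots in a row."""
    recent.append(digest(elements))
    return len(recent) == recent.maxlen and len(set(recent)) == 1

page = [{"role": "button", "text": "Sign In", "x": 100, "y": 50}]
assert not record_and_check(page)  # 1 sample: not enough history
assert not record_and_check(page)  # 2 samples: still not enough
assert record_and_check(page)      # 3 identical digests -> likely stuck
```

A window larger than one avoids false positives from pages that briefly return to a previous state.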
Format snapshots for LLM consumption:
from sentience.formatting import format_snapshot_for_llm
snap = snapshot(browser)
# Format top 50 elements for LLM context
llm_context = format_snapshot_for_llm(snap, limit=50)
print(llm_context)
# Output:
# [1] <button> "Sign In" {PRIMARY,CLICKABLE} @ (100,50) (Imp:10)
# [2] <input> "Email address" {CLICKABLE} @ (100,100) (Imp:8)
# [3] <link> "Forgot password?" @ (150,140) (Imp:5)
Format Explanation:
[ID] - Element ID for actions
<role> - Semantic role (button, input, link, etc.)
"text" - Element text (truncated to 50 chars)
{CUES} - Visual cues (PRIMARY, CLICKABLE)
@ (x,y) - Screen position
(Imp:score) - Importance score (0-10)
NEW in v0.12.0+ - Upload traces and screenshots to cloud storage for remote viewing, analysis, and collaboration.
The create_tracer() function signature:
def create_tracer(
    api_key: str | None = None,
    run_id: str | None = None,
    api_url: str | None = None,
    logger: SentienceLogger | None = None,
    upload_trace: bool = False,
    goal: str | None = None,
    agent_type: str | None = None,
    llm_model: str | None = None,
    start_url: str | None = None,
    screenshot_processor: Callable[[str], str] | None = None,
)
Cloud tracing enables Pro, Builder, Teams, and Enterprise tier users to:
from sentience import SentienceBrowser, SentienceAgent
from sentience.llm_provider import OpenAIProvider
from sentience.tracer_factory import create_tracer
# 1. Create tracer with automatic tier detection
tracer = create_tracer(
    api_key="sk_pro_xxxxx",     # Pro/Builder/Teams/Enterprise tier key
    run_id="shopping-bot-123",  # Gateway requires UUID format
    upload_trace=True,          # Set to True if you want cloud upload
    goal="Buy a laptop from Amazon",
    agent_type="Amazon Shopping Agent",
    llm_model="gpt-4o",
    start_url="https://www.amazon.com",
    screenshot_processor=None,  # function for PII redaction
)
# 2. Create agent with tracer
browser = SentienceBrowser(api_key="sk_pro_xxxxx")
llm = OpenAIProvider(api_key="your_openai_key", model="gpt-4o")
agent = SentienceAgent(browser, llm, tracer=tracer)
# 3. Use agent normally - traces automatically uploaded
with browser:
    browser.page.goto("https://amazon.com")
    agent.act("Click the search box")
    agent.act("Type 'wireless mouse' into search")
    agent.act("Press Enter")

# 4. Upload to cloud (happens automatically on close)
tracer.close()  # Uploads trace + screenshots to cloud
The create_tracer() function automatically detects your tier and configures the appropriate sink:
Pro/Builder/Teams/Enterprise Tier (with API key and upload enabled):
CloudTraceSink (uploads to cloud)
Local-only tracing (opt-out of cloud upload):
JsonlTraceSink (local-only even with API key)
Free Tier (no API key):
JsonlTraceSink (local-only)
Graceful Fallback:
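The tier mapping above boils down to a two-input decision. A sketch with stub sink classes (these stubs are illustrative only, not the SDK's actual implementation):

```python
class JsonlTraceSink:
    """Stub: writes trace events to a local JSONL file."""
    def __init__(self, path): self.path = path

class CloudTraceSink:
    """Stub: uploads trace events to the cloud."""
    def __init__(self, api_key): self.api_key = api_key

def select_sink(api_key, upload_trace, path="trace.jsonl"):
    # Cloud sink only when a paid-tier key is present AND upload is opted in;
    # every other combination falls back to a local JSONL file.
    if api_key and upload_trace:
        return CloudTraceSink(api_key)
    return JsonlTraceSink(path)

assert isinstance(select_sink("sk_pro_xxxxx", True), CloudTraceSink)
assert isinstance(select_sink("sk_pro_xxxxx", False), JsonlTraceSink)  # opt-out
assert isinstance(select_sink(None, True), JsonlTraceSink)             # free tier
```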
After uploading, access your traces via:
GET /api/traces/list to list all runs
Always close the tracer:
try:
    # Your agent code
    pass
finally:
    tracer.close()  # Ensures upload even on errors
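The try/finally pattern above can be packaged once as a context manager so every script gets it for free. A minimal sketch (the `FakeTracer` stub stands in for a real tracer; only a `close()` method is assumed):

```python
from contextlib import contextmanager

@contextmanager
def traced_run(tracer):
    """Yield the tracer and guarantee close() runs, even if the agent raises."""
    try:
        yield tracer
    finally:
        tracer.close()

class FakeTracer:
    """Stub tracer used to demonstrate the guarantee."""
    closed = False
    def close(self): self.closed = True

t = FakeTracer()
try:
    with traced_run(t):
        raise RuntimeError("agent crashed mid-step")
except RuntimeError:
    pass
assert t.closed  # close() ran despite the error
```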
Use non-blocking uploads for long-running agents:
tracer.close(blocking=False) # Don't wait for upload
Set meaningful run IDs and metadata:
tracer = create_tracer(
    api_key="sk_pro_xxxxx",
    run_id=f"amazon-shopping-{datetime.now().strftime('%Y%m%d-%H%M%S')}",
    upload_trace=True,
    goal="Buy a laptop from Amazon",
    agent_type="Amazon Shopping Agent",
    llm_model="gpt-4o",
    start_url="https://www.amazon.com",
    screenshot_processor=None,  # function for PII redaction
)
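The `screenshot_processor` parameter is a `Callable[[str], str]` per the signature above. Assuming the string is a base64-encoded image (an assumption, not stated in the signature), a hypothetical redactor can simply swap every capture for a fixed placeholder so no page content leaves the machine:

```python
import base64

# Fixed placeholder returned for every screenshot (hypothetical redaction policy)
REDACTED = base64.b64encode(b"redacted-placeholder-image").decode()

def redact_screenshot(b64_image: str) -> str:
    """Matches the screenshot_processor shape: base64 string in, base64 string out."""
    return REDACTED
```

Pass it as `create_tracer(..., screenshot_processor=redact_screenshot)`; a real implementation might instead decode the image and blur only sensitive regions.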
Enable screenshots for debugging:
config = AgentConfig(capture_screenshots=True)
agent = SentienceAgent(browser, llm, tracer=tracer, config=config)
Implement custom trace storage by extending the TraceSink interface. This allows you to store traces in databases, cloud storage, or any custom backend.
from sentience.tracing import TraceSink
class CustomTraceSink(TraceSink):
    """Base interface for trace storage"""

    def emit(self, event_dict: dict) -> None:
        """Write a single trace event"""
        raise NotImplementedError

    def close(self) -> None:
        """Close the sink and flush any pending writes"""
        raise NotImplementedError
Store traces directly in a database:
from sentience.tracing import TraceSink, Tracer
import psycopg2
import json
class DatabaseTraceSink(TraceSink):
    """Store traces in PostgreSQL database"""

    def __init__(self, connection_string: str):
        self.conn = psycopg2.connect(connection_string)
        self.cursor = self.conn.cursor()
        # Create traces table if it doesn't exist
        self.cursor.execute("""
            CREATE TABLE IF NOT EXISTS trace_events (
                id SERIAL PRIMARY KEY,
                run_id TEXT NOT NULL,
                seq INTEGER NOT NULL,
                type TEXT NOT NULL,
                timestamp TIMESTAMPTZ NOT NULL,
                data JSONB NOT NULL,
                created_at TIMESTAMPTZ DEFAULT NOW()
            )
        """)
        self.conn.commit()

    def emit(self, event_dict: dict) -> None:
        """Insert trace event into database"""
        self.cursor.execute("""
            INSERT INTO trace_events (run_id, seq, type, timestamp, data)
            VALUES (%s, %s, %s, %s, %s)
        """, (
            event_dict["run_id"],
            event_dict["seq"],
            event_dict["type"],
            event_dict["ts"],
            json.dumps(event_dict["data"])
        ))
        self.conn.commit()

    def close(self) -> None:
        """Close database connection"""
        self.cursor.close()
        self.conn.close()

# Usage
tracer = Tracer(
    run_id="run-123",
    sink=DatabaseTraceSink("postgresql://user:pass@localhost/traces")
)
agent = SentienceAgent(browser, llm, tracer=tracer)
Upload traces to S3, Google Cloud Storage, or other cloud providers:
from sentience.tracing import TraceSink
import boto3
import json
from typing import List
class S3TraceSink(TraceSink):
    """Store traces in AWS S3"""

    def __init__(self, bucket: str, prefix: str = "traces/"):
        self.s3 = boto3.client('s3')
        self.bucket = bucket
        self.prefix = prefix
        self.events: List[dict] = []

    def emit(self, event_dict: dict) -> None:
        """Buffer trace events in memory"""
        self.events.append(event_dict)

    def close(self) -> None:
        """Upload all events to S3"""
        if not self.events:
            return
        run_id = self.events[0]["run_id"]
        key = f"{self.prefix}{run_id}.jsonl"
        # Convert events to JSONL
        jsonl_content = "\n".join(json.dumps(e) for e in self.events)
        # Upload to S3
        self.s3.put_object(
            Bucket=self.bucket,
            Key=key,
            Body=jsonl_content.encode('utf-8'),
            ContentType='application/x-ndjson'
        )
        print(f"✅ Uploaded trace to s3://{self.bucket}/{key}")

# Usage
tracer = Tracer(
    run_id="run-123",
    sink=S3TraceSink(bucket="my-traces", prefix="production/")
)
Write traces to multiple sinks simultaneously:
from sentience.tracing import TraceSink, JsonlTraceSink
from typing import List
class MultiSink(TraceSink):
    """Write to multiple trace sinks simultaneously"""

    def __init__(self, sinks: List[TraceSink]):
        self.sinks = sinks

    def emit(self, event_dict: dict) -> None:
        """Emit to all sinks"""
        for sink in self.sinks:
            sink.emit(event_dict)

    def close(self) -> None:
        """Close all sinks"""
        for sink in self.sinks:
            sink.close()

# Usage: Write to both local file and database
tracer = Tracer(
    run_id="run-123",
    sink=MultiSink([
        JsonlTraceSink("trace.jsonl"),
        DatabaseTraceSink("postgresql://localhost/traces"),
        S3TraceSink("my-traces")
    ])
)
Integrate Sentience tracing with your existing logging infrastructure using the SentienceLogger interface.
from typing import Protocol
class SentienceLogger(Protocol):
    """Protocol for optional logger interface."""

    def info(self, message: str) -> None:
        """Log info message."""
        ...

    def warning(self, message: str) -> None:
        """Log warning message."""
        ...

    def error(self, message: str) -> None:
        """Log error message."""
        ...
import logging
from datetime import datetime
from sentience import create_tracer
# Use Python's built-in logging module
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
# Add handler to output to console
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('[%(levelname)s] %(message)s'))
logger.addHandler(handler)
# Create tracer with logger
tracer = create_tracer(
    api_key="sk_pro_xxxxx",
    run_id=f"amazon-shopping-{datetime.now().strftime('%Y%m%d-%H%M%S')}",
    logger=logger,  # Pass standard Python logger (implements the protocol)
    upload_trace=True,
    goal="Buy a laptop from Amazon",
    agent_type="Amazon Shopping Agent",
    llm_model="gpt-4o",
    start_url="https://www.amazon.com",
    screenshot_processor=None,  # function for PII redaction
)
# The logger will receive messages like:
# [INFO] Trace file size: 2.45 MB
# [INFO] Screenshot total: 0.00 MB
# [INFO] Trace completion reported to gateway
# [WARN] Failed to report trace completion: HTTP 500
from sentience import create_tracer
class CustomLogger:
    """Custom logger implementation"""

    def info(self, message: str) -> None:
        print(f"[INFO] {message}")
        # Send to monitoring service, file, etc.

    def warning(self, message: str) -> None:
        print(f"[WARN] {message}")
        # Alert team, log to error tracking

    def error(self, message: str) -> None:
        print(f"[ERROR] {message}")
        # Critical alert, page on-call engineer

custom_logger = CustomLogger()
tracer = create_tracer(
    api_key="sk_pro_xxxxx",     # Pro/Builder/Teams/Enterprise tier key
    run_id="shopping-bot-123",  # Gateway requires UUID format
    logger=custom_logger,       # Pass the custom logger
    upload_trace=True,          # Set to True if you want cloud upload
    goal="Buy a laptop from Amazon",
    agent_type="Amazon Shopping Agent",
    llm_model="gpt-4o",
    start_url="https://www.amazon.com",
    screenshot_processor=None,  # function for PII redaction
)
The logger receives the following types of messages:
Benefits:
Traces automatically survive process crashes and are recovered on next SDK initialization.
Traces are written to ~/.sentience/traces/pending/ during execution.
from sentience import create_tracer, SentienceBrowser, SentienceAgent
from sentience.llm_provider import OpenAIProvider
# Run 1: Agent crashes mid-execution
tracer = create_tracer(
    api_key="sk_pro_xxxxx",
    run_id="run-1",
    upload_trace=True,
    goal="Click button on example.com",
    agent_type="Example Agent",
    llm_model="gpt-4o",
    start_url="https://example.com",
    screenshot_processor=None,  # function for PII redaction
)
browser = SentienceBrowser(api_key="sk_pro_xxxxx")
llm = OpenAIProvider(api_key="your_key", model="gpt-4o")
agent = SentienceAgent(browser, llm, tracer=tracer)
with browser:
    browser.page.goto("https://example.com")
    agent.act("Click button")  # Process crashes here - trace saved locally

# Run 2: SDK automatically recovers and uploads orphaned trace
tracer = create_tracer(
    api_key="sk_pro_xxxxx",
    run_id="run-2",
    upload_trace=True,
    goal="Continue from previous run",
    agent_type="Example Agent",
    llm_model="gpt-4o",
    start_url="https://example.com",
    screenshot_processor=None,  # function for PII redaction
)
# Prints: "⚠️ [Sentience] Found 1 un-uploaded trace(s) from previous runs"
# Prints: "✅ Uploaded orphaned trace: run-1"
browser = SentienceBrowser(api_key="sk_pro_xxxxx")
agent = SentienceAgent(browser, llm, tracer=tracer)
# Continue with run-2...
Trace events are buffered on disk at ~/.sentience/traces/pending/{run_id}.jsonl. Recovery of orphaned traces happens during create_tracer() initialization, and successfully uploaded traces are removed from ~/.sentience/traces/pending/.
Always use meaningful run IDs:
run_id = f"shopping-{datetime.now().isoformat()}"
tracer = create_tracer(
    api_key="sk_pro_xxxxx",
    run_id=run_id,
    upload_trace=True,
    goal="Buy a laptop from Amazon",
    agent_type="Amazon Shopping Agent",
    llm_model="gpt-4o",
    start_url="https://www.amazon.com",
    screenshot_processor=None,  # function for PII redaction
)
Monitor cache directory:
ls -lh ~/.sentience/traces/pending/
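The recovery scan described above can be approximated in a few lines if you want to inspect pending traces programmatically. A sketch: `find_orphaned_traces` is a hypothetical helper, not an SDK function, and the example uses a temporary directory as a stand-in for ~/.sentience/traces/pending/:

```python
import json
import tempfile
from pathlib import Path

def find_orphaned_traces(pending_dir: Path) -> list:
    """Return run_ids of trace files left behind by previous runs."""
    return sorted(p.stem for p in pending_dir.glob("*.jsonl"))

# Simulate a crashed run's leftover file in a temp stand-in for the pending dir
pending = Path(tempfile.mkdtemp())
(pending / "run-1.jsonl").write_text(json.dumps({"v": 1, "type": "step_start"}) + "\n")

orphans = find_orphaned_traces(pending)  # ["run-1"]
```

Point `pending_dir` at `Path.home() / ".sentience/traces/pending"` to inspect your real cache.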
Use try/finally for cleanup:
tracer = create_tracer(
    api_key="sk_pro_xxxxx",
    run_id="run-123",
    upload_trace=True,
    goal="Complete task",
    agent_type="My Agent",
    llm_model="gpt-4o",
    start_url="https://example.com",
    screenshot_processor=None,  # function for PII redaction
)
try:
    # Agent code
    agent.act("Do something")
finally:
    tracer.close()  # Ensures upload even on errors
Check logs for recovery messages:
Upload traces in the background to avoid blocking your script execution.
from sentience import create_tracer
tracer = create_tracer(
    api_key="sk_pro_xxxxx",
    run_id="run-123",
    upload_trace=True,
    goal="Example task",
    agent_type="Example Agent",
    llm_model="gpt-4o",
    start_url="https://example.com",
    screenshot_processor=None,  # function for PII redaction
)
# ... agent execution ...
# Option 1: Blocking (default) - waits for upload to complete
tracer.close(blocking=True) # Script pauses here until upload finishes
print("Upload complete!")
# Option 2: Non-blocking - returns immediately
tracer.close(blocking=False) # Script continues immediately
print("Upload started in background!")
# Script can exit or continue with other work
Monitor upload progress with callbacks:
def progress_callback(uploaded_bytes: int, total_bytes: int):
    percent = (uploaded_bytes / total_bytes) * 100
    print(f"Upload progress: {percent:.1f}% ({uploaded_bytes}/{total_bytes} bytes)")
tracer.close(blocking=True, on_progress=progress_callback)
# Output:
# Upload progress: 25.0% (262144/1048576 bytes)
# Upload progress: 50.0% (524288/1048576 bytes)
# Upload progress: 75.0% (786432/1048576 bytes)
# Upload progress: 100.0% (1048576/1048576 bytes)
Use non-blocking uploads when:
Use blocking uploads when:
Cause: Your API key is valid but account is on Free tier
Solution:
Set upload_trace=False to suppress this message.
Cause: Network connectivity issue or API service temporarily unavailable
Solution:
Cause: Server error (temporary)
Solution:
The trace is retained at ~/.sentience/traces/pending/{run_id}.jsonl
Cause: Upload failed but trace is safely stored on disk
Solution:
All trace events follow this structure:
{
  "v": 1,                  // Schema version
  "type": "event_type",    // Event type (step_start, snapshot, llm_query, action, step_end, error)
  "ts": "2025-12-26T...",  // ISO 8601 timestamp
  "run_id": "run-123",     // Run identifier
  "seq": 1,                // Sequence number (auto-increments)
  "step_id": "step-1",     // Step identifier (optional)
  "data": {...},           // Event-specific data
  "ts_ms": 1735210800000   // Unix timestamp in milliseconds (optional)
}
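Since the format may evolve, it can be useful to sanity-check events before processing them downstream. A minimal validator that mirrors the structure above (this is an illustrative helper, not the SDK's own validation):

```python
# Fields and event types taken from the schema documented above
REQUIRED = {"v", "type", "ts", "run_id", "seq", "data"}
KNOWN_TYPES = {"step_start", "snapshot", "llm_query", "action", "step_end", "error"}

def validate_event(event: dict) -> list:
    """Return a list of problems; an empty list means the event looks well-formed."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED - event.keys())]
    if event.get("v") != 1:
        problems.append("unexpected schema version")
    if event.get("type") not in KNOWN_TYPES:
        problems.append(f"unknown event type: {event.get('type')}")
    return problems

ok = {"v": 1, "type": "action", "ts": "2025-12-26T10:00:03.000Z",
      "run_id": "run-123", "seq": 4, "data": {"action": "click"}}
assert validate_event(ok) == []
```

Treat unknown fields as forward-compatible additions rather than errors, in line with the warning above.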
⚠️ Warning: Trace formats and internal fields may evolve and are not guaranteed to be stable across versions.
Event Types: