The snapshot() function captures the current page state and returns all interactive elements with semantic information.
```python
from sentience import snapshot, SnapshotOptions, SnapshotFilter

# Basic snapshot (uses default options)
snap = snapshot(browser)

# With screenshot and limit
snap = snapshot(browser, SnapshotOptions(
    screenshot=True,
    limit=200
))

# Force local processing (no credits used)
snap = snapshot(browser, SnapshotOptions(use_api=False))

# With filtering
snap = snapshot(browser, SnapshotOptions(
    filter=SnapshotFilter(
        min_area=100,
        allowed_roles=["button", "link"]
    )
))

# Or use a dict for the filter (also supported)
snap = snapshot(browser, SnapshotOptions(
    filter={"min_area": 100, "allowed_roles": ["button", "link"]}
))
```

Credit Consumption:
When an `api_key` is provided, this calls the server-side `/v1/snapshot` endpoint, which consumes 1 credit per call (metered billing). Set `use_api=False` for local processing (no credits, but no importance ranking).

Payload Size Limit:
Use the `limit` option to reduce the number of elements, or use `use_api=False` for local processing (no size limit).

Python:
- `browser` (SentienceBrowser): Browser instance
- `options` (SnapshotOptions, optional): Snapshot configuration options

`SnapshotOptions` fields:
- `screenshot` (bool | ScreenshotConfig, optional): Capture a screenshot. `True` for PNG, or `{"format": "jpeg", "quality": 80}`. Default: `False`.
- `limit` (int, optional): Maximum number of elements to return. Default: 50 (server) or all (local). Range: 1-500.
- `filter` (SnapshotFilter | dict, optional): Filter options:
  - `min_area`: Minimum element area in pixels
  - `allowed_roles`: List of roles to include (e.g., `["button", "link"]`)
  - `min_z_index`: Minimum z-index value
- `use_api` (bool, optional): Force the server API (`True`) or the local extension (`False`). Auto-detects if `None`.
- `show_overlay` (bool, optional): Display a visual overlay in the browser highlighting detected elements. Default: `False`.
- `goal` (str, optional): Goal/task description for ML reranking.

TypeScript:
- `browser` (SentienceBrowser): Browser instance
- `options` (object, optional):
  - `screenshot` (boolean | object): Capture a screenshot
  - `limit` (number): Maximum elements to return
  - `filter` (object): Filter options
  - `use_api` (boolean): Force the server API or the local extension
  - `show_overlay` (boolean): Display a visual overlay (default: `false`)
  - `goal` (string, optional): Goal/task description for ML reranking

Returns a Snapshot object with:
- `elements`: List of Element objects (sorted by importance)
- `url`: Current page URL
- `viewport`: Viewport dimensions
- `timestamp`: Snapshot timestamp
- `screenshot`: Base64-encoded image (if requested)

Each element in `snapshot.elements` has:
- `id`: Unique identifier for clicking
- `role`: Semantic role (button, link, textbox, heading, etc.)
- `text`: Visible text content
- `importance`: AI importance score (0-1000, higher = more important)
- `bbox`: Bounding box (x, y, width, height)
- `visual_cues`: Visual analysis (is_primary, is_clickable, background_color)
- `in_viewport`: Is the element visible in the viewport?
- `is_occluded`: Is the element covered by another element?
- `rerank_index` (optional): 0-based rank after ML reranking (only when `goal` is provided)
- `heuristic_index` (optional): 0-based rank before ML reranking (original heuristic position)
- `ml_probability` (optional): ML model confidence score (0.0-1.0, higher = more confident)
- `ml_score` (optional): Raw logit score from the ONNX model (for debugging and analysis)

When `show_overlay=True`, Sentience displays a visual overlay in the browser highlighting all detected elements:
Color Coding: primary elements (`is_primary=true`) are highlighted in a distinct color.

Visual Indicators: each highlighted element is labeled with its `importance` score.

Use Cases:
```python
# Example: debug why a button isn't being clicked
import time

from sentience import SnapshotOptions, find, snapshot

browser.goto("https://example.com")
snap = snapshot(browser, SnapshotOptions(show_overlay=True))  # See what's detected
time.sleep(6)  # Wait to inspect the overlay

# Check if your target button is in the results
button = find(snap, "role=button text~'Submit'")
if not button:
    print("❌ Button not found - check the overlay to see what's detected")
```

When you provide a `goal` parameter in `SnapshotOptions`, the server uses an ONNX-based machine learning model to rerank elements by relevance to your goal. This dramatically improves element selection accuracy for agent tasks.
```python
# Trigger ML reranking by providing a goal
snap = snapshot(browser, SnapshotOptions(
    goal="Click the login button",
    limit=50
))

# Elements are now sorted by ML relevance, not just heuristic importance
for element in snap.elements[:5]:
    print(f"[{element.id}] {element.role}: {element.text}")
    if element.ml_probability:
        print(f"  ML Confidence: {element.ml_probability:.2%}")
        print(f"  Moved from position {element.heuristic_index} → {element.rerank_index}")
```

When ML fields are present:
- When `goal` is provided in `SnapshotOptions`
- When using `agent.act()` (goals are passed automatically)

ML fields are absent when `goal` is not specified (elements are ranked by heuristic importance only).

What the fields mean:
- `rerank_index`: Final position after ML reranking (0 = most relevant to the goal)
- `heuristic_index`: Original position before ML (shows how much ML changed the ranking)
- `ml_probability`: Model's confidence that this element is relevant (0.0-1.0)
- `ml_score`: Raw logit score before softmax (useful for debugging model behavior)
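Since `ml_score` is a raw logit and `ml_probability` a post-softmax confidence, their relationship can be illustrated with a plain softmax over candidate scores. This is a sketch under the assumption that confidences are a softmax over all candidates' logits (the server's exact normalization is not documented here); the `ml_scores` values are made up.

```python
import math

def softmax(logits: list[float]) -> list[float]:
    """Convert raw logit scores into probabilities that sum to 1."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical ml_score values for three candidate elements
ml_scores = [2.1, 0.3, -1.5]
probs = softmax(ml_scores)

# Reranking orders candidates by probability, highest first;
# the resulting positions correspond to rerank_index
rerank = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
```

Because softmax is monotonic, sorting by `ml_probability` and sorting by `ml_score` give the same order; the probability is simply easier to read as a confidence.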