# Agent Integration Guide
Build AI agents that understand what your users are browsing. This guide covers how to consume Periscope browser context in your agent, inject it into LLM prompts, and handle real-time navigation updates.
## Overview
Periscope streams browser context (page content, navigation events, text selections) to your agent via WebSocket. Your agent subscribes to the `current-context` channel and receives updates as the user browses. This context is injected into your LLM prompts so the agent can respond with full awareness of what the user is looking at.
## Prerequisites
- A Lovelace account with a valid API key
- The Periscope browser extension installed and signed in
- Node.js 20+ and `@lovelace-ai/periscope-client` installed

```bash
pnpm add @lovelace-ai/periscope-client @anthropic-ai/sdk
```
## How Agents Consume Browser Context
The integration flow works like this:

- The user browses the web with the Periscope extension active
- The extension streams page context to the Periscope service
- Your agent subscribes to the `current-context` WebSocket channel
- On each navigation or content update, your agent receives a context payload
- You inject the relevant fields into your LLM system prompt
- The LLM responds with awareness of the user's current page
## Subscribing to Current Context

Connect to the WebSocket and subscribe to the `current-context` channel to receive updates whenever the user navigates or interacts with a page:
```typescript
import { PeriscopeClient } from "@lovelace-ai/periscope-client";

const client = new PeriscopeClient({
  baseUrl: "https://periscope.uselovelace.com",
  wsUrl: "wss://periscope.uselovelace.com/ws",
  token: process.env.LOVELACE_API_TOKEN,
});

await client.connect();

client.subscribe("current-context", (context) => {
  console.log(`Now viewing: ${context.url}`);
  console.log(`Title: ${context.title}`);
  console.log(`Content length: ${context.content?.length ?? 0} chars`);
});
```
## Prompt Injection Template
The most effective pattern is to include browser context in the system prompt. Here is a template that balances context richness with token efficiency:
```typescript
function buildSystemPrompt(context: BrowserContext | null): string {
  const basePrompt = `You are a helpful assistant with awareness of the user's current browsing activity.`;

  if (!context) {
    return `${basePrompt}\n\nThe user is not currently browsing any page.`;
  }

  return `${basePrompt}

## Current Browser Context

- **URL:** ${context.url}
- **Page Title:** ${context.title}
- **Last Updated:** ${new Date(context.timestamp).toISOString()}

### Page Content

${truncateContent(context.content, 4000)}

${context.selection ? `### User's Text Selection\n\n> ${context.selection}` : ""}

Use this context to provide relevant, page-aware responses. Reference specific content from the page when answering questions. If the user asks about something on the page, quote the relevant section.`;
}
```
## Context Window Management
Page content can easily exceed your model's context window. Use these strategies to keep things under control.
### Truncation Strategy

The simplest approach is to truncate page content to a fixed character limit. Choose the limit based on your model's context window minus the tokens you need for conversation history and the response:
```typescript
function truncateContent(
  content: string | undefined,
  maxChars: number,
): string {
  if (!content) {
    return "(No page content available)";
  }
  if (content.length <= maxChars) {
    return content;
  }
  // Prefer to cut at a paragraph boundary near the limit.
  const truncated = content.slice(0, maxChars);
  const lastParagraph = truncated.lastIndexOf("\n\n");
  if (lastParagraph > maxChars * 0.8) {
    return `${truncated.slice(0, lastParagraph)}\n\n[Content truncated - ${content.length - lastParagraph} characters omitted]`;
  }
  return `${truncated}\n\n[Content truncated - ${content.length - maxChars} characters omitted]`;
}
```
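One way to pick `maxChars` is to derive it from a token budget rather than guessing. The sketch below uses the common rough heuristic of ~4 characters per token for English text; the exact ratio varies by tokenizer and language, and the function name is illustrative, not part of any library:

```typescript
// Rough character budget for page content, given a model context window and
// token reservations for conversation history and the response. Uses the
// approximate 4-characters-per-token heuristic (tokenizer-dependent).
function contentCharBudget(
  contextWindowTokens: number,
  reservedForHistoryTokens: number,
  reservedForResponseTokens: number,
  charsPerToken: number = 4,
): number {
  const availableTokens =
    contextWindowTokens - reservedForHistoryTokens - reservedForResponseTokens;
  // Clamp at zero so an over-committed budget never yields a negative limit.
  return Math.max(0, availableTokens * charsPerToken);
}
```

You would then pass the result as the `maxChars` argument to `truncateContent`.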
### Summarization Approach
For longer pages, summarize the content first, then pass the summary into the main agent prompt. This costs an extra LLM call but preserves more semantic information:
```typescript
async function summarizePageContent(
  anthropic: Anthropic,
  content: string,
): Promise<string> {
  const response = await anthropic.messages.create({
    model: "claude-sonnet-4-20250514",
    max_tokens: 500,
    messages: [
      {
        role: "user",
        content: `Summarize this web page content in 2-3 paragraphs, preserving key facts, code snippets, and any data the user might ask about:\n\n${content.slice(0, 15000)}`,
      },
    ],
  });
  return response.content[0].type === "text" ? response.content[0].text : "";
}
```
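Because summarization costs an extra LLM call, it is worth avoiding repeat work when the user revisits the same page. A minimal sketch, assuming an in-memory cache keyed by URL is acceptable for your agent's lifetime (the names `summaryCache` and `getOrSummarize` are illustrative):

```typescript
// Hypothetical in-memory cache so revisits to the same URL reuse the earlier
// summary instead of triggering another LLM call. If page content changes
// often, keying by a content hash instead of URL would be safer.
const summaryCache = new Map<string, string>();

async function getOrSummarize(
  url: string,
  content: string,
  summarize: (content: string) => Promise<string>,
): Promise<string> {
  const cached = summaryCache.get(url);
  if (cached !== undefined) {
    return cached;
  }
  const summary = await summarize(content);
  summaryCache.set(url, summary);
  return summary;
}
```

Here `summarize` would be a thin wrapper around `summarizePageContent` with the Anthropic client bound in.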
## Useful Payload Kinds
Not all context events are equally valuable for agents. Focus on these:
| Kind | Description | Agent Use Case |
|---|---|---|
| `page.navigation` | User navigated to a new page | Update the agent's understanding of what the user is looking at |
| `text.selection` | User selected text on a page | High-signal indicator of what the user is interested in |
| `code.selection` | User selected code in a code block or editor | Enables code-aware assistance, reviews, explanations |
| `tab.activated` | User switched to a different tab | Update current context to the newly focused tab |
Filter events by kind to avoid noise from less relevant activity:
```typescript
client.subscribe("current-context", (context) => {
  const usefulKinds = [
    "page.navigation",
    "text.selection",
    "code.selection",
    "tab.activated",
  ];
  if (!usefulKinds.includes(context.kind)) {
    return;
  }
  updateAgentContext(context);
});
```
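Even after filtering by kind, events can arrive in quick bursts while the user clicks through several pages. One option is to debounce updates so the agent only applies the latest context after a short quiet period. A sketch, assuming a small generic helper (not part of the Periscope client API):

```typescript
// Debounced updater: keeps only the most recent value and applies it once no
// new value has arrived for `quietMs` milliseconds. Generic so it works with
// whatever context type your agent uses.
function makeDebouncedUpdater<T>(
  apply: (value: T) => void,
  quietMs: number = 300,
): (value: T) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  let latest: T | null = null;
  return (value) => {
    latest = value;
    if (timer !== undefined) {
      clearTimeout(timer);
    }
    timer = setTimeout(() => {
      if (latest !== null) {
        apply(latest);
      }
    }, quietMs);
  };
}
```

You would wrap `updateAgentContext` with this helper and pass the wrapped function to the subscription callback.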
## Handling Stale Context
Browser context can become stale if the WebSocket disconnects or the user stops browsing. Always check timestamps before using context in prompts:
```typescript
function isContextFresh(
  context: BrowserContext,
  maxAgeMs: number = 5 * 60 * 1000,
): boolean {
  const age = Date.now() - new Date(context.timestamp).getTime();
  return age < maxAgeMs;
}

function buildPromptWithFreshnessCheck(context: BrowserContext | null): string {
  if (!context || !isContextFresh(context)) {
    return "The user's browser context is not currently available or is outdated.";
  }
  return buildSystemPrompt(context);
}
```
## Handling Disconnections

The `PeriscopeClient` emits connection lifecycle events. Use them to track whether context is reliable:
```typescript
let isConnected = false;

client.on("connected", () => {
  isConnected = true;
});

client.on("disconnected", () => {
  isConnected = false;
});

client.on("reconnected", () => {
  isConnected = true;
});
```
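Connection state can be paired with the stored context so the agent never builds a prompt from data captured before a disconnect. A minimal sketch with illustrative names (`ContextStore`, `usableContext` are not part of the Periscope client API); the lifecycle handlers above would flip `connected` accordingly:

```typescript
// Pair the last received context with connection state and expose it only
// while the WebSocket is connected. While disconnected, the possibly-stale
// value is hidden and callers fall back to the "no context" prompt path.
interface ContextStore<T> {
  value: T | null;
  connected: boolean;
}

function usableContext<T>(store: ContextStore<T>): T | null {
  return store.connected ? store.value : null;
}
```

Combined with the timestamp check from the previous section, this covers both failure modes: a silent disconnect and a user who simply stopped browsing.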
## Complete Example: Browser-Aware Agent

Here is a complete TypeScript agent that subscribes to `current-context` and responds with browser awareness using the Anthropic SDK:
```typescript
import Anthropic from "@anthropic-ai/sdk";
import { PeriscopeClient } from "@lovelace-ai/periscope-client";
import * as readline from "node:readline";

const anthropic = new Anthropic();
const periscope = new PeriscopeClient({
  baseUrl: "https://periscope.uselovelace.com",
  wsUrl: "wss://periscope.uselovelace.com/ws",
  token: process.env.LOVELACE_API_TOKEN,
});

interface BrowserContext {
  url: string;
  title: string;
  content?: string;
  selection?: string;
  kind: string;
  timestamp: string;
  privacyLevel: string;
}

let currentContext: BrowserContext | null = null;

async function main(): Promise<void> {
  await periscope.connect();

  periscope.subscribe("current-context", (context: BrowserContext) => {
    // Respect privacy settings: never retain restricted context.
    if (context.privacyLevel === "restricted") {
      currentContext = null;
      return;
    }
    currentContext = context;
    console.log(`\n[Context updated: ${context.title}]`);
  });

  const rl = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  const conversationHistory: Array<{
    role: "user" | "assistant";
    content: string;
  }> = [];

  console.log("Browser-aware agent ready. Type a question.\n");

  for await (const line of rl) {
    const userMessage = line.trim();
    if (!userMessage) continue;

    conversationHistory.push({ role: "user", content: userMessage });

    const systemPrompt = buildSystemPrompt(currentContext);
    const response = await anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 1024,
      system: systemPrompt,
      messages: conversationHistory,
    });

    const assistantText =
      response.content[0].type === "text" ? response.content[0].text : "";
    conversationHistory.push({ role: "assistant", content: assistantText });
    console.log(`\nAssistant: ${assistantText}\n`);
  }
}

function buildSystemPrompt(context: BrowserContext | null): string {
  const base =
    "You are a helpful assistant with real-time awareness of the user's browser.";
  if (!context) {
    return `${base}\n\nNo browser context is currently available.`;
  }
  const age = Date.now() - new Date(context.timestamp).getTime();
  if (age > 5 * 60 * 1000) {
    return `${base}\n\nBrowser context is stale (last updated ${Math.round(age / 1000)}s ago).`;
  }
  const content = context.content
    ? context.content.slice(0, 4000)
    : "(no content captured)";
  return `${base}

## Current Page
- URL: ${context.url}
- Title: ${context.title}

### Content
${content}

${context.selection ? `### User Selection\n> ${context.selection}` : ""}`;
}

main().catch(console.error);
```
## Next Steps
- Learn about the browser context data model and what fields are available
- Explore real-time streaming for advanced WebSocket patterns
- Follow the Context-Aware Chatbot Tutorial for a step-by-step walkthrough