Pagination
List operations in the Probo MCP Server use cursor-based pagination.
How It Works
- Call a list tool (e.g., listRisks) without a cursor
- Get results plus a next_cursor if more data exists
- Use the next_cursor to fetch the next page
- When next_cursor is null, you’ve reached the end
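As a quick preview of the fuller implementations later on this page, the whole flow fits in a short loop. This is only a sketch: call_tool is the generic helper assumed throughout these docs, and handle_page() is a hypothetical stand-in for your own processing.

cursor = None
while True:
    request = {"organization_id": "org_abc123", "size": 50}
    if cursor:
        request["cursor"] = cursor
    response = call_tool("listRisks", request)
    handle_page(response["risks"])         # process this page
    cursor = response.get("next_cursor")
    if cursor is None:                     # null cursor: no more pages
        break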
Request Parameters
{
  "organization_id": "org_xxx",
  "size": 50,
  "cursor": "optional_cursor"
}

- organization_id (required): Organization to query
- size (optional): Items per page (default: 20, max: 100)
- cursor (optional): Cursor from previous response
Sorting
{
  "organization_id": "org_xxx",
  "order_by": {
    "field": "CREATED_AT",
    "direction": "DESC"
  }
}

Common fields: CREATED_AT, UPDATED_AT, NAME
Filtering
{
  "organization_id": "org_xxx",
  "filter": {
    "query": "security",
    "status": "OPEN"
  }
}

Response Format
{
  "risks": [
    {
      "id": "risk_abc123",
      "name": "Data breach risk",
      "residual_risk_score": 20
    }
  ],
  "next_cursor": "eyJpZCI6InJpc2tfNTAiLCJvcmRlciI6MTcwMDAwMDAwMH0="
}

When there are no more pages:
{ "risks": [...], "next_cursor": null}Example Flow
Request:
{ "organization_id": "org_abc123", "size": 50}Response:
{ "risks": [...], "next_cursor": "eyJpZCI6InJpc2tfMDUwIn0"}Next Request:
{ "organization_id": "org_abc123", "size": 50, "cursor": "eyJpZCI6InJpc2tfMDUwIn0"}Final Response:
{ "risks": [...], "next_cursor": null}Implementation Examples
Python:

def fetch_all_risks(organization_id):
    """Fetch all risks using pagination."""
    all_risks = []
    cursor = None

    while True:
        # Build request
        request = {
            "organization_id": organization_id,
            "size": 100
        }
        if cursor:
            request["cursor"] = cursor

        # Call API
        response = call_tool("listRisks", request)

        # Collect results
        all_risks.extend(response["risks"])

        # Check if done
        cursor = response.get("next_cursor")
        if not cursor:
            break

    return all_risks

JavaScript:

async function* paginateRisks(organizationId) {
  let cursor = null;
  do {
    const request = {
      organization_id: organizationId,
      size: 100,
      ...(cursor && { cursor })
    };

    const response = await callTool("listRisks", request);

    // Yield each risk
    for (const risk of response.risks) {
      yield risk;
    }

    cursor = response.next_cursor;
  } while (cursor);
}

// Usage
for await (const risk of paginateRisks("org_abc123")) {
  console.log(risk.name);
}

TypeScript:

interface PaginatedResponse<T> {
  data: T[];
  next_cursor: string | null;
}
async function fetchAllPages<T>(
  toolName: string,
  organizationId: string,
  pageSize: number = 100
): Promise<T[]> {
  const allItems: T[] = [];
  let cursor: string | null = null;

  do {
    const response = await callTool<PaginatedResponse<T>>(toolName, {
      organization_id: organizationId,
      size: pageSize,
      ...(cursor && { cursor })
    });

    allItems.push(...response.data);
    cursor = response.next_cursor;
  } while (cursor);

  return allItems;
}

// Usage
const risks = await fetchAllPages("listRisks", "org_abc123");
const vendors = await fetchAllPages("listVendors", "org_abc123");

Go:

func paginateRisks(ctx context.Context, orgID string) <-chan Risk {
	ch := make(chan Risk)
	go func() {
		defer close(ch)

		var cursor *string
		for {
			req := ListRisksRequest{
				OrganizationID: orgID,
				Size:           100,
				Cursor:         cursor,
			}

			resp, err := callTool(ctx, "listRisks", req)
			if err != nil {
				return
			}

			for _, risk := range resp.Risks {
				select {
				case ch <- risk:
				case <-ctx.Done():
					return
				}
			}

			if resp.NextCursor == nil {
				break
			}
			cursor = resp.NextCursor
		}
	}()

	return ch
}

// Usage
for risk := range paginateRisks(ctx, "org_abc123") {
	fmt.Println(risk.Name)
}

Best Practices
Page Size Selection
Choose appropriate page sizes based on your use case:
- Small pages (20-50): Interactive UI, quick initial response
- Medium pages (50-100): Balanced performance for most cases
- Large pages (up to the 100 maximum): Batch processing, data exports
Considerations:
- Larger pages mean fewer requests but more memory usage
- Smaller pages provide faster initial response times
- Network latency affects optimal page size
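As a rough illustration, the same listRisks call simply takes a different size depending on the context. This sketch reuses the generic call_tool helper from the examples above; the specific sizes are only suggestions.

# Interactive UI: small first page for a fast initial render
first_page = call_tool("listRisks", {"organization_id": org_id, "size": 25})

# Batch export: maximum page size to minimize round trips
export_page = call_tool("listRisks", {"organization_id": org_id, "size": 100})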
Cursor Handling
Treat cursors as opaque tokens:
- Do: Store cursors exactly as received
- Do: Pass cursors without modification
- Don’t: Try to decode or modify cursors
- Don’t: Make assumptions about cursor format
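For instance, a cursor can be persisted and reused verbatim to resume pagination later. This sketch stores it as an opaque string; the file name and the call_tool helper are illustrative assumptions.

import json

# Save the cursor exactly as received
response = call_tool("listRisks", {"organization_id": org_id, "size": 100})
with open("cursor.json", "w") as f:
    json.dump({"cursor": response["next_cursor"]}, f)

# Later: resume by passing the stored cursor back unchanged
with open("cursor.json") as f:
    cursor = json.load(f)["cursor"]
if cursor:
    response = call_tool("listRisks", {
        "organization_id": org_id,
        "size": 100,
        "cursor": cursor
    })

Keep in mind that stored cursors can expire (see Invalid Cursor Error below), so persist them only for short-lived resumption.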
Error Handling
Implement robust error handling:
import time

def fetch_with_retry(organization_id, max_retries=3):
    cursor = None
    all_results = []

    while True:
        for attempt in range(max_retries):
            try:
                response = call_tool("listRisks", {
                    "organization_id": organization_id,
                    "cursor": cursor,
                    "size": 100
                })
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise
                time.sleep(2 ** attempt)  # Exponential backoff

        all_results.extend(response["risks"])

        cursor = response.get("next_cursor")
        if not cursor:
            break

    return all_results

Consistent Ordering
Always specify order_by for predictable results:
{ "organization_id": "org_xxx", "order_by": { "field": "CREATED_AT", "direction": "DESC" }}Without explicit ordering, results may appear in arbitrary order.
Combine with Filtering
Use filters to reduce the dataset size:
# Instead of fetching everything and filtering in code
all_risks = fetch_all_risks(org_id)
high_risks = [r for r in all_risks if r["residual_risk_score"] > 15]
# Filter server-side during pagination
high_risks = call_tool("listRisks", {
    "organization_id": org_id,
    "filter": {
        "min_residual_risk_score": 15
    }
})

Performance Considerations
Data Consistency
Be aware of data changes during pagination:
- New items: May or may not appear in subsequent pages
- Updated items: May change position in sort order
- Deleted items: Will not appear in subsequent pages
For consistent snapshots, complete pagination quickly.
Memory Management
For very large datasets:
# Bad: Load everything into memory
all_risks = fetch_all_risks(org_id)
process_risks(all_risks)
# Good: Process as you paginate
def process_risks_streaming(org_id):
    cursor = None
    while True:
        response = call_tool("listRisks", {
            "organization_id": org_id,
            "cursor": cursor,
            "size": 100
        })

        # Process this page immediately
        for risk in response["risks"]:
            process_single_risk(risk)

        cursor = response.get("next_cursor")
        if not cursor:
            break

Parallel Requests
Avoid parallel requests with the same cursor:
# Bad: Race conditions
cursor = get_current_cursor()
results1 = fetch_page(cursor)  # May invalidate cursor
results2 = fetch_page(cursor)  # May fail
# Good: Sequential pagination
cursor = get_current_cursor()
results1 = fetch_page(cursor)
cursor = results1["next_cursor"]
results2 = fetch_page(cursor)

Troubleshooting
Invalid Cursor Error
Error: 400 Bad Request - Invalid cursor
Causes:
- Cursor has expired (TTL exceeded)
- Cursor was modified or corrupted
- Cursor from different organization/query
Solutions:
- Start pagination over from the beginning
- Verify cursor is passed exactly as received
- Complete pagination within cursor TTL
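One way to recover is to restart pagination from the beginning whenever the cursor is rejected. This sketch assumes call_tool raises an exception whose message contains "Invalid cursor"; adapt the check to however your MCP client actually surfaces the 400 response.

def fetch_all_risks_resilient(org_id, max_restarts=3):
    for _ in range(max_restarts):
        all_risks, cursor = [], None
        try:
            while True:
                request = {"organization_id": org_id, "size": 100}
                if cursor:
                    request["cursor"] = cursor
                response = call_tool("listRisks", request)
                all_risks.extend(response["risks"])
                cursor = response.get("next_cursor")
                if not cursor:
                    return all_risks
        except Exception as e:
            # Assumption: the client surfaces the 400 as an exception mentioning the cursor
            if "Invalid cursor" not in str(e):
                raise
            continue  # cursor expired or was corrupted: start over from page one
    raise RuntimeError("Pagination repeatedly failed with invalid cursors")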
Inconsistent Results
Issue: Same items appearing multiple times or missing items
Causes:
- Data being modified during pagination
- Inconsistent sorting order
- Not using stable sort fields
Solutions:
- Use stable sort fields (e.g., id, created_at)
- Complete pagination quickly
- Use timestamps or version fields for deduplication
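For example, deduplicating on the item id while paginating guards against the same record being returned on two pages when data shifts mid-pagination. This sketch uses the same generic call_tool helper assumed elsewhere on this page.

def fetch_risks_deduplicated(org_id):
    seen_ids = set()
    unique_risks = []
    cursor = None
    while True:
        request = {
            "organization_id": org_id,
            "size": 100,
            "order_by": {"field": "CREATED_AT", "direction": "DESC"}
        }
        if cursor:
            request["cursor"] = cursor
        response = call_tool("listRisks", request)
        for risk in response["risks"]:
            if risk["id"] not in seen_ids:  # skip duplicates caused by shifting pages
                seen_ids.add(risk["id"])
                unique_risks.append(risk)
        cursor = response.get("next_cursor")
        if not cursor:
            return unique_risks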
Memory Issues
Issue: Out of memory errors with large datasets
Causes:
- Loading all pages into memory
- Page size too large
- Processing too slow
Solutions:
- Process items as you paginate (streaming)
- Reduce page size
- Use filtering to reduce dataset size
- Implement backpressure mechanisms
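A simple form of backpressure is a bounded queue between the pagination producer and a slower consumer: the producer blocks once the queue fills instead of buffering every page in memory. This is only a sketch using Python's standard library; call_tool and process_single_risk are the same assumed helpers used in the examples above.

import queue
import threading

def produce_risks(org_id, q):
    cursor = None
    while True:
        request = {"organization_id": org_id, "size": 100}
        if cursor:
            request["cursor"] = cursor
        response = call_tool("listRisks", request)
        for risk in response["risks"]:
            q.put(risk)              # blocks when the queue is full (backpressure)
        cursor = response.get("next_cursor")
        if not cursor:
            q.put(None)              # sentinel: no more items
            return

q = queue.Queue(maxsize=200)         # cap the number of in-flight items
threading.Thread(target=produce_risks, args=("org_abc123", q), daemon=True).start()
while (risk := q.get()) is not None:
    process_single_risk(risk)        # consumer works at its own pace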