Open Questions

Items that need a team discussion before finalising. These are known design gaps, not bugs.


total_count cost at scale

Every paginated response includes total_count, which requires a separate COUNT(*) query. This becomes expensive on large tables with filters applied.

Options to discuss:

  • Make it opt-in via ?include_total=true
  • Replace with has_more: boolean (Stripe / GitHub pattern)
  • Document that it may be approximate or cached for large datasets

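The has_more pattern above can be sketched as fetching one row beyond the page size and checking whether it came back. A minimal sketch -- `paginate` and its inputs are illustrative, not an existing helper:

```python
def paginate(rows, limit):
    """has_more pagination: the query uses LIMIT limit + 1, so `rows`
    may hold one extra row. Return only `limit` rows and flag whether
    another page exists -- no COUNT(*) query needed."""
    page = rows[:limit]
    return {"data": page, "has_more": len(rows) > limit}

# A LIMIT 3 query (limit=2 plus one probe row) returned 3 rows:
print(paginate([1, 2, 3], limit=2))
# {'data': [1, 2], 'has_more': True}
```

This trades the exact total for a constant-cost check, which is why Stripe and GitHub use it on large collections.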
Rate limiting, caching, and authentication docs

No documentation exists for:

  • Rate limits -- requests per minute/hour, how limits are communicated via headers (e.g. X-RateLimit-Remaining)
  • Caching -- ETag, Cache-Control, Last-Modified headers, especially for catalog data that changes rarely
  • Authentication -- OAuth flow details, token lifetimes, scopes/permissions model

These don't need to be on every endpoint page, but a top-level "Authentication & Rate Limits" page is expected for a production API.


Field selection

There is no ?fields=id,name parameter to reduce payload size. The Device DTO in particular is large (hardware specs, three browser extension blocks). For mobile clients or high-frequency polling, this matters.

Options to discuss:

  • Add a fields query parameter to all GET endpoints
  • Add it only to large-payload endpoints (devices, apps)
  • Skip it entirely and rely on consumers ignoring unused fields

EnrollmentToken value DB source

The docs say the value field is "derived from enrollment_tokens.token_hash". Plaintext cannot be derived from a hash. Either the DB stores both a hash (for validation) and an encrypted value (for display), or the column name in the docs is wrong.

Action: Clarify the actual DB source column and update the Client domain docs accordingly.
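For reference, the dual-column model would look roughly like this: a hash column for validating presented tokens and a separately encrypted column for display. This is a sketch of the hypothesis only -- column names, hash algorithm, and encryption scheme are all illustrative, not confirmed schema:

```python
import hashlib
import hmac

def token_hash(plaintext: str) -> str:
    """One-way digest stored in a column like token_hash: usable for
    validation, but the plaintext cannot be recovered from it."""
    return hashlib.sha256(plaintext.encode()).hexdigest()

def validate(presented: str, stored_hash: str) -> bool:
    """Constant-time comparison of a presented token against the
    stored hash -- this is all the hash column can support."""
    return hmac.compare_digest(token_hash(presented), stored_hash)

# Displaying the value back to admins would require a *second* column
# holding an encrypted (reversible) copy -- the hash alone cannot do it.
```

If the DB only has `token_hash`, the docs' claim that `value` is derived from it cannot be right, and the display value must come from somewhere else.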


usage_level server-computed enum

The AI Apps analytics endpoints return a usage_level field (heavy, medium, light) derived from a Usage Score formula with fixed thresholds. This is the first v2 endpoint to return a server-computed classification enum -- no other analytics endpoint does this.

Concerns:

  • Changing the score formula or thresholds requires a backend deploy, not a configuration change
  • Clients cannot customize the classification buckets for their own UI needs
  • The underlying metrics (adoption rate, activity rate, raw score) are not exposed, so consumers cannot build alternative classifications

Options to discuss:

  • Keep as-is -- the classification is a stable business rule that all consumers should apply consistently
  • Expose the raw score alongside the enum -- return both usage_level and usage_score so consumers can classify on their own if needed
  • Return the underlying metrics -- expose adoption_rate and activity_rate and let the frontend compute the score and classification
  • Make thresholds configurable -- store thresholds per client so organizations can adjust what counts as "heavy" vs "medium" usage
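The second option above (enum plus raw score) can be sketched in a few lines. The thresholds here are placeholders, not the actual Usage Score formula or cutoffs:

```python
# Illustrative cutoffs only -- the real thresholds live in the backend.
HEAVY_MIN = 0.66
MEDIUM_MIN = 0.33

def classify(usage_score: float) -> str:
    """Server-computed classification enum from a numeric score."""
    if usage_score >= HEAVY_MIN:
        return "heavy"
    if usage_score >= MEDIUM_MIN:
        return "medium"
    return "light"

def serialize(usage_score: float) -> dict:
    """Return both the enum and the raw score, so consumers who need
    different buckets can reclassify without a backend change."""
    return {"usage_level": classify(usage_score),
            "usage_score": usage_score}
```

Exposing the score alongside the enum keeps the stable business rule for consumers who want it, while unblocking custom UIs -- at the cost of committing to the score as part of the public contract.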