AI Apps Analytics
All endpoints in this family are deprecated. usage_level is now available directly on the generic App Utilization endpoint, making these purpose-built routes redundant.
Deprecated Endpoints
| Deprecated Endpoint | Replacement |
|---|---|
| GET /apps/ai/users | GET /utilization?category_ids=<ai_id> |
| GET /apps/ai/recent | GET /utilization?category_ids=<ai_id>&filter=discovered |
| GET /apps/ai/users/summary | GET /utilization/summary?category_ids=<ai_id> |
| GET /apps/ai/recent/summary | GET /utilization/summary?category_ids=<ai_id>&filter=discovered |
| GET /apps/ai/departments | GET /utilization/departments?category_ids=<ai_id> |
| GET /apps/ai/departments/summary | GET /utilization/summary?category_ids=<ai_id> |
All replacement paths are relative to /clients/{client_id}/analytics/apps. The only field rename required is user_count → unique_users on the list endpoints.
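As a sketch, the mapping above can be captured in a small lookup. The helper below is illustrative only (not part of the API): the `<ai_id>` placeholder must be resolved to the client's actual AI category ID, passed here as a parameter.

```typescript
// Hypothetical migration helper mapping a deprecated AI Apps route to its
// replacement, relative to /clients/{client_id}/analytics/apps.
const DEPRECATED_ROUTES: Record<string, string> = {
  "/apps/ai/users": "/utilization?category_ids={ai}",
  "/apps/ai/recent": "/utilization?category_ids={ai}&filter=discovered",
  "/apps/ai/users/summary": "/utilization/summary?category_ids={ai}",
  "/apps/ai/recent/summary": "/utilization/summary?category_ids={ai}&filter=discovered",
  "/apps/ai/departments": "/utilization/departments?category_ids={ai}",
  "/apps/ai/departments/summary": "/utilization/summary?category_ids={ai}",
};

function replacementRoute(deprecatedPath: string, aiCategoryId: string): string | null {
  const template = DEPRECATED_ROUTES[deprecatedPath];
  return template ? template.replace("{ai}", aiCategoryId) : null;
}
```

Remember that list responses also rename user_count to unique_users; callers must update field access accordingly.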
The AI Apps analytics domain provides read-only endpoints for querying AI application usage across a client organization. It powers the AI App Usage dashboard with summary cards and paginated tables covering user adoption, newly discovered tools, and department-level breakdowns.
This domain lives under the Analytics namespace. It is completely separate from the identity/CRUD Apps domain: that domain owns app records, while this domain owns the AI-specific usage numbers.
What is an "AI App"?
An app is considered an "AI App" when it has at least one category matching the designated AI category. The filtering is done via a join through the catalog:
apps → catalog_applications → catalog_application_catalog_categories → catalog_categories
WHERE catalog_categories.name = 'AI'
This is consistent with how category_id filtering works in the existing App Utilization and List Apps endpoints. AI apps are never identified by hardcoded app IDs.
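The join chain above can be sketched as an SQL fragment. The join-key column names below are assumptions for illustration (the actual schema may name them differently); only the table names and the `name = 'AI'` predicate come from the text.

```typescript
// Illustrative SQL filter implementing the catalog join described above.
// Join-key columns (catalog_application_id, catalog_category_id) are assumed.
const AI_APP_FILTER = `
  EXISTS (
    SELECT 1
    FROM catalog_applications ca
    JOIN catalog_application_catalog_categories cacc
      ON cacc.catalog_application_id = ca.id
    JOIN catalog_categories cc
      ON cc.id = cacc.catalog_category_id
    WHERE ca.id = apps.catalog_application_id
      AND cc.name = 'AI'
  )`;
```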
URL Structure
All AI Apps analytics endpoints live under:
/clients/{client_id}/analytics/apps/ai
Each report type has a summary endpoint (aggregate metrics for dashboard cards) and a list endpoint (paginated table data):
| Summary endpoint | List endpoint |
|---|---|
| /analytics/apps/ai/users/summary | /analytics/apps/ai/users |
| /analytics/apps/ai/recent/summary | /analytics/apps/ai/recent |
| /analytics/apps/ai/departments/summary | /analytics/apps/ai/departments |
Endpoints
AI App Users
| Endpoint | Method | Description |
|---|---|---|
| AI App Users Summary | GET | Aggregate user adoption metrics for AI apps |
| AI App Users | GET | Paginated list of AI apps ranked by user count |
New AI Tools
| Endpoint | Method | Description |
|---|---|---|
| New AI Tools Summary | GET | Count of newly discovered AI tools in the period |
| New AI Tools | GET | Paginated list of AI apps first seen in the period |
AI By Department
| Endpoint | Method | Description |
|---|---|---|
| AI By Department Summary | GET | Total AI users across all departments |
| AI By Department | GET | Paginated list of departments with AI app usage metrics |
Shared Query Parameters
All six endpoints accept the same date parameters:
| Parameter | Type | Required | Description |
|---|---|---|---|
| start_date | string | Yes | Start of period (YYYY-MM-DD) |
| end_date | string | Yes | End of period, inclusive (YYYY-MM-DD) |
| compare_start_date | string | No | Comparison period start (YYYY-MM-DD). Must be provided with compare_end_date. |
| compare_end_date | string | No | Comparison period end, inclusive (YYYY-MM-DD). Must be provided with compare_start_date. |
List endpoints additionally accept sort_by, sort_order, page_size, and cursor.
When comparison dates are omitted, all change_pct fields in the response are null.
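The comparison behavior can be sketched as follows. The changePct helper is illustrative and assumes change_pct is the percent change of the primary-period value relative to the comparison-period value; returning null for a zero baseline is also an assumption.

```typescript
// Hypothetical helper mirroring the documented behavior: change_pct is null
// when no comparison period was supplied.
function changePct(current: number, comparison: number | null): number | null {
  if (comparison === null) return null; // comparison dates omitted
  if (comparison === 0) return null;    // assumption: no defined baseline
  return ((current - comparison) / comparison) * 100;
}
```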
Reused DTOs
These endpoints reuse existing v2 DTOs for embedded identity objects:
- AppRef -- Lightweight app reference embedded in users and recent item rows
- DepartmentRef -- Department identity embedded in department item rows
No new shared DTOs are introduced. See the NestJS Class Validators doc for the domain-specific response DTOs.
Usage Level
The usage_level field on AI app items classifies each app's engagement intensity during the reporting period. It is derived from a Usage Score computed per app.
Note: This is a server-computed classification with fixed thresholds. See Open Questions for the ongoing discussion on whether to also expose the raw score or underlying metrics.
Usage Score formula
Usage Score = (0.35 × A) + (0.65 × R)
Where:
| Symbol | Name | Definition |
|---|---|---|
| U | Distinct users | Distinct users of the app in the period |
| N | Total org users | All users for the client |
| A | User Adoption Rate | U / N |
| D | Active days | Days the app was used in the period |
| T | Working days | Distinct days where any app usage was recorded for the client in the period |
| R | Activity Rate | D / T |
Activity is weighted more heavily (65%) than adoption (35%) because consistent daily use indicates deeper engagement than one-time adoption.
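The formula translates directly to code. This is a sketch, not the service's implementation; the zero-denominator guards are assumptions.

```typescript
// Usage Score = 0.35 * A + 0.65 * R, where A = U / N and R = D / T.
function usageScore(
  distinctUsers: number, // U: distinct users of the app in the period
  totalOrgUsers: number, // N: all users for the client
  activeDays: number,    // D: days the app was used in the period
  workingDays: number,   // T: distinct days with any client app usage
): number {
  const adoption = totalOrgUsers > 0 ? distinctUsers / totalOrgUsers : 0; // A
  const activity = workingDays > 0 ? activeDays / workingDays : 0;        // R
  return 0.35 * adoption + 0.65 * activity;
}
```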
Computing T (working days)
T is defined as the number of distinct days where any app usage was recorded for the client. This avoids penalizing apps for weekends, holidays, or days the org simply had no activity.
SELECT COUNT(DISTINCT DATE(activity_date))
FROM app_usage_reports
WHERE client_id = $1
AND activity_date >= $period_start
AND activity_date <= $period_end -- end_date is inclusive (see Shared Query Parameters)
This value is the same for every app within a given client + period combination, so it should be computed once and reused across all app score calculations in the same request.
Thresholds
| Usage Level | Score Range |
|---|---|
| heavy | score >= 0.5 |
| medium | 0.2 <= score < 0.5 |
| light | score < 0.2 |
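The threshold mapping can be sketched as a simple classifier; the function name is illustrative.

```typescript
type UsageLevel = "heavy" | "medium" | "light";

// Fixed thresholds from the table above.
function usageLevel(score: number): UsageLevel {
  if (score >= 0.5) return "heavy";
  if (score >= 0.2) return "medium";
  return "light";
}
```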
Example
An app with 20 of 50 org users (A = 0.4), used on 15 days in a period where the client had 22 working days (T = 22):
R = 15 / 22 = 0.682
Score = (0.35 × 0.4) + (0.65 × 0.682) = 0.14 + 0.443 = 0.583 → heavy
Data Sources
AI Apps analytics endpoints aggregate from app_usage_reports (same as App Utilization and App Activity), filtered to apps that have at least one AI category via the catalog join described above.
| Resource | Source Table | Granularity |
|---|---|---|
| AI Apps | app_usage_reports | One row per user per app per day (partitioned by month) |
The department breakdown additionally joins through user-department assignments to group metrics by department.
Constraints
- Max date range: 366 days for both primary and comparison periods. Requests exceeding this return 400.
- Comparison symmetry: Both compare_start_date and compare_end_date must be provided together, or both omitted.
- Timezone: All dates are in the client's configured timezone.
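The date-range and symmetry constraints can be sketched as request-side validation. The function names and error strings are illustrative; day counting here uses UTC parsing of the date-only strings, whereas the service applies the client's configured timezone.

```typescript
// Hypothetical validation mirroring the documented constraints.
const MAX_RANGE_DAYS = 366;

function rangeDays(start: string, end: string): number {
  // Inclusive day count between two YYYY-MM-DD dates.
  return (Date.parse(end) - Date.parse(start)) / 86_400_000 + 1;
}

function validateDates(params: {
  start_date: string;
  end_date: string;
  compare_start_date?: string;
  compare_end_date?: string;
}): string[] {
  const errors: string[] = [];
  if (rangeDays(params.start_date, params.end_date) > MAX_RANGE_DAYS) {
    errors.push("primary period exceeds 366 days");
  }
  const hasStart = params.compare_start_date !== undefined;
  const hasEnd = params.compare_end_date !== undefined;
  if (hasStart !== hasEnd) {
    errors.push("compare_start_date and compare_end_date must be provided together");
  } else if (hasStart && hasEnd &&
      rangeDays(params.compare_start_date!, params.compare_end_date!) > MAX_RANGE_DAYS) {
    errors.push("comparison period exceeds 366 days");
  }
  return errors;
}
```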