ChatGPT Enterprise workspace analytics guide

Drive action by understanding how your team is using ChatGPT Enterprise
March 10, 2026 · Last updated on March 13, 2026
Workspace analytics gives you a broad view of how adoption is progressing in your ChatGPT Enterprise workspace. It shows current usage patterns across seats, active users, trends, and benchmarks, with filters by date and group so you can see how adoption differs across parts of the organization.
Use it to answer practical questions about your adoption:
- Who has access, and how is adoption changing over time?
- Which groups are building repeat usage, and which are still early?
- Where should we look more closely to understand what is working and what may need support?
Workspace analytics helps you move from a high-level readout to a closer investigation. You can filter the data, compare cohorts, and drill into specific usage patterns to understand what is gaining traction and what is worth scaling.
What you can do with workspace analytics
Workspace analytics helps you:
- Show trends: surface activation and engagement trends leaders can understand quickly.
- Diagnose drop-offs: identify where adoption is stalling (access → activation → weekly usage → depth).
- Target interventions: pick specific teams/cohorts and measure whether your changes worked.
- Scale visibility safely: share read-only analytics with the right stakeholders.
How to access workspace analytics
Workspace owners, admins, and analytics viewers (a new role) have access to workspace analytics in Workspace Settings > Workspace Analytics.
Consider assigning analytics viewer access to AI champions/enablement leads, department leads who are accountable for adoption, and program owners who prepare regular readouts. This keeps reporting and action planning focused without expanding admin permissions.
The Overview tab loads first and shows:
- Seats purchased: licenses your org pays for
- Seats enabled: licenses assigned to real users (via provisioning/invites/SSO)
- Seats activated: enabled users who have begun using the product
- WAU: weekly active users (the number of users active in a given week)
- Power users: enabled users in the top 20% by message usage, with activity in 3+ tools
Interpreting workspace analytics: funnel + trend
Think of workspace analytics as a window into how adoption is developing across your organization. It lets you see how usage evolves over time—from access, to first use, to repeat use, and eventually deeper adoption. If progress slows at any stage, focus on resolving that bottleneck before investing further downstream.
Key questions:
| Question | Where to look | What it usually means |
|---|---|---|
| Are people getting access? | Seats purchased vs. seats enabled | Setup/provisioning is (or isn’t) working |
| Are people starting to use ChatGPT? | Seats enabled vs. active users | Onboarding is (or isn’t) turning access into first use |
| Is usage becoming a habit? | WAU trend lines | Adoption is compounding—or fading after initial interest |
| What’s driving usage changes? | Breakdowns by model/tool/connector + task insights | What actually changed month-over-month |
| Are we “healthy” for our stage? | Benchmarks | Directional comparison to peer set (triage, not a verdict) |
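To make the funnel concrete, here is a minimal sketch of the ratios behind these questions, using illustrative placeholder numbers rather than real workspace data:

```python
# Seat funnel math behind the Overview metrics (placeholder numbers).
seats_purchased = 1000
seats_enabled = 850    # licenses assigned to real users
seats_activated = 600  # enabled users who have started using the product
wau = 420              # users active this week

enablement_rate = seats_enabled / seats_purchased   # access: rollout mechanics
activation_rate = seats_activated / seats_enabled   # first use: onboarding
habit_rate = wau / seats_activated                  # repeat use: habit formation

print(f"Enabled:   {enablement_rate:.0%} of purchased seats")
print(f"Activated: {activation_rate:.0%} of enabled seats")
print(f"Weekly:    {habit_rate:.0%} of activated users")
```

Whichever ratio is lowest usually points to the stage worth fixing first.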
How to read core metrics
| Metric | What it tells you | If it’s weak, try this next |
|---|---|---|
| Purchased vs. enabled | Whether rollout mechanics are working | Fix provisioning, SSO, invitations, group rules; confirm comms/instructions |
| Enabled vs. active | Whether people got past setup and started using ChatGPT | Run onboarding that’s tied to one real workflow; use starter prompts + manager reinforcement |
| WAU trend | Whether usage is becoming routine | Compare trend before/after a specific intervention (training, comms, workflow pack) |
| WAU per activated user | Whether activated users are returning consistently | Look for “one-and-done” patterns; add habit-forming nudges, office hours, role-based prompt packs |
| Messages per WAU | How much work is being done by weekly users | Often signals depth; investigate what workflows are driving volume (task insights) |
| Benchmarks | Whether activation/engagement/depth look typical for peers | Use to prioritize where to investigate—not to auto-judge performance |
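If you export user-level data (see the FAQ below), you can recompute the depth metrics in this table yourself. A minimal sketch, assuming a hypothetical export with user_id, is_activated, and messages columns; the real export schema may differ:

```python
import pandas as pd

# Hypothetical weekly user export; column names are assumptions, not the
# actual export schema. Adjust them to match your CSV headers.
df = pd.read_csv("users_week.csv")  # assumed: user_id, is_activated (bool), messages

activated = df[df["is_activated"]]
weekly_active = activated[activated["messages"] > 0]

wau_per_activated = len(weekly_active) / len(activated)
messages_per_wau = weekly_active["messages"].sum() / len(weekly_active)

print(f"WAU per activated user: {wau_per_activated:.0%}")
print(f"Messages per WAU: {messages_per_wau:.1f}")
```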
Diving deeper
The Overview tab helps you understand the top-line story: how many people have access, how many are active, and whether usage is growing over time. The Users, GPTs, Projects, and Skills tabs help you explain why those patterns are happening by showing where usage is concentrating, what kinds of assets are gaining traction, and which behaviors look repeatable enough to scale.
A useful way to read these tabs is to treat them as a second layer of analysis on top of the overview metrics. If enabled seats are rising but active use is not, the Users tab can help you see which groups have not yet turned access into adoption. If WAU or messages per WAU are rising, the GPTs, Projects, and Skills tabs can help you see whether that growth is tied to a few specific assets or to broader workflow adoption.
- Users: See which groups are building repeat usage, which are still early, and where adoption may need more support.
- GPTs, Projects, and Skills: Look for assets and workflows that are attracting sustained engagement, not just one-time experimentation.
- Scaling signals: When usage is concentrated around a specific GPT, Project, Skill, or team, that can point to a strong use case worth documenting and sharing more broadly.
- Next step: Start with the workflows that show repeat engagement, then test focused enablement to help similar teams adopt them.
Across these tabs, look for the same pattern: usage that is not only high, but repeatable. A strong candidate for scaling usually shows up as continued engagement over time, concentration around a specific asset or workflow, and evidence that the pattern is tied to real work rather than one-time experimentation. That is also why task insights is useful: it helps connect usage to concrete jobs to be done and identify workflows worth scaling.
Once you identify a promising GPT, Project, Skill, or high-usage cohort, the next step is not to broadcast it to the whole organization at once. Start by documenting the workflow clearly, identifying which teams have similar needs, and testing a focused intervention such as a workflow pack, office hours session, or manager-led example. Then measure whether adoption grows in the next few weeks using the same overview and deeper tabs. This keeps scaling grounded in evidence rather than anecdotes.
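As a concrete version of that “repeatable, not just high” filter, here is a sketch that flags GPTs with sustained engagement. It assumes a hypothetical export with gpt_name, week, and active_users columns, and the thresholds are illustrative, not product-defined:

```python
import pandas as pd

# Hypothetical per-week GPT export; columns are assumptions for illustration.
df = pd.read_csv("gpts_by_week.csv")  # assumed: gpt_name, week, active_users

summary = df.groupby("gpt_name").agg(
    weeks_active=("week", "nunique"),
    avg_weekly_users=("active_users", "mean"),
)

# "Repeatable" here = used in 4+ distinct weeks by a meaningful number of
# people. Both cutoffs are illustrative; tune them to your workspace size.
candidates = summary[(summary["weeks_active"] >= 4) & (summary["avg_weekly_users"] >= 10)]
print(candidates.sort_values("avg_weekly_users", ascending=False))
```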
Using benchmarks comparison
Benchmarks give you a directional comparison against others in your industry. They help you answer a narrow question: are your activation, engagement, or depth metrics roughly in range for peers, or is something noticeably ahead or behind? That makes them useful as a reference point, not as a score or verdict.
A good way to use benchmarks is to start with the metric that stands out, then ask what part of the adoption journey it reflects. If activation is low relative to peers, look first at access and onboarding. If weekly activity is low, check whether people found a real workflow, came back after first use, or dropped off after initial curiosity.
Being below benchmark does not automatically mean something is wrong, and above benchmark does not automatically mean adoption is healthy. Use benchmarks to guide better questions, then confirm the story with the rest of your analytics and with conversations with the teams closest to the work.
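One lightweight way to apply that triage mindset is to flag which of your metrics fall outside a peer range, then investigate only those. Both the ranges and the values below are illustrative placeholders, not OpenAI benchmark data:

```python
# Benchmark triage sketch: ranges and values are placeholders for illustration.
benchmarks = {
    "activation_rate": (0.55, 0.75),
    "weekly_engagement": (0.40, 0.65),
}
ours = {"activation_rate": 0.48, "weekly_engagement": 0.52}

for metric, (low, high) in benchmarks.items():
    value = ours[metric]
    if value < low:
        status = "below peer range → investigate first"
    elif value > high:
        status = "above peer range → find out what is working"
    else:
        status = "in range"
    print(f"{metric}: {value:.0%} ({status}, peers {low:.0%}–{high:.0%})")
```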
Task insights
Use task insights to connect activity to concrete jobs to be done. Look for workflows that repeat across teams, where strong usage is tied to a specific task pattern, and where one-time experimentation is not yet turning into repeat behavior.
This is most helpful for:
- Finding repeatable workflows worth scaling
- Understanding differences between high-usage and low-usage teams
- Translating usage telemetry into an enablement plan
Reviewing workspace analytics regularly
Use this as a lightweight monthly operating rhythm for reviewing adoption and choosing your next intervention.
| Step | What to do | Output you want | Example result statement |
|---|---|---|---|
| 1) Check the funnel | Calculate purchased → enabled, then enabled → active | A clear “main constraint” statement | “Adoption is limited by activation: many seats are enabled, but a smaller share have started using ChatGPT.” |
| 2) Look at the trend | WAU trend line | A direction statement: rising / flat / falling | “Weekly active users have been rising steadily over the past month.” |
| 3) Choose two cohorts | One lagging group + one healthy comparison | A targeted focus area (not the whole company) | “Team A shows strong weekly usage, while Team B has similar access but much lower activity.” |
| 4) Diagnose drivers | Breakdowns + task insights | A hypothesis about what’s driving the pattern | “The increase in usage appears tied to growing adoption of a connector-based workflow.” |
| 5) Pick one intervention | Training, workflow pack, office hours, manager message | A single, testable action | “We’ll run a short enablement session for Team B focused on the workflows Team A is using.” |
| 6) Measure before/after | Compare 2–4 weeks pre/post | A plain-language result statement | “Weekly activity in Team B increased after the targeted training session.” |
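For step 6, here is a sketch of the before/after comparison. The file name, columns, cohort, and intervention date are all assumptions for illustration:

```python
import pandas as pd

# Hypothetical weekly export: week, group, weekly_active_users.
df = pd.read_csv("wau_by_group.csv", parse_dates=["week"])
team_b = df[df["group"] == "Team B"].sort_values("week")

intervention = pd.Timestamp("2026-03-01")  # date of the enablement session
before = team_b.loc[team_b["week"] < intervention, "weekly_active_users"].tail(4).mean()
after = team_b.loc[team_b["week"] >= intervention, "weekly_active_users"].head(4).mean()

print(f"Avg WAU before: {before:.0f}, after: {after:.0f} "
      f"({(after - before) / before:+.0%})")
```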
Example prompts for analysis
You can go even deeper if you export your data from the analytics dashboard. Pair each scenario below with a short analysis prompt and drop your CSV into chat.
Tip: Create a Custom GPT or a skill to automate this in the future.
These prompts work well for:
- Weekly internal status updates for admins, champions, or enablement leads.
- Leadership reviews, CIO updates, QBRs, and sponsor briefings.
- Keeping an internal champions network aligned on where to focus.
- When certain departments lag behind in adoption.
- Planning education or enablement programs.
- Finding internal advocates and use cases worth scaling.
- When leaders need interpretation, not just metrics.
- Spotting issues before they become larger rollout problems.
- When you want department-level or cohort-level insights beyond the base export.
- Establishing a durable review cadence and standard format.
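If you want a starting point, a prompt along these lines works for the status-update scenario (an illustrative example of the pattern, not an official prompt):

“Attached is a CSV export of our ChatGPT Enterprise workspace analytics for the past month. Summarize the seat funnel (purchased → enabled → activated), the WAU trend, and the two groups with the largest week-over-week change, formatted so I can paste it into a weekly status update.”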
Admin FAQ
Q: Can admins export the data?
A: Yes. Admins can generate on-demand CSV exports for Users, GPTs, and Projects. Exports are available for a selected week or month, and custom date ranges are not currently supported.
Q: Does workspace analytics replace the Compliance API?
A: No. Workspace analytics is an aggregated analytics experience, not a raw log interface. It does not expose message text, file contents, or item-level compliance records. For raw logs, legal and security workflows, and compliance controls, use the Compliance API.
Q: Can we segment analytics by team, department, or business unit?
A: Some views support segmentation by SCIM groups, which can represent teams, departments, or business units depending on how your identity provider is configured. If you need additional breakdowns, export the data and combine it with your internal systems.
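If you go the export route, the join is straightforward. A minimal sketch, assuming a hypothetical user_email column in both the analytics export and an internal org mapping (neither file name nor schema is the real export format):

```python
import pandas as pd

# File names and columns are assumptions for illustration.
usage = pd.read_csv("users_export.csv")  # assumed: user_email, messages
org = pd.read_csv("org_mapping.csv")     # assumed: user_email, department

merged = usage.merge(org, on="user_email", how="left")
by_dept = merged.groupby("department")["messages"].agg(["count", "sum", "mean"])
print(by_dept.sort_values("sum", ascending=False))
```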
Q: What should admins know about Task Insights privacy and controls?
A: Task Insights is designed for aggregate analysis only and does not expose individual prompts or conversations. It is enabled by default, and workspace admins or owners can turn it off in Workspace settings under Usage Analytics.
Q: Can admins customize the impact survey?
A: No. The impact survey is currently fixed and cannot be customized, though workspace admins and owners can opt out of OpenAI-created impact surveys in Workspace settings.
