Every professional services firm has the same dirty secret: the most valuable conversations in the company — client calls, strategy sessions, scope negotiations — evaporate the moment someone clicks "Leave Meeting." We decided to fix that. We connected Alpha Agent to the Zoom API, pointed it at 13 months of meeting recordings, and turned 129 calls into a searchable, client-tagged knowledge base. Here's exactly how we did it, what we found, and why it matters for any firm that bills by the hour.
The Problem: 106 Hours of Conversations, Zero Institutional Memory
Last Rev is a digital engineering consultancy. We build headless CMS platforms, design systems, and AI-powered applications for clients like Diligent, Integral Ad Science, Procore, and SmartNews. On any given week, our team runs 25–30 Zoom calls: client syncs, sprint planning, design reviews, SOW negotiations, and internal standups.
Each of those meetings generates decisions, commitments, and context. And almost none of it was being captured systematically. We had a patchwork of meeting notes in Google Docs, Slack messages that scrolled off-screen, and the occasional "can someone remind me what we agreed to?" message that no one could answer confidently.
The cost wasn't abstract. Consider:
- An account manager joins a client relationship mid-stream and has zero context on what's been discussed in the previous six months of weekly calls
- A client asks "didn't we decide X on our last call?" and no one can confirm or deny with certainty
- Action items from a meeting get lost because they were spoken but never written down
- The same topic gets re-debated across three meetings because no one remembers it was already resolved
We're a 29-person shop with meetings spanning 14 active clients.¹ The problem isn't that people don't care about documentation — it's that manually capturing meeting output is tedious, inconsistent, and always the first thing that gets dropped when people are busy doing actual work.
The Approach: Server-to-Server OAuth + Structured AI Processing
Alpha Agent is our internal AI operations platform — an autonomous agent that can connect to APIs, process data, build applications, and take action across our tool stack. We built a Zoom Meeting Intelligence skill that works like this:
- Authenticate via Zoom server-to-server OAuth — no per-user consent flows. The agent has account-level access to recordings and transcripts across the organization.
- Pull recordings in 30-day windows — Zoom's API limits date range queries, so the agent iterates month-by-month to build a complete picture. For our initial run, this meant scanning January 2025 through February 2026.
- Download VTT transcripts — Zoom auto-generates WebVTT transcripts for most recordings. For the handful without one, Alpha Agent falls back to downloading the M4A audio and running it through OpenAI's Whisper API.
- Run each transcript through a structured intelligence extraction pipeline — a carefully tuned prompt that outputs a one-paragraph summary, a list of key decisions, structured action items with owners and priorities, sentiment analysis, and topic tags.
- Store everything in Supabase — each processed meeting becomes a row in our `zoom_transcripts` table with the raw VTT, structured summary, action items as JSONB, client tags, and processing timestamps.
- Tag by client — the agent matches meeting topics and attendee lists against known client names ("Diligent Weekly Sync" → Diligent, "IAS Sprint Review" → Integral Ad Science).
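The extraction step stays reliable only if the model is forced into a fixed JSON shape and the pipeline validates that shape before storage. Here is a minimal normalizer as a sketch; the field names and enum values are illustrative, not our exact schema:

```python
# Illustrative enums; the real pipeline's allowed values may differ.
ALLOWED_SENTIMENT = {"productive", "casual", "tense", "neutral"}
ALLOWED_PRIORITY = {"low", "medium", "high"}

def normalize_extraction(raw: dict) -> dict:
    """Coerce one meeting's LLM extraction output into a stable stored shape,
    replacing values outside the expected enums with safe defaults."""
    sentiment = str(raw.get("sentiment", "neutral")).lower()
    if sentiment not in ALLOWED_SENTIMENT:
        sentiment = "neutral"

    items = []
    for item in raw.get("action_items", []):
        priority = str(item.get("priority", "medium")).lower()
        items.append({
            "owner": item.get("owner", "unassigned"),
            "task": item["task"],
            "priority": priority if priority in ALLOWED_PRIORITY else "medium",
        })

    return {
        "summary": raw.get("summary", "").strip(),
        "decisions": list(raw.get("decisions", [])),
        "action_items": items,
        "sentiment": sentiment,
        "topics": sorted(set(raw.get("topics", []))),  # dedupe topic tags
    }
```

Validating here, rather than trusting the model, is what lets the JSONB columns downstream stay queryable.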
The entire pipeline runs incrementally. Before pulling any recording, Alpha Agent checks `SELECT MAX(start_time) FROM zoom_transcripts` and only fetches meetings after that watermark. No duplicate processing, no wasted API calls.
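The watermark-plus-window loop is easy to sketch. Here's a minimal version using a local SQLite table as a stand-in for Supabase and a stubbed `fetch_recordings` callable in place of the real Zoom API request; the helper names are illustrative:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

def month_windows(start, end, days=30):
    """Yield (from, to) pairs covering [start, end) in <=30-day slices,
    since Zoom's recordings API caps each query's date range."""
    cur = start
    while cur < end:
        nxt = min(cur + timedelta(days=days), end)
        yield cur, nxt
        cur = nxt

def last_watermark(conn):
    """Most recent meeting already stored, or None on the first run."""
    (ts,) = conn.execute("SELECT MAX(start_time) FROM zoom_transcripts").fetchone()
    return datetime.fromisoformat(ts) if ts else None

def sync(conn, fetch_recordings, now):
    """Pull meetings window by window, starting at the watermark.
    Re-fetching at the boundary is harmless: the primary key dedupes."""
    start = last_watermark(conn) or now - timedelta(days=395)  # ~13 months back
    inserted = 0
    for frm, to in month_windows(start, now):
        for mtg in fetch_recordings(frm, to):  # stand-in for the Zoom API call
            before = conn.total_changes
            conn.execute(
                "INSERT OR IGNORE INTO zoom_transcripts (meeting_id, start_time) "
                "VALUES (?, ?)",
                (mtg["id"], mtg["start_time"]),
            )
            inserted += conn.total_changes - before
    conn.commit()
    return inserted
```

Running `sync` twice in a row inserts nothing the second time, which is the whole point of the watermark.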
The Numbers: What 129 Meetings Revealed
After processing completed, here's what our Supabase table contained:
| Metric | Value |
|---|---|
| Total meetings processed | 133 (129 with full transcripts, 4 pending) |
| Date range covered | January 14, 2025 – February 17, 2026 |
| Total meeting time | 6,400 minutes (~106.7 hours) |
| Unique participants identified | 29 people |
| Action items extracted | 620 |
| Decisions captured | 232 |
| Clients auto-tagged | 14 (Diligent, IAS, Procore, SmartNews, WOW, Lively, Orix, AnswerAI, and more) |
| Meeting sentiment breakdown | 72% productive · 10% casual · 8% tense · 7% neutral |
Let those numbers sink in. 620 action items were spoken into existence across those meetings. Before this system, the vast majority of them lived only in the memories of the people on the call — memories that fade, conflict, and eventually disappear.
What the Intelligence Actually Looks Like
Each processed meeting produces a structured record. Here's a real example from a contract review call:
Topic: MSA & SOW Contract Structure Review
Date: February 6, 2025 · Duration: 14 min
Attendees: Adam Harris, Krista Schlumpberger
Sentiment: Productive
Summary: Reviewed contract documents including the MSA and SOW structure. Walked through how the SOW works as an addendum to the MSA. Discussed how each new support contract renewal overrides the previous SOW while the MSA remains the signed base document. Agreed to clarify with Susan that each new SOW is technically an addendum to the original MSA.
Decisions:
- New SOWs override previous ones as addendums to the MSA
- MSA is valid for 12 months as the base contract
Action Items:
- Krista Schlumpberger → Clarify with Susan that new SOWs are addendums to the MSA (priority: medium)
- Krista Schlumpberger → Review and compare contract documents for consistency (priority: medium)
That 14-minute call produced two decisions and two action items with clear owners. Without this system, those decisions would live only in the heads of two people — and the next time someone asked "do we need a new MSA for this client?", the answer would be a shrug followed by 30 minutes of Dropbox archaeology.
Pattern Recognition Across Conversations
Individual meeting summaries are useful. But the real power is in aggregate intelligence — patterns that emerge only when you can query across all 129 conversations.
Topic Frequency Analysis
Our top discussion topics across all meetings tell a story about where the company's energy is going:
- AI features — 17 meetings (13%). AI is the dominant topic across client and internal conversations.
- Migration — 15 meetings (11%). CMS platform migrations (Contentful → Sanity, legacy → headless) are a core service line.
- AI/ML — 15 meetings. Combined with the AI features topic above, AI-related topics appear in 24% of all meetings.
- Design — 12 meetings. Design system work and component development are consistently discussed.
- API Development — 12 meetings. Integration work is a steady drumbeat.
- Deployment — 11 meetings. DevOps and release management surface regularly.
- Budget/Costs — 8 meetings. Financial discussions appear in 6% of calls — less than you'd expect.
This data is immediately actionable. If AI topics dominate 24% of our conversations, that should be reflected in our hiring priorities, our marketing, and our service offerings. Before this analysis, that insight was a vague hunch. Now it's a number.
Sentiment as a Leading Indicator
The sentiment tagging is surprisingly informative. Across all meetings:
- 72% productive — the meeting had clear outcomes and forward momentum
- 10% casual — informal check-ins, quick syncs
- 8% tense — scope disagreements, timeline pressure, difficult conversations
- 7% neutral — status updates, information sharing
That 8% "tense" figure is a goldmine for account management. When you can filter by client and see that the last three meetings with Client X were tagged "tense," you have an early warning system for churn risk, surfacing tension patterns weeks before the client sends the dreaded "we need to talk" email.
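That early-warning check is a one-query pattern. A sketch against a local SQLite stand-in for the table (the column names follow the schema described in this post; the threshold of two tense meetings out of the last three is an arbitrary assumption):

```python
import sqlite3

def at_risk_clients(conn, window=3, tense_threshold=2):
    """Flag clients whose most recent meetings are trending tense."""
    sql = """
    SELECT client_id, SUM(sentiment = 'tense') AS tense_count
    FROM (
        SELECT client_id, sentiment,
               ROW_NUMBER() OVER (
                   PARTITION BY client_id ORDER BY start_time DESC
               ) AS rn
        FROM zoom_transcripts
    )
    WHERE rn <= ?              -- only the last N meetings per client
    GROUP BY client_id
    HAVING tense_count >= ?    -- enough of them were tagged tense
    """
    return [row[0] for row in conn.execute(sql, (window, tense_threshold))]
```

In production this would run against Supabase/Postgres instead, but the window-function shape is the same.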
Client Intelligence Profiles
By filtering the database by client_id, we can instantly generate a profile of any client relationship:
- Diligent: 18 meetings, predominantly focused on migration and component development, sentiment trending productive
- IAS / Integral Ad Science: 12 meetings, heavy on API development and testing/QA, action items concentrated around deployment timelines
- WOW: 8 meetings, design-heavy, consistently casual sentiment (a healthy relationship signal)
- Procore: 6 meetings, focused on database and deployment topics
When an account manager needs to prep for a client call, they no longer need to ask five people "what have we been talking about with this client?" They query the database. Full context in seconds.
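Generating one of those profiles is a handful of aggregate queries. A sketch, again using SQLite as a local stand-in for Supabase/Postgres, with column names matching the schema described above (`json_each` unpacks the JSONB-style action items):

```python
import json
import sqlite3

def client_profile(conn, client_id):
    """Summarize one client relationship from the meetings table."""
    count, minutes = conn.execute(
        "SELECT COUNT(*), COALESCE(SUM(duration_min), 0) "
        "FROM zoom_transcripts WHERE client_id = ?",
        (client_id,),
    ).fetchone()
    # Sentiment histogram for this client, e.g. {"productive": 12, "tense": 2}
    sentiments = dict(conn.execute(
        "SELECT sentiment, COUNT(*) FROM zoom_transcripts "
        "WHERE client_id = ? GROUP BY sentiment",
        (client_id,),
    ))
    # Count individual action items inside the JSON arrays
    action_items = conn.execute(
        "SELECT COUNT(*) FROM zoom_transcripts, json_each(action_items) "
        "WHERE client_id = ?",
        (client_id,),
    ).fetchone()[0]
    return {"meetings": count, "minutes": minutes,
            "sentiments": sentiments, "action_items": action_items}
```

On Postgres the same idea works with `jsonb_array_elements` in place of `json_each`.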
The Technical Architecture
For the technically inclined, here's how the pieces fit together:
Zoom Cloud Recordings
↓ (Server-to-Server OAuth, account-level access)
Alpha Agent
↓ (30-day window iteration, incremental processing)
┌──┴──┐
VTT M4A Audio
│ ↓
│ Whisper API (fallback)
│ ↓
└──→ Clean Transcript
↓
Claude AI Pipeline
(structured extraction prompt)
↓
┌─────┼─────┐
Summary Actions Decisions
Tags Owners Sentiment
↓
Supabase (zoom_transcripts table)
↓
┌────────┼────────┐
Query Slack Client
API Posts Profiles
Key architectural decisions:
- Supabase as the persistence layer — JSONB columns for action items and decisions give us flexible querying without a rigid schema. We can add fields without migrations.
- Incremental watermark processing — the agent checks `MAX(start_time)` before each run, so it never reprocesses meetings. This matters when you're paying per API call for both Zoom and the AI models.
- Client matching via topic + attendee heuristics — meeting topics like "Diligent Weekly Sync" are obvious, but the agent also cross-references attendee email domains against our CRM for ambiguous topics.
- VTT-first, Whisper as fallback — Zoom's native transcripts are free and instant. Whisper costs money and requires downloading large audio files. The agent only invokes Whisper when the VTT is missing.
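The topic-plus-domain matching fits in a few lines. The mappings below are illustrative placeholders, not our real CRM data:

```python
# Illustrative mappings; a real deployment would load these from the CRM.
TOPIC_KEYWORDS = {
    "diligent": "diligent",
    "ias": "integral-ad-science",
    "integral ad science": "integral-ad-science",
    "procore": "procore",
}
DOMAIN_MAP = {
    "diligent.com": "diligent",
    "integralads.com": "integral-ad-science",
    "procore.com": "procore",
}

def match_client(topic, attendee_emails):
    """Resolve a meeting to a client: topic keywords first, then attendee
    email domains as the tiebreaker for ambiguous titles. Note that short
    keywords like "ias" can false-positive on substrings; a real matcher
    should tokenize the topic first."""
    lowered = topic.lower()
    for keyword, client in TOPIC_KEYWORDS.items():
        if keyword in lowered:
            return client
    for email in attendee_emails:
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in DOMAIN_MAP:
            return DOMAIN_MAP[domain]
    return None  # internal meeting or unknown client
```

Returning `None` rather than guessing keeps unmatched meetings visible for manual tagging.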
What This Means for Account Managers
If you run a professional services firm — agency, consultancy, law firm, MSP — this capability changes how you do account management. Specifically:
1. Onboarding a new account manager takes minutes, not weeks
When someone takes over a client relationship, they need context. Traditionally, that means shadow-reading months of Slack threads, sitting in on calls, and asking "so what's the deal with this client?" Now, they query the database: "Show me all Diligent meetings, ordered by date." Eighteen meeting summaries, complete with decisions and action items, on a single page. Full context in 15 minutes.
2. "What did we agree to?" has an authoritative answer
Contract disputes, scope disagreements, and he-said-she-said situations evaporate when you have a timestamped, AI-generated summary of every conversation. We had a real case where a client questioned whether a particular feature was in scope. We searched the transcripts, found the exact meeting where it was discussed, and pulled up the summary showing it was explicitly deferred to Phase 3. Conversation over.
3. Action items don't fall through the cracks
620 action items across 129 meetings. That's an average of 4.8 action items per meeting. Before this system, how many of those were being tracked? Generously, maybe half. The rest disappeared into the ether. Now every commitment is captured, attributed to an owner, and queryable.
4. Sentiment trending replaces gut feelings
Account health is usually assessed by gut feeling — "I think they're happy" or "something felt off in that last call." With sentiment tagging across every meeting, you can see the trend. Three productive meetings followed by two tense ones? That's a pattern worth investigating before the relationship deteriorates.
The Time Math
Let's do the ROI calculation:
- Manual meeting notes: Assume 15 minutes per meeting for a diligent note-taker (many meetings got zero notes). At 129 meetings, that's 32 hours of note-taking labor — if someone actually did it for every meeting.
- Alpha Agent processing time: The entire 129-meeting backlog was processed in approximately 4 hours of automated pipeline execution. Ongoing processing happens within minutes of each meeting ending.
- Quality difference: AI-generated summaries are consistent, structured, and complete. Human notes are variable, often incomplete, and formatted differently by every person.
- Retrieval time: Finding information in scattered Google Docs and Slack threads: 10–30 minutes per query. Finding it in a structured database: 10 seconds.
The initial backlog processing alone saved roughly 28 hours of work that would have been needed to manually review and document those meetings. But the real savings are ongoing — every week, 25–30 meetings are automatically processed, saving approximately 6–8 hours of manual capture and making every meeting's content instantly accessible.²
What Surprised Us
A few things we didn't expect:
The casual meetings were the most valuable to capture. The formal client presentations had agendas and follow-up emails. It was the 4-minute impromptu syncs — "hey, quick question about the deployment" — that contained critical decisions no one would have bothered documenting. One 4-minute call between two engineers resolved a JIRA configuration issue that had been blocking a sprint. Without automated capture, that decision would have lived in two people's heads and nowhere else.
Topic frequency analysis influenced our service strategy. Seeing that AI-related topics dominated 24% of all conversations gave us hard data to invest more aggressively in AI service offerings. It wasn't a guess — it was extracted from actual client and internal conversations.
Attendee patterns revealed meeting culture issues. We could see which meetings consistently had too many attendees (meetings with 6+ people that should have been 3), and which critical conversations were happening between only two people with no documentation trail.
Privacy and Ethics
A few important notes on how we handle this responsibly:
- All recordings processed are from our own Zoom account — we don't process external recordings
- Zoom displays recording consent notices to all participants at the start of each meeting
- Transcripts and summaries are stored in our private Supabase instance, not shared externally
- Individual meeting content is not posted to public channels without explicit approval
- The system processes content for operational utility — it does not build individual performance profiles or scoring
How to Build This for Your Firm
The core components are straightforward:
- Zoom server-to-server OAuth app — set this up in the Zoom Marketplace. It gives you account-level API access without per-user consent flows. You need the `recording:read` and `user:read` scopes.
- A processing pipeline — something that can download VTT transcripts, parse them, and send them to an LLM for structured extraction. This can be a simple script or an AI agent like Alpha Agent.
- A structured storage layer — Supabase, Postgres, Airtable, whatever. The key is JSONB or similar flexible storage for action items and decisions, plus full-text search on summaries.
- Client matching logic — a mapping from meeting topics and attendee domains to your client/project taxonomy.
- Incremental processing — a watermark-based approach so you never reprocess meetings or waste API calls.
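The only fiddly part of the processing pipeline is turning WebVTT into clean text for the LLM. A minimal parser, good enough for Zoom's cue format, which keeps speaker-labeled lines like `Name: text` and drops everything else:

```python
import re

# Matches cue timing lines like "00:00:01.000 --> 00:00:04.000"
TIMESTAMP = re.compile(r"^\d{2}:\d{2}:\d{2}\.\d{3} --> ")

def vtt_to_text(vtt: str) -> str:
    """Strip the WEBVTT header, cue numbers, blank lines, and timestamps,
    keeping one line of speaker text per cue."""
    lines = []
    for line in vtt.splitlines():
        line = line.strip()
        if (not line or line == "WEBVTT" or line.isdigit()
                or TIMESTAMP.match(line)):
            continue
        lines.append(line)
    return "\n".join(lines)
```

The result is compact, speaker-attributed text, which is what the extraction prompt actually needs.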
Or you can skip all of that and let Alpha Agent do it. We built this skill in under a day and the agent handles everything autonomously — authentication, pagination, transcript download, AI processing, storage, and client tagging.
The Bigger Picture
This Zoom intelligence project is one piece of a larger pattern we're seeing in AI operations: the transformation of ephemeral business conversations into durable, queryable institutional knowledge.
Most companies have years of meeting recordings sitting in Zoom's cloud, untouched and unsearchable. Those recordings contain decisions, commitments, strategy discussions, and client context that could dramatically improve how the business operates — if anyone could access it in a useful format.
The technology to unlock this has existed for a couple of years (transcription APIs, LLMs for extraction, structured databases for storage). What's been missing is the orchestration — something that connects all the pieces, handles the edge cases (missing transcripts, ambiguous client names, 30-day API windows), and runs reliably without human babysitting.
That's what Alpha Agent provides. Not just the AI — the operational intelligence to deploy it against real business problems and extract real business value.
129 meetings. 620 action items. 232 decisions. 14 clients tagged. 106 hours of conversations transformed from ephemeral audio into permanent, searchable institutional knowledge.
Your firm's meetings contain the same untapped intelligence. The question is whether you're going to keep letting it evaporate, or start capturing it. Let's talk about building this for your team.
Footnotes
1. Client count based on unique `client_id` values in our `zoom_transcripts` Supabase table as of February 2026. Clients include: Diligent, IAS/Integral Ad Science, Procore, SmartNews, WOW, Lively, Orix, AnswerAI, and internal projects.
2. Time savings estimates based on industry averages of 15–20 minutes per meeting for manual note-taking and summarization. See Harvard Business Review, "Dear Manager, You're Holding Too Many Meetings" (2022) and the Otter.ai Meeting Statistics Report for supporting data on meeting time costs.