Overview
A martech platform's product managers could not quickly locate prior user research scattered across Dovetail, Notion, and Google Drive. Teams repeated interviews, missed relevant findings, and made decisions without a shared evidence base. Intelligex deployed a permissions-aware search that indexed research notes, clips, and decks across those repositories, with an AI layer that generated study briefs and linked them to Jira epics. Teams reused credible insights, redundant outreach declined, and decisions carried richer context, without replacing existing research or planning tools.
Client Profile
- Industry: Marketing technology (ad serving, marketing automation, analytics)
- Company size (range): Multi-product SaaS with cross-functional squads
- Stage: Established research and knowledge tools; findings scattered and hard to reuse
- Department owner: Product Management & R&D
- Other stakeholders: UX Research, Design/Content, Customer Success, Sales/Enablement, Marketing Ops, Data/Analytics, Legal/Privacy, IT/Security
The Challenge
Prior studies lived as notes, highlight reels, and presentations across Dovetail, Notion, and shared drives. File names and tags varied by team, and attachments like transcripts or recordings were not consistently searchable. When PMs kicked off discovery, they asked Research for anything similar, waited on ad hoc collections, and often re-recruited users to repeat earlier conversations. Valuable signals were rediscovered rather than reused.
Linkage to delivery was weak. Epics in Jira referenced a meeting or a link to a deck, but not a consolidated brief with methods, sample, and key findings. Designers and PMs debated what counted as strong evidence, while Research was pulled into repetitive requests. Sensitive content raised risk: raw notes sometimes contained personally identifiable information (PII) and contractual restrictions from customer NDAs, so teams were cautious about sharing broadly.
Security and governance were non-negotiable. Any solution needed to respect source permissions and align to zero-trust principles such as those outlined in NIST SP 800-207, avoid creating unmanaged copies, and provide audit trails. Leadership wanted a way to find and cite credible research quickly, tie it to epics, and keep Legal/Privacy comfortable with access and redaction.
Why It Was Happening
Root causes were fragmentation and inconsistent taxonomies. Each team labeled studies in its own language (personas, campaigns, funnels), so near-duplicate topics were hard to discover across tools. Content formats spanned text, PDFs, and media, with uneven metadata. There was no shared workflow that created durable links from Jira epics to prior studies or that produced a brief summarizing methods, findings, and caveats.
Ownership was diffuse. Research curated repositories but was not staffed to fulfill every ad hoc request; PMs owned roadmaps and epics; Design translated insights to patterns; and IT/Security owned access. Without a permissions-aware index and a lightweight way to generate trustworthy study briefs, teams defaulted to fresh interviews and internal rehashes.
The Solution
Intelligex implemented a permissions-aware enterprise search spanning Dovetail, Notion, and Google Drive, with an AI service that generated study briefs and mapped them to Jira epics. The index mirrored source permissions, redacted sensitive fields in previews, and presented consolidated search results with filters for persona, market, method, and recency. The AI service assembled brief pages (methods, participants, key findings, artifacts, and caveats), then routed them for human review before linking to the relevant epic. PMs and designers could search once, preview safely, and pull a vetted brief directly into planning.
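The permission-mirroring idea behind this search can be sketched in a few lines. Everything here is illustrative: the `Document` shape, the `allowed_principals` field, and the group names are hypothetical stand-ins for whatever ACL model the source systems expose, not the actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A search-result candidate carrying the ACL mirrored from its source system."""
    doc_id: str
    title: str
    allowed_principals: set = field(default_factory=set)  # users/groups from the source ACL

def search(query_hits, user, user_groups):
    """Filter raw index hits to documents the user can already see at the source.

    The index never widens access: a hit is returned only if the user or one
    of their groups appears in the mirrored ACL.
    """
    principals = {user} | set(user_groups)
    return [doc for doc in query_hits if doc.allowed_principals & principals]

docs = [
    Document("d1", "Onboarding interviews Q1", {"pm-team", "research"}),
    Document("d2", "NDA-restricted pricing study", {"legal"}),
]
visible = search(docs, user="alice", user_groups=["pm-team"])
# alice sees only "d1"; "d2" stays hidden because her principals
# do not intersect its mirrored ACL.
```

The key design choice is that filtering happens against ACLs copied from the source of record, so the search layer can only narrow access, never grant it.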
- Integrations: Read-only connectors to Dovetail, Notion, and Google Drive; status and links surfaced in Jira; enterprise search stack aligned to patterns such as Elastic Enterprise Search.
- Indexing and metadata: Full-text and attachment indexing (including OCR for PDFs and image-embedded text), extraction of study metadata (method, audience, product area), and support for team-specific tags mapped to a canonical taxonomy.
- AI-generated study briefs: Draft briefs summarizing research question, methodology, key findings, artifacts, and limitations with links to original materials. Drafts entered a review queue for Research or PM approval before publication.
- Jira linkage: Briefs embedded in or linked from epics and stories via status badges and macros; deep links returned to the system of record for full context under source permissions.
- Permissions and redaction: Source ACLs mirrored; previews masked PII and NDA-restricted fields; unmasking required explicit roles. Activity was logged for audit.
- Taxonomy and tags: Canonical topic and persona tags with crosswalks to legacy labels; suggested tags from content were reviewed by humans before becoming canonical.
- Dashboards and alerts: Views of research coverage by product area and persona; alerts when a new study matched active epics or when an epic lacked linked evidence.
- Security and governance: Least-privilege access; immutable logs of queries, brief approvals, and edits; alignment to zero-trust concepts per NIST SP 800-207.
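The preview-redaction bullet above can be illustrated with a toy masking pass. The regex patterns and the `research-unmask` role name are hypothetical; the case study says the real rule set covered PII and NDA-restricted terms and was reviewed by Legal.

```python
import re

# Hypothetical redaction patterns; the production set was broader and Legal-reviewed.
REDACTION_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),        # email addresses
    re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),  # US-style phone numbers
]

def preview(text: str, roles: set) -> str:
    """Return a redacted snippet unless the caller holds an explicit unmask role."""
    if "research-unmask" in roles:  # explicit role required to see raw text
        return text
    for pattern in REDACTION_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

note = "Participant P7 (p7@example.com, 555-123-4567) churned after onboarding."
print(preview(note, roles={"pm"}))
# Participant P7 ([REDACTED], [REDACTED]) churned after onboarding.
```

Masking at preview time (rather than scrubbing the index) matches the pattern described here: the full document stays in the source system, and unmasking is a logged, role-gated action.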
Implementation
- Discovery: Cataloged research repositories and common study types; gathered example notes, decks, and transcripts; mapped persona and topic vocabularies; reviewed how epics referenced research; identified PII/NDA patterns for masking.
- Design: Defined the canonical taxonomy and tag crosswalks; specified indexing sources, metadata fields, and redaction rules; designed the study brief template and approval roles; agreed on Jira badge placements and search filters.
- Build: Configured read-only connectors, indexing, and OCR; implemented the AI brief generator and review queue; mapped tags to the taxonomy; integrated Jira status badges and deep links; created dashboards and alerts.
- Testing/QA: Ran in shadow mode: generated briefs and search results while teams continued manual lookups. Compared outputs to prior research syntheses; tuned redaction and tag suggestions; validated permission mirroring and audit logs. Included a human-in-the-loop review with Research and PMs.
- Rollout: Enabled for priority product areas first; kept manual research packs as a controlled fallback; expanded after teams consistently used briefs in planning and reviews.
- Training/hand-off: Delivered short sessions for PMs, Researchers, Designers, and Success on searching, briefs, approvals, and linking to epics. Updated SOPs for research tagging and citation. Transferred ownership of taxonomy, brief templates, and redaction patterns to Research Ops and Product Ops under change control.
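The tag crosswalk mentioned in Design and Build can be sketched as a simple lookup table. The legacy and canonical tag names here are invented for illustration; the actual crosswalks were curated by Research Ops under change control.

```python
# Hypothetical crosswalk from team-specific labels to the canonical taxonomy.
CROSSWALK = {
    "growth-funnel": "acquisition-funnel",
    "funnel": "acquisition-funnel",
    "media-buyer": "persona:media-buyer",
    "buyer-persona": "persona:media-buyer",
}

def canonicalize(tags):
    """Map legacy tags to canonical ones; unknowns go to human review, not the index."""
    canonical, unreviewed = set(), set()
    for tag in tags:
        mapped = CROSSWALK.get(tag.lower().strip())
        if mapped:
            canonical.add(mapped)
        else:
            unreviewed.add(tag)  # routed to the tag-curation queue
    return sorted(canonical), sorted(unreviewed)

tags, pending = canonicalize(["Funnel", "media-buyer", "q3-campaign"])
# tags == ["acquisition-funnel", "persona:media-buyer"]; pending == ["q3-campaign"]
```

Keeping unknown tags in a review queue, rather than auto-promoting them, mirrors the case study's rule that suggested tags become canonical only after human curation.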
Results
Discovery cycles started from credible, reusable insight. PMs searched once, opened a vetted brief, and linked it to their epics with the original artifacts one click away under source permissions. Research spent less time re-packaging old work and more time on new studies and synthesis. Repeated outreach declined as teams recognized and reused relevant findings for similar questions.
Planning became more evidence-driven. Product reviews referenced the same briefs across squads, and debates centered on gaps and trade-offs rather than on hunting for prior work. Legal and Security were comfortable with previews and redaction, and auditors could see who accessed what. The toolchain stayed the same; the difference was a permissions-aware search and a lightweight brief workflow that made existing knowledge usable.
What Changed for the Team
- Before: Research lived in multiple tools with inconsistent tags. After: A single, permissions-aware search indexed content with a canonical taxonomy.
- Before: PMs re-interviewed users to confirm known findings. After: Vetted study briefs were linked to epics with sources attached.
- Before: Sensitive notes limited sharing. After: Previews applied redaction and honored source permissions with unmask under role control.
- Before: Epics referenced meetings and scattered decks. After: Epics carried briefs with methods, findings, and caveats.
- Before: Research Ops fielded repetitive requests. After: Teams self?served credible context and requested only true gaps.
- Before: Reviews debated where evidence lived. After: Decisions referenced the same briefs and dashboards showing coverage.
Key Takeaways
- Make research discoverable and governed; permissions-aware search and redaction unlock reuse safely.
- Summarize once, reuse everywhere; study briefs give PMs and Designers a shared, citable artifact.
- Align language; a canonical taxonomy and tag crosswalks bring scattered repositories into one lens.
- Keep humans in the loop; brief approvals and tag curation maintain trust and relevance.
- Link to delivery; embedding briefs in Jira connects insights to decisions and outcomes.
- Integrate, don't replace; index Dovetail, Notion, and Drive and add orchestration around them.
FAQ
What tools did this integrate with? The solution indexed research content from Dovetail, Notion, and Google Drive in read-only mode, exposed results with a permissions-aware search layer aligned to patterns like Elastic Enterprise Search, and linked approved study briefs to epics and stories in Jira. It used existing identity for single sign-on and role mapping.
How did you handle quality control and governance? Source ACLs were mirrored; previews masked PII and NDA-restricted content; and all queries and views were logged. AI-generated briefs went through a human review queue before publishing. Taxonomy changes and tag crosswalks lived under change control owned by Research Ops and Product Ops. Design aligned with zero-trust concepts per NIST SP 800-207.
How did you roll this out without disruption? The service ran in shadow mode first, producing search results and draft briefs while teams continued existing practices. After tuning redaction, tags, and templates with Research and PMs, search was made available for priority areas, and brief links appeared in new epics. Manual research packs remained as a controlled fallback during early cycles.
How were PII and NDAs respected? The index stored metadata and snippets; full documents remained in the source system. Previews applied redaction patterns for PII and contractually restricted terms. Unmask required specific roles, and audit logs recorded views and downloads. Legal reviewed patterns during design and as new data types appeared.
How were briefs validated and kept current? Draft briefs were routed to a designated reviewer (usually Research or a PM) for approval and tagging. When new, relevant studies appeared, the system suggested updates to linked briefs and notified epic owners. Version history preserved prior wording and sources so decisions remained interpretable over time.
Can this surface gaps where new research is needed? Yes. Dashboards showed coverage by product area, persona, and method. Epics without linked evidence were flagged, and PMs could request studies from within Jira with context about similar work already completed.
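The gap-flagging check described above reduces to a simple scan for epics with no linked evidence. The epic dictionary shape and `linked_briefs` field name are hypothetical, used only to show the idea.

```python
def flag_evidence_gaps(epics):
    """Return keys of epics with no linked study briefs (hypothetical schema)."""
    return [e["key"] for e in epics if not e.get("linked_briefs")]

epics = [
    {"key": "MKT-101", "linked_briefs": ["brief-7"]},
    {"key": "MKT-102", "linked_briefs": []},
]
print(flag_evidence_gaps(epics))  # ['MKT-102']
```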
Department/Function: IT & Infrastructure; Legal & Compliance; Product Management & R&D
Capability: Enterprise Search & Knowledge Management


