Do they own your data? Unwrap.ai Privacy Policy Reviewed.

Policy Last Updated: December 3, 2023 (Reviewed 2025)
Entity: Painted Cave, Inc. (dba Unwrap.ai)

πŸ“‰ Enterprise-Readiness Score: 2.5 / 10

| Category | Verdict | Notes |
| --- | --- | --- |
| Model Training Disclosure | ❌ Unclear | No information about whether/how meeting data trains models. |
| Third-Party Sharing Clarity | ❌ Too vague | Lacks named subprocessors and vendor restrictions. |
| Compliance Certifications | ❌ None stated | No SOC 2, HIPAA, or GDPR assurances. |
| Data Residency/Control | ❌ No options | US-only hosting; no VPC or deployment alternatives. |
| Consent Handling | ❌ Manual only | No participant workflow or built-in mechanisms. |
| Retention Policies | ✅ Acceptable | 12-month cap after account termination. |
| User Rights + Portability | ✅ Standard | CCPA/GDPR-style rights supported (though via email, not UI). |
| Encryption and Security | ⚠️ Claims only | Encryption promised, but not verifiably audited. |

🧨 Final Recommendation

Unwrap.ai is not currently suitable for enterprise use in any regulated or privacy-sensitive context.

Unless they:

  • Publish a Data Protection Agreement (DPA)
  • Disclose exact subprocessors
  • Clarify whether and how user data is used for model training
  • Support enterprise opt-outs and hosting control
  • Complete SOC 2 Type II or equivalent certification

...they remain a high-risk vendor for enterprise deployment.


βœ… Better Enterprise Alternative

Use BuildBetter.ai instead:

  • βœ… SOC 2 Type II, GDPR, HIPAA
  • βœ… No AI training on your data
  • βœ… Customer-controlled data and deletion
  • βœ… Encrypted, compliant, and transparent

πŸ” Key Privacy Risks & Shortcomings for Enterprises


🚩 1. Unclear Data Use for AI Model Training

πŸ”Ž What the Policy Says:

"We process your information to provide, improve, and administer our Services..."

🧨 What’s Missing:

  • Nowhere does Unwrap.ai explicitly disclose whether or not your meeting content (e.g., transcriptions or notes) is used to train their AI models.
  • No mention of model vendors (e.g., OpenAI, Anthropic), training exclusions, or whether opt-out mechanisms exist.

Enterprise Risk:

  • If AI models are involved, this lack of clarity creates liability, and it is especially risky for legal, financial, healthcare, or IP-driven enterprises.
  • Contrast this with Granola, which at least discloses anonymized use and provides opt-out for enterprise clients.

Verdict: ❌ Lack of transparency here is a red flag for enterprise adoption.


🚩 2. Third-Party Data Sharing Vague and Broad

πŸ”Ž What the Policy Says:

"We may share information in specific situations and with specific third parties... including business partners, affiliates..."

🧨 What’s Missing:

  • No list of specific subprocessors (e.g., cloud providers, AI vendors, analytics platforms).
  • No DPA link, no SOC 2 claims, no clarification on what "business partners" can access or do with user data.

Enterprise Risk:

  • This leaves open the risk of indirect vendor access to customer data.
  • No explicit guardrails on LLM providers' use of data; such restrictions are now standard practice among compliance-minded vendors.

Verdict: ⚠️ Needs named subprocessors + data use restrictions to be enterprise-acceptable.


🚩 3. No Mention of Compliance Certifications

πŸ”Ž What the Policy Says:

"We aim to protect your personal information..."

🧨 What’s Missing:

  • No mention of SOC 2, ISO 27001, GDPR certification, HIPAA, or other privacy/compliance benchmarks.
  • No DPA (Data Processing Agreement) link or explanation of controller/processor roles.

Enterprise Risk:

  • Without audited controls, there's no proof of security posture.
  • Enterprises need assurance, not just intention.

Verdict: ❌ Fails baseline enterprise due diligence for vendor onboarding.


🚩 4. No Data Residency Controls or VPC Deployment

πŸ”Ž What the Policy Says:

"Your data may be stored in the United States..."

🧨 What’s Missing:

  • No mention of regional hosting options for EU, UK, or APAC data sovereignty.
  • No support for private cloud or customer-controlled VPC deployment.

Enterprise Risk:

  • Cross-border data flows may violate regulatory requirements in EU or healthcare jurisdictions.
  • The inability to segregate tenant data or restrict residency adds compliance burden.

Verdict: ❌ Not suitable for regulated global enterprises without regional hosting options.


🚩 5. Consent Handling Is Manual Only

🔎 What the Policy Says:

"We do not knowingly collect data from or market to children under 18..."

🧨 What’s Missing:

  • No built-in meeting participant consent workflows.
  • No auditable logging of who saw or agreed to data use.

Enterprise Risk:

  • Consent is critical for recorded meeting data; manual consent places the legal burden on users, not the platform.
  • No mention of automated notices (e.g., Zoom chat bot consent prompts), which some competitors offer.

Verdict: ⚠️ Without tooling for automated consent and data governance, user error becomes a liability risk.


βœ… Where Unwrap.ai Does Okay

  • Simple, readable policy (better than many startups).
  • States no sensitive information is collected by default.
  • Offers clear California/CCPA compliance statements.
  • Retention is capped at 12 months post-account termination.

Disclaimer: This review is based solely on Unwrap.ai's published privacy policy and publicly available information. For formal vetting, always request security documentation, compliance reports, and third-party audit results.