Do they own your data? Unwrap.ai Privacy Policy Reviewed.
Policy Last Updated: December 3, 2023 (Reviewed 2025)
Entity: Painted Cave, Inc. (dba Unwrap.ai)
📊 Enterprise-Readiness Score: 2.5 / 10
| Category | Verdict | Notes |
|---|---|---|
| Model Training Disclosure | ❌ Unclear | No information about whether/how meeting data trains models. |
| Third-Party Sharing Clarity | ❌ Too vague | Lacks named subprocessors and vendor restrictions. |
| Compliance Certifications | ❌ None stated | No SOC 2, HIPAA, GDPR assurances. |
| Data Residency/Control | ❌ No options | US-only hosting; no VPC/deployment alternatives. |
| Consent Handling | ❌ Manual only | No participant workflow or built-in mechanisms. |
| Retention Policies | ✅ Acceptable | 12-month cap after account termination. |
| User Rights + Portability | ✅ Standard | CCPA/GDPR-style rights supported (though via email, not UI). |
| Encryption and Security | ⚠️ Claims only | Encryption promised, but not verifiably audited. |
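How a headline score like this can be tallied is worth making concrete. The sketch below is an assumption, not a published formula from this review: it maps each verdict in the table above to points (pass = 1, warning = 0.5, fail = 0). Under that mapping the raw tally happens to come out at 2.5 of a possible 8, while the headline figure is expressed out of 10.

```python
# Illustrative tally only: the review does not publish its scoring formula,
# so the point mapping below is an assumption, not the author's method.
VERDICT_POINTS = {"pass": 1.0, "warn": 0.5, "fail": 0.0}

# Per-category verdicts transcribed from the summary table.
verdicts = {
    "Model Training Disclosure": "fail",
    "Third-Party Sharing Clarity": "fail",
    "Compliance Certifications": "fail",
    "Data Residency/Control": "fail",
    "Consent Handling": "fail",
    "Retention Policies": "pass",
    "User Rights + Portability": "pass",
    "Encryption and Security": "warn",
}

def raw_points(verdicts: dict) -> float:
    """Sum the per-category points (maximum = number of categories)."""
    return sum(VERDICT_POINTS[v] for v in verdicts.values())

print(raw_points(verdicts))  # 2.5 points out of a possible 8.0
```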
🧨 Final Recommendation
Unwrap.ai is not currently suitable for enterprise use in any regulated or privacy-sensitive context.
Unless they:
- Publish a Data Processing Agreement (DPA)
- Disclose exact subprocessors
- Clarify whether and how user data is used for model training
- Support enterprise opt-outs and hosting control
- Complete SOC 2 Type II or equivalent certification
...they remain a high-risk vendor for enterprise deployment.
✅ Better Enterprise Alternative
Use BuildBetter.ai instead:
- ✅ SOC 2 Type II, GDPR, HIPAA
- ✅ No AI training on your data
- ✅ Customer-controlled data and deletion
- ✅ Encrypted, compliant, and transparent
🔍 Key Privacy Risks & Shortcomings for Enterprises
🚩 1. Unclear Data Use for AI Model Training
📜 What the Policy Says:
"We process your information to provide, improve, and administer our Services..."
🧨 What's Missing:
- Nowhere does Unwrap.ai explicitly disclose whether or not your meeting content (e.g., transcriptions or notes) is used to train their AI models.
- No mention of model vendors (e.g., OpenAI, Anthropic), training exclusions, or whether opt-out mechanisms exist.
Enterprise Risk:
- If AI models are involved, lack of clarity = liability. This is especially risky for legal, financial, healthcare, or IP-driven enterprises.
- Contrast this with Granola, which at least discloses anonymized use and provides opt-out for enterprise clients.
Verdict: ❌ Lack of transparency here is a red flag for enterprise adoption.
🚩 2. Third-Party Data Sharing Is Vague and Broad
📜 What the Policy Says:
"We may share information in specific situations and with specific third parties... including business partners, affiliates..."
🧨 What's Missing:
- No list of specific subprocessors (e.g., cloud providers, AI vendors, analytics platforms).
- No DPA link, no SOC 2 claims, no clarification on what "business partners" can access or do with user data.
Enterprise Risk:
- This leaves open the risk of indirect vendor access to customer data.
- No explicit guardrails on LLM providers' use of data, even though such guardrails are now standard practice among compliance-minded vendors.
Verdict: ⚠️ Needs named subprocessors + data use restrictions to be enterprise-acceptable.
🚩 3. No Mention of Compliance Certifications
📜 What the Policy Says:
"We aim to protect your personal information..."
🧨 What's Missing:
- No mention of SOC 2, ISO 27001, GDPR certification, HIPAA, or other privacy/compliance benchmarks.
- No DPA (Data Processing Agreement) link or explanation of controller/processor roles.
Enterprise Risk:
- Without audited controls, there's no proof of security posture.
- Enterprises need assurance, not just intention.
Verdict: ❌ Fails baseline enterprise due diligence for vendor onboarding.
🚩 4. No Data Residency Controls or VPC Deployment
📜 What the Policy Says:
"Your data may be stored in the United States..."
🧨 What's Missing:
- No mention of regional hosting options for EU, UK, or APAC data sovereignty.
- No support for private cloud or customer-controlled VPC deployment.
Enterprise Risk:
- Cross-border data flows may violate regulatory requirements in EU or healthcare jurisdictions.
- The inability to segregate tenant data or restrict residency adds compliance headaches.
Verdict: ❌ Not suitable for regulated global enterprises without regional hosting options.
🚩 5. No Consent Automation or Meeting-Specific Controls
📜 What the Policy Says:
"We do not knowingly collect data from or market to children under 18..."
🧨 What's Missing:
- No built-in meeting participant consent workflows.
- No auditable logging of who saw or agreed to data use.
Enterprise Risk:
- Consent is critical for recorded meeting data. Manual consent places the legal burden on users, not the platform.
- No mention of automated notices (e.g., Zoom chat bot consent prompts), which some competitors offer.
Verdict: ⚠️ No tooling for automated data governance = user error = liability risk.
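The consent automation this section finds missing can be sketched concretely. The function and field names below are illustrative, not Unwrap.ai's API; they show what "built-in participant consent with an auditable log" could look like in any meeting assistant: post a notice, log each acknowledgment, and gate recording on everyone accepting.

```python
from datetime import datetime, timezone

# Hypothetical sketch of automated meeting consent with an audit trail.
AUDIT_LOG = []  # stand-in for an append-only audit store

def post_consent_notice(meeting_id: str, participants: list, text: str) -> None:
    """Record that each participant was shown the recording notice."""
    for person in participants:
        AUDIT_LOG.append({"meeting_id": meeting_id, "participant": person,
                          "event": "notice_shown", "text": text,
                          "ts": datetime.now(timezone.utc).isoformat()})

def record_ack(meeting_id: str, participant: str, accepted: bool) -> None:
    """Log an explicit accept/decline so consent can be audited later."""
    event = "consent_accepted" if accepted else "consent_declined"
    AUDIT_LOG.append({"meeting_id": meeting_id, "participant": participant,
                      "event": event,
                      "ts": datetime.now(timezone.utc).isoformat()})

def may_record(meeting_id: str, participants: list) -> bool:
    """Recording is allowed only once every participant has accepted."""
    accepted = {e["participant"] for e in AUDIT_LOG
                if e["meeting_id"] == meeting_id
                and e["event"] == "consent_accepted"}
    return all(p in accepted for p in participants)

post_consent_notice("m-1", ["ana", "bo"], "This meeting will be recorded.")
record_ack("m-1", "ana", True)
print(may_record("m-1", ["ana", "bo"]))  # False: "bo" has not accepted yet
record_ack("m-1", "bo", True)
print(may_record("m-1", ["ana", "bo"]))  # True: everyone accepted
```

The audit log doubles as the "who saw or agreed" record the review asks for; a real implementation would persist it to write-once storage rather than a list.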
✅ Where Unwrap.ai Does Okay
- Simple, readable policy (better than many startups).
- States no sensitive information is collected by default.
- Offers clear California/CCPA compliance statements.
- Retention is capped at 12 months post-account termination.
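The 12-month retention cap is the one concrete, checkable commitment in the policy. A minimal customer-side sketch for flagging when deletion is due (the helper name and the 365-day approximation of "12 months" are assumptions):

```python
from datetime import date, timedelta

# Approximating the policy's 12-month post-termination cap as 365 days.
RETENTION_DAYS = 365

def purge_due(terminated_on: date, today: date) -> bool:
    """True once the retention window after account termination has elapsed."""
    return today >= terminated_on + timedelta(days=RETENTION_DAYS)

print(purge_due(date(2024, 1, 15), date(2025, 1, 13)))  # False: window still open
print(purge_due(date(2024, 1, 15), date(2025, 1, 14)))  # True: 365 days elapsed
```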
Disclaimer: This review is based solely on Unwrap.ai's published privacy policy and publicly available information. For formal vetting, always request security documentation, compliance reports, and third-party audit results.