The Privacy Risks of AI Writing Tools Explained
Most AI writing tools send your text to remote servers the moment you type it. Browser extensions like Grammarly transmit every keystroke for cloud processing, and the text is often stored to improve the service. For most personal writing this is an acceptable trade-off. For professionals handling confidential documents, legal files, or patient data, it is a genuine compliance and liability risk.
What data do AI writing tools collect?
The data collection varies significantly by tool, but the default architecture for most cloud-based writing assistants follows the same pattern: your text leaves your device, travels to a remote server, gets processed, and the result is returned. Along the way, the server logs the content.
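That round trip is easiest to see as the request itself. The sketch below builds the kind of payload a cloud checker typically sends; the endpoint URL and field names are placeholders, not any real vendor's API.

```python
import json

def build_cloud_request(text: str) -> dict:
    """Illustrative: the outgoing request a cloud grammar checker might build.
    'api.example-checker.com' and the field names are hypothetical."""
    return {
        "url": "https://api.example-checker.com/v1/check",
        "method": "POST",
        # The complete text you typed is the payload -- this is what
        # leaves your device on every check.
        "body": json.dumps({"text": text, "language": "en-US"}),
    }

req = build_cloud_request("Confidential draft: Q3 acquisition terms")
print(req["body"])  # your text, verbatim, in the outgoing payload
```

Whatever the vendor's exact schema looks like, the essential property is the same: the full text is the request body, so the server necessarily sees everything you type.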
Grammarly's privacy policy states that it stores text "to provide and improve our services." This means your writing is not just processed in real time but retained and potentially used for model training on an anonymised basis. Grammarly reports 30 million daily active users, meaning 30 million people's text passes through its servers every day.
The risk extends beyond intentional data use. In 2018, a security researcher discovered a bug in the Grammarly Chrome extension that allowed any website to read a user's Grammarly authentication token, giving that site access to every document the user had written or corrected through Grammarly. The vulnerability was patched within days, but it illustrated how a browser extension sitting in the network path between your keyboard and every text field becomes an attractive target.
LanguageTool processes text in the cloud by default. Its premium tier offers a self-hosted option, which moves processing on-premise, but most users run the cloud version without realising their text is leaving their machine.
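For readers running the self-hosted server, a check is a plain HTTP request to your own machine, so the text never leaves it. The sketch below targets LanguageTool's documented `/v2/check` endpoint; the port is an assumption (the bundled HTTP server commonly defaults to 8081 — adjust to your setup), and the request is constructed but not sent, to keep the example offline.

```python
from urllib.parse import urlencode
from urllib.request import Request

def languagetool_request(text: str,
                         server: str = "http://localhost:8081") -> Request:
    """Build a check request against a self-hosted LanguageTool server.
    Because the host is localhost, the text stays on your machine."""
    body = urlencode({"text": text, "language": "en-US"}).encode()
    return Request(f"{server}/v2/check", data=body, method="POST")

req = languagetool_request("This are a test.")
```

Passing `req` to `urllib.request.urlopen` would return JSON whose `matches` list describes each detected error; against the public `api.languagetool.org` host, the identical request would instead ship your text to a remote server, which is exactly the distinction this section is about.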
Which popular tools send your text to the cloud?
Understanding the data flow for each major tool helps you make an informed choice. The critical distinction is whether text is processed locally on your device or routed through a remote server.
| Tool | Data processing | Stored? | Account required? |
|---|---|---|---|
| Grammarly (browser ext) | Cloud | Yes - for service improvement | Yes |
| LanguageTool (cloud) | Cloud | Temporarily | Optional |
| Apple Intelligence (macOS 15) | On-device or Private Cloud Compute | Claims not retained in PCC | Apple ID |
| macOS autocorrect (NSSpellChecker) | On-device | No | No |
| Charm | On-device | No | No |
Apple Intelligence takes a hybrid approach. Basic writing suggestions run locally on Apple Silicon. More complex requests are routed to Apple's Private Cloud Compute infrastructure. Apple has published detailed claims that PCC nodes do not retain user data and are independently auditable, but this still involves data leaving the device for some requests. Apple Intelligence also requires macOS 15 Sequoia and an Apple Silicon Mac, excluding all Intel Mac users and anyone on macOS 14.
What are the real risks of cloud-based writing tools?
The risk varies substantially by how you use your computer. According to the IBM 2023 Cost of a Data Breach Report, 67% of enterprise security breaches involve third-party data sharing. Writing tools that route text through external servers represent exactly this kind of third-party exposure.
The risk profile breaks down roughly like this:
Personal use (lower risk). If you primarily type emails to friends, social media posts, and personal notes, cloud writing tools present low practical risk. A data breach at a writing tool company might expose embarrassing personal communications, but the financial and legal consequences are limited.
Professional use (higher risk). Knowledge workers often type NDA-covered content, client proposals, internal strategy documents, and confidential business communications. Many enterprise data security policies explicitly prohibit sending this type of content through unapproved third-party services. A cloud writing tool installed as a browser extension automatically processes all browser-based text input, including company intranets, HR systems, and CRM tools, potentially violating these policies without the user realising.
Healthcare and legal (highest risk). Attorneys typing client case notes into a system where a cloud writing tool is active may be exposing attorney-client privileged communications to third-party servers. Healthcare workers typing patient information face HIPAA exposure. Consumer versions of cloud writing tools are not designed for these compliance requirements.
How do on-device writing tools protect privacy?
On-device writing tools process text entirely within your computer's local hardware. There is no network request, no remote server, and nothing to intercept. Even if the tool's developer were compromised in a data breach, they would have no stored copy of your writing to leak because it never left your device.
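The absence of a network path is the whole mechanism, and even a toy example makes it concrete. The dictionary-based corrector below is a deliberately simplified stand-in for on-device tools (real products use language models, not edit distance): every step is plain local computation, with no network call anywhere in the code path.

```python
import difflib

# Toy on-device corrector: a local word list plus fuzzy matching.
# Nothing here opens a socket or touches the network.
DICTIONARY = {"privacy", "writing", "grammar", "document", "server"}

def correct(word: str) -> str:
    """Return the closest dictionary word, or the input unchanged."""
    if word in DICTIONARY:
        return word
    matches = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=0.7)
    return matches[0] if matches else word

print(correct("grammer"))  # -> grammar
```

A breach at the hypothetical vendor of this tool could leak the word list, but never your writing: the text being corrected exists only in your process's memory.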
Charm takes this approach by design. Its spelling correction (Spells), grammar correction (Polish), and word prediction (Oracle) all run on-device using a compact language model. No account is required, which means there is no user profile to associate your writing with, and no server-side logs of corrections made. The app has no network connectivity requirements for its core features.
For users who want enhanced grammar correction beyond the on-device model, Charm supports an optional OpenAI API key. When this is enabled, grammar correction queries are sent to OpenAI's API using your own key. This is transparent and opt-in rather than invisible and automatic. Users who require full on-device operation can use Charm without enabling this option at all.
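To make the opt-in boundary concrete, here is a sketch of what such a call looks like. The endpoint and header shape are OpenAI's public Chat Completions API; the model name and prompt are illustrative placeholders, not Charm's actual implementation. The request is built but not sent.

```python
import json
from urllib.request import Request

def grammar_request(text: str, api_key: str) -> Request:
    """Illustrative opt-in cloud call using the user's own OpenAI key.
    Model name and prompt are hypothetical, not Charm's implementation."""
    body = json.dumps({
        "model": "gpt-4o-mini",  # placeholder model
        "messages": [
            {"role": "system",
             "content": "Correct the grammar. Return only the corrected text."},
            {"role": "user", "content": text},
        ],
    }).encode()
    return Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # your key, your billing
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

The point of the opt-in design is visibility: when the option is off, no request like this is ever constructed, and when it is on, you know exactly which endpoint receives your text and whose credentials authorise it.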
Frequently asked questions
Is Grammarly safe to use?
For general personal use, Grammarly carries low risk. It does, however, send all text you type to its servers and store data to improve its service. For anyone writing NDAs, legal documents, patient notes, or confidential business communications, this represents a meaningful privacy exposure that many enterprise security policies prohibit.
What data does autocorrect collect on Mac?
macOS built-in autocorrect (NSSpellChecker) is dictionary-based and processes everything locally on your device. It does not send text to Apple. Apple Intelligence on macOS 15 uses Private Cloud Compute for complex requests, where Apple claims data is not retained, but basic corrections remain on-device.
Can my employer see what I type with writing tools?
If your employer provides Grammarly Business or a similar enterprise cloud tool, IT administrators can access usage reports and potentially text logs depending on the plan. On-device tools like Charm have nothing to share because no data is ever transmitted. Always check your employer's acceptable use policy for installed software.
What is on-device AI?
On-device AI runs the entire language model on your local hardware. No text leaves your computer. Processing happens in milliseconds without a network request. Charm uses on-device AI for all three of its core features, meaning your writing stays entirely private by design, with no account needed.
Are cloud writing tools HIPAA compliant?
Most consumer cloud writing tools, including the free tier of Grammarly, are not HIPAA compliant. Grammarly does offer a Business plan with a BAA for enterprise customers, but the default product is not suitable for Protected Health Information. On-device tools like Charm avoid this issue entirely because no data is ever transmitted.
Write privately. No cloud, no account, no exposure.
Charm corrects spelling and grammar, and predicts words, entirely on your Mac. $9.99, yours forever.