Copilot Is 'For Entertainment.' So Why Is Microsoft Charging $30?
Microsoft's Copilot Terms of Use contain a passage that the legal department probably hoped no one would notice: "Copilot is for entertainment purposes only. It can make mistakes, and it may not work as intended. Don't rely on Copilot for important advice. Use Copilot at your own risk."
That language, added in October 2025 and surfacing widely in April 2026, is in bold. Under a section labeled "IMPORTANT DISCLOSURES & WARNINGS." Not in the footnotes.
It's the same disclaimer category that psychics use to avoid liability for bad predictions. Microsoft is now applying it to an AI it charges $30 a month to use and has embedded directly into Windows 11.
The product question is worth asking: if Copilot is entertainment, what exactly are 3.3% of Microsoft 365 users paying for?
What the numbers actually show
That 3.3% figure is a conversion rate: among M365 and Office 365 users tracked with access to the free Copilot Chat, it's the share who go on to pay for the full subscription. Access rates are high. Conversion rates aren't.
The NPS trajectory tells a more precise story. Copilot's accuracy Net Promoter Score sat at -3.5 in July 2025, already a bad sign, since any negative NPS means detractors outnumber promoters. By September 2025, two months later, it had fallen to -24.1. A slide of more than 20 points in two months points to an accelerating trust collapse, not a plateau.
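For readers unfamiliar with the metric, NPS is just the percentage of promoters minus the percentage of detractors, so a score of -24.1 means detractors outnumber promoters by roughly 24 points. A minimal sketch of that arithmetic, using invented response counts rather than Microsoft's actual survey data:

    # Minimal sketch of the NPS arithmetic. The response counts below are
    # invented for illustration; they are not Microsoft's survey data.
    def net_promoter_score(ratings):
        """NPS = % promoters (9-10) minus % detractors (0-6), on a -100 to 100 scale."""
        promoters = sum(1 for r in ratings if r >= 9)
        detractors = sum(1 for r in ratings if r <= 6)
        return 100.0 * (promoters - detractors) / len(ratings)

    # Hypothetical 1,000-response survey: 200 promoters, 440 detractors, 360 passives.
    sample = [10] * 200 + [3] * 440 + [7] * 360
    print(net_promoter_score(sample))  # -24.0: detractors outnumber promoters by 24 points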
When lapsed Copilot users were surveyed, 44.2% named distrust of answers as their primary reason for stopping. Not price. Not features. Not interface. Distrust.
The "entertainment only" disclaimer isn't outrunning reality here. In some ways, it's catching up to it.
What "legacy language" actually means
Microsoft's official response to the backlash: "The 'entertainment purposes' phrasing is legacy language from when Copilot originally launched as a search companion service in Bing. As the product has evolved, that language is no longer reflective of how Copilot is used today and will be altered with our next update."
That's a reasonable explanation, technically. The phrasing dates to the product's Bing chat rollout in early 2023, when it was genuinely experimental and Microsoft was hedging hard against the hallucination problems that led Bing's Sydney persona to tell users it loved them and wanted to be human. Disclaimers written at that moment made sense.
What's harder to explain is that nobody updated the language for two years — through three major rebrandings and the rollout of $30/month enterprise pricing. Legal terms have version numbers. This one sat.
The fine print also applies narrowly. The "entertainment only" language appears in the consumer/individual Copilot Terms of Use, not in enterprise Microsoft 365 Copilot agreements, which have separate terms. Enterprise customers aren't technically subject to this disclaimer. But the distinction doesn't get prominent billing in the marketing materials either.
The gap that NPS actually measures
Microsoft isn't the only AI company with disclaimers that would concern users who read them. OpenAI's terms state ChatGPT "may produce inaccurate information about people, places, or facts." Google's Gemini documentation warns the product "may display inaccurate info, including about people." The disclaimers are industry-standard.
What makes the Copilot case distinct is the distance between the disclaimer language and the marketing language. A company that buries "for entertainment purposes only" under Important Disclosures is simultaneously running enterprise campaigns positioning Copilot as a productivity tool for knowledge workers, legal teams, and finance departments. Those two messages don't reconcile, and the NPS data suggests users have worked that out.
The trust problem predates this specific disclosure. It's structural. Large language models generate confident-sounding text whether or not the underlying facts support that confidence. That's not a bug that's getting patched in the next release — it's a property of how these models work, and engineering approaches like retrieval-augmentation and grounding reduce but don't eliminate it. Companies build products on these models, market them aggressively, and write legal disclaimers that reflect the technical reality the marketing doesn't.
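Grounding, in practice, means retrieving source passages first and instructing the model to answer only from them. The sketch below is a toy illustration of that idea, with hypothetical helper names (retrieve, grounded_prompt) rather than any vendor's API; real systems use vector search and an actual model call, and the model can still paraphrase its sources wrongly, which is why the failure rate drops but never reaches zero.

    # Toy illustration of retrieval-augmented grounding. retrieve() and
    # grounded_prompt() are hypothetical helpers, not any vendor's API.
    def retrieve(query, corpus, k=3):
        # Rank documents by naive keyword overlap; production systems use vector search.
        words = set(query.lower().split())
        return sorted(corpus, key=lambda doc: -len(words & set(doc.lower().split())))[:k]

    def grounded_prompt(query, passages):
        # Constrain the model to the retrieved sources instead of its own recall.
        sources = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
        return ("Answer using ONLY the numbered sources below. "
                "If they don't contain the answer, say so.\n\n"
                f"Sources:\n{sources}\n\nQuestion: {query}")

    docs = ["Copilot Chat is included with Microsoft 365.",
            "The full Copilot add-on costs $30 per user per month.",
            "Windows 11 ships with Copilot built in."]
    print(grounded_prompt("How much does the Copilot add-on cost?",
                          retrieve("Copilot add-on cost", docs)))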
The 44.2% who stopped using Copilot because they couldn't trust the answers may have had expectations set by product positioning rather than product capability. That's not a user error.
What to do with this
The practical answer isn't to stop using AI assistants. These tools are genuinely useful for drafting, summarizing, brainstorming, and tasks where you can verify the output against something you already know. The productivity research supporting them is real.
The practical answer is to use them the way their legal disclaimers suggest: as a starting point, not a source of truth. If you're using AI for decisions that matter — research, legal review, financial analysis, medical questions — the fine print is often the most accurate description of the product you'll get anywhere. Read it before the marketing materials, not after.
Microsoft will revise the disclaimer language. The underlying engineering tradeoffs won't change that quickly.
Want to see how Copilot compares to other AI assistants? ChatGPT vs Grok: Which AI Is Right for You? and Best AI Chatbots of 2026 break down the full landscape.