Privacy is moving from niche concern to growth driver

Duck.ai, the chatbot built by DuckDuckGo, appears to be benefiting from a shift in how users think about AI services. ZDNET reports that web traffic to Duck.ai reached 11.1 million visits in February, up more than 300% from January, according to Similarweb. The figure remains small next to the biggest chatbots, but the growth rate is what stands out.

The story suggests that privacy is not merely a branding detail anymore. It may be becoming a meaningful product differentiator in consumer AI, especially as more users start asking what happens to the prompts, metadata, and personal information they feed into chatbot systems.

What Duck.ai is offering differently

ZDNET describes Duck.ai as a privacy-first chatbot that extends DuckDuckGo’s familiar privacy positioning into generative AI. Rather than relying on a proprietary large language model, the service calls models from providers including Anthropic, OpenAI, and Meta on the user’s behalf. The point of that arrangement is to shield the user’s IP address and other personal information from direct exposure to those providers.

The report also cites Duck.ai’s privacy policy, which says the company has agreements with model providers limiting how they can use data from anonymous requests. According to that policy, prompts and outputs are not used to develop or improve the providers’ models, and information the providers receive is deleted once it is no longer needed to generate outputs, within 30 days at most, except in limited safety and legal cases.

That combination gives Duck.ai a distinctive position in the market. It is not competing on model originality. It is competing on how the model access layer is mediated and what that means for user privacy.

The traffic numbers are small but revealing

ZDNET places Duck.ai’s 11.1 million February visits alongside much larger estimated totals for major rivals, including 5.4 billion for ChatGPT, 2.1 billion for Gemini, and 290.3 million for Claude. By scale, Duck.ai remains a minor player. By momentum, however, it is suddenly worth paying attention to.

The jump matters because it suggests that a subset of users is actively looking for an AI experience with stronger privacy assurances. That could reflect broader anxieties about surveillance, data retention, corporate training practices, or the consequences of feeding sensitive material into chat systems that are optimized for learning and monetization.

It may also reflect growing public literacy. Early chatbot adoption was often driven by novelty and capability. More mature usage brings harder questions about confidentiality, profiling, and what counts as responsible product design.

Why privacy has become more salient now

Concerns around chatbot privacy are not new, and ZDNET notes as much. What may be changing is the scale of exposure. As AI tools move into everyday browsing, office work, coding, research, and personal planning, users are sharing more intimate and commercially sensitive material with them. That raises the cost of vague privacy practices.

In that environment, Duck.ai’s pitch is straightforward: users can access frontier models without directly handing over as much identifiable information to the underlying providers. Whether that guarantee is sufficient for every use case is another matter, but it is simple enough to resonate.

ZDNET also suggests that new features may be helping drive traffic. That means privacy alone may not explain the surge. Still, privacy appears to be the core narrative making the service stand out.

A business signal for the AI market

The traffic growth also sends a signal to the broader AI ecosystem. Consumer demand may no longer be shaped only by model power, speed, or multimodal features. Trust architecture can matter too. Companies that assume users will tolerate broad data collection in exchange for convenience may face more pushback as alternatives become easier to try.

This is particularly relevant for platform intermediaries. DuckDuckGo is effectively wrapping multiple frontier models in a different governance and privacy layer. That suggests there is room in the AI market for companies that do not necessarily win by training the best model themselves, but by designing a safer or more controlled route to existing models.

If that pattern expands, the industry may see more competition around policy, data handling, and user control rather than just raw model benchmarks.

The limits of the current moment

At the same time, the numbers should be kept in perspective. Even after a 300%-plus rise, Duck.ai remains tiny next to the dominant players. A surge from a small base can indicate momentum without guaranteeing long-term market impact. It also remains possible that some of the growth reflects temporary attention driven by news cycles or feature releases rather than a durable migration in user behavior.

Still, temporary or not, the spike is a useful indicator. It shows that privacy-centered positioning can attract attention in a field that has often treated data extraction as an acceptable cost of offering advanced AI.

What comes next

The important question is whether privacy becomes a standard expectation or stays a niche selling point. If users continue to reward services that minimize data exposure, larger AI providers may face pressure to offer clearer controls, stronger data separation, and more transparent retention limits. If not, Duck.ai’s rise may remain an interesting but limited side story.

For now, the lesson is narrower and more concrete. Consumer AI competition is evolving. Users are not only asking which chatbot is smartest. They are increasingly asking who sees their prompts, how those prompts are used, and whether there is a safer way to access powerful models. Duck.ai’s recent traffic surge suggests that those questions are starting to drive behavior, not just headlines.

This article is based on reporting by ZDNET.