How Venice handles your privacy

TL;DR:

  • All major AI companies collect and store your conversations forever

  • Venice does not collect or store your conversations

Venice is rapidly growing and receiving earnest questions about how our privacy works. Privacy is not just a tagline for us – it’s one of our two foundational principles. Venice is private, and Venice is uncensored.

Here's a detailed look at how we approach privacy.

The state of privacy in the current AI landscape

First, some context: in general, all AI companies store everything they can get their hands on – their entire business model depends on it.

Every prompt, every response, your personal info - everything is saved, analyzed, and can be shared with ad networks, partners, rogue employees, hackers, and governments. Even if an AI company wants to keep this data safe (and we believe they do), data breaches happen all the time and often go unreported. And a single letter from a government agency is enough for the data to be handed over.

The only way to offer actual privacy is to never have or store the user’s data in the first place. Venice’s ethos is not to spend resources protecting your data better than anyone else; rather, we simply don’t have it.

This would be absurd for most companies. But our DNA comes from the crypto world: zero knowledge is the best knowledge.

It’s also worth mentioning that there are many current limitations with AI infra that prevent full privacy at scale. Unless you fully run your own models locally (which you can absolutely do), there will be some privacy tradeoffs and inherent required trust when using a hosted service instead.

Our objective is to be transparent about these tradeoffs and describe how privacy works within Venice so you can make your own informed decisions.

How Venice handles your privacy

Our privacy philosophy is “You don’t have to protect what you do not have”.

All your content - prompts, responses, generated images, document uploads - is never saved on any Venice infrastructure; instead, it stays encrypted in your local browser.

Your conversation history also lives in your local browser - that's why chats on one device won't show up on another, even with the same account. Clear your browser data or delete your chat history, and those conversations are gone forever. Note: Venice provides download features for you to save your conversation history.
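
To make this concrete, here is a minimal sketch of local-only chat history, assuming a simple localStorage-backed store. Venice's actual client may use a different storage layer (such as IndexedDB) and encrypt entries before writing them, so treat the names and structure here as illustrative only.

```typescript
// Minimal sketch of local-only chat history. Nothing here talks to a server:
// the history lives and dies with this browser profile.

interface ChatMessage {
  role: "user" | "assistant";
  content: string;
  timestamp: number;
}

const HISTORY_KEY = "chat-history"; // hypothetical storage key

function saveHistory(messages: ChatMessage[]): void {
  // Written only to this browser; no network request is made.
  localStorage.setItem(HISTORY_KEY, JSON.stringify(messages));
}

function loadHistory(): ChatMessage[] {
  const raw = localStorage.getItem(HISTORY_KEY);
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

function clearHistory(): void {
  // Equivalent to clearing browser data: the conversations are gone for good.
  localStorage.removeItem(HISTORY_KEY);
}
```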

But obviously your browser is not doing the hard work of executing the inference. Powerful GPUs are needed, so what happens when your prompts are sent to the GPUs powering Venice?

Your prompts go from your browser, through a Venice-controlled proxy service, which distributes the requests to decentralized GPUs. We acquire these GPUs from a variety of partners. The open-source models you can access through Venice are hosted on these GPUs, running software designed and operated by Venice. This software sees only the raw prompt context - no user data, no IP address, no other identifying info. But it does see the plain text of the prompt, because it has to in order to generate the response.
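
As a rough illustration of that proxy step, the sketch below forwards an inference request carrying only the prompt context. The endpoint, field names, and request shape are assumptions made for illustration, not Venice's actual API.

```typescript
// Illustrative proxy handler: only the prompt context is forwarded to the
// GPU node. Cookies, auth headers, IP addresses, and user IDs are deliberately
// dropped before the request leaves the proxy.

interface InferenceRequest {
  prompt: string; // plain-text prompt context (required to run inference)
  model: string;  // which open-source model to run
}

async function forwardToGpu(userRequest: Request): Promise<Response> {
  const { prompt, model } = (await userRequest.json()) as InferenceRequest;

  // Nothing identifying is copied over: no user ID, no IP, no headers.
  const anonymized: InferenceRequest = { prompt, model };

  return fetch("https://gpu-node.example/infer", { // illustrative GPU endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(anonymized),
  });
}
```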

Note: in the future, we hope to integrate frontier encryption techniques (such as homomorphic encryption) that allow AI inference to be performed on encrypted text. But today, running LLM inference directly on encrypted data is still an active area of research and is not feasible in any form a user would accept: it is far too slow and far too expensive.

Until then, the GPU must have the prompt (and only the prompt) in plain text.

Each request is thus isolated and anonymized, and the response streams back to your browser through our proxy. It's stored in your browser in plain text, where you can read it.

The communication over Venice's infrastructure is secured using SSL/TLS encryption throughout this entire journey. That's standard practice, yes, but combining it with local-only storage and decentralized processing creates meaningful privacy protection that none of the mainstream AI companies offer.

An observer might point out here that someone with physical access to the GPUs could intercept the plaintext prompts.

This is true.

But if someone physically breached the GPUs, they could access only the plain-text prompts, without any identifying information. There's no way to know who sent them, and they'd appear in random order, mixed in with prompts from thousands of other Venice users.

Importantly, once a prompt is processed, it is purged from the GPU (and the next is loaded, processed, returned, etc). The prompts and responses do not persist on the GPU; they are transient, persisting only as long as is required to execute your request.
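
Sketched in code, and assuming a handler that keeps the prompt only in local scope with no logging or disk writes (an illustrative simplification, not Venice's actual worker code), the lifecycle looks roughly like this:

```typescript
// Illustrative GPU-side handler: the prompt exists only for the lifetime of
// this call. Nothing is logged, cached, or written to disk, so once the
// response is returned no copy of the prompt or response remains on the node.

declare function runInference(prompt: string): Promise<string>; // stands in for the actual model call

async function handlePrompt(prompt: string): Promise<string> {
  const response = await runInference(prompt);
  return response; // prompt and response go out of scope here and are garbage-collected
}
```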

This is how we keep your AI conversations private. The same structure applies for both text and image generation.

Short of running AI models locally on your own hardware, Venice offers the strongest privacy protections in the industry.

Data Venice does track

Now let's be transparent about what we do track. While your conversations are private, we do track some basic telemetry data to run Venice – and you should know what that is.

For all account types, Venice logs event data on how users use the product, such as signing in, creating new chats, organizing and filtering chats, etc. This looks like, “User XYZ signed in. User ABC deleted a convo, etc.” Venice doesn’t know what User XYZ signed in to do, nor what convo User ABC deleted.
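
For illustration only, an event of this kind might have a shape like the one below. The field names are assumptions, not Venice's actual schema; the point is what's absent - no prompt text, no responses, no conversation content.

```typescript
// Illustrative shape of a telemetry event. There is no field that could
// carry prompt text, responses, or any conversation content.

interface TelemetryEvent {
  userId: string;    // e.g. "User XYZ"
  action: "sign_in" | "create_chat" | "delete_chat" | "filter_chats";
  timestamp: number; // when the action happened
}
```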

Here’s what we track per account type:

  • No Account Users: Without an account, Venice collects basic metadata on a user, such as timezone, browser type, and IP address. Some of these values are used to prevent abuse of the Venice platform by malicious actors, and some of these values are used to optimize the user experience. You’re welcome to use a VPN to abstract some of these details away.

  • Free Account Users: When a Free Account is created, Venice also collects the user's email address, in addition to the above metadata, for the purposes of account verification. If the user instead creates a free account with a web3 crypto wallet, Venice doesn't require an email address (unless the user chooses to provide one) and instead records the user's public key (wallet address) for future verification.

  • Pro Account Users: If a user pays for a Pro account with a credit card, only our payment services provider, Stripe, receives this information, which is not shared with Venice. If a user pays with crypto, Venice records the user's public key that paid.

For Free and Pro Account users we also track points and referrals.

For email authentication we use Clerk.io and for web3 logins, we utilize Wallet Connect - both are industry standard services. For email marketing we utilize Customer.io. We can and should make a version of the app that doesn’t even track these things, but given the other competing priorities on our roadmap, we haven’t done this yet.

When you share a chat, the conversation is encrypted in your browser, then the encrypted data is stored on our servers for 14 days.

The decryption keys exist only in the URL you share - we never have access to them, so we can't see the content of your shared conversation, but anyone you share the URL with can decrypt it (we thought this was quite clever!).
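
A minimal sketch of how a scheme like this can work, assuming an AES-GCM key generated in the browser and carried in the URL fragment (the part after '#', which browsers don't send to servers). The upload function and exact encoding are illustrative, not Venice's actual implementation.

```typescript
// Hypothetical share-link encryption using the Web Crypto API.
// The server only ever receives ciphertext; the key lives in the URL fragment.

declare function uploadEncryptedChat(ciphertext: ArrayBuffer, iv: Uint8Array): Promise<string>; // illustrative upload API returning a share ID

async function createShareLink(conversation: string): Promise<string> {
  // Generate a fresh symmetric key in the browser.
  const key = await crypto.subtle.generateKey(
    { name: "AES-GCM", length: 256 },
    true,
    ["encrypt", "decrypt"]
  );

  // Encrypt the conversation locally.
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(conversation)
  );

  // Only the encrypted blob is stored server-side (and expires after 14 days).
  const shareId = await uploadEncryptedChat(ciphertext, iv);

  // Export the key and put it in the URL fragment, which is never sent to the server.
  const rawKey = new Uint8Array(await crypto.subtle.exportKey("raw", key));
  const keyB64 = btoa(String.fromCharCode(...rawKey));
  return `https://venice.ai/share/${shareId}#${keyB64}`;
}
```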

If you don’t want Venice to track your IP, we suggest using a VPN. And if you don’t want Venice to have your email, we suggest using a disposable email service or only using a web3 crypto wallet to sign-in and/or pay.

We're also looking into third-party verification so you won't have to take our word for any of this. Proving our infrastructure actually does what we say is a key priority for us in 2025.

Venice is the most private AI platform available

Put simply, all your conversations with Venice are substantially private, and where plaintext is required on the GPU for the moment of processing, identifying information is never present or connected. No data persists other than in the user’s browser.

Our commitment to privacy is resolute, and today Venice is the most private AI service available, unless you are running models yourself on your own hardware.

We value the scrutiny, and appreciate the adversarial environment that improves such systems.

ad intellectum infinitum
