Venice is different in a few ways:
Venice is permissionless. Anyone, from anywhere, can use Venice to access open-source machine intelligence.
Venice doesn’t spy on you. The platform doesn't record any of your information (other than your email and IP address), and it doesn't see your conversations or the AI's responses. Venice doesn’t (and can’t) share this information with other parties (corporations or governments) because it doesn’t have it. Venice's entire infrastructure and ethos are aligned around respecting individual privacy.
Venice doesn’t censor the AI’s responses. The platform remains neutral: it doesn't filter content, apart from the “Safe Venice” mode that limits adult content, which Pro accounts can turn off. Centralized AI companies add substantial (and unspecified) amounts of censorship and bias to their answers. Venice doesn’t censor or bias answers at the request of politicians or governments; our infrastructure is set up to be permissionless and neutral. Note: each model has been trained by its publisher with its own rules and boundaries. Venice provides access to multiple models, and gives users the ability to choose the ones they’re most comfortable with.
All AI models on Venice are open-source and transparent. The platform shows you which models are being provided, and the weights/designs of those models can be found online. Venice provides transparency into its technology where centralized AI companies can’t and won’t.
You must use the Safari browser to add Venice to your Home Screen.
Open Venice.ai in Safari on your iOS device.
Tap the “Share” icon in Safari.
Select “Add to Home Screen” from the options.
Confirm the installation by tapping the “Add” button.
Open Chrome and go to Venice.ai.
Tap the settings menu (three dots), scroll down, and select “Install app”.
Tap “Add to Home Screen”.
Venice has three tiers of users, with different limits:
No Account: Limit: 25 text prompts and 10 image prompts per day.
Free Account: Limit: 100 text prompts and 20 image prompts per day.
Pro Account: Limit: Unlimited text prompts and 1,000 image prompts per day.
These limits are subject to change.
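As an illustration only (this is not Venice's actual code, and the names are hypothetical), the tiers above can be expressed as a simple lookup with a helper that checks whether another prompt is allowed today:

```typescript
// Hypothetical sketch of the daily limits listed above.
type Tier = "none" | "free" | "pro";
type PromptKind = "text" | "image";

const DAILY_LIMITS: Record<Tier, Record<PromptKind, number>> = {
  none: { text: 25, image: 10 },
  free: { text: 100, image: 20 },
  pro: { text: Infinity, image: 1000 }, // "unlimited" text prompts
};

// Returns true if a user on `tier` who has already used `usedToday`
// prompts of this kind may submit one more.
function canPrompt(tier: Tier, kind: PromptKind, usedToday: number): boolean {
  return usedToday < DAILY_LIMITS[tier][kind];
}

console.log(canPrompt("none", "text", 24)); // true: this would be the 25th prompt
console.log(canPrompt("none", "text", 25)); // false: daily limit reached
```

Since the limits are subject to change, any real implementation would read these numbers from configuration rather than hard-coding them.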
Venice enables switching between LLM models, and the languages supported by each model will vary.
Venice does not collect identifying information about its users other than email and IP address. Instead, it uses your local browser storage to hold settings and prompt information, and this data is never shared with Venice’s servers.
Venice uses Clerk.io to process authentication and Customer.io to communicate with customers. For registered users, these platforms track your login credentials, including your email address, using cookies.
Clerk - Clerk cookies are required to login to Venice. Details regarding Clerk cookies can be found here.
Customer.io - We use this to email users and to track certain events like login or points generation. Information on Customer.io cookies can be found here.
Viral Loops - our user waitlist and referrals application. If you signed up for Venice via the waitlist, cookies may be used to track your email so you can see your position on the waitlist. However, Viral Loops also prefers to use local storage, which you can learn more about here.
Venice uses local browser storage, so your content may be wiped at any time for reasons outside of our control. Please save content outside of Venice that you wish to keep permanently.
For more information on how Venice handles data, please see our Privacy Blog, Privacy Policy and Terms of Use.
Venice offers access to multiple open-source AI models for text chat, code generation, and image generation. Each model is unique and may respond to similar prompts with a different response style (for example, more detailed vs. concise).
Numerous factors affect a model’s “personality” and contribute to its distinct characteristics: the process and data it was trained on, the design of its neural network, the model parameters that enable it to make predictions and decisions based on learned knowledge, and fine-tuning - the process of optimizing an existing pre-trained AI model to perform better overall or in a specialized way.
There is no “right” or “best” model to use. We encourage you to experiment with all of the models and explore their nuances.
Click on the links below for detailed information about the models currently available on Venice.
Text Chat & Code Generation
Hermes-2-Pro-Llama-3-8B: published by Nous Research, this is an upgraded, retrained version of Nous Hermes 2, trained on an updated and cleaned version of the OpenHermes 2.5 dataset, and it maintains excellent general task and conversation capabilities. This is the default text model in Venice.
Dogge Llama 3 70B: a fine-tuned version of the Llama-3-70B base model.
Image Generation
Fluently XL v4: published by Project Fluently, this model is the final upgraded version of the previous Fluently V4 fine-tuned Stable Diffusion model in Venice. Project Fluently says this upgrade provides improved overall aesthetics, lighting and contrast, and more realistic renderings of anatomy and nature. This is the default image model in Venice.
Playground v2.5: published by Models Lab, this model has been tuned to produce realistic photography, cartoon imagery, and anime.
Dreamshaper: published by Lykon, this model is a fine-tuned version of Stable Diffusion's 1.5 base model, trained to produce better character art and photorealism without sacrificing range (Lykon says it can still do art and anime pretty well), along with improved renderings of humans.
PixArt Sigma: published by PixArt, this model was also trained on Stable Diffusion's 1.5 model and is designed to deliver high image quality, greater adherence to user prompts, and images in 4K resolution (Pro users: note this when experimenting with Venice's high-definition feature).
There is a drop-down menu within the chat bar on the right side that allows toggling between chat and image models. Venice provides access to the leading open-source models, which will update and change over time.
You can switch between chat and image models within the same conversation. Changing the chat or image model (and toggling between chat and image models) will not reset your conversation. Your conversation will continue uninterrupted until you choose to start a new chat.
Previous conversations you've had are stored within your browser and are available in the drawer, which can be opened by clicking on the arrow located on the left of your screen.
Unlike leading generative AI apps, Venice does not see or save users’ text or image prompts (or the AI responses) on our servers. All conversation history is stored only locally on your device. For more information about our approach to privacy, check out this blog.
Chat history is stored locally in your browser, and Venice keeps no copy of this history. It is not possible to sync your history or share it between browsers. You can always download the images you create, or copy and paste your conversations into another program to save them outside of the Venice interface.
You can delete your chat history using the “Clear History” button in the Settings menu. This history is and was only ever saved in your browser. Venice never has access to it.
No. Your chat and image history is stored in your local browser.
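The local-only history model described above can be sketched in a few lines. This is an illustration, not Venice's actual implementation: the key name and functions are hypothetical, and a `Map` stands in for the browser's `window.localStorage` so the sketch runs anywhere.

```typescript
// Hypothetical sketch: keeping chat history purely on the client.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

// In a real browser this would be window.localStorage;
// a Map stands in here so the example is self-contained.
const storage = new Map<string, string>();

function saveHistory(messages: ChatMessage[]): void {
  // The history is serialized and written locally - nothing leaves the device.
  storage.set("chat-history", JSON.stringify(messages));
}

function loadHistory(): ChatMessage[] {
  const raw = storage.get("chat-history");
  return raw ? (JSON.parse(raw) as ChatMessage[]) : [];
}

function clearHistory(): void {
  // Equivalent to a "Clear History" action: once deleted, the data is
  // unrecoverable, because no server-side copy ever existed.
  storage.delete("chat-history");
}

saveHistory([{ role: "user", content: "Hello" }]);
console.log(loadHistory().length); // 1
clearHistory();
console.log(loadHistory().length); // 0
```

The design consequence is exactly what the FAQ states: because the server never holds the data, it cannot be shared, subpoenaed, or restored - and it can also be lost if the browser clears its storage.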
Large Language Models are not truth machines. They rely on the probabilistic nature of text generation, and you should not rely on the Output as a sole source of truth or factual information. The open-source large language models Venice accesses to respond to your query can produce incorrect answers and may also produce offensive or dangerous content. You are responsible for what you do with these tools.
Venice also may not necessarily return the same Output to everyone. The Venice platform offers you the ability to toggle between models. The responses you receive will depend on the language model you are using and several other factors, such as the specific wording of the query, the context provided, and the evolving nature of the large language model itself.
Venice's chat responses are generated by open-source LLM models, each published at a fixed point in time. The models generally don't have information about events after that date, so they may answer some questions differently depending on when they were published.
Venice will be connected to the internet in the near future, which will improve the accuracy of answers to time-sensitive questions.
If you believe Venice has returned an incorrect answer, we recommend toggling to a different model, asking your query again, changing the specific wording of your query, and then checking other sources. The more serious your question, the more you should verify with other sources.
The ability to disable Safe Mode within image generation is available to Pro Account users.
Safe Mode can be disabled in the Venice image user settings. Click on the image icon to the left of the prompt input field, then click on the settings gear. The Safe Mode setting is located near the bottom of the menu.
God Mode is access to amend system prompts in Venice, giving you the ability to instruct the AI specifically how you wish it to interact with you. Read the God Mode blog for helpful tips on how to refine your interactions with Venice in unique and beneficial ways using customized system prompts.
The system prompt tells the AI how you want it to behave. For example, you can instruct it to talk like a poet or academic, only in the Queen's English, or like a friendly helper. You can also instruct it on what not to do or say. There is no limit to the system prompt instructions, but being specific is helpful.
Customized system prompts are available to Pro Account users, and are accessible from the Venice chat settings menu. Click on the chat icon to the left of the prompt input field, then click on the settings gear. The field to add custom system prompts is located at the top of the menu.
Your conversation history lives locally in your device's browser. When you start a conversation on another device, Venice doesn't have the context from the conversation on the first device, which may affect the answer.
Venice can review PDF and TXT files, and analyze and summarize their content. Documents must first be uploaded.
1. Within the Chat function, click the paperclip located to the left of the chat input field.
2. Upload the PDF or TXT file.
3. Once uploaded, type the instructions for the model to undertake within the chat field.
Currently, Venice supports PDF and TXT documents of up to approximately 22,000 words. If you receive an error after uploading a document, it likely exceeds this threshold.
In the near future, Venice will support document uploads of up to 500,000 words.
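A client could check a document against the approximate 22,000-word limit before uploading. The sketch below is illustrative only - the constant and function names are hypothetical, not Venice's code:

```typescript
// Hypothetical pre-upload check against the approximate word limit above.
const MAX_WORDS = 22_000; // approximate current limit; 500,000 planned

function countWords(text: string): number {
  // Split on any run of whitespace and drop empty strings
  // produced by leading/trailing spaces.
  return text.split(/\s+/).filter((w) => w.length > 0).length;
}

function canUpload(text: string): boolean {
  return countWords(text) <= MAX_WORDS;
}

console.log(countWords("hello  world")); // 2
console.log(canUpload("a short document")); // true
```

Note that word counts are only an approximation of what a model actually consumes (models operate on tokens, not words), which is presumably why the limit itself is stated as approximate.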
Points are a digital form of reward that does not have monetary value. Registered users can earn Points for particular activities conducted on the Venice platform.
Points have no current utility but allow users to track their usage.
Points may not be exchanged or traded between users, or combined with or transferred to another Venice account.
Venice is part of the Morpheus decentralized AI network. If you have at least one Morpheus token (MOR), you can sign in with Metamask and will automatically be upgraded to a Pro account. You do not need to spend the MOR token - you receive Pro account access simply by having the token in your wallet (must be >1.00 balance).
Note: MOR is an Arbitrum token, so you will need to switch Metamask to the Arbitrum network instead of Ethereum.
To learn more about Morpheus, visit mor.org
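The eligibility rule itself is simple to express. The sketch below shows only the threshold logic (the on-chain `balanceOf` lookup via a web3 library is omitted); MOR, like most ERC-20 tokens, uses 18 decimals, so a raw balance is an integer in units of 10⁻¹⁸ MOR. The function name is hypothetical, and the "at least 1.00 MOR" threshold is assumed from the FAQ's wording:

```typescript
// Hypothetical sketch of the Pro-eligibility rule for MOR holders.
// ERC-20 balanceOf returns an integer in the token's smallest unit;
// MOR uses the standard 18 decimals.
const MOR_DECIMALS = 18n;
const ONE_MOR = 10n ** MOR_DECIMALS; // raw units per whole token

function qualifiesForPro(rawBalance: bigint): boolean {
  // Threshold assumed from the FAQ: holding at least 1.00 MOR.
  return rawBalance >= ONE_MOR;
}

console.log(qualifiesForPro(10n ** 18n)); // true: exactly 1.00 MOR
console.log(qualifiesForPro(5n * 10n ** 17n)); // false: only 0.50 MOR
```

Because the check reads the balance rather than transferring it, the token is never spent - which matches the FAQ's point that simply holding MOR is enough.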
Visit https://venice.ai/sign-in. Enter your email in the box and click continue. On the following screen, click “Forgot password?”. Then click “Reset Password” and follow the instructions.
Innovation is an iterative process and we value feedback from our users. Please email the team at support@venice.ai.
All data about prompts and AI responses is stored locally in your browser. The only data Venice may have is your email address and IP address. Please submit your request to delete this data via email to DataPrivacy@venice.ai.
For more information about our approach to privacy, check out this blog.