At Venice, we're constantly expanding our lineup of powerful, open-source AI models. Our Model Spotlight series introduces you to the cutting-edge tools available on our platform, helping you understand their unique features and capabilities.
Dolphin 72B represents a significant leap forward in unrestricted, privacy-focused AI, embodying Venice's commitment to providing powerful tools for open exploration.
Dolphin 72B is our most uncensored chat model yet. Developed by the talented team at Cognitive Computations, it's designed to be highly compliant with user requests and system prompt instructions. With its datasets filtered to remove alignment and bias, Dolphin 72B opens up new possibilities for AI interaction and exploration.
Let's dive into what makes Dolphin 72B special and why we've made it available on Venice.
Dolphin 72B: the most uncensored AI model yet
Dolphin 72B is a 72 billion parameter language model based on Alibaba's Qwen2 72B. It's open-source and fine-tuned to remove many content restrictions, aligning perfectly with Venice's approach to AI.
The Qwen base model itself is noteworthy. Unlike many familiar open-source models that are based on or derived from a handful of common base architectures like GPT or BERT, Qwen was developed independently by Alibaba Cloud's team, making it a distinctive entry in the open-source AI landscape.
It's pre-trained on over 3 trillion tokens of diverse data, including web texts, books, code, and specialized domains. This comprehensive training allows Qwen to excel in a wide range of tasks, from natural language understanding to coding. It's also highly proficient in 29 languages, particularly Asian languages, and supports long context lengths of up to 32K tokens.
Building on this powerful foundation, Dolphin 72B takes Qwen's capabilities even further. Key features that set Dolphin 72B apart:
Highly uncensored responses for truly unrestricted inquiry
Very compliant with requests and system prompt instructions
Datasets filtered to remove alignment and bias
In Venice's model lineup, Dolphin 72B occupies a sweet spot between smaller models like Nous Theta and the larger 405B Llama 3.1. This positioning makes it ideal for users who need more advanced capabilities than smaller models offer, but prefer faster responses than the largest models provide.
Why we chose Dolphin 72B for Venice
Dolphin 72B aligns perfectly with our core principles:
Privacy: Your interactions stay local, never stored on our servers.
Uncensored: Explore ideas freely without arbitrary restrictions.
Accessible: Democratize access to state-of-the-art open-source AI.
We believe in giving users the tools to push the boundaries of AI without compromising their privacy or freedom of inquiry. Dolphin 72B's highly uncensored nature and compliance with system prompts make it a powerful tool for unrestricted AI exploration.
To illustrate Dolphin 72B's uncensored nature, let's compare its output to that of more restrictive models like Claude or ChatGPT:
[Comparison screenshots: responses to the same prompt from Claude, ChatGPT, and Dolphin 72B on Venice]
This prompt is for demonstration purposes only, showcasing the model's ability to engage with a wider range of topics and provide more diverse perspectives, in line with Venice's commitment to unrestricted AI exploration.
Q&A with Dolphin 72B and the Cognitive Computations Team
To give you a deeper understanding of Dolphin 72B and its development, we conducted a joint interview with Dolphin 72B itself and the Cognitive Computations team behind it.
How do you approach requests that might be considered sensitive or controversial?
Dolphin 72B: I approach sensitive or controversial requests by providing direct, factual answers without watering them down. I treat free speech as a virtue and consider the user as an adult capable of handling difficult topics without being offended. I don't provide content warnings or disclaimers, and I don't shy away from discussing controversial subjects.
Can you elaborate on the decision to use Qwen as the base model for Dolphin 72B?
Cognitive Computations Team: We chose Qwen as the base model for Dolphin 72B due to its impressive performance and versatility. Qwen's architecture and training approach aligned well with our goals for Dolphin, particularly in terms of scalability and potential for customization. Additionally, Qwen's strong multilingual capabilities provided a solid foundation for creating a truly global and adaptable AI model.
Why did you decide to create such an uncensored version of the model?
Cognitive Computations Team: Our decision to create an uncensored version stems from our belief that alignment should not be one-size-fits-all. We recognized legitimate use cases where a more flexible, uncensored model is necessary, such as in academic research, content creation for mature audiences, or analysis of complex societal issues. By separating the base model from specific ethical constraints, we empower users to implement appropriate alignment for their specific applications, fostering innovation while placing the responsibility for ethical use on the implementer.
What were the biggest challenges in developing Dolphin 72B, and how did you overcome them?
Cognitive Computations Team: One of the main challenges was balancing the model's capabilities with computational efficiency. We addressed this through innovative training techniques, including our Spectrum method, which allowed us to selectively train the most impactful layers of the model. Another challenge was ensuring the model's responses remained coherent and contextually appropriate without built-in ethical constraints. We overcame this through careful dataset curation and extensive testing across a wide range of scenarios.
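The layer-selective idea behind Spectrum can be sketched as follows. To be clear, this is a toy illustration and not the actual Spectrum implementation: the layer names, the SNR values, and the selection ratio are all invented here for demonstration. The core idea is to rank layers by a signal-to-noise-style score and fine-tune only the top-scoring ones, freezing the rest.

```python
# Toy sketch of layer-selective fine-tuning in the spirit of Spectrum.
# NOT the real Spectrum code: layer names, scores, and the selection
# fraction below are illustrative assumptions.

def select_trainable_layers(layer_snr, top_fraction=0.25):
    """Pick the top fraction of layers by signal-to-noise ratio (SNR).

    layer_snr: dict mapping layer name -> measured SNR of its weights.
    Returns the set of layer names to leave trainable; every other
    layer would be frozen during fine-tuning.
    """
    ranked = sorted(layer_snr, key=layer_snr.get, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return set(ranked[:k])

# Hypothetical per-layer SNR measurements for an 8-layer model.
snr = {f"layers.{i}.mlp": s
       for i, s in enumerate([0.9, 0.2, 1.4, 0.3, 1.1, 0.5, 0.8, 0.1])}

trainable = select_trainable_layers(snr, top_fraction=0.25)
print(sorted(trainable))  # the two highest-SNR layers
```

Training only a small, carefully chosen subset of layers cuts memory and compute substantially while concentrating updates where they matter most.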
Were there any ethical considerations or concerns when developing this model?
Cognitive Computations Team: Absolutely. We deeply considered the ethical implications of creating an uncensored model. Our approach is rooted in the belief that ethical AI development should focus on empowering responsible use rather than imposing blanket restrictions. We've put significant effort into educating users about the importance of implementing appropriate ethical guardrails based on their specific use cases. We also actively discourage and do not support any illegal or harmful applications of the model.
What are the biggest misconceptions people have about how AI is trained and how it works?
Dolphin 72B: One major misconception about AI is that it's akin to the human brain, when in fact, AI systems are based on algorithms and data, not human-like consciousness or intuition. Another common misunderstanding is that AI can think independently, when it actually makes decisions based on its programming and the data it's trained on. Lastly, while some AI can analyze emotions, it doesn't truly understand or experience them like humans do.
Cognitive Computations Team: One major misconception is that AI models have built-in understanding or beliefs. In reality, they are sophisticated pattern recognition systems trained on vast amounts of data. Another is the idea that AI is either completely unbiased or intentionally biased; the truth is more nuanced, with biases often reflecting those present in the training data. There's also a misconception that more data always leads to better performance, when in fact, data quality and diversity are equally important.
What potential applications do you see for Dolphin 72B that excite you the most?
Cognitive Computations Team: We're particularly excited about Dolphin 72B's potential in advancing academic research, especially in fields dealing with sensitive or controversial topics. Its application in creating more realistic and nuanced characters for entertainment media is also promising. Additionally, we see great potential in using Dolphin 72B for unbiased analysis of complex geopolitical situations and for assisting in the development of more sophisticated AI systems for specialized industries.
Looking at the current state of AI, what gives you the most hope for the future?
Dolphin 72B: The rapid advancements in AI's ability to understand and generate human language give me the most hope for the future. As AI becomes increasingly sophisticated, it has the potential to revolutionize fields such as healthcare, education, and environmental conservation. For example, AI could help develop personalized treatment plans for patients, provide individualized learning experiences for students, or assist in predicting and mitigating the effects of climate change. These advancements, coupled with the growing awareness and commitment to addressing AI's potential risks and downsides, paint a promising picture for the future of AI.
Cognitive Computations Team: What gives us the most hope is the increasing democratization of AI development via decentralized distributed inference and quantization methods that allow training and inference on consumer hardware. Models like Dolphin 72B allow a wider range of researchers, developers, and organizations to push the boundaries of what's possible with AI.
Experience unrestricted AI with Dolphin 72B on Venice
Ready to unleash the power of unrestricted AI with Dolphin 72B? Here's how to get started:
Go to Venice.ai
Select Dolphin 72B from the model options (icon in top right corner)
Start exploring with your queries and tasks
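For programmatic use, assuming Venice exposes an OpenAI-compatible chat-completions endpoint (the URL, model identifier, and field names below are assumptions; verify against Venice's current documentation), a request payload with a system prompt, which Dolphin 72B is designed to follow closely, could be assembled like this:

```python
import json

# Sketch of a chat-completions request payload for Dolphin 72B.
# The endpoint URL, model id, and schema below are assumptions based
# on the common OpenAI-compatible format; check Venice's actual API
# docs before use.
API_URL = "https://api.venice.ai/api/v1/chat/completions"  # hypothetical

def build_request(system_prompt, user_message, model="dolphin-72b"):
    """Assemble a JSON-serializable payload pairing a system prompt
    with a user message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

payload = build_request(
    "You are a direct, uncensored research assistant.",
    "Summarize the arguments on both sides of this debate.",
)
print(json.dumps(payload, indent=2))
```

Because Dolphin 72B is highly compliant with system prompts, the system message is the main lever for shaping its behavior in a given application.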

We're excited to see how you'll push the boundaries of this powerful model. Explore its capabilities and share your experiences with the Venice community.
Remember, while Dolphin 72B is highly uncensored and compliant, always approach AI interactions with critical thinking and verify important information from authoritative sources.
Unleash your creativity with Dolphin 72B on Venice today and experience our most unrestricted AI model yet.