AI Video Generation is live on Venice: A Complete Guide

Video generation is now live on Venice for all users.

You can create videos on Venice using both text-to-video and image-to-video generation. This release brings state-of-the-art video generation models to our platform, including Sora 2 and Veo 3.1.

Try Video Generation on Venice

Access the best AI video models on Venice

Our new video feature supports two creation modes.

Text-to-video lets you describe a scene and generate it from scratch. Image-to-video takes your existing images and animates them based on your motion descriptions.

Venice provides access to both open-source and industry-leading proprietary AI video generation models, including OpenAI's recently launched Sora 2, Google's Veo 3.1, and Kling 2.5 Turbo, currently the highest-quality models available on the market.

Text-to-Video Models:

  • Wan 2.2 A14B – Most uncensored text-to-video model (Private)

  • Wan 2.5 Preview – Text-to-video based on WAN 2.5, with audio support (Private)

  • Kling 2.5 Turbo Pro – Full quality Kling video model (Anonymized)

  • Veo 3.1 Fast – Faster version of Google's Veo 3.1 (Anonymized)

  • Veo 3.1 Full Quality – Full quality Google Veo 3.1 (Anonymized)

  • Sora 2 – Extremely censored, faster OpenAI model (Anonymized)

  • Sora 2 Pro – Extremely censored, full quality OpenAI model (Anonymized)

Image-to-Video Models:

  • Wan 2.1 Pro – Most uncensored image-to-video model (Private)

  • Wan 2.5 Preview – Image-to-video based on WAN 2.5, with audio support (Private)

  • Ovi – Fast and uncensored model based on WAN (Private)

  • Kling 2.5 Turbo Pro – Full quality Kling video model (Anonymized)

  • Veo 3.1 Fast – Faster version of Google's image-to-video model (Anonymized)

  • Veo 3.1 Full Quality – Full quality Google image-to-video (Anonymized)

  • Sora 2 – Extremely censored, faster OpenAI model (Anonymized)

  • Sora 2 Pro – Extremely censored, full quality OpenAI model (Anonymized)

Each model brings different strengths to the table, from speed to quality to creative freedom. Certain models also support audio generation. Supported models will change as newer and better versions become available.

Privacy levels explained

Video generation on Venice operates with two distinct privacy levels. Understanding these differences helps you make informed choices about which models to use for your projects.

Private models run through Venice's privacy infrastructure. Your generations remain completely private—neither Venice nor the model providers can see what you create, and no copy is stored anywhere other than your own browser. These models offer true end-to-end privacy for your creative work.

Anonymized models include third-party services like Sora 2, Veo 3.1, and Kling 2.5 Turbo. When using these models, the companies can see your generations, but your requests are anonymized. Venice submits generations on your behalf without tying them to your personal information.

The privacy parameters are clearly disclosed in the interface for each model. For projects requiring complete privacy, use models marked as "Private." For access to industry-leading quality where anonymized submissions are acceptable, the "Anonymized" models provide the best results currently available.

How to use Venice’s AI video generator

Text-to-Video Generation

Creating videos from text descriptions follows a straightforward process.

Step 1: Navigate to the model selector, select "Text to Video" mode, and choose your preferred model. For this example we'll choose Wan 2.2 A14B.

Step 2: Write your prompt describing the video you want to create (for tips, read the prompting tips section below).

Step 3: Before generation, adjust settings to your specifications (read below for more information on video generation settings).

Step 4: Click "Generate Video". You can see the amount of Venice Credits the generation will consume in the lower right corner of the screen (for more information on Venice Credits, read the section below). Generation takes anywhere from 1-3 minutes, sometimes longer depending on the selected model.
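If you'd rather script this flow than use the web UI, it maps naturally onto a request-and-poll pattern. The sketch below is illustrative only: this post doesn't document a video API, so the endpoint path, model identifier, field names, and response shape are all assumptions, not Venice's actual interface.

```python
# Hypothetical sketch only: the endpoint, model ID, field names, and response
# shape are assumptions for illustration, not Venice's documented video API.
import os
import time

import requests

API_URL = "https://api.venice.ai/api/v1/video/generations"  # assumed endpoint
HEADERS = {"Authorization": f"Bearer {os.environ['VENICE_API_KEY']}"}

payload = {
    "model": "wan-2.2-a14b",  # assumed identifier for Wan 2.2 A14B
    "prompt": "Wide shot of a pine forest at dawn, mist rolling between trees",
    "duration": 8,            # seconds: 4, 8, or 12 per the settings below
    "aspect_ratio": "16:9",
}

job = requests.post(API_URL, json=payload, headers=HEADERS).json()

# Generation takes 1-3 minutes or longer, so poll until the job resolves.
while job.get("status") not in ("completed", "failed"):
    time.sleep(15)
    job = requests.get(f"{API_URL}/{job['id']}", headers=HEADERS).json()

print(job.get("video_url"))
```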

Image-to-Video Generation

Animating existing images adds motion to your static visuals.

Step 1: Navigate to the video generation interface. Select "Image to Video" mode and choose your preferred model. For this example we'll select Wan 2.1 Pro.

Step 2: Upload your source image and write a prompt describing how the image should animate. The model will use your image as the first frame and animate it according to your motion description.

Step 3: Before generation, adjust settings to your specifications (read below for more information on video generation settings).

Step 4: Click "Generate Video". You can see the amount of Venice Credits the generation will consume in the lower right corner of the screen (for more information on Venice Credits, read the section below). Generation takes anywhere from 1-3 minutes, sometimes longer depending on the selected model.
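The image-to-video variant adds a source image to the same hypothetical request from the sketch above; again, the field names are assumptions for illustration.

```python
# Hypothetical image-to-video variant of the earlier sketch; field names are
# assumptions. The uploaded image becomes the first frame of the clip.
import base64

with open("first_frame.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "wan-2.1-pro",  # assumed identifier for Wan 2.1 Pro
    "prompt": "Camera slowly pushes in as autumn leaves drift across the frame",
    "image": image_b64,      # source image, used as the first frame
    "duration": 4,
}
# Submit and poll exactly as in the text-to-video sketch above.
```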

Settings and additional features

Video generation includes several controls for customizing your output and managing your creations. Not all models support these settings, so make sure you select the appropriate model for your needs.

  • Duration: Set your video length to 4, 8, or 12 seconds depending on your needs.

  • Aspect Ratio: Choose from supported resolutions based on your selected model.

  • Resolution: Available options depend on the model selected. Sora 2 supports 720p, while Sora 2 Pro adds a 1080p option.

  • Parallel Variants Generation: Generate up to 4 videos simultaneously to explore different variations or test multiple prompts at once. Credits are only charged for videos that generate successfully.

Video generation also supports the following additional features:

  • Regenerate: Create new variations of your video using the same prompt and settings. Each generation produces unique results.

  • Copy Last Frame and Continue: Continue your video by using the final frame of a completed generation as the starting point for a new clip (see the sketch below).
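Outside the UI, you can approximate "Copy Last Frame and Continue" by extracting the final frame yourself. A minimal sketch using OpenCV (the file names are placeholders; in the app, the button does this step for you):

```python
# Grab the last frame of a finished clip so it can seed an image-to-video job.
# File names are placeholders; the Venice UI performs this step for you.
import cv2

cap = cv2.VideoCapture("clip_1.mp4")
last_index = cap.get(cv2.CAP_PROP_FRAME_COUNT) - 1
cap.set(cv2.CAP_PROP_POS_FRAMES, last_index)
ok, frame = cap.read()
cap.release()

if ok:
    cv2.imwrite("first_frame.png", frame)  # feed this into image-to-video
```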

You can access all your video generations in one place: the new Library tab, which lets you scroll through everything you've created across both images and videos.

This organization makes it simple to review past work, download favorites, or continue refining previous concepts.

Understanding Venice Credits

Video generation uses Venice Credits as its payment mechanism. Venice Credits represent your current total balance from three sources:

  • Your DIEM balance (renews daily if you have DIEM staked)

  • Your USD balance (also used for the API)

  • Purchased Venice Credits

How credits work:

The conversion rate is straightforward:

  • 1 USD = 100 Venice Credits

  • 1 DIEM = 100 Venice Credits per day

  • Your credit balance = (USD paid + DIEM balance) × 100 (see the worked example below)
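As a quick worked example of the formula above (the amounts are made up):

```python
# Worked example of the balance formula above; the amounts are illustrative.
usd_paid = 12.00    # one-time USD balance (also used for the API)
diem_staked = 3.0   # renews daily while staked

credits = (usd_paid + diem_staked) * 100  # 1 USD = 1 DIEM = 100 credits
print(credits)      # 1500.0 Venice Credits available today
```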

When you generate a video, credits are consumed in this priority order:

  1. DIEM balance first – If you have staked DIEM, these credits get consumed first since they renew daily. Each Venice Credit costs 0.01 DIEM.

  2. Purchased Venice Credits second – If you've purchased credits directly, they're used after your daily DIEM allocation.

  3. USD balance third – If you've used up your purchased credits but still have a USD balance for API usage, it converts to credits at the same rate as DIEM.
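The deduction order is easy to model. Here is a minimal sketch with illustrative balances; it mirrors the order described above, not Venice's actual billing code.

```python
# Minimal sketch of the priority order above; balances are illustrative
# numbers, not read from a real account, and this is not Venice's billing code.
def spend_credits(cost: float, balances: dict) -> dict:
    """Consume credits in order: DIEM first (renews daily), then purchased
    Venice Credits, then the USD balance."""
    remaining = cost
    for source in ("diem", "purchased", "usd"):
        spent = min(balances[source], remaining)
        balances[source] -= spent
        remaining -= spent
    if remaining > 0:
        raise ValueError("Insufficient credits for this generation")
    return balances

# A 25-credit generation against 10 DIEM credits, 100 purchased, 50 from USD:
print(spend_credits(25, {"diem": 10, "purchased": 100, "usd": 50}))
# -> {'diem': 0, 'purchased': 85, 'usd': 50}
```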

Obtaining credits:

Pro subscribers receive a one-time bonus of 1,000 credits when they upgrade. Additional credits can be purchased directly through your account from the bottom-left menu or by clicking on the credits button in the prompt bar.

You can purchase credits with your credit card or crypto.

Credits do not expire and remain in your account until used. Purchased Venice Credits and USD balances are consumed on a one-time use basis and do not regenerate, replenish, or renew. Your credit balance displays at the bottom of the chat history drawer, giving you constant visibility into your available resources.

If a video generation fails, you'll automatically receive your credits back. Credits are only deducted for successfully completed generations. If you experience any issues with credit charges or refunds, contact [email protected] for assistance.

AI prompting tips for better videos

Effective prompts make the difference between generic output and compelling video content. Think of your prompt as directing a cinematographer who has never seen your vision: more specificity helps you realize that vision exactly, while leaving some details open invites creative interpretation from the model, with sometimes unexpected results.

Describe what the camera sees

Start with the visual fundamentals. What's in the frame? A "wide shot of a forest" gives the model a lot of creative freedom to interpret. "Wide shot of a pine forest at dawn, mist rolling between trees" provides clearer direction. Include the subject, setting, and any key visual elements.

Specify camera movement

Static shots, slow pans, dolly movements—camera motion shapes how viewers experience your video. "Slow push-in on character's face" or "Static shot, fixed camera" tells the model exactly how the frame should move. Without camera direction, the model will choose for you.

Set the look and feel

Visual style controls mood as much as content. "Cinematic" is vague. "Shallow depth of field, warm backlight, film grain" gives the model concrete aesthetic targets. Reference specific looks when possible: "handheld documentary style" or "1970s film with natural flares."

Keep actions simple

One clear action per shot works better than complex sequences. "Character walks across the room" is open-ended. "Character takes four steps toward the window, pauses, looks back" breaks motion into achievable beats. Describe actions in counts or specific gestures.

Balance detail and freedom

Highly detailed prompts give you control and consistency. Lighter prompts encourage the model to make creative choices. "90s documentary interview of an elderly man in a study" leaves room for interpretation. Adding specific lighting, camera angles, wardrobe, and time of day locks in your vision. Choose your approach based on whether you want precision or variation.

Experiment with finding the right prompt length

Video generation handles prompts best when they fall between extremes. Too much detail—listing every visual element, lighting source, color, and motion—often means the model can't incorporate everything and may ignore key elements. Too little detail gives the model free rein to interpret, which can produce unexpected results. Aim for 3-5 specific details that matter most to your shot: camera position, subject action, setting, lighting direction, and overall mood. This range gives the model enough guidance without overwhelming it.

Example prompt structure:

[Visual style/aesthetic] [Camera shot and movement] [Subject and action] [Setting and background] [Lighting and color palette]

"Cinematic 35mm film aesthetic. Medium close-up, slow dolly in. Woman in red coat turns to face camera, slight smile, she says something to the camera. Rainy city street at night, neon reflections in puddles. Warm key light from storefront, cool fill from street lamps."
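If you generate many clips, it can help to fill the same slots programmatically. Here is a small helper mirroring this post's template; the slot names come from the structure above, not from anything the models require.

```python
# Assemble a prompt from the five slots in the template above; the slot names
# come from this post's structure, not from any model requirement.
def build_video_prompt(style: str, shot: str, action: str,
                       setting: str, lighting: str) -> str:
    return " ".join([style, shot, action, setting, lighting])

prompt = build_video_prompt(
    style="Cinematic 35mm film aesthetic.",
    shot="Medium close-up, slow dolly in.",
    action="Woman in red coat turns to face camera, slight smile.",
    setting="Rainy city street at night, neon reflections in puddles.",
    lighting="Warm key light from storefront, cool fill from street lamps.",
)
```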

Video generation responds well to filmmaking terminology. Shot sizes (wide, medium, close-up), camera movements (pan, tilt, dolly, handheld), and lighting descriptions (key light, backlight, soft vs hard) all help guide the output toward your intended result.

Get started with Venice’s AI video generator

Video generation is now available to all Venice users. We’re looking forward to seeing your creations.

Join our Discord to learn from the Venice community and share your generations.

Try Video Generation on Venice
