
How to Make a Brainrot Video (Legally and Safely): Tools, AI Images, and Veo 3

Oxford Languages tracks “brain rot” as a living phenomenon of digital culture, referring to an overload of shallow content and the sense of being “numbed” by endless scrolling. In this article, we use it only as context for a clip style—not as an endorsement of mindless content. See Oxford Languages – Updates. (languages.oup.com)

Want to learn more about brainrot? Read our article:
The complete guide to the characters of the “Italian brainrot” phenomenon — everything in one place

Quick workflow

  • Length: 15–45 s, hook within 2 s.
  • Editing: short shots (≈0.4–1.0 s), cut to the beat, A/B rhythm.
  • Text and TTS: short sentences, no profanity (risk of limited ads). See YouTube Help – Advertiser-friendly content guidelines. (support.google.com)
  • Legal music: ideally the YouTube Audio Library; licenses are clear and copyright-safe. (support.google.com)
  • Reusing other people’s clips: must be clearly transformative (commentary, your own direction). See YouTube – Reused content/YPP. (support.google.com)
  • Copyright in the EU: exceptions (quotation, teaching, parody/pastiche) are narrow; the goal is legal original creation. See the DSM Directive. (eur-lex.europa.eu)
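The editing rhythm above (cuts of roughly 0.4–1.0 s, synced to the beat) is easy to plan in advance. The small helper below is just an illustration, not part of any tool: given the backing track's BPM and a target clip length, it lists candidate cut points on the beat.

```python
def beat_cut_points(bpm: float, clip_seconds: float, beats_per_cut: int = 1) -> list[float]:
    """Return timestamps (in seconds) to cut on, aligned to the beat.

    bpm: tempo of the backing track
    clip_seconds: total clip length (e.g. 15-45 s, per the checklist)
    beats_per_cut: cut every N beats (1 = every beat, 2 = every other beat)
    """
    beat_len = 60.0 / bpm            # seconds per beat
    step = beat_len * beats_per_cut  # seconds between cuts
    cuts = []
    t = step
    while t < clip_seconds:
        cuts.append(round(t, 3))
        t += step
    return cuts

# At 120 BPM, one cut per beat gives 0.5 s shots --
# inside the 0.4-1.0 s range recommended above.
print(beat_cut_points(120, 3))  # [0.5, 1.0, 1.5, 2.0, 2.5]
```

At slower tempos, raise `beats_per_cut` so shots stay under a second.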

How to generate a “brainrot” image by combining 2–3 things (practical process)

Option A – Midjourney (fast track, no installs)

  1. In Discord, use the /blend command (2–5 reference photos). It works reliably for “combining concepts” (e.g., a range hood + a chihuahua). See Midjourney docs – Blend Images on Discord. (docs.midjourney.com)
  2. In the “prompt” field, add a style:
    “3D toy render, smooth clay, soft studio lighting, big round eyes, glossy plastic, playful brain-rot style, 9:16”.
  3. After generating, choose a variation, Upscale, and use Vary (Region) to fine-tune details (e.g., “stainless-steel surface,” “fluffy ears”).

Option B – Stable Diffusion XL (full control, local)

  1. Install ComfyUI and the SDXL model (or SD 3.5). Official info: Stability AI – SDXL/Core models. (Stability AI)
  2. Add the IP-Adapter node (lets you blend 2–3 reference images into a single visual). See ComfyUI IPAdapter Plus. (GitHub)
  3. Workflow: Text Prompt → SDXL Base + IP-Adapter (ref. 1: range hood) + IP-Adapter (ref. 2: chihuahua) → Sampler → VAE Decode.
  4. In prompts, stick to “3D toy render / vinyl figure / clay” and lower the CFG scale to preserve the shape of both objects.

Option C – Adobe Firefly (a commercially safe alternative)

  1. Open Firefly – Text to Image and set a style like 3D plush/clay/vinyl render. Adobe states its models are trained on permitted sources and that outputs can be used for commercial purposes (outside beta). (adobe.com, helpx.adobe.com)
  2. When working with real brands/logos, respect model/property releases (see Adobe Stock policies). (helpx.adobe.com)

Exact prompt for “Range Hood × Chihuahua” (3D render)

Chivaviny Digestiny
  • Character name: Chivaviny Digestiny
  • Prompt (universal):
    “adorable chihuahua merged with a stainless-steel kitchen range hood, 3D toy render, vinyl figure aesthetic, big glossy eyes, soft clay textures, studio softbox lighting, high detail, subtle fogged steel reflections, clean background, aspect ratio 9:16”
  • Negative prompt (if the model supports it): text, watermark, extra limbs, realistic gore, ugly, low-res

Tip: In Midjourney you can combine /blend (photos) + an extra text prompt. In SDXL via IP-Adapter, set weights to 0.6–0.8 for the range hood shape and 0.7–0.9 for the chihuahua depending on the desired “cuteness.” (docs.midjourney.com, GitHub)

How to turn an image into a short clip using Google Veo 3

  • What Veo 3 is: a Google DeepMind model for generating 8-second videos with sound (voice, effects, ambience) directly from text or from an image. See DeepMind – Veo and the Gemini API – Generate videos with Veo 3. (Google DeepMind, Google AI for Developers)
  • Where to run it: access via Gemini (Google AI Pro/Ultra), AI Studio / Gemini API, or Flow/VideoFX in Google Labs (depending on availability by country and plan). (Gemini, aistudio.google.com, labs.google)
  • Steps (image-to-video):
    1. Upload the finished 3D image (Chivaviny Digestiny).
    2. Motion prompt: “slow dolly-in, subtle head tilt, tail wag, gentle breathing, stainless-steel gleam flicker, soft studio shadows, loop-friendly motion, maintain 9:16 framing if supported, natively generated cute ambience and soft woof SFX”.
    3. Export the clip; if you want a loop for Reels/Shorts, trim 8 s → 7.8 s and cover the last frame with a transition (crossfade 0.2 s).
  • Safety and monetization: for TTS and music, stick to advertiser-friendly guidelines and licensed libraries (YouTube Audio Library). (support.google.com)
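The trim-and-crossfade numbers in step 3 translate directly into frame positions in an editor. The helper below just does that arithmetic (the fps value is an assumption for the example, not anything Veo-specific):

```python
def loop_trim(total_s: float, trim_to_s: float, crossfade_s: float, fps: int = 24) -> dict:
    """Compute frame positions for a loop-friendly trim.

    total_s: generated clip length (Veo 3 outputs 8 s clips)
    trim_to_s: target length after trimming (e.g. 7.8 s)
    crossfade_s: crossfade covering the loop seam (e.g. 0.2 s)
    """
    return {
        "cut_frame": round(trim_to_s * fps),       # frame where the trim lands
        "fade_frames": round(crossfade_s * fps),   # frames spent in the crossfade
        "dropped_frames": round((total_s - trim_to_s) * fps),
    }

print(loop_trim(8.0, 7.8, 0.2))
# {'cut_frame': 187, 'fade_frames': 5, 'dropped_frames': 5}
```

At 24 fps the 0.2 s crossfade is only 5 frames, short enough that the seam reads as continuous motion.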

TTS without profanity: quick, legal, and clear

  • Google Cloud TTS (many languages/voices, SDK): good for neutral, clean voices. (Google Cloud)
  • Microsoft Azure Neural TTS (SSML, styles, pace): fine-grained control. (learn.microsoft.com)
  • Gemini API – Speech generation: when you want everything “under one roof” in the Google ecosystem. (Google AI for Developers)

For commercial TTS, always check the terms of use (e.g., bans on profane/hateful content). (ElevenLabs)
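All three engines above accept SSML for controlling pace and delivery. The snippet below only builds a standard SSML string; the voice name is a placeholder formatted like Azure's neural voice names, so verify it against the provider's own voice list before use.

```python
from xml.sax.saxutils import escape

def make_ssml(text: str, voice: str, rate: str = "medium", pitch: str = "+0st") -> str:
    """Build a minimal SSML document with prosody controls.

    voice: provider-specific voice name (placeholder here; Azure neural
           voices look like "en-US-JennyNeural" -- check the docs).
    rate/pitch: standard SSML <prosody> attributes.
    """
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">'
        f'<voice name="{voice}">'
        f'<prosody rate="{rate}" pitch="{pitch}">{escape(text)}</prosody>'
        "</voice></speak>"
    )

ssml = make_ssml("A chihuahua... that is also a range hood?",
                 "en-US-JennyNeural", rate="fast")
print(ssml)
```

Escaping the narration text (via `escape`) matters once scripts contain `&` or `<`, which would otherwise break the XML.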

Detailed A-to-Z example: creating a clip with Chivaviny Digestiny

  1. Image: Midjourney /blend (2 references: range hood, chihuahua) + the prompt above → output in 9:16. Or SDXL + IP-Adapter (2 source photos) in ComfyUI. (docs.midjourney.com, GitHub)
  2. Animation: import into Veo 3 (image-to-video) → prompt for a slow camera push-in and a gentle “wag.” (Google AI for Developers)
  3. Audio: if you don’t want Veo’s native audio, add a soft woof + ambience in your editor (YouTube Audio Library). (support.google.com)
  4. Edit and export: 1080p (or platform-native), short captions, no profanity in the first seconds. (support.google.com)
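Step 4's export can also be scripted. The helper below only assembles a standard ffmpeg command line (it assumes ffmpeg is installed; the codec and CRF choices are common defaults, not requirements):

```python
def export_cmd(src: str, dst: str, height: int = 1080) -> list[str]:
    """Build an ffmpeg command for a clean H.264/AAC export.

    height: output height in pixels; scale=-2:h keeps the aspect ratio
            and rounds the width to an even number, as libx264 requires.
    """
    return [
        "ffmpeg", "-y", "-i", src,
        "-vf", f"scale=-2:{height}",
        "-c:v", "libx264", "-crf", "18", "-preset", "slow",
        "-c:a", "aac", "-b:a", "192k",
        dst,
    ]

print(" ".join(export_cmd("chivaviny_raw.mp4", "chivaviny_final.mp4")))
```

For vertical Shorts/Reels, pass a larger `height` (the full-resolution vertical frame is 1080×1920) or let the platform re-encode a 1080p upload.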


Mini checklist before publishing

  • Hook within 2 s ✔︎ Cut to the beat ✔︎ Readable captions ✔︎
  • Original/legal visuals and music ✔︎ TTS without profanity ✔︎
  • Description includes credits for music/sources ✔︎
  • The video is clearly original (not an auto-compilation) ✔︎ (support.google.com)

Sources

  1. Oxford Languages – Updates (context for the term “brain rot”) – https://languages.oup.com/about-us/updates/
  2. YouTube Help – Advertiser-friendly content guidelines – https://support.google.com/youtube/answer/6162278
  3. YouTube Help – Use music and sound effects from the Audio Library – https://support.google.com/youtube/answer/3376882
  4. YouTube – Channel monetization policies (Reused content) – https://support.google.com/youtube/answer/1311392
  5. EUR-Lex – Directive (EU) 2019/790 (DSM) – Official Journal – https://eur-lex.europa.eu/eli/dir/2019/790/oj/eng | summary: https://eur-lex.europa.eu/EN/legal-content/summary/copyright-and-related-rights-in-the-digital-single-market.html
  6. Midjourney Docs – Blend Images on Discord – https://docs.midjourney.com/hc/en-us/articles/32635189884557-Blend-Images-on-Discord
  7. Stability AI – SDXL/Core models – https://stability.ai/news/stable-diffusion-sdxl-1-announcement | https://stability.ai/core-models
  8. ComfyUI IP-Adapter Plus (repo) – https://github.com/cubiq/ComfyUI_IPAdapter_plus
  9. Google DeepMind – Veo 3 (model page) – https://deepmind.google/models/veo/
  10. Gemini API – Generate videos with Veo 3 – https://ai.google.dev/gemini-api/docs/video
  11. Google Cloud – Text-to-Speech (docs) – https://cloud.google.com/text-to-speech/docs
  12. Microsoft Azure – Text to speech documentation – https://learn.microsoft.com/en-us/azure/ai-services/speech-service/index-text-to-speech
  13. Adobe Firefly – commercially safe generative AI – https://www.adobe.com/products/firefly.html | Firefly FAQ (commercial use): https://helpx.adobe.com/firefly/get-set-up/learn-the-basics/adobe-firefly-faq.html
  14. Adobe Stock – property/model release guidelines – https://helpx.adobe.com/stock/contributor/help/property-release.html | https://helpx.adobe.com/stock/contributor/help/photography-illustrations.html

Jana

I like turning curiosity into words, and writing articles is my way of capturing ideas before they slip away — and sharing them with anyone who feels like reading.
