Photos & Videos AI Tools: How-To and Technical Discussion on Creating NSFW Gay Sex Content with Stable Diffusion (and Others)

Here's a cumshot prompt that worked for me.

Prompt: 1man, naked, hairy, muscular, low angle view from directly below, standing over viewer, looking directly at viewer, legs spread, muscular hair legs, muscular hairy chest, huge erect uncut penis, shaved testicles, masturbating, ejaculating, cumshot, very sweaty, short black haircut, 45yo, mouth open in pleasure, tongue out,
Negative prompt: EasyNegative,
Steps: 35,
Sampler: DPM++ 2M SDE Heun Karras,
KSampler: dpmpp_2m_sde_gpu,
Schedule: karras,
CFG scale: 7,
Seed: 764290048,
Size: 768x1152,
VAE: Automatic,
Denoising strength: 0.9,
Clip skip: 2,
Model: Hard Muscle - Vanilla V3,
LoRA: extreme_bukkake_v0.1-pony:0.80

The 'ejaculating' and 'cumshot' tags seem to work really well.
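If you'd rather script this than click through a UI, here's a minimal diffusers sketch approximating the settings above. The checkpoint and LoRA filenames are placeholders for whatever you saved locally, I'm assuming the checkpoint is SD 1.5-based (if it's actually a Pony/SDXL model, swap in StableDiffusionXLPipeline), and note that EasyNegative is a textual inversion embedding, so it has to be loaded before the negative prompt can reference it:

import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Placeholder filename; use whatever the checkpoint is saved as locally
pipe = StableDiffusionPipeline.from_single_file(
    "hardMuscle_vanillaV3.safetensors", torch_dtype=torch.float16
).to("cuda")

# "DPM++ 2M SDE Karras" in A1111 terms
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    algorithm_type="sde-dpmsolver++",
    use_karras_sigmas=True,
)

# EasyNegative is an embedding, so load it before using it in the negative prompt
pipe.load_textual_inversion("EasyNegative.safetensors", token="EasyNegative")
pipe.load_lora_weights("extreme_bukkake_v0.1-pony.safetensors")  # placeholder path

image = pipe(
    prompt="1man, naked, hairy, muscular, ...",  # paste the full prompt here
    negative_prompt="EasyNegative",
    num_inference_steps=35,
    guidance_scale=7.0,
    width=768,
    height=1152,
    clip_skip=2,
    generator=torch.Generator("cuda").manual_seed(764290048),
    cross_attention_kwargs={"scale": 0.8},  # LoRA weight 0.80
).images[0]
image.save("out.png")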
 
For folks who want to learn more about ControlNet and the various ways to use it, here's a great resource that gives a rundown of the different options and how they work: ControlNet: A Complete Guide - Stable Diffusion Art

Note: it's a bit focused on Automatic1111 users.
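As a taste of what the guide covers, here's a rough diffusers sketch of the Canny variant, which constrains the composition of the output to the edge map of a reference image. The model IDs are the usual lllyasviel/runwayml Hugging Face repos, and reference.png is a placeholder:

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Turn a reference photo into the edge map the ControlNet conditions on
ref = np.array(Image.open("reference.png").convert("RGB"))
edges = cv2.Canny(ref, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    "1boy, fit male, hairy chest, photograph",
    image=control_image,
    num_inference_steps=30,
    controlnet_conditioning_scale=0.8,  # how strictly to follow the edges
).images[0]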
 
Can you copy the prompt you used? That might help.
Prompt: real life, photograph, high detail, score_9, score_8_up, score_7_up, cinematic, realistic, depth of field, dynamic lighting, candid shot, highres, detailed, shadow, detailed background, 1boy, fit male, flexing biceps, hairy chest, muscular legs, jock, full body picture

Negative: low quality, lowres, bad anatomy, normal quality, worst quality, 3D, render, illustration, drawing, comic, watermark, greyscale, monochrome

Steps: 18,
Sampler: DPM++ SDE,
KSampler: dpmpp_sde_gpu,
Schedule: normal,
CFG scale: 6,
Seed: 2632268172,
Size: 768x1024,
VAE: Automatic,
Denoising strength: 0.25,
Clip skip: 2,
Model: virileStallion_v50Photoreal,
LoRA: f3e344ba-d516-4ae2-a589-a46c68368e2b.TA_trained:0.80,
Hires resize: 768x1024,
Hires steps: 10,
Hires upscaler: 4x-UltraSharp,
ADetailer model: face_yolov8n_v2.pt,
ADetailer prompt: ,
ADetailer negative prompt: ,
ADetailer confidence: 0.5,
ADetailer dilate/erode: 4,
ADetailer mask blur: 4,
ADetailer denoising strength: 0.25,
ADetailer inpaint only masked: true,
ADetailer inpaint padding: 32
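For anyone wondering what the Hires lines mean: A1111's hires fix is roughly a two-pass trick, generate at the base resolution, upscale, then refine with a light img2img pass. A rough diffusers equivalent of that idea (the metadata above lists the same size for the hires pass, but I'll use a 2x upscale to show the concept; plain Lanczos stands in for the 4x-UltraSharp ESRGAN model, and there's no ADetailer face pass):

import torch
from PIL import Image
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

prompt = "real life, photograph, high detail, ..."  # full prompt above
negative = "low quality, lowres, bad anatomy, ..."  # full negative above

base = StableDiffusionPipeline.from_single_file(
    "virileStallion_v50Photoreal.safetensors", torch_dtype=torch.float16
).to("cuda")

# Pass 1: base-resolution generation
low = base(
    prompt, negative_prompt=negative,
    num_inference_steps=18, guidance_scale=6.0,
    width=768, height=1024,
    generator=torch.Generator("cuda").manual_seed(2632268172),
).images[0]

# Upscale, then pass 2: light img2img refinement (the "hires" pass)
upscaled = low.resize((1536, 2048), Image.LANCZOS)  # stand-in for 4x-UltraSharp
refiner = StableDiffusionImg2ImgPipeline(**base.components).to("cuda")
final = refiner(
    prompt, image=upscaled,
    strength=0.25,            # Denoising strength: 0.25
    num_inference_steps=40,   # 40 steps * 0.25 strength is about 10 hires steps
    guidance_scale=6.0,
).images[0]
final.save("hires.png")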
 

I would increase the Steps to 35, especially since you're also using a Clip Skip of 2. I never use fewer than 20 steps.
 
How do I do that and make sure the same picture comes out at the end? Thank you for your help; I'm so, so new at this!

Each image has a seed, so use the same one.
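In A1111 that means replacing the -1 in the Seed field with 2632268172 before rerunning. The same idea in diffusers, continuing from the sketch above (the seed lives in a torch.Generator); with more steps the composition should stay the same, though fine detail will shift a bit:

import torch

gen = torch.Generator("cuda").manual_seed(2632268172)
image = pipe(
    prompt, negative_prompt=negative,
    num_inference_steps=35,   # bumped from 18
    guidance_scale=6.0, width=768, height=1024,
    generator=gen,            # fixed seed keeps the composition
).images[0]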


Your VAE is missing; try using sdxl.vae.
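In A1111 that means dropping the VAE file into models/VAE and selecting it as the SD VAE in settings instead of leaving it on Automatic. If you're scripting it, here's a sketch of wiring an external VAE into a diffusers pipeline. The filenames are placeholders, and sdxl.vae only fits SDXL-based checkpoints; SD 1.5 models usually take the ft-mse VAE instead:

import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_single_file(
    "sdxl.vae.safetensors", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_single_file(
    "some_sdxl_checkpoint.safetensors",  # placeholder
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")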
 
I downloaded embeddings and put them in the embeddings folder, but they don't appear on my Textual Inversion tab. What went wrong?

Stable Diffusion 1.5 running Automatic1111
 

What directory on your installation?

Also, did you reload after copying them into that directory?
 
stable-diffusion-webui/embeddings

Yes I restarted the GUI.
Don't know what to tell you on that part; I'm not an expert in INSTALLING Stable Diffusion and I've never used embeddings.

I know some installations use "aliases" or "symlinks" or different directory configurations. My SD 1.5 runs on Google Colab from one of those Colab notebooks, and that one deliberately had me put my LoRAs and models (checkpoints) in a separate directory in my Google Drive. My guess is that's so if you ever have to ditch the entire Automatic1111 installation, you don't lose the downloaded models, etc.

What did you use as your step-by-step installation instructions?
 

How to use embeddings in Stable Diffusion - Stable Diffusion Art
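That guide covers the A1111 side: files go in stable-diffusion-webui/embeddings, then hit the refresh button on the Textual Inversion tab (or restart). One common gotcha is that embeddings have to match the base model; an SDXL embedding won't be listed when an SD 1.5 checkpoint is loaded. For completeness, a diffusers script doesn't auto-discover embeddings from a folder at all; continuing the sketches above, they're loaded explicitly and then referenced by their token:

pipe.load_textual_inversion(
    "embeddings/EasyNegative.safetensors", token="EasyNegative"
)
image = pipe(
    "1boy, fit male, photograph",
    negative_prompt="EasyNegative",
).images[0]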