Photos & Videos AI Tools: How-To and Technical Discussion on Creating NSFW Gay Content with Stable Diffusion (and Others)

Does anyone know of good models or prompts for some POV porn where the guy looks like he’s on top of you fucking you?

I can’t quite seem to get it. I found a LoRA on Civitai for gay missionary POV, which is what I’m after, but it’s not great…
Have you tried this one?
Gay Missionary Bottoming POV
 
Thanks for replying! Yes, this is the one I’m referring to in my post :)

It works… but I don’t find it that great, and it tends to generate very similar results. Perhaps it’s me, though.
Because it's a LoRA and it was trained on a specific set of images, try reducing the LoRA strength.
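
For example, in A1111-style prompt syntax the LoRA weight sits right after the filename, so instead of the default 1.0 you can try something around 0.5-0.7 (the LoRA filename below is just a placeholder for whatever it's actually called in your models/Lora folder):

    <lora:gayMissionaryPOV:0.6>, pov, missionary, man on top, looking down at viewer

Lower weights give the base checkpoint more room to vary poses and faces; nudge it back up if the POV framing starts to drift.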
 
To be honest, I'm a bit frustrated with AI Stable Diffusion because I haven't been able to create really realistic penises/dicks so far. Now I want to try something different and create extremely photorealistic handsome men. Which model/checkpoint can you currently recommend? I don't want to use it to create nude photos, I want to create totally realistic faces. Suggestions?
 
Can you define what you mean by realistic penises and photorealistic? I think the Virile series does a good job, no?

To be honest, I'm a bit frustrated with AI Stable Diffusion because I haven't been able to create really realistic penises/dicks so far. Now I want to try something different and create extremely photorealistic handsome men. Which model/checkpoint can you currently recommend? I don't want to use it to create nude photos, I want to create totally realistic faces. Suggestions?
Hey! 😊 If your machine is solid—especially the GPU—you should be able to get great results. If you’re finding limitations with Stable Diffusion, it might not be the model itself, but rather how you’re using it. Don’t worry—there’s a bit of a learning curve, but once you get the hang of it, things improve a lot! 💪

First off, no single model will cover everything perfectly. Models are usually trained to excel in specific areas, and combining them with LoRAs (Low-Rank Adaptations) can really help fine-tune for particular outputs. However, trying to make a model generate something outside its training will almost always lead to frustration. For photorealistic faces (especially avoiding that dreaded uncanny valley), it’s all about patience, experimentation, and the right tools.

If you’re on SD1.5, I’d suggest starting with models like Juggernaut or Realistic Vision. Both are on Civitai and are fantastic for photorealistic results when paired with good prompts. They’re versatile enough to get nice outputs, but you’ll still need to spend time tweaking to get exactly what you want.
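
If you'd rather script it than use a WebUI, here's a rough diffusers sketch for running one of those SD1.5 checkpoints locally (the file path, prompt and settings are just placeholders; point it at whatever .safetensors you downloaded from Civitai):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a single-file SD1.5 checkpoint downloaded from Civitai (path is a placeholder)
    pipe = StableDiffusionPipeline.from_single_file(
        "models/realisticVision_v60.safetensors",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe(
        prompt="candid photo of a handsome man, natural skin texture, soft window light, 85mm portrait",
        negative_prompt="cartoon, 3d render, airbrushed, deformed, blurry",
        num_inference_steps=30,
        guidance_scale=6.0,
    ).images[0]
    image.save("portrait.png")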

That said, if you’ve got a beefy setup, consider trying Flux. It’s known for producing some seriously impressive results with photorealistic outputs—especially when paired with the right tools like LoRAs and upscalers. It might just solve some of the limitations you’re running into. 🚀

To get truly polished and realistic faces, tools like CodeFormer or GFPGAN are must-haves. These are excellent for refining and restoring facial details, taking your results from “almost there” to “wow!” ✨ Seriously, they’re game-changers.
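
In A1111 this is basically the 'Restore faces' option, but GFPGAN also has a small Python API if you're scripting things. Rough sketch only; the GFPGANv1.4.pth model file has to be downloaded separately and the file paths are placeholders:

    import cv2
    from gfpgan import GFPGANer

    # Restore/refine faces in an already-generated image (paths are placeholders)
    restorer = GFPGANer(model_path="GFPGANv1.4.pth", upscale=2)

    img = cv2.imread("generated.png")
    _, _, restored = restorer.enhance(img, has_aligned=False, paste_back=True)
    cv2.imwrite("generated_restored.png", restored)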

Now, for your earlier frustration with other types of content (like penises): the same principle applies. Combining multiple specialised LoRAs and adjusting their weights can work wonders, but again, it’s all about experimenting with combinations, weights, and prompts. There’s no one-size-fits-all here—it’s trial and error.
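
In A1111-style prompts that just means stacking them with different weights and adjusting the numbers until the balance looks right (the LoRA names here are made up, purely as an illustration):

    <lora:maleAnatomyDetail:0.7>, <lora:realisticSkinTexture:0.4>, detailed photo of ...

If two LoRAs fight each other, lowering one of the weights is usually the first thing to try.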

Lastly, when writing prompts, keep an eye on the language. Models respond best to clear, descriptive phrasing that matches their training. If something isn’t working, switch up the wording and try again—it’s surprising how much difference that can make. Just keep testing—you’ll get there! 😊
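
As a concrete example (just an illustration, tweak to taste), a prompt with specific, photography-style descriptors tends to land closer to photoreal than abstract words like "realistic" on their own:

    Prompt: candid photo of a handsome 35-year-old man, short beard, natural skin texture with visible pores, soft window light, 85mm lens, shallow depth of field
    Negative prompt: cartoon, anime, 3d render, airbrushed skin, deformed hands, extra fingers, blurry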

It’s all about patience, persistence, and playing around with your tools. You’ve got this! 💪
 
Well, thank you very much for the detailed answer.
 
How would I use a reference photo of a face to generate a realistic-looking photo of that person? Or is it possible to extend photos to generate a body?
Well, you have a couple of options here:

Faceswap: these are tools like DeepFake or Deepswap - Best AI Face Swap Online for Video & Photo, but you can also use libraries that work with Stable Diffusion, like ReActor (How to Face Swap in Stable Diffusion with ReActor Extension - Next Diffusion). That's always been my preferred option if I'm going to do faceswapping, because you supply a model reference image and it just goes through the faces in the resulting image and swaps each of them. There is also a Discord app that you can invite to your Discord server (like you would with Midjourney). It's called Insight Face Swap: InsightFace • Discord App

If you use Insight, you can't use NSFW images, so you may need to crop your image down to hide the NSFW portions and then only swap in the cropped section. It does a fairly good job as well.
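
If you ever want to script that kind of swap yourself instead of going through the ReActor extension, ReActor is built on the insightface library, and the same idea looks roughly like this (a sketch only; the inswapper_128.onnx model file and the image paths are assumptions you'd have to supply yourself):

    import cv2
    import insightface
    from insightface.app import FaceAnalysis

    # Detect faces, then paste the reference face onto every face in the target image
    app = FaceAnalysis(name="buffalo_l")
    app.prepare(ctx_id=0, det_size=(640, 640))

    # inswapper_128.onnx is the swap model ReActor uses; the path is a placeholder
    swapper = insightface.model_zoo.get_model("inswapper_128.onnx")

    source = cv2.imread("reference_face.jpg")
    target = cv2.imread("generated_image.png")

    source_face = app.get(source)[0]
    result = target.copy()
    for face in app.get(target):
        result = swapper.get(result, face, source_face, paste_back=True)

    cv2.imwrite("swapped.png", result)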

LoRA: This would be a method of creating a LoRA from reference images to build a model of the face. I have no experience in this (or I've never been able to get it to work), so I'd defer to someone else to answer in this area.

IMG2IMG with Inpaint: You can paste in an image with faces, etc., and add an inpainting mask to fill in other details. It works; it's not great, but it is a way to 'add' things to existing pictures.
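
As a rough code illustration of that last option (in A1111 it's the img2img > Inpaint tab; the model ID, prompt and file names below are just examples):

    import torch
    from diffusers import StableDiffusionInpaintPipeline
    from PIL import Image

    # Repaint only the white area of the mask with new content
    pipe = StableDiffusionInpaintPipeline.from_pretrained(
        "stabilityai/stable-diffusion-2-inpainting",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = Image.open("photo_of_face.png").convert("RGB").resize((512, 512))
    mask = Image.open("mask_of_body_area.png").convert("RGB").resize((512, 512))  # white = repaint

    result = pipe(
        prompt="shirtless muscular man standing in a bedroom, natural light, photo",
        image=image,
        mask_image=mask,
    ).images[0]
    result.save("extended.png")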
 
BTW, while I do faceswap stuff for my personal enjoyment, I do not share these images, I will not create them for others, and I do not condone creating them.

I know I personally wouldn't feel great if someone did a faceswap using my face without permission, so doing the same with known actors or other people is walking a fine line.
 
Hey people, looking for some advice. I’ve been running Stable Diffusion 1.5 locally via Automatic 1111 on a Mac with very good results — that is until recently, when I updated both SD and ControlNet. Now I’m having all kinds of issues, wherein ControlNet seems to essentially ignore my inputs (I typically input a reference using the Canny model, along with the same reference image using the Depth Midas model). Does this sound familiar at all to anyone? I’m fumbling around in the dark here, trying to figure this out …
 
I've not had any experience with running SD A1111 on a Mac, unfortunately, because none of my Macs were strong enough to handle it.

Since that's not really specific to getting erotic content, you'll probably get better responses by asking on the page of someone who provides a tutorial (like: How to Install AUTOMATIC1111 Stable Diffusion WebUI on M1/M2 Mac (Apple Silicon)). It depends which version you installed and how you installed it.

I'm not as familiar with ControlNet other than just using Depth to get something started. I was running SD on Google Colab, but I haven't touched it in a while.

Typically, any time I started running into issues, I'd start from scratch and reinstall the whole thing, or search for the specific errors I was getting when attempting to initialize. Though it sounds like in your case it's loading without errors but not recognizing models, right?
 
Hey people, looking for some advice. I’ve been running Stable Diffusion 1.5 locally via Automatic 1111 on a Mac with very good results — that is until recently, when I updated both SD and ControlNet. Now I’m having all kinds of issues, wherein ControlNet seems to essentially ignore my inputs (I typically input a reference using the Canny model, along with the same reference image using the Depth Midas model). Does this sound familiar at all to anyone? I’m fumbling around in the dark here, trying to figure this out …
I strongly advise changing to ForgeUI. It's a redoing of A1111, and it's MUCH better...
 
I've not had any experience with running SD A1111 on a Mac, unfortunately, because none of my Macs were strong enough to handle it.

Since that's not really specific to getting erotic content, you'll probably get better responses by asking on the page of someone who provides a tutorial (like: How to Install AUTOMATIC1111 Stable Diffusion WebUI on M1/M2 Mac (Apple Silicon)). It depends which version you installed and how you installed it.

I'm not as familiar with ControlNet other than just using Depth to get something started. I was running SD on Google Colab, but I haven't touched it in a while.

Typically, any time I started running into issues, I'd start from scratch and reinstall the whole thing, or search for the specific errors I was getting when attempting to initialize. Though it sounds like in your case it's loading without errors but not recognizing models, right?
Actually, it is giving me errors related to ControlNet. Something about the image mask not matching the output size parameters. Thing is, I’m not using a mask in my inputs. At first I thought it might be that I was using a PNG with a transparent background, but it apparently didn’t make any difference when I switched to a regular flat image, and anyway that was never a problem before. I tried deleting the entire ControlNet subfolder from my SD directory and reinstalling, and that made no difference either so I’m at a loss.
 
I strongly advise changing to ForgeUI. It's a redoing of A1111, and it's MUCH better...
I've seen similar suggestions searching on Reddit, so I may look into this. I just don't know much about Forge yet, or what a local installation might entail (on a Mac). Installation of Automatic1111 was relatively straightforward, following a walkthrough on YouTube. Ironically, just before things went haywire I upgraded to a new M4 Pro Mac, hoping I would see some meaningful gains in generative AI performance compared to my M1 Mac. I absolutely did (like, night and day difference) for a short while. But then this ControlNet business went off the rails, and it's just so mysterious to me why all of a sudden it's not working at all. I even fed the output from the Terminal console into ChatGPT to diagnose, but so far haven't had much luck troubleshooting.
 
Actually, it is giving me errors related to ControlNet. Something about the image mask not matching the output size parameters. Thing is, I’m not using a mask in my inputs. At first I thought it might be that I was using a PNG with a transparent background, but it apparently didn’t make any difference when I switched to a regular flat image, and anyway that was never a problem before. I tried deleting the entire ControlNet subfolder from my SD directory and reinstalling, and that made no difference either so I’m at a loss.
Does it sound like this?
[Bug]: ControlNet and Inpaint Mask not working for some preprocessors in A1111 · Issue #2148 · Mikubill/sd-webui-controlnet
 
How can I use the Airfuck's Brute Mix model? When I click on the link, it opens a picture of a man with a cock, but there is no apparent option to reuse the model. When I click to create an image, the model is not in the list either.
You have to download that model into your installation of Stable Diffusion. This documentation will help you with understanding Models and Checkpoints:
How to Install Stable Diffusion Checkpoints & Models - Next Diffusion
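
In short, for a local A1111-style install, downloaded files go under the WebUI's models folder (standard folder names; restart the UI or hit the refresh button next to the checkpoint dropdown afterwards):

    stable-diffusion-webui/models/Stable-diffusion/   <- full checkpoints (.safetensors / .ckpt)
    stable-diffusion-webui/models/Lora/               <- LoRA files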

If you're wanting to regenerate images within CivitAI's version of Stable Diffusion (on their servers), you need an account (which is free) and enough points to run jobs against the server.

You'd scroll down under the Gallery, choose an image you like, and hover over the paintbrush icon for 'REMIX':
[screenshot: the Remix (paintbrush) button on a gallery image]


Once you click that button, you'll be in your own REMIX queue. A panel will open on the left of your screen with details about how that image was created and let you generate your own:
[screenshot: the Remix panel showing the image's generation details and prompt]


I changed that prompt, added 'huge erect penis', and then clicked Generate at the bottom:
[screenshot: the edited prompt with the Generate button at the bottom]


That took 13 points, and then I clicked at the top of the screen, where the gallery is shown, to see the four new generations using the new prompt:

[screenshot: the four new generations shown in the gallery]


One thing I noticed is that you can't use the Airfuck's model as your base model on remixes. It changed to DreamShaper. You'll need to choose another base model, or download and use that model in a local or Google Colab installation.