OP here. Figured I would walk people through the steps I took to start generating stuff.
First Step: Downloading Stable Diffusion
Before downloading, make sure you meet the system requirements. Type "dxdiag" into the Windows search bar to bring up a system information window; the Display tab shows your GPU and its memory. Make sure you have at least 4 GB of VRAM (integrated graphics won't work).
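If you'd rather check from a script than squint at dxdiag, here's a quick sketch using PyTorch (assumes you have Python and the torch package installed; the webui's installer pulls torch in on its own anyway):

```python
# Quick VRAM check with PyTorch. Assumes the "torch" package is installed.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / 1024**3
    print(f"{props.name}: {vram_gb:.1f} GB VRAM")
    if vram_gb < 4:
        print("Under 4 GB - expect trouble.")
else:
    print("No CUDA GPU detected - integrated graphics won't cut it.")
```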
Here's a link to the tutorial I used: hxxps://www.youtube.com/watch?v=vg8-NSbaWZI
I replaced "https" with "hxxps" so we don't flood the poor youtuber's analytics with visits from pregchan.com. The tutorial covers how to install, navigate, and use the stable diffusion webui, so I won't repeat that here.
Second Step: Installing Additional Models
Once Stable Diffusion is downloaded and installed, you should have a folder named "models" in the stable diffusion webui directory. This is where the default model lives, and it's also where your additional models go. Install as many or as few as you like; the webui has a setting to choose which model your generations are based on. Finding the models was the hardest part (for me, at least), but this website has a big fat list of them:
hxxps://rentry.co/sdmodels#zack3d_kinky-v1ckpt-1a75d5c6
Each model is quite large, upwards of 2 GB, so be prepared to wait a while for the download. My favorite is Zack3D_Kinky; it's what I've been using to generate the images posted here. Try a few and see which one turns out the best results for you! Customization is the name of the game here.
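If you want to sanity-check what you've installed, a little Python sketch like this lists the checkpoints the webui should see. The path assumes the default AUTOMATIC1111 layout, where .ckpt files live under models/Stable-diffusion; adjust it to wherever you put the webui:

```python
# List installed checkpoints. Folder path assumes the default
# AUTOMATIC1111 webui layout - adjust to your own install location.
from pathlib import Path

models_dir = Path("stable-diffusion-webui/models/Stable-diffusion")
for ckpt in sorted(models_dir.glob("*.ckpt")):
    size_gb = ckpt.stat().st_size / 1024**3
    print(f"{ckpt.name}  ({size_gb:.1f} GB)")
```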
Third Step: Generating Images
You will find the stable diffusion webui a bit intimidating at first, but there are a few key things to keep in mind.
1 - Set the step count to around 50-70. I find this generates the best results: too low and the output comes out blurry and unrefined, too high and it just takes longer for near-identical results.
2 - Pick detailed prompts. Start general if you must, but the more specific and detailed your prompts get, the better the output will be. I often start with something like "applejack, equid, equine, anthro" and see where it takes me, eventually ending up with a much more complex prompt.
3 - Use txt2img to generate batches of low-resolution images. This lets you cherry-pick the best results and take them into img2img for refinement and upscaling (first sketch after this list).
4 - When using img2img, you can either generate a batch of images from one source image at a high "denoising strength" for a variety of takes loosely based on the original, or make 1-2 high-resolution images at a low "denoising strength" for simple upscaling (second sketch after this list).
5 - Toy around with it! Flick switches and slide sliders. A slight tweak to one slider can change the output in important ways. You can also use prompt matrices, styles/artists, upscaling, etc. More information here:
hxxps://github.com/AUTOMATIC1111/stable-diffusion-webui-feature-showcase
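For reference, here's roughly what points 1-3 look like if you script them instead of clicking through the webui. This is a sketch using the diffusers library, not the webui's own internals, and the model ID and prompt are just example placeholders:

```python
# txt2img batch sketch using the diffusers library (not the webui's code).
# Model ID and prompt are example placeholders - swap in your own.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "applejack, equid, equine, anthro"
images = pipe(
    prompt,
    num_inference_steps=60,   # the "step count" knob; 50-70 range
    num_images_per_prompt=4,  # a small batch to cherry-pick from
    height=512, width=512,    # keep it low-res at this stage
).images

for i, img in enumerate(images):
    img.save(f"batch_{i}.png")
```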
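And the img2img side of point 4, same caveats: in diffusers the "strength" argument plays the role of the webui's "denoising strength" slider, and the file names here are placeholders.

```python
# img2img sketch - "strength" corresponds to the webui's
# "denoising strength" slider. File names are placeholders.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("batch_2.png").convert("RGB").resize((768, 768))
out = pipe(
    prompt="applejack, equid, equine, anthro",
    image=init,
    strength=0.3,            # low = stay close to the source (upscale-ish)
    num_inference_steps=60,  # crank strength to ~0.7+ for loose variations
).images[0]
out.save("refined.png")
```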
Good luck out there. I'm hoping with more eyes on it, stable diffusion can be refined even further.
P.S. - although I haven't tried it yet, I'm certain the inpainting feature could be used to edit images into preg variants. Someone else can give it a shot.
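If someone does give it a shot, a rough starting point with diffusers' inpainting pipeline might look like this. Totally untested for this use, and the mask/file names are placeholders; white pixels in the mask are the regions that get repainted:

```python
# Inpainting sketch - untested, treat as a starting point.
# White pixels in the mask mark the area to be regenerated.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

image = Image.open("source.png").convert("RGB").resize((512, 512))
mask = Image.open("belly_mask.png").convert("RGB").resize((512, 512))

out = pipe(prompt="pregnant, round belly", image=image, mask_image=mask).images[0]
out.save("edited.png")
```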