It wasn’t that hard if you kept feeding it quarters. It took a lot of trial and error, but having infinite lives means it was eventually beatable.
I don’t really understand the science behind it, but in my experience I’ve had much more success using basic models for training.
Also, I’ve found that LoRAs are generally much easier and faster to train than embeddings. Is there a reason you’re going for an embedding over a LoRA?
Embeddings should generally be trained on base models to improve compatibility with models derived from the base. For SD 1.5, that means using either regular SD 1.5 or the NovelAI leak. You can sometimes get away with using more “basic” models that don’t have many merges, but that can be tough to gauge.
This has been up for so long that I doubt it’s a hack: it’s more likely that Trump stiffed his web developer.
A couple dozen devices maybe. I don’t really need dedicated ranges, but it’s nice to know exactly which device I’m looking at just by the IP when reading logs.
I know they exist and vaguely what they do, but I don’t know how to set them up. What’s their advantage over simple DHCP reservations for a small client list?
I like the idea of a range for new devices; I hadn’t thought of that!
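For reference, here’s roughly what this looks like if the router runs dnsmasq: reservations pin each known device to a fixed IP (so logs are readable by address), and a separate pool catches new devices. MACs, addresses, and hostnames below are hypothetical, and other DHCP servers use different syntax.

```shell
# /etc/dnsmasq.conf (sketch; values are made up)

# Static reservations: MAC, hostname, fixed IP.
# These devices always get the same address, so logs are easy to read.
dhcp-host=aa:bb:cc:dd:ee:01,nas,192.168.1.10
dhcp-host=aa:bb:cc:dd:ee:02,printer,192.168.1.11
dhcp-host=aa:bb:cc:dd:ee:03,laptop,192.168.1.12

# Dynamic pool for unknown/new devices: anything showing up in
# 192.168.1.100-149 is something you haven't reserved yet.
dhcp-range=192.168.1.100,192.168.1.149,12h
```

The advantage over ad-hoc static IPs configured on each device is that everything stays in one file on the router, and new gear still gets an address automatically from the pool.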
Not from comicbook.com, but close. Looks like you’re right: just more anti-AI nonsense. I wonder if there was this much vitriol when Photoshop was first released.
I use Stable Diffusion daily. I’m vehemently against people spouting nonsensical fear mongering against AI. But I completely agree with the author here: a company using AI-generated images in a published book that they charge money for is despicable. AI should be a tool artists choose to use to enhance their workflow, just like Photoshop and tablets. It cannot and should not replace them entirely.
I had no idea that Hasbro had done this. Have they released a statement trying to justify this, or are they just hoping that nobody will care?
Good. Let Hasbro sink themselves with another failed VTT.
Though Castlevania: Lords of Shadow was a divisive game, its art direction and soundtrack are incredible. Oscar Araujo’s score, combined with some great vistas and setpieces, elevates the mediocre gameplay and makes this one of my favorite games.
I also loved the songs from Death Stranding. Low Roar’s tracks fit the atmosphere of the game perfectly, and the few tracks from other artists stand out in nice contrast to Low Roar’s calmer sound.
I’ve found that AI generation is good if you have a vague idea of what you want, but it can be frustrating if you have something specific in mind. I try to approach generation accordingly: I’ll plan out the broad strokes of what I want but keep an open mind on the finer details.
If I wanted to generate a more specific image, I would first try to do a sketch in another program and then feed that into ControlNet. I haven’t actually done this though since I’m usually able to get something close enough that I can work with.
I meant the quantity of generated images, not the number of tokens. I rarely go over 50 tokens now. As you said, with too many tokens things start to interact in really odd ways. That’s why I’m not a fan of massive lists of negative tokens either; a textual inversion like badhandv4 or EasyNegative handles them much more efficiently.
However, I only use txt2img to get the rough composition of an image; most of my work is done in inpainting afterwards. If you’re looking to have good images just from txt2img then sometimes lots of tokens are necessary.
Just like traditional art though, this is all based on individual style. It’s important to use what works best for you.
I don’t bother with prompt enhancers anymore. Stable Diffusion isn’t Midjourney; quantity is far more important than quality. I just prompt for what I want and add negative prompts for things that show up that I don’t want. I’ll use textual inversions like badhandv4 if the details look really bad. If the model isn’t understanding at all, then I’ll use ControlNet.
That should be easy enough to do with a cron job. What OS is your seedbox running?
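As a sketch of what that could look like on a typical Linux seedbox (the script path, schedule, and log location are all hypothetical placeholders, not something from this thread):

```shell
# Open the current user's crontab for editing:
#   crontab -e
#
# Then add an entry. Fields are: minute hour day-of-month month day-of-week.
# This example runs a hypothetical script every night at 03:00 and
# appends its output to a log file in the user's home directory.
0 3 * * * /home/user/bin/seedbox-task.sh >> /home/user/seedbox-task.log 2>&1
```

`crontab -l` lists the installed entries so you can confirm it was saved.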
Sounds like it’s time to steal the concept of minions from 4e. Minions are specifically meant to help players feel powerful while still posing a credible threat.
Is this Midjourney? What’s your workflow?
They just want them to pay hundreds of thousands to millions of dollars to do so.
This is the hilarious part to me: some companies might pay these fees, but many more won’t and will instead use actual web scrapers to get their data anyway. As the number of people training LLMs grows over the next couple of years, that scraping will generate far more traffic than the API calls ever would have.
I would assume that Lemmy is not very accessible yet, but Lemmy’s mobile apps are under a month old. They are making fast progress and I would expect that to change very soon.
However, Reddit’s app has been out for years and they have been told about its accessibility problems for just as long. The impression I get is that they didn’t prioritize accessibility since third-party apps handled that for them. When they cut off access to these apps, they made it very clear that they have no alternatives in mind; they consider the visually-impaired userbase to be insignificant and simply don’t care about their issues.
Tuberculosis