On Thanksgiving I Attempted to be a “Prompt Engineer” for my Family

“Games” by Tessa Violet, lovelytheband 🎵

When my husband mentioned I should try the Bing Image Creator a week ago, I got really excited to see if I could create “Granny Misty Merryweather” – a D&D character.

This was my first experience with image generation, and it ended up being a deep rabbit hole. In ’22 I had explored the architectural context of ChatGPT with games infrastructure prompts, but I had not had time to explore images. I knew after that evening that I would say something I never in a million years thought I would say:

We are going to Bing on Thanksgiving.

Let me briefly describe this character so there is some context… Granny is a gnome artificer with a robe of patches. Artificers in Dungeons & Dragons have been through many changes, but Granny had a cloak where the patches could be pulled off to create items, she had a robot as a side companion, and she would make and throw fire potions. Artificers can do a lot of absolutely random engineering and magic-y things that don’t make a lot of sense together.

Granny lived lifetimes – she has a deck of Tarot cards, which we used in real life. The results of those Tarot draws were worked into the story to help the party find the “Deck of Many Things.”

I tried to distill this down as best I could.
Bing Image Creator made interesting assumptions.

For example, when I said “no beard,” Bing still really wanted to give gnomes beards. I replaced the word “gnome” with “halfling” and the beards stopped appearing. This is a personal preference, but the bias was interesting.

Secondly, the hair looked like those scary Troll dolls, possibly because I said “wild grin.” I was okay with this. I leaned into it. I also noticed almost all of the images of “older women” were of white women – I never specified skin tone. It was only when I used “dark hair” that I actually got other skin tones. I want to explore this more – if I specifically say I want to see more racial diversity, will I actually get diversity? Or will I have to be explicit about skin tone?

Could these be used instead of a real artist for work? No. If you start to inspect the images, you will see the belt items make little sense. The fingers need work. The Tarot card designs are all over the place. But are they great inspirations? Yes.

If anything, AI tools make the people who pay artists better at sharing their ideas, but it’s artists who really know what to do with these tools.

The real gems came when I started adding dragons. Since there was almost no room for a dragon in the image, almost every dragon photobombed the scene, which was fantastic. All in all, it took two hours to go from start to final image, mainly because of indecision and running out of coins.

That’s Where Thanksgiving Started.

Having seen the results, and honestly still terrified by the first image of Granny, I asked my brother-in-law if he’d like some help visualizing his Tabaxi while he was visiting on Thanksgiving. Similar to the Khajiit in Skyrim, Tabaxi in D&D are cat people. His character is a religious figure, intelligent, and walks around in “priestly like robes.”

This is what happened with our first attempts.

I thought, personally, that’s where we should stop. “That’s it. That’s him. We’re done.”

But my brother-in-law insisted that this is not what his character looks like in D&D 🙂 I briefly pontificated on how much better the campaigns would have been had we stopped there. We continued.

Much to our doom. With every prompt our situation seemed to get worse.

It either got cuter but more religiously offensive.

Or more uncomfortable, like Cats the movie.

Laughing while crying and feeling like I was absolutely failing him, we almost gave up. That’s when I pointed at one on the right and said, “That one looks a bit like Aragorn.” Followed by trying not to choke on my eggnog and the words “If he was a cat.”

I started entering the words “masculine” and “panther” instead of “cat.” We also worked to get rid of the Christian iconography. With some tweaks in Photoshop to remove any human skin (yeah, I wrote that), we landed here.

There are a lot of people scared about this future and a lot of people excited about it. It was built on the backs of others’ work. I hope this post made you laugh, not scared. I hope it helped people see the value of working together using these tools. I am not an expert in AI and I don’t pretend to be – it’s not my immediate area of focus by any means. These days I care much more about how to get to production faster from an infrastructure perspective, because I believe going slow causes worse incidents.

But I’m also not going to ignore it.

I hope it showed that these tools still have a lot of bias, and that more often than not, creating something that isn’t generic takes hours and is difficult. It requires real creativity, technique, and experience. I’m sure what real artists make with these tools will far exceed anyone’s wildest dreams when they feel safe, supported, and empowered to use them rather than fearing for their jobs and ignoring the tools – in the same way it’s true for programmers using ChatGPT and GitHub Copilot in Visual Studio to speed up development.

I’m interested in how these tools make existing creatives and coders more efficient and powerful – how they augment them rather than hold them back – and in being prepared for how this is, not will be, impacting all of us. Using AI in your workflow is definitely a different way of working, but more than anything, it surfaces the biases we already have when we create anything.

Perhaps that’s an opportunity to look that bias in the face and say “Is this truly interesting enough to be unique in a market that is so crowded?” That’s up to creators to know deeply from experience.

Image Credit(s): Bing Image Creator and everything DALL-E 3 is trained on. Regular disclaimer that this is a personal blog and does not represent the views of Take-Two, its technologies, or the tools used. I use this blog to explore leadership paradigms and new technologies, and to share stories from the past to continue learning.