30 Days of prompt-based writing, design, and art

A brief retrospective on exploring AI-generated digital art.

Here's a quick snapshot of highlights and stats from my recent exploration and ideation with the AI tools so widely reported, reviewed, and adopted lately. #midjourney #ChatGPT #GenerativeAI

A Midjourney prompt-based image gallery.

 

I'm only a little over a month into prompt-based (text and image) generative art, mostly a mix of #midjourney and #ChatGPT, and have generated well over 1,000 pieces in that time (roughly 1,400 grids, 400-ish upscales, and a dozen or so ChatGPT prompts as of 1/5/2023).

One of several goals I hoped to achieve with this was renewed inspiration to return to my first creative outlet: drawing and sketching by hand (mostly pencil and ink). Happy to report immediate success on that front, with more frequent sketching and drawing.

I also sought to expedite multiple side projects through product and logo ideation. Success here, too, and a boon to more than just feeding the machine: I've since paid a friend a small design fee to explore logo designs based on some of my initial concepts.

Custom-designed logo (designed by a human, not AI).

 

I've also wanted to expand into some writing interests. I've enjoyed both the inspiration from and the process of writing the prompts, which in turn feeds more outputs, be they written, visual, or musical.

Side note: my motion graphics and music pursuits have also benefited greatly, as I've jumped back into several related tools, like #touchdesigner and #logicpro, to render some of my prompt-based concepts.

Iterations upon iterations. Original source video courtesy of beeple’s #glassvein

 

The conversation should continue about the impact on art appreciation, creation, and the like, as well as on automation replacing many of our jobs to varying degrees. Still, I can't deny the new opportunities, possibilities, and pure delight before us.

I also agree with the post/take below on not leveraging AI (solely) for interface design. However, it can still prove beneficial for early and rapid ideation, which is primarily how I've been using it so far (you know, #diverge and #converge):

Have you ever used an interface that looks nice on the surface, but is really frustrating to use? That might be because it was “designed” without an understanding of what people would actually be trying to do with it.

I saw this thread of “AI designs” on Twitter with a comment that AIs can do design now:
https://lnkd.in/gemhzifN
🤦‍♂️The images in there are not design *at all*.

 

Admittedly, even that much requires directional goals, guidance, and human input to shape meaningful output. Use it wisely, intentionally, and respectfully. No tool or process is perfect.

And obviously, this is a much bigger conversation than these thoughts account for. Nevertheless, these are a few initial impressions as I continue watching, experimenting, and engaging, with equal parts curiosity and prudence.

You can follow my ideation journey here on #discord https://midjourney.com/app/users/766068200332853278/… and on #instagram for some curated and evolved pieces: https://instagram.com/jediwright/.

A hagiographic tech temple of thought | Motion design-based version

 

I originally posted this on Twitter, so my explorations have since widened in both count and scope. Reposting here for posterity's sake, given the uncertain future of that platform.