INTRO
Since late 2022, I've been experimenting with AI art as a beta tester for early image generators like Midjourney, Luma AI, and Stable Diffusion. That experience has given me a strong understanding of prompting language and how its syntax has evolved.
I've been at the forefront of this technology since its inception, watching it grow and develop in a short period. Along the way I've learned how to structure prompts, how to generate the best results, and which platforms are best suited to each task.
Highlights
Generating high-quality, versatile art
Getting a grasp on prompting in general
Building a repository of unique art to use for licensing
By experimenting with mixing time periods, cultures, art styles, and techniques, I learned how to get the best results from my prompts, generating photorealistic imagery of people, products, and scenery. This has effectively replaced stock image libraries with the flexibility and versatility of AI.
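As an illustration of this kind of structured prompt, here is a minimal sketch of running a prompt that mixes a time period, a culture, and a photographic technique through Stable Diffusion with the open-source diffusers library. The model checkpoint, prompt wording, and parameter values are example assumptions, not my production settings.

```python
# Minimal sketch: a structured prompt mixing era, culture, and technique,
# run through Stable Diffusion via the diffusers library.
# Model ID, prompt wording, and parameters are illustrative examples.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = (
    "photorealistic portrait of a 1920s jazz musician, "   # time period
    "Harlem Renaissance styling, "                          # culture
    "shot on 85mm lens, soft studio lighting, film grain"   # technique
)
negative_prompt = "cartoon, illustration, low detail, blurry"

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    guidance_scale=7.5,       # how closely to follow the prompt
    num_inference_steps=30,   # quality vs. speed trade-off
).images[0]
image.save("previsualization.png")
```

The same layered structure (subject, era, culture, lens and lighting cues) carries over to Midjourney prompts, even though that platform is driven through text commands rather than code.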
Threading The Loop
Using Midjourney and Stable Diffusion, I create previsualization images for the animations I design, shortening pre-production, jump-starting my ideation process, and dramatically cutting the time from concept to completion.
What I've Learned
There are several tools on the market, but few deliver professional-quality art. Knowing which platform to use for each type of deliverable, and how to use it effectively, is the biggest factor in the quality of the results.
Impact & Results
By integrating AI-generated art into my projects, I have cut production times in half and shortened the overall work cycle by 70%, saving time and money compared with juggling multiple stock footage subscriptions and accounts spread across the internet, and greatly accelerating my process overall.