Midjourney: Year 1
When we first began, Midjourney was on the bleeding edge, and it quickly replaced our Unreal Engine animations, along with our mood boards and storyboards.
Projects in the concept phase could be finished almost instantly once we understood the commands and what Midjourney considered beautiful.
Any concept - from Iron Man to the Steelers - was easily previewed before any execution was necessary. Production was expedited, and pre-production, in some cases, skipped entirely.
Our ideas became reality, and pitching before producing had never been easier.
Whether it was artwork or HDR aerials, the options were endless - if we knew how to control the outcome.
At first, controlling it was like controlling your dreams: nearly impossible without proper training.
Our childhood favorites could take on new styles, as if Mucha had painted them or they had originated with Hokusai.
Specific camera angles and inspired lighting became the norm once we gave intentional, studied commands - at our fingertips even on the road to the studio.
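To make that concrete, a prompt in that spirit might read something like the following (the subject, angle, and lighting here are illustrative, not from our actual boards; --ar is Midjourney's aspect-ratio parameter):

/imagine prompt: a lone samurai on a cliff at golden hour, low-angle shot, dramatic rim lighting, in the style of Alphonse Mucha --ar 16:9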
Leaving behind stick figures and stale, flat black-and-white storyboard art, our clients could finally peek into our minds and fully grasp what we were imagining.
Scenes played out as if the project were already finished and we were simply previewing screengrabs for our agencies.
As Midjourney advanced, we began using both its built-in tools and Photoshop's generative AI to supplement and correct the technology's various shortcomings.
Storyboards still had a place when we told Midjourney to create scenes based on our own artwork.
Hyper-stylized, dramatic moments just before the action sometimes happened automatically the first time we asked. Other times, we had to be careful with our wording: tools of violence, even objects merely perceived as such, are sometimes impossible to depict.
Whole stories seemed to be told in a single image, but when we coupled images together and provided examples, the AI gave us fuller, more epic tales.
Even though the results are beautiful to look at, the model fails in many ways, erring toward posed, editorial shots instead of common, natural scenes.
Some of our favorite work has come from providing examples and asking the AI not only to reference a face from a real photo, but to learn who that character is - so that we become teachers of the AI instead of just commanders.
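In practice, that can be as simple as leading the prompt with a reference image's URL so the face informs the render - a sketch of the idea, with a hypothetical URL and character standing in for real client material:

/imagine prompt: https://example.com/reference-face.jpg the same man as a weathered 1940s detective under a flickering streetlamp, cinematic lighting --ar 16:9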
The whole workflow is so simple that, once a pattern is established for a particular need, almost anyone can be trained to yield the results we are after. Scenery in particular is where it has the edge on the market. Nevertheless, there is no doubt that developing cinematic looks requires a certain taste.
This tool became especially powerful for commercial ads that needed to run an exact length. Using these images for storyboards, alongside an animatic, made evaluating our timing incredibly fast and useful.
Art concepts and inspiration from which to draw (literally, in this case) have been among the best uses. Once a unique visual is offered, the most executable version is selected and then created without AI.
Clients who struggle to visualize were able to preview examples like these and give earlier sign-offs and greenlights than ever before in our history as a business. For the first time, projects were so successfully planned (and executed) that in some cases zero feedback or adjustments were required. That wouldn't have been possible with great writing alone.
Concepts for logos became another common request, most of which, if not controlled and commanded with training and taste, came out looking very similar - like many of the so-called "beautiful" images of people. For conceptualization, though, Midjourney has proven its ability to give you a springboard.
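For instance, a logo exploration might start from a prompt like this (illustrative only; --no is Midjourney's exclusion parameter, useful since the model struggles with lettering):

/imagine prompt: minimalist fox emblem, flat vector, geometric negative space --no text, lettering --ar 1:1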