Since Fantastic Jack takes place in the intersection between reality and fantasy, we've been focusing on developing a style for visually blending between the two. A few weeks ago I explained how we plan to blend between different sky backgrounds. It turns out that our sky backgrounds are particularly conducive to this sort of thing, since they're so amorphous. Simply crossfading between two painted skies accomplishes most of what we need, although we will eventually add movement, like the clouds drifting or the sun and moon rising and setting.
However, our game also has vehicles, like the bicycle that becomes a snowmobile on the icy tundra, as you may remember from our initial announcement. While similar in terms of their structure--both have handlebars and seats, as well as two points of contact with the ground--these particular vehicle variations have distinctly different silhouettes. Crossfading would not be sufficient, so I decided to try morphing!
Morphing between two images is a combination of crossfading while distorting the shapes of both images. The objective is to match up similar features in both images so that they end up overlapping even if they were originally in different places.
The first step is to manually add control points on top of both images, where each point on one image corresponds to one point on the other image. Then all we'd need to do is run the automatic morphing algorithm! Unfortunately, we didn't already have a morphing algorithm in our toolbox, so for us the second step was writing one.
The algorithm needs to generate two displacement fields, one for each image. When an image is fully faded in, it doesn't need to use the displacement, but the more it fades out, the stronger the displacement needs to be. So, after it's fully faded out, the initial image would take roughly the shape of the other image. This displacement field should actually be generated for the fully-faded-out condition. In other words, given a target point on the second image, how far away are the source pixels that we sample from the first image? We also need to smoothly blend the displacement in between the control points. This smooth blending is the hard part.
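To make that backward-mapping idea concrete, here's a tiny Python sketch of how a finished displacement field would be applied to an image. The names and the nearest-pixel sampling are just for illustration, not our actual engine code; a real version would interpolate bilinearly and run on the GPU.

```python
def warp(source, displacement):
    """Apply a per-pixel displacement field to a source image.

    source is a 2D list of pixel values; displacement[y][x] is a
    (dx, dy) offset meaning the pixel drawn at (x, y) is fetched
    from source at (x + dx, y + dy) -- the backward mapping.
    """
    h, w = len(source), len(source[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx, dy = displacement[y][x]
            # Round to the nearest source pixel and clamp to the image
            # bounds; a real implementation would sample bilinearly.
            sx = min(max(int(round(x + dx)), 0), w - 1)
            sy = min(max(int(round(y + dy)), 0), h - 1)
            out[y][x] = source[sy][sx]
    return out
```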
To generate a smooth displacement field, we start by iterating over every pixel. To mix the influences of the control points nearest to each pixel, we compute a kind of weighted average. For each pixel, we iterate over every control-point pair, and for each pair we subtract the target point from the source point to obtain that pair's displacement offset. Next, to determine the weight of the pair's influence, we compute the distance from the pixel to the target point. I picked the formula 1/distance^2 to convert this distance to a weight, then add it to a running sum of the weights for all control points, and also sum all of the displacement offsets multiplied by their weights. Finally, I divide the offset sum by the weight sum to get the blended displacement offset for this pixel. In pseudocode, the algorithm looks like this:
Vector2 displacementSum = ( 0, 0 )
float weightSum = 0
for each pair (Vector2 sourcePoint, Vector2 targetPoint) from controlPoints:
    Vector2 pointDisplacement = sourcePoint - targetPoint
    float distance = length( targetPoint - pixelPos )
    float weight = 1 / ( distance * distance )
    weightSum += weight
    displacementSum += pointDisplacement * weight
Vector2 displacement = displacementSum / weightSum
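For the curious, the same algorithm over a whole image could look something like this in Python. This is a sketch with made-up names, not our engine code, and the tiny epsilon is my addition to avoid dividing by zero when a pixel lands exactly on a control point's target.

```python
EPSILON = 1e-8  # guards against division by zero at a control point

def displacement_field(width, height, control_points):
    """Blend control-point offsets into a per-pixel displacement field.

    control_points is a list of ((sx, sy), (tx, ty)) pairs; the result
    at (x, y) is the inverse-distance-squared weighted average of
    (sx - tx, sy - ty) over all pairs.
    """
    field = []
    for y in range(height):
        row = []
        for x in range(width):
            dx_sum = dy_sum = weight_sum = 0.0
            for (sx, sy), (tx, ty) in control_points:
                dist_sq = (tx - x) ** 2 + (ty - y) ** 2
                weight = 1.0 / (dist_sq + EPSILON)
                weight_sum += weight
                dx_sum += (sx - tx) * weight
                dy_sum += (sy - ty) * weight
            row.append((dx_sum / weight_sum, dy_sum / weight_sum))
        field.append(row)
    return field
```

Note how a pixel sitting right on a control point's target gets an enormous weight, so it takes almost exactly that pair's offset, while pixels in between get a smooth mix.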
If we display the displacement fields generated by this algorithm using red to indicate x-axis offsets and green to indicate y-axis offsets, we end up with this abstract art:
On the other hand, if we skip the crossfading and just apply the displacement to the source images, we get this Dalí-esque surrealism:
Combining the displacement and the crossfade together, we get this cute little animation:
There's definitely a lot of work left to do to clean this up, but I'm pretty happy with my first attempt at morphing, and this technique will surely be useful to us.
We're all shaped by our experiences: where we grew up and what we've gone through, who we've known and when we lived.
So I've thought a lot about the negative experiences that initially inspired me to make Fantastic Jack, but that's for another post. Lately I've been thinking about something positive, something that influenced me profoundly in my childhood: architecture.
It all started with a Google search for reference photos of some of the distinct buildings that populated my childhood. Somehow it became a Googie search.
'Cause I’m, like, totally a Valley Girl.
As a kid I loved shopping with my mom at the Sherman Oaks Galleria. That's where, bizarrely enough, I discovered my love of architecture.
I was somewhere around five years old, and yet, at my favorite place in the world, I was bored.
Father's Day was approaching, so my mom had taken me to pick out a present for my daddy. We'd gone to literally every store in the mall and I was over it. Usually I found myself fascinated by the faux columns and architectural accents of Structure, but I was tired. So, while my mom browsed, I climbed into the overstuffed leather chair near the dressing rooms.
As soon as I'd gotten settled, a large book on the coffee table caught my eye. Fallingwater. Indeed, water was perpetually falling on the worn and torn cover: lush evergreens framing a small waterfall with a modern-looking house perched above. I realized that a person--some guy named Frank Lloyd Wright--had actually built a house into a waterfall. I hefted the book onto my lap and flipped past pages of essays, captivated by the photos.
I decided I had to have this book. Unfortunately, the uncooperative sales associate insisted it wasn't for sale.
To get me to leave Structure, my mom promised she'd take me straight to Crown Books in Encino to buy the book. After one more glance at it, we left.
Driving along Ventura, my mom called my daddy from the cellular in her Corvette. I heard the hesitance in her voice as she informed him of the price she'd noticed below the barcode: $55.
So we continued on to Crown Books.
My search today started with the Galleria and proceeded via Street View down Ventura in the other direction. I was looking for iconic, nostalgic buildings and signs. When I got to a carwash where Tyrone meets Beverly Glen, I stopped.
One of my last memories of my dad was a casual conversation while waiting for a carwash at the Handy J on Ventura Boulevard in summer 2005, the summer before he passed away.
I was particularly close to my dad, probably 'cause he took me to school every morning 'til I turned 16.
We'd stop on the way for breakfast: at Lamplighter--originally and now Corky's--for French toast made with cinnamon bread that I'd dip into the maple syrup I swirled with whipped butter, at Mort's for the vanilla rainbow sprinkle cookies at Bea's Bakery next door, at the newly opened Noah's Bagel at the La Reina Center for a cream cheese-slathered blueberry bagel washed down with Stash licorice tea.
I miss those mornings.
Maybe it's a tacky tribute--that is, after all, what architecture critics thought of Googie--but I'm including a remixed recreation of that carwash mashed up with Googie elements in Fantastic Jack.
I'm planning to incorporate a lot of the places we went, ordinary and otherwise, as a reminder of where I'm from, where I've been. Hopefully you'll see the La Reina Center and Lamplighter in some capacity.
I didn't end up becoming an architect in the traditional sense, but I still build: I build experiences and worlds.
As we've discussed before, we're trying to design a system to display transitions back and forth between real and fantastic--that is, imaginary--environments. And whatever system we choose has to fit the pixel art style of the game.
Part of this system must account for changing the sky. Recently while walking to lunch I was watching the sky, trying to figure out how to make our sky morph into different shapes while maintaining the limited color palette, when I remembered Dan Fessler's tutorial for dynamically converting regular Photoshop art into pixel art. I realized that we could adapt that technique to work in our game!
It's actually a really simple technique. Just paint a grayscale image with as much detail or softness as you want, then let the computer automatically flatten it into a few shades of gray. Afterward, you can swap those shades of gray with a custom color palette.
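If you're curious, the flatten-and-swap step could be sketched in Python like this. The names are mine, not from the tutorial, and a real version would run per-pixel in a shader.

```python
def posterize(value, levels):
    """Flatten a 0.0-1.0 grayscale value into one of `levels` bands."""
    return min(int(value * levels), levels - 1)

def colorize(value, levels, palette):
    """Posterize, then look up the band's color in a custom palette."""
    return palette[posterize(value, levels)]
```

With a four-color palette, every pixel in the softly painted grayscale image snaps to one of four colors, which is what produces the pixel-art look.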
The tutorial focuses on how you can use this to paint images more quickly, but I had a hunch that we could use it to transition between completely different images during gameplay.
I also selected a color gradient for each image, so while the images blend together, their color palettes blend together too.
Technically, the flattening effect removes definition from the images, arguably reducing their quality. However, I find that the borders between flat colors form interesting and aesthetically pleasing shapes. Moreover, I was pleasantly surprised to find that those shapes morph dynamically if I just let the underlying images crossfade into each other. It feels almost like cheating: if I turned off the color-flattening effect, watching the images crossfade back and forth would be pretty boring. Eventually, we probably will add more moving elements to the sky to make it even more dynamic, but for now I'm pretty happy with the effect.
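Here's a tiny Python sketch of why that works (illustrative names only): blend the grayscale values first and flatten second, and a pixel hops between discrete bands as the crossfade parameter t changes, so the borders between flat colors travel across the image instead of just fading.

```python
def lerp(a, b, t):
    """Linear crossfade between two 0.0-1.0 grayscale values."""
    return a + (b - a) * t

def flattened_blend(a, b, t, levels):
    """Crossfade first, flatten second: band borders move over time."""
    v = lerp(a, b, t)
    return min(int(v * levels), levels - 1)
```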
John set some high expectations with last week's post!
So we mentioned in our first #screenshotsaturday post that we're still working out how to transition between the real and fantasy worlds. Not only is there a shift in the environments, but also in the style in which they're drawn. Some of y'all with a keen eye may have noticed the disparity in that initial gameplay GIF, so I'd like to talk about the differences, and why they're there.
Probably the most obvious inconsistency is that the world of reality is flat-shaded and two-dimensional, while the fantasy world seems to take on more dimension through shading. Another visible difference has to do with the colors used in the environments: the colors of the real world are brighter and more discordant, while the colors of the fantasy world are more muted and complementary.
It wasn't deliberate, at the outset.
I actually started working on the backgrounds that are a part of Fantastic Jack's reality world back in April 2013. I was feeling really demotivated about Two-Faced and had the idea for the prototype that became Fantastic Jack; it got my mind off of Two-Faced, and gave me a way to focus my creative energy.
Flash forward to August 2014, when we decided that Fantastic Jack should be the next game Adorkable Games would take into development, partnering with Disparity Games to announce Fantastic Jack as part of the Kickstarter for Ninja Pizza Girl. As part of the announcement, we agreed to produce that gameplay GIF.
In retrospect, even the workflow suggested the differences that would emerge between reality and fantasy.
I made a storyboard of the gameplay sequence and a breakdown of the individual assets, then got to work.
I gave myself constraints while creating the real world. I worked inward, starting with the frames of each building, which I blocked out after carefully working out the optimal size of buildings relative to each other; this involved lots of math. After I established the exterior structure, I moved into compartmentalized spaces inside, from the window displays to the back walls, filling in individual items until the space was bursting with detail. I wasn't satisfied with simplicity. Every pixel has intent.
The fantasy worlds feel more expansive, likely from working outward. I collected a lot of references for the fantasy environments but didn't have set boundaries, which enabled the spaces to flow somewhat freeform. By design, the fantasy worlds exist on a larger scale but don't require as much complexity. Without constraints, composition was the key, enabling the environments to breathe. Imagination fills in the details.
Of course, the differences might be attributed to a more basic truth: so much of our reality revolves around indoor spaces, whereas fantasy usually involves the desire to escape and explore.
But I realized that, however inadvertent, the inconsistency between worlds carries significance in the context of the gameplay, and that realization cemented the contrast as a stylistic decision.
In Fantastic Jack, the real world’s vibrant, clashing colors combined with the presence of detail and lack of depth suggest a shallow exaggeration of our own reality that’s almost overwhelming. Jack’s reality is unwelcoming.
Fantastic Jack’s fantasy worlds are, ironically, more realistic. The lush fantastic worlds evoke the freedom of possibility. The fantasy realm is full of dangers and is challenging to navigate, but it's also more exciting. Jack runs faster, jumps higher, and acquires other abilities that aren't present in the real world. Yet, as appealing as the fantasy is, Jack can’t exist only in fantasy.
After all, Jack has responsibilities. Like getting home and doing homework.
The way the worlds are drawn mirrors perceptions of reality and fantasy.
Admittedly, I realized a hint of depth snuck into some of the elements of the real world--you can find shading on the streetlights--and some of the elements of the fantasy world are flat--like the snowmobile--but this only reinforces the connection between worlds. The line is blurred.
Plenty of cause to wonder what is reality and what is fantasy.
We're gonna be taking a break from Fantastic Jack next weekend 'cause we'll be celebrating our fourth jammiversary at the Global Game Jam! You can still expect a #screenshotsaturday post from us, but it'll be about our GGJ game instead. Thanks for understanding!
Hello! This is John, and I'm using today's #screenshotsaturday to talk technical.
Something that we've received positive feedback on so far in developing Fantastic Jack is our swirling snow particle simulation, and I'm proud of how that turned out, so I wanted to explain how it works.
Particle systems are a common feature of video games, used for animating dust clouds, raindrops, fire, sparks, and other amorphous phenomena. The technique is simple: just keep track of the center positions of all of the particles, move them all a little bit each frame, and then draw a speck on the screen for each of them.
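That basic loop might look like this in Python (an illustrative sketch, not our engine code):

```python
class Particle:
    """A speck with a position and a velocity, updated each frame."""
    def __init__(self, x, y, vx, vy):
        self.x, self.y = x, y
        self.vx, self.vy = vx, vy

def update(particles, dt):
    """Move every particle a little bit along its velocity, then the
    renderer would draw a speck at each (x, y)."""
    for p in particles:
        p.x += p.vx * dt
        p.y += p.vy * dt
```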
Usually particles are spawned with their own position, velocity, decay, and other parameters, at which point they move independently for a period of time before evaporating entirely. In a vacuum, that would make sense; but in real life, particles are usually surrounded by air, and air is a fluid that influences their motion. Small particles like snow have very little mass and a lot of air resistance, which means that instead of cutting through the air, they tend to move with the air, revealing the underlying air currents that would otherwise be invisible to the naked eye.
So what do air currents look like? Well, as any meteorologist could tell you, air currents can be extremely complex, chaotic, and hard to predict. But we're not actually trying to represent full weather patterns. We just want a way to move all the particles somewhat randomly but also in a way that's "natural-looking" as they fall to the ground. Fortunately, there exists a simple approximation that looks good enough for our purposes, and it's called "curl noise".
The word "noise" is usually associated with discordant sound effects, so it might seem like a strange connection, but sound isn't the only thing that can be noisy. Noise is just a disturbance of random-ish waves. It just so happens that random sound waves are a defining characteristic of cacophonies, hence the name noise.
Curl noise is an extension of more popular noise patterns like "Perlin noise" and "value noise". I was first introduced to noise patterns with Photoshop's "Clouds" filter. Similar noise patterns are often used to make artificial mountain ranges in computer graphics by using the noise as an elevation map. The introductory videos for Shadershop happen to be a great demonstration of how a program can generate such noise.
We can write a program that maps each snow particle's current position to somewhere in the noise field and use the noise value at that position to determine where the snow should go, but it's not immediately clear how to do that. The noise only gives us one numerical value at each position, but we need both a speed and a direction for each particle.
The first trick we use to solve this problem is to notice that, in addition to a value, every position in the noise field also has a slope. We can check neighboring positions to the north, east, south, and west to calculate the slope, and that slope has both direction and magnitude. If you think about the noise as an elevation map with water particles on it, you would expect water to run downhill and the simulation could use the slope of the noise to determine how to move the water particles.
But there's a big difference between water moving downhill and particles in the air. Water collects into pools in the valleys and stops there. Air currents, however, maintain a relatively even distribution of air pressure, constantly moving without collecting in any one place. We need a somewhat random direction to move the snow particles in that won't eventually leave them all clumped together before they even hit the ground.
Fortunately, there's one more trick we can use to model air currents more accurately: rotate the slope 90 degrees before letting particles move along it. In other words, instead of using (dx, dy) as the direction, use (-dy, dx).
This is actually really elegant, but to understand why, we need to go back to the elevation interpretation. The slope reveals exactly which direction to move a particle in if we wanted it to go uphill or downhill, but we know that if we keep doing either of those things, it'll eventually end up getting stuck at a valley or peak. However, as long as the particle never goes up or down, it can keep going forever. Imagine drawing the elevation as a topographic map with curved lines drawn at each elevation level, looping around peaks and valleys. Moving perpendicular to the slope means following along one of those lines, circling endlessly.
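Putting the two tricks together, a 2D curl-noise sampler could be sketched in Python like this. The sine-based noise function is just a smooth stand-in for real Perlin or value noise; any smooth scalar field demonstrates the idea.

```python
import math

def noise(x, y):
    # Stand-in for a real noise function (e.g. Perlin or value noise).
    return math.sin(x) * math.cos(y)

def curl_velocity(x, y, step=0.001):
    """Sample the noise slope with central finite differences, then
    rotate it 90 degrees so particles flow along the contour lines
    instead of running downhill: (dx, dy) becomes (-dy, dx)."""
    dx = (noise(x + step, y) - noise(x - step, y)) / (2 * step)
    dy = (noise(x, y + step) - noise(x, y - step)) / (2 * step)
    return (-dy, dx)
```

Feeding each snow particle's position through curl_velocity every frame gives the swirling, non-clumping motion described above.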
This is what curl noise looks like, but there's still one more problem. The snow particles never cross paths. Our falling snow uses one last trick to look more convincing, which is simply to use a 3D noise field instead of a 2D one. The game may be 2D, but real air currents are 3D, and we expect to see a bit more parallax as snow in the foreground and background moves in different directions. It's a lot harder to visualize the interior of a 3D noise field, and the slope rotation is a little more complicated in 3D, but the idea of moving perpendicular to the slope while maintaining an elevation is the same. For reference, the 3D slope rotation is (dz - dy, dx - dz, dy - dx). Of course, if you want to be confused, feel free to check out the Wikipedia article on this curl operation!
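For completeness, here's the same sketch extended to 3D using the rotation above, again with a smooth stand-in for real noise. The rotated slope is divergence-free, which is the mathematical way of saying the flow never clumps up or thins out.

```python
import math

def noise3(x, y, z):
    # Stand-in for real 3D noise; any smooth scalar field will do.
    return math.sin(x) * math.cos(y) * math.sin(z)

def curl_velocity_3d(x, y, z, step=0.001):
    """3D analogue of the slope rotation: sample the slope (dx, dy, dz)
    by finite differences, then return (dz - dy, dx - dz, dy - dx)."""
    dx = (noise3(x + step, y, z) - noise3(x - step, y, z)) / (2 * step)
    dy = (noise3(x, y + step, z) - noise3(x, y - step, z)) / (2 * step)
    dz = (noise3(x, y, z + step) - noise3(x, y, z - step)) / (2 * step)
    return (dz - dy, dx - dz, dy - dx)
```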
Thanks for reading!