RealFlow to image sequence for in-game use

Hello fellow tech artists!

I’m currently doing research on how to turn a baked simulation from RealFlow (or any other fluid sim tool that yields the result I’m looking for) into a short image sequence that can be used in our particle engine.

So far I came up with a pipeline that produces visually pleasing results, but it’s a pain in the arse to get there.
After finding an interesting fluid motion/shape in RealFlow, I import the baked mesh sequence into 3ds Max and run a few scripts to make sure the mesh is in the camera view at all times and uses the maximum space available in the final image.
Then I render out alpha, normal and object thickness for about 30 frames.
Not too shabby up to here - but now comes the painstaking part - the 80% of the work for the last 20% of the quality:
I load these image sequences into VirtualDub to run a video stabilizer plugin over them, to reduce any remaining shakiness in the movement.
Then I store that video as an AVI and load it into another video editing tool that can export image sequences.
Same process for the alpha, normal and thickness channels.
In the end I compose these image sequences into a big atlas, containing all 30 frames, which our particle engine can read in as an animation.

Especially the last part, where I have to convert to video and back, seems avoidable to me, but I haven’t been able to find a program that can read/write image sequences and stabilize a video.

So the big question: did any of you guys ever do something similar?
If yes, what were your experiences - what dirty tips and tricks can you share?
Even if you didn’t do exactly that - if anybody has ideas on how to simplify the process of turning a fluid simulation into an image sequence with smooth movement and maximum image space usage, please share your wisdom!

Thanks a lot!
-Sascha

Why are you making the render best-fit your particle simulation? Without knowing much about the process, this seems like the source of the instability that you correct with the video software. Would it be reasonable to use an authored camera, or a fixed-size render with a focus that is in the center of your bounding box? Either would leave you with stable renders. Even with the scaling, if you have a fixed or authored focus you should be stable; the problem is likely that the extents of your bounding box change so quickly that the implied focus jitters.

Oh, and personally, if I were dead-set on absolute best pixel usage (which generally I’m not), I would do the cropping in image space. First, render everything way too big with a ton of border space. Then go through every image and find the min/max X and Y. Find the center of this bounding box, then crop every image using this as the center. I would personally just use those extents to crop every image, but you could find tighter per-frame boxes around that center if you don’t mind some scale instability. Square them, mosaic them, and you’re done.
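
Roughly, in PIL terms, the union-and-crop part boils down to something like this (folder path and file names are just placeholders, and it assumes the renders have an empty/black border so getbbox() finds the content):

# Sketch: union the per-frame bounding boxes, then crop every frame to that one box
import os
import Image  # PIL

folder = "c:/oversized_renders"
frames = [ Image.open( os.path.join( folder, f ) ) for f in sorted( os.listdir( folder ) ) ]

# getbbox() returns the box of non-zero pixels ( None for an empty frame )
boxes = [ f.getbbox() for f in frames if f.getbbox() ]
xmin = min( b[0] for b in boxes )
ymin = min( b[1] for b in boxes )
xmax = max( b[2] for b in boxes )
ymax = max( b[3] for b in boxes )

# Cropping every frame with the same box keeps the framing stable across the sequence
for i, frame in enumerate( frames ):
    frame.crop( ( xmin, ymin, xmax, ymax ) ).save( os.path.join( folder, "cropped_%03d.png" % i ) )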

I was in a similar situation a while back and ended up finding a relatively painless solution, but it won’t be super helpful for you because it relied on some lucky engine architecture that we just happened to have in place. Still, maybe it will be nice to know you’re not alone! I went digging for info on this a while back and found diddly-squat.

So, in my case I didn’t need to wrestle with the image stabilization step you described, but I did have to stitch a lot of images together into an atlas. And more importantly I wanted to be able to iterate on those stitched images.

My first attempt was to do the stitching in Python. I still think this could be a great general approach, but I ran into trouble getting Python to read/write uncompressed DDS files. I’m sure there’s a Python module that handles this, but I wasn’t able to track it down.
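
(For what it’s worth, newer Pillow builds can at least read the common DDS variants - uncompressed and DXT-compressed - so one workaround is a quick conversion pass to PNG before stitching. A rough sketch with made-up folder paths:)

# Sketch: convert a folder of DDS frames to PNG so the stitching can work on PNGs
import os
from PIL import Image  # Pillow, assuming a build with DDS read support

srcFolder = "c:/dds_frames"   # hypothetical input folder
dstFolder = "c:/png_frames"   # hypothetical output folder

if not os.path.exists( dstFolder ):
    os.makedirs( dstFolder )

for name in os.listdir( srcFolder ):
    if name.lower().endswith( ".dds" ):
        im = Image.open( os.path.join( srcFolder, name ) )
        im.save( os.path.join( dstFolder, os.path.splitext( name )[0] + ".png" ) )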

In any case, after describing my problem to one of our programmers, he mentioned that our engine already had support to stitch images together on the fly - turns out this tech was required for our heightfield rendering at the time.

After he hooked me up with the magic code tweaks I needed, I was able to set up a custom After Effects exporter that would fire out my frames to the appropriate directory, set up a proxy “text” file that referenced the various single frames, and then point my particle editor at that proxy text file.

Then whenever I tweaked my animation in After Effects, I could re-export and just have the engine stitch it back up for me.
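
Generating that kind of proxy file is trivial to script. The exact format is obviously engine-specific, so this layout is made up purely for illustration (one frame path per line, in playback order):

# Sketch: write a simple manifest listing the exported frames in order
import os

frameFolder = "c:/exported_frames"   # hypothetical After Effects export folder
manifestPath = os.path.join( frameFolder, "fluid_anim.txt" )

frames = sorted( f for f in os.listdir( frameFolder ) if f.lower().endswith( ".png" ) )

with open( manifestPath, "w" ) as manifest:
    for f in frames:
        manifest.write( f + "\n" )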

No doubt this is about zero help for you, but I couldn’t help responding because in the end this turned out to be a great workflow for me and absolutely worth the programmer time if it’s available.

Alright, so for whatever reason I can’t get this thread off my mind, so I made a Python script that does the image-space cropping method I described and then composites an atlas automatically. (The atlas reads top left to bottom right, like an English speaker would read it.)

It requires the Python Imaging Library (PIL), so if you don’t have that package you will need to download it.

# particleMosaic.py
import Image
import os
import sys
import getopt
from math import sqrt, ceil, floor

def boxUnion(A, B):
    axmin, aymin, axmax, aymax = A
    bxmin, bymin, bxmax, bymax = B
    xmin = min( axmin, bxmin )
    ymin = min( aymin, bymin )
    xmax = max( axmax, bxmax )
    ymax = max( aymax, bymax )
    return ( xmin, ymin, xmax, ymax )

# Returns ( exponent, value ) for the smallest power of two >= upper
def getNextPowerOfTwo( upper ):
    last = 0
    exp = 0
    count = 0
    
    while( last < upper ):
        exp = count
        val = 2 ** exp
        last = val
        count = count + 1

    return ( exp, last )

# Works out an ( x, y ) grid of tiles - both powers of two - large enough to hold 'count' frames
def totalToTile( count ):
    exponent, powerOfTwo = getNextPowerOfTwo( count )
    half = exponent / 2.0
    x = int(ceil(half))
    y = int(floor(half))
    return ( 2 ** x, 2 ** y )

def generateTiles( counts, sizes ):
    boxes = []
    for y in range(counts[1]):
        for x in range(counts[0]):
            minx = x * sizes[0]
            miny = y * sizes[1]
            maxx = (x + 1) * sizes[0]
            maxy = (y + 1) * sizes[1]
            boxes.append( (minx, miny, maxx, maxy) )
            
    return boxes
            

def MosaicFolder( folder, output, tileDim=(128,128) ):
    images = []
    for infile in os.listdir( folder ):
        
        try:
            path = os.path.join( folder, infile )
            im = Image.open( path )
            images.append( im )
            print ( "Atlasing: " + path )
            
        except IOError, e:
            #Not an image file we can read
            pass
        
    return Mosaic( images, output, tileDim )


def Mosaic( images, output, tileDim=(128,128) ):
    # Get all Bounding boxes
    boxes = [ im.getbbox() for im in images ]

    # Get the Bounding box that encompasses all other bounding boxes
    extents = reduce( boxUnion, boxes )

    # Pull cropped sections into memory
    regions = [ im.crop( extents ) for im in images ]
    regions = [ im.resize( tileDim, Image.ANTIALIAS ) for im in regions ]

    # X and Y number of Tiles.  This will be a power of two
    tileCount = totalToTile( len( regions ) )
    # Bounding boxes for every tile in the atlas.
    tiles = generateTiles( tileCount, tileDim )

    # Generate the Atlas
    atlasDim = [ count * dim for count, dim in zip( tileCount, tileDim ) ]
    atlas = Image.new( images[0].mode, atlasDim )

    # Copy cropped images into proper slot in the atlas
    for i in range( len(regions) ):
        atlas.paste( regions[i], tiles[i] )
    
    # Make sure the directory we are saving into exists
    outDir = os.path.dirname( output )
    if outDir and not os.path.exists( outDir ):
        os.makedirs( outDir )

    # Write
    atlas.save(output)
    print ( "Saved Atlas to " + output )

def printHelp():
    print '''
        This Python Script will take a folder full of images,
        and composite them into a single large "atlas".  If no
        overrides are provided, every tile is 128x128 pixels.

        The flags this program accepts are:
         -f "Folder/Path" ( Required )
         -o "output/File.format" ( Required )
         -x The x dimension of each tile in the atlas
         -y The y dimension of each tile in the atlas
         -h shows this dialog

        A typical call looks like:

        python particleMosaic.py -f "c:/folder" -o "c:/atlas.png" -x 256 -y 256
    '''

if __name__ == "__main__":
    try:
        opts, args = getopt.getopt( sys.argv[1:], "f:o:x:y:h" )
    except getopt.GetoptError, err:
        print str(err)
        sys.exit(2)

    if ( len(opts) == 0 ):
        printHelp()
        sys.exit(2)

    inFolder = None
    outFile = None
    x = 128
    y = 128
    for flag, value in opts:
        if flag == "-f":
            inFolder = str(value)
        elif flag == "-o":
            outFile = str(value)
        elif flag == "-x":
            x = int(value)
        elif flag == "-y":
            y = int(value)
        elif flag == "-h":
            printHelp()
            sys.exit(2)
        else:
            assert False, "unhandled option"

    if not inFolder:
        assert False, "No input folder provided."
    if not outFile:
        assert False, "No output file specified."

    MosaicFolder( inFolder, outFile, ( x, y ) )

and usage looks something like this from the command line:

python particleMosaic.py -f "c:/folder" -o "c:/atlas.png" -x 256 -y 256

and you can of course import it as a module

import particleMosaic
particleMosaic.MosaicFolder( "c:/folder", "c:/atlas.png" )

As a VFX artist, I do this sort of thing a lot. My personal pipeline is like this:

  1. Render out a >100-frame sequence at high res. Get the most data I can so I don’t have to go back to the render.

  2. Bring this into After Effects, where it is scaled/cropped/retimed to fit in a game-ready resolution.

  3. I have an After Effects script which builds an image atlas (or filmstrip). The great thing about this workflow is that it’s all non-destructive: if you want to make minor changes at any point in the pipe, you can. You can decide the shapes are wrong, re-render something new, and the AE project updates automatically all the way down to the output texture with zero work. It’s very fast and efficient. You can also recolor and retime your stuff any way you please. Another advantage of doing it this way is that if you bring your image into Photoshop and want to add something like a glow, you run the risk of the glow bleeding into the other frames (which is annoying!)

The key is changing the way you think about filmstrips. They shouldn’t be considered static textures - after all, they are a hack, a cheat for getting video onto your particles. Given this, they should be treated as animated video for the whole process.

I’ll put together a short example video and post up the script later tonight.

I couldn’t agree more that the power you get from going directly from a compositing package into your game/particle editor is phenomenal. Now if they would just start releasing effect footage libraries that stay in frame and have proper alpha channels!! (i.e. libraries targeted for games)

Also, thanks for that great script sample, Lithium. Hawt. I’m keeping that in my back pocket for the next time I’m on an engine that doesn’t automate the stitching for me.

Caveat… I realized that I didn’t do a uniform scale and pad… So it can squish your image right now. I’ll try to update it with that option this weekend. And no trouble, I really like working on problems. I’m just happy when anyone gives me an excuse to post code. :stuck_out_tongue:

Updated code to support uniform scaling:

#particleMosaic.py

#PIL
import Image
from ImageOps import fit

#builtin
import os
import sys
import getopt
from math import sqrt, ceil, floor

def boxUnion(A, B):
    axmin, aymin, axmax, aymax = A
    bxmin, bymin, bxmax, bymax = B
    xmin = min( axmin, bxmin )
    ymin = min( aymin, bymin )
    xmax = max( axmax, bxmax )
    ymax = max( aymax, bymax )
    return ( xmin, ymin, xmax, ymax )

def getNextPowerOfTwo( upper ):
    last = 0
    exp = 0
    count = 0
    
    while( last < upper ):
        exp = count
        val = 2 ** exp
        last = val
        count = count + 1

    return ( exp, last )

def totalToTile( count ):
    exponent, powerOfTwo = getNextPowerOfTwo( count )
    half = exponent / 2.0
    x = int(ceil(half))
    y = int(floor(half))
    return ( 2 ** x, 2 ** y )

def generateTiles( counts, sizes ):
    boxes = []
    for y in range(counts[1]):
        for x in range(counts[0]):
            minx = x * sizes[0]
            miny = y * sizes[1]
            maxx = (x + 1) * sizes[0]
            maxy = (y + 1) * sizes[1]
            boxes.append( (minx, miny, maxx, maxy) )
            
    return boxes

# Fits the region's aspect ratio inside targetSize and returns the scaled ( width, height )
def getScaledUniform( targetSize, region ):
    size = ( region[2] - region[0], region[3] - region[1] )
    idealRatio = targetSize[0] / float(targetSize[1])
    actualRatio = size[0] / float(size[1])

    dims = targetSize
    if idealRatio > actualRatio:
        # Y Major
        dims = ( int(dims[1] * actualRatio), dims[1] )
    else:
        # X Major
        dims = ( dims[0], int(dims[0] * ( 1 / actualRatio ) ) )

    return dims

# Builds a ( minx, miny, maxx, maxy ) box of the given size centered on 'center'
def boxFromCenterSize( center, size ):
    half = ( size[0] / 2.0, size[1] / 2.0 )
    minx = int( center[0] - half[0] )
    miny = int( center[1] - half[1] )
    maxx = int( center[0] + half[0] )
    maxy = int( center[1] + half[1] )
    return ( minx, miny, maxx, maxy )

def MosaicFolder( folder, output, tileDim=(128,128), uniform=False ):
    images = []
    for infile in os.listdir( folder ):
        
        try:
            path = os.path.join( folder, infile )
            im = Image.open( path )
            images.append( im )
            print ( "Atlasing: " + path )
            
        except IOError, e:
            #Not an image file we can read
            pass

    return Mosaic( images, output, tileDim, uniform )
                                      

def Mosaic( images, output, tileDim=(128,128), uniform=False ):
    # Get all Bounding boxes
    boxes = [ im.getbbox() for im in images ]

    # Get the Bounding box that encompasses all other bounding boxes
    extents = reduce( boxUnion, boxes )

    # Pull cropped sections into memory
    regions = [ im.crop( extents ) for im in images ]

    # Get Atlas-Tiles out of the regions
    if uniform:
        collector = []
        cropBox = getScaledUniform( tileDim, extents )
        half = ( tileDim[0] / 2.0, tileDim[1] / 2.0 )
        pasteBox = boxFromCenterSize( half, cropBox )
        
        for region in regions:
            blank = Image.new( region.mode, tileDim )
            scaledRegion = region.resize( cropBox, Image.ANTIALIAS )
            blank.paste( scaledRegion, pasteBox )
            collector.append( blank )
            
        regions = collector

    else:
        regions = [ im.resize( tileDim, Image.ANTIALIAS ) for im in regions ]

    # X and Y number of Tiles.  This will be a power of two
    tileCount = totalToTile( len( regions ) )
    # Bounding boxes for every tile in the atlas.
    tiles = generateTiles( tileCount, tileDim )

    # Generate the Atlas
    atlasDim = [ count * dim for count, dim in zip( tileCount, tileDim ) ]
    atlas = Image.new( images[0].mode, atlasDim )

    # Copy cropped images into proper slot in the atlas
    for i in range( len(regions) ):
        atlas.paste( regions[i], tiles[i] )
    
    # Make sure the directory we are saving into exists
    outDir = os.path.dirname( output )
    if outDir and not os.path.exists( outDir ):
        os.makedirs( outDir )

    # Write
    atlas.save(output)
    print ( "Saved Atlas to " + output )

def printHelp():
    print '''
        This Python Script will take a folder full of images,
        and composite them into a single large "atlas".  If no
        overrides are provided, every tile is 128x128 pixels.

        The flags this program accepts are:
         -f "Folder/Path" ( Required )
         -o "output/File.format" ( Required )
         -x The x dimension of each tile in the atlas
         -y The y dimension of each tile in the atlas
         -u forces uniform scaling instead of scale to fit.
         -h shows this dialog

        A typical call looks like:

        python particleMosaic.py -f "c:/folder" -o "c:/atlas.png" -u -x 256 -y 256
    '''

if __name__ == "__main__":
    try:
        opts, args = getopt.getopt( sys.argv[1:], "f:o:x:y:uh" )
    except getopt.GetoptError, err:
        print str(err)
        sys.exit(2)

    if ( len(opts) == 0 ):
        printHelp()
        sys.exit(2)

    inFolder = None
    outFile = None
    uniform = False
    x = 128
    y = 128
    for flag, value in opts:
        if flag == "-f":
            inFolder = str(value)
        elif flag == "-o":
            outFile = str(value)
        elif flag == "-x":
            x = int(value)
        elif flag == "-y":
            y = int(value)
        elif flag == "-u":
            uniform = True
            print value
        elif flag == "-h":
            printHelp()
            sys.exit(2)
        else:
            assert False, "unhandled option"

    if not inFolder:
        assert False, "No input folder provided."
    if not outFile:
        assert False, "No output file specified."

    MosaicFolder( inFolder, outFile, ( x, y ), uniform )

usage now looks like:

python particleMosaic.py -f "c:/folder" -o "c:/atlas.png" -u -x 256 -y 256

I guess I should pyDoc it better, but this does practically nothing. Fun little script though. I feel like there was a more elegant way to do the uniform scaling (maybe create a crop region and force it to be within the bounds of the image, etc., but that might not work when the tile is a shape that falls outside the range of the source image). Similarly, there is little safeguarding against the images being of different sizes. I leave that as an exercise to the reader. (Obvious choices are to pre-scale everything to a uniform size - see the sketch below - or to change all operations to be based on percentages.) Given the problem space this was originally created for, that is a bit extraneous. It would be fun to try to create a general module for all your atlasing needs, though.
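
For the different-sizes case, the pre-scale option could look something like this (a sketch only, using the first image's size as the reference; plug it in before the bounding-box math in Mosaic):

# Sketch: pre-scale every image to one reference size before atlasing
import Image  # PIL, same old-style import as the script above

def normalizeSizes( images ):
    if not images:
        return images
    refSize = images[0].size
    return [ im if im.size == refSize else im.resize( refSize, Image.ANTIALIAS )
             for im in images ]

# usage inside MosaicFolder/Mosaic:  images = normalizeSizes( images )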

Hey all!

I just wanted to send out a quick thanks for all the replies as I’m reading through them.
I had to work on some more important stuff for a few weeks, but I will hopefully get back to this soon, so your help is very much appreciated.

Cheers!

[QUOTE=floatvoid;4439]
3) I have an After Effects script which builds an image atlas (or Filmstrip). [/QUOTE]

That’s what I want to create this week; I need to brush up on my After Effects scripting skills. For yours, is it an export script, or do you have it make a new composition with 25 duplicated layers (for a 5x5) and offset each by 1, 2, etc. frames?

[QUOTE=mathes;5566]That’s what I want to create this week, need to brush on my after effects scripting skills. For yours, is it an export script, or do you have it make a new composition with 25 duplicated layers (for a 5x5) and offset each by 1, 2, etc frames?[/QUOTE]

I’m interested in this stuff too. I think the Python script is nice, but the way the After Effects workflow was described sounded really efficient, so I might want to create something similar.

Also, can anyone recommend any other software for sprite creation? I’ve looked at RealFlow and Blender, and I know my way around PFlow in 3ds Max, but are there any hidden gems for this sort of stuff?

