Does anyone know if it is possible in native Photoshop (CS1 or greater) to extract the Hue and Saturation from the Value or Brightness into two separate files?
I have been using Filter Forge to do this, but I was curious if Photoshop has its own filter or operation for this.
The reason for this is to store Value at a much higher resolution (256x256 versus 32x32) than Color, and then multiply them together in a material.
Thanks.
There is a plugin on the Photoshop CD (something in Extras, I think) that has an RGB<->HSL conversion filter.
Maybe I’m misunderstanding the problem, but if you’re storing colors and luminosities separately, couldn’t you just create a new layer with your image, put that above a white background, then set that layer’s blending mode to luminosity or color, depending on which one you are doing? If you’re not doing that, and are actually working in HSB colorspace in the shader, I am incredibly curious how well that is working for you.
(I may be wrong about exactly which blend mode and colors to use, but I think I’m in the ballpark.)
I’ll try that, Lithium.
I am simply multiplying the two textures together in the material, nothing fancy. I am working on the Wii, and one of the big handicaps is memory and texture formats. DXT1 with pre-multiplied alpha or 16-bit color are my only options. So my plan is to store Value as a single 8-bit channel at a high resolution, store the color separately as a DXT1 at a lower resolution, and then multiply the two together. This ends up being lower memory overall and gives me sharper details.
So, I may have led you astray. The Photoshop color/lum blend modes operate in HSL ( a conical color space ), which makes it difficult to get the correct color values back from a multiply alone. So, here’s a Python script that separates the images using HSV ( a cylindrical color space ) instead. This is likely redundant since you said you can already do this with Filter Forge, but it was fun.
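As a quick sanity check of why the HSV split multiplies back cleanly while an HSL-style split does not, here is a minimal sketch using only the standard-library colorsys module ( the sample pixel is arbitrary ):

from colorsys import rgb_to_hsv, hsv_to_rgb, rgb_to_hls, hls_to_rgb

rgb = ( 0.8, 0.4, 0.2 )    # arbitrary sample pixel in 0-1 space

# HSV: the fullbright color scaled by Value reproduces the original exactly
h, s, v = rgb_to_hsv( *rgb )
fullbright = hsv_to_rgb( h, s, 1.0 )
print [ val * v for val in fullbright ]    # ~[0.8, 0.4, 0.2]

# HSL ( colorsys calls it HLS ): pushing Lightness to 1 collapses every
# color to white, so a plain multiply by L cannot recover the original
h, l, s = rgb_to_hls( *rgb )
print hls_to_rgb( h, 1.0, s )              # (1.0, 1.0, 1.0)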
The script requires the Python Imaging Library (PIL).
You can import it or run it from the command line. Help is in the file.
#colorPartition.py
# PIL
import Image
# Builtin
import os
import sys
import getopt
# Builtin Functions
from colorsys import rgb_to_hsv, hsv_to_rgb
def SeparatePixel( pixel ):
    # Switch from PIL ints to 0-1 HSV space
    pixel = tuple([ val / 255.0 for val in pixel ])
    hue, saturation, value = rgb_to_hsv( *pixel )
    # Make the color image fullbright in HSV space
    color = hsv_to_rgb( hue, saturation, 1 )
    color = tuple([ int(val * 255) for val in color ])
    lum = int( 255 * value )
    lum = ( lum, lum, lum )
    return ( color, lum )
def SeparateImage( image ):
    pixels = image.getdata()
    parts = [ SeparatePixel(pix) for pix in pixels ]
    colors, lums = zip(*parts)
    colImage = Image.new( "RGB", image.size, None )
    lumImage = Image.new( "RGB", image.size, None )
    colImage.putdata( colors )
    lumImage.putdata( lums )
    return ( colImage, lumImage )
def SeparateFile( inFile, outColor="", outLum="", outColorSize=(32,32), outLumSize=(256,256) ):
    base, ext = os.path.splitext( inFile )
    if not outColor:
        outColor = base + "_col.png"
    if not outLum:
        outLum = base + "_lum.png"
    print "Separating %s\n    col: %s\n    lum: %s" % ( inFile, outColor, outLum )
    # Force RGB so every pixel is a 3-tuple, even for palettized or RGBA sources
    inImage = Image.open( inFile ).convert( "RGB" )
    colImage, lumImage = SeparateImage( inImage )
    colImage = colImage.resize( outColorSize, Image.ANTIALIAS )
    lumImage = lumImage.resize( outLumSize, Image.ANTIALIAS )
    colImage.save( outColor )
    lumImage.save( outLum )
    print "Complete!"
    return ( outColor, outLum )
def printHelp():
    print '''
    Given an image, this Python script will partition it
    into colors and luminosity values. These are both
    saved as RGB images, so that their gamma curves are
    identical. Multiplying these two images together
    in RGB space should yield the original image.

    The flags this program accepts are:
    --file / -f
        "Path/FileBase.Ext"
        ( Required )
    --col
        "Path/ColorImage.format"
        ( default: "Folder/FileBase" + "_col.png" )
    --lum
        "Path/LuminosityImage.format"
        ( default: "Folder/FileBase" + "_lum.png" )
    --colX
        Color Image Width
        ( default: 32 )
    --colY
        Color Image Height
        ( default: 32 )
    --lumX
        Luminosity Image Width
        ( default: 256 )
    --lumY
        Luminosity Image Height
        ( default: 256 )

    A typical call looks like:
    python colorPartition.py -f "c:/folder/Image.png" --col "c:/folder/Image_colors.png" --colX 64 --colY 64
    '''
if __name__ == "__main__":
    flags = ["file=", "col=", "lum=", "colX=", "colY=", "lumX=", "lumY="]
    try:
        opts, args = getopt.getopt( sys.argv[1:], "hf:", flags )
    except getopt.GetoptError, err:
        print str(err)
        sys.exit(2)
    if ( len(opts) == 0 ):
        printHelp()
        sys.exit(2)

    inFile = None
    outColor = None
    outLum = None
    outColorX = 32
    outColorY = 32
    outLumX = 256
    outLumY = 256

    for flag, value in opts:
        if flag in [ "-f", "--file" ]:
            inFile = str(value)
        elif flag == "--col":
            outColor = str(value)
        elif flag == "--lum":
            outLum = str(value)
        elif flag == "--colX":
            outColorX = int(value)
        elif flag == "--colY":
            outColorY = int(value)
        elif flag == "--lumX":
            outLumX = int(value)
        elif flag == "--lumY":
            outLumY = int(value)
        elif flag == "-h":
            printHelp()
            sys.exit(2)
        else:
            assert False, "unhandled option"

    if not inFile or not os.path.exists(inFile):
        assert False, "No input file specified, or the file does not exist."
    if (outColorX <= 0) or (outColorY <= 0) or (outLumX <= 0) or (outLumY <= 0):
        assert False, "Output image cannot have a negative or zero dimension."

    SeparateFile( inFile, outColor, outLum, (outColorX, outColorY), (outLumX, outLumY) )
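If you want to sanity-check the output, a straight multiply in RGB space should get you back something very close to the original ( exact only if you skip the resize steps ). Here is a minimal sketch of importing the module and recombining with PIL’s ImageChops.multiply; the texture paths are placeholders:

# recombineCheck.py -- quick verification, not part of colorPartition.py
# PIL
import Image
import ImageChops
# The script above
import colorPartition

# Separate a source texture ( placeholder path ) into color and luminosity maps
colFile, lumFile = colorPartition.SeparateFile( "c:/textures/source.png" )

# Scale the color map back up and multiply the two maps together in RGB space
colImage = Image.open( colFile ).resize( (256, 256), Image.ANTIALIAS )
lumImage = Image.open( lumFile )
recombined = ImageChops.multiply( colImage, lumImage )
recombined.save( "c:/textures/source_recombined.png" )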
Also, the entire task is this weird blend of cool and “umm… I’m not sure if that’s a good idea.” It seems like you’re adding an extra texture tap and a multiply for something that DXT already handles quite well ( very low frequency color changes, with accuracy required mostly in luminosity ). This starts to get very weird, because if you break this down:
DXT1 = 8:1 compression ( versus 32-bit RGBA )
L8 = 4:1 compression ( versus 32-bit RGBA )
So by moving something that would have been a 256x256 DXT1 to a 256x256 L8 you will double its size, add the memory for the smaller color texture, and add extra shader work. This doesn’t exactly seem like it will pay off in terms of memory or speed, if that is what you actually care about. What you will gain is more accurate luminosities for 2x the memory instead of 4x ( scaling up to the next power of 2 would be roughly equivalent to what you are doing here ), which is kind of cool.
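To put rough numbers on that ( a quick sketch, assuming 4 bits per texel for DXT1 and 8 bits per texel for L8, and ignoring mipmaps and padding ):

# Back-of-the-envelope texture sizes in bytes
dxt1_256 = 256 * 256 / 2    # 256x256 DXT1 at 4 bits per texel = 32768
l8_256   = 256 * 256        # 256x256 L8 at 8 bits per texel   = 65536
dxt1_32  = 32 * 32 / 2      # 32x32 DXT1 color map             = 512
print l8_256 + dxt1_32      # 66048 bytes, roughly 2x the single 256x256 DXT1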
Good luck on your work! I’m really looking forward to when your studio releases screenshots, and I’m very curious how this particular technique turns out for you.
Cheers,
-Lith
It is a technique we’re toying with since a particular aspect of our game is very monochromatic, and we are not happy with how our details are getting munged by the DXT1 compression.
Thanks for the effort put into the python script!
Lithium - that’s another rad script. Eventually tech-artists.org should publish a “tech artist’s Python cookbook” or something. Great stuff.
What exactly is your aim, and what problem are you having with DXT munging? There is plenty of info about getting more out of DXT; a good source is the nvidia texture tools wiki on google code.
Thanks for the link. I am stuck using DXT1 with pre-multiplied alpha; it’s the only compressed format supported on the Wii. Some textures just do not look good with DXT1 compression. We’re talking 128x128 and 64x64 textures with a low texel density.