Goose Posted February 3, 2006

I'm trying to do some basic image processing on the GPU in HLSL for DirectX, for speed, and have run into an interesting problem. I've figured out how to create a surface to render to (because I don't want to display to the screen) that is exactly the same size as my input image. If I create a simple pixel shader that just samples the input texture at TEXCOORD0 and outputs that color to the output texture, I would expect the input and output images to be identical (because their dimensions are identical). However, the output image looks slightly blurry compared to the input image, which makes me think that DirectX did not process the input image on a pixel-by-pixel basis, but instead did some sort of subsampling on the input image and interpolated that to create the output image. Has anyone else dealt with this problem, or have any ideas? I'm basically trying to get the video card to do pixel-by-pixel image processing on an input image. Thanks
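For reference, a pass-through pixel shader of the kind described above might look something like this. This is a minimal sketch, not code from the post; the sampler name and register are assumptions:

```hlsl
// Sampler bound to the input image texture (name and register are illustrative).
sampler2D InputImage : register(s0);

// Pass-through pixel shader: return the sampled color unchanged.
// For true pixel-by-pixel processing, each pixel of the render target
// must map to exactly one texel of the input texture.
float4 PassThroughPS(float2 uv : TEXCOORD0) : COLOR0
{
    return tex2D(InputImage, uv);
}
```

If the render target and the texture have the same dimensions and the texture coordinates cover [0,1] exactly, this shader should reproduce the input image texel for texel.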
Goose (Author) Posted February 4, 2006

So I figured out the blurring problem: it's due to the fact that the texture I read in has dimensions that aren't a power of 2. If I change the image to, say, 512x512, it works perfectly. Does anyone know how to read in a texture with non-power-of-2 dimensions and still get pixel-by-pixel processing? Thanks
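One common workaround (an assumption on my part, not suggested in the thread itself) is to pad the image up to the next power-of-2 size, place the real pixels in one corner of the larger texture, and scale the texture coordinates so only the valid sub-region is sampled. A small helper for the rounding step, as a sketch:

```cpp
// Round n up to the next power of two (e.g. 640 -> 1024, 512 -> 512).
// Useful for sizing a padded power-of-2 texture that holds an
// arbitrarily sized image in its top-left corner.
unsigned int NextPow2(unsigned int n)
{
    unsigned int p = 1;
    while (p < n)
        p <<= 1;
    return p;
}
```

After padding, a 640x480 image would live in a 1024x512 texture, and the texture coordinates for the quad would run from 0 to 640/1024 horizontally and 0 to 480/512 vertically.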
PlausiblyDamp (Administrator) Posted February 4, 2006

Most (perhaps even all) video cards expect textures to have power-of-2 dimensions; it's more of a hardware issue than anything else.